[developers] Questions about the MRS algebra from Seattle
aac10 at cl.cam.ac.uk
Thu Jul 2 15:44:50 CEST 2015
I'll be on (non-email-responding!) vacation for three weeks from this
weekend, so thoughtful discussion will have to wait till after that.
But quick comments below again.
>> The UW group has been reading and discussing Copestake et
>> al 2001
>> and Copestake 2007, trying to get a better understanding
>> of the MRS
>> algebra. We have a few questions---I think some of these
>> issues have been
>> proposed for the Summit, but I'm impatient, so I thought
>> I'd try to get
>> a discussion going over email. UW folks: Please feel
>> free to chime in
>> with other questions I haven't remembered just now.
>> The two big ones are:
>> (1) Copestake et al 2001 don't explicitly state what the
>> purpose of the
>> algebra is. My understanding is that it provides a
>> guarantee that the MRSs
>> produced by a grammar are well-formed, so long as the
>> grammar is
>> algebra-compliant. A well-formed MRS (in this sense)
>> would necessarily
>> have an interpretation because the algebra shows how to
>> compose the
>> interpretation for each sement. Is this on track? Are
>> there other reasons
>> to want an algebra?
>> We have never managed to prove that MRSs constructed
>> according to the algebra will be scopable, but I think that
>> is the case. But more generally, the algebra gives some more
>> constraints to the idea of compositionality which isn't the
>> case if you simply use feature structures. It excludes some
>> possible ways of doing semantic composition and therefore
>> constitutes a testable hypothesis about the nature of the
>> syntax-semantics interface. It also allows one to do the
>> same semantic composition with grammars in formalisms other
>> than typed feature structures.
>> I'm still trying to understand why it's important (or maybe
>> interesting is
>> the goal, rather than important?) to exclude some possible ways
>> of doing
>> semantic composition. Are there ways that are problematic for
>> some reason?
>> Is it a question of constraining the space of possible grammars
>> (e.g. for
>> learnability concerns)? Related to issues of incremental processing?
> The following may sound snarky but I really don't mean it this way
> - would you have the same type of questions about syntactic
> formalism? Because I can answer in two ways - one is about why I
> think it's important to build testable formal models for language
> (models which aren't equivalent to general programming languages,
> so more constrained than typed feature structures) and the other is
> about what the particular issues are for compositional semantics.
> I don't want to reach the conclusion that semantic composition
> requires arbitrary programs without looking at more constrained
> alternatives. So yes: learnability, processing efficiency (human
> and computational), incrementality and so on, but one can't really
> look at these in detail without first having an idea of plausible
> models.
> Not snarky and totally a fair question. Our syntactic formalism in
> fact isn't constrained.
> Part of what's appealing about HPSG is that the formalism is flexible
> enough to state
> different theories in.
I guess for me, the typed feature structure formalism was developed
(largely) independently of HPSG and what's attractive about it is that
it seems to be a particularly good programming language for grammar
development.
There are restrictions on what people are willing to call HPSG, but
these are flexible/inconsistent, and the community (or subparts of the
community) changes its mind sometimes. I'm not at all unhappy with
this, as long as there is progress. Anyway, some of these restrictions
can be elegantly encoded in typed feature structures and others can't.
Working out what the restrictions amount to formally should be an
objective, even if we don't then change formalism.
> (As opposed to certain other approaches that have to hard code
> theoretical claims into the formalism.)
They could all be encoded in typed feature structures, of course, but in
some cases the claims/constraints can't be represented particularly
nicely in TFSs. For example, if we're representing a categorial
grammar, we have to implement forward and backward application in
feature structures, and there's an implicit claim there are no
additional rules. We do a similar thing with our grammar rules, of
course, but HPSG allows for differing number of constructions in a
grammar without a claim that the framework has changed, while in
categorial grammar the formalism and framework are defined by the rule
inventory. However, if at some point we really do decide we know how to
do syntactic subcategorization (say), it would make sense to work out
precisely what we're doing and relate it to other frameworks.
> So at that point, I think it makes sense to see the
> algebra as a theory of composition that can be implemented in the HPSG
> (or others).
except that we don't formally constrain composition in TFS in the way
that the algebra assumes ...
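For concreteness, here is a toy sketch of the kind of composition operation the algebra licenses. This is not the formalisation from Copestake et al 2001 and not LKB code; all the names (Hook, Sement, compose, and so on) are invented here for illustration. The idea it tries to capture: each sement carries a hook, named slots, a bag of rels, and handle constraints, and composition may only fill one slot of the head with the non-head's hook and union the rest, with no other pointers into the semantics.

```python
# Toy sketch of algebra-style composition (invented names, not LKB code).
# A sement = hook + named slots + rels + hcons; composition fills exactly
# one slot with the argument's hook and unions everything else.

from dataclasses import dataclass, field

@dataclass
class Hook:
    ltop: str   # local top handle
    index: str  # semantic index

@dataclass
class Sement:
    hook: Hook
    slots: dict                                # slot name -> expected Hook
    rels: list = field(default_factory=list)   # elementary predications
    hcons: list = field(default_factory=list)  # qeq constraints

def compose(head: Sement, slot_name: str, arg: Sement) -> Sement:
    """Fill `slot_name` of `head` with `arg`'s hook; union rels and hcons."""
    if slot_name not in head.slots:
        raise ValueError(f"head has no slot {slot_name!r}")
    slot = head.slots[slot_name]
    # Equate the argument's hook with the slot's variables.  Here this is
    # a simple renaming; a real implementation would unify.
    mapping = {arg.hook.ltop: slot.ltop, arg.hook.index: slot.index}
    def rename(v):
        return mapping.get(v, v)
    arg_rels = [tuple(rename(v) for v in r) for r in arg.rels]
    arg_hcons = [tuple(rename(v) for v in c) for c in arg.hcons]
    remaining = {n: s for n, s in head.slots.items() if n != slot_name}
    # The result's hook comes from the head; the filled slot is consumed.
    return Sement(head.hook, {**remaining, **arg.slots},
                  head.rels + arg_rels, head.hcons + arg_hcons)
```

The point of the sketch is the restriction, not the data structure: information can only flow through the hook/slot interface, which is exactly what a grammar with arbitrary pointers into the semantics can violate.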
> I'm totally with you on building testable formal models (and testable at
> scale), so the program of formalizing the general approach of the ERG
> and then
> seeing whether the whole grammar can be made consistent with that set
> of 'best practices'
> makes a lot of sense. I guess the piece that's new to me is why this
> should be considered
> for composition separate from the rest of the grammar.
The first point would be that different bits of the syntax need
different types of restriction - e.g., subcategorization is different
from agreement, although they obviously interact. The second point is
that what we're trying to accomplish with semantics is different,
specifically that the primary interest is relating structures for
sentences/phrases to structures for words and that we never filter
structures by the compositional semantics. This is actually something
that has a reflex in the implementations, since we don't need to compute
the compositional semantics until we have a complete parse.
> (Also, for the record, I'm always skeptical about arguments grounded
> in learnability,
> since I think they require a leap that I'm not ready to make that our
> models are actually
> relatable to what's 'really' going on in wet-ware.)
actually that wouldn't follow - learnability results are fundamentally
about information rather than implementation. But I don't actually have
anything useful to say about learnability.
> But processing efficiency, incrementality,
> etc are still interesting to me.)
>> (1a) I was a bit surprised to see the positing of labels
>> in the model. What
>> would a label correspond to in the world? Is this akin
>> to reification of propositions?
>> Are we really talking about all the labels here, or just
>> those that survive once
>> an MRS is fully scoped?
>> the model here is not a model of the world - it's a model of
>> semantic structures (fully scoped MRSs).
>> Oh, I missed that in the paper. So, because the fully scoped
>> MRSs are assumed to have interpretations, are we then talking
>> about a second layer of modeling?
>> (1b) How does this discussion relate to what Ann was
>> talking about at IWCS
>> regarding the logical fragment of the ERG and the rest of
>> the ERG? That is,
>> if all of the ERG were algebra-compliant, does that mean
>> that all of the ERSs
>> it can produce are compositional in their interpretation?
>> Or does that require
>> a model that can "keep up"?
>> it's really orthogonal - what I was talking about at IWCS was
>> about the complete MRSs.
>> Got it.
>> (2) Copestake et al state: "Since the constraints [=
>> constraints on grammar rules
>> that make them algebra-compliant] need not be checked at
>> runtime, it seems
>> better to regard them as metalevel conditions on the
>> description of the grammar,
>> which can anyway easily be checked by code which converts
>> the TFS into the
>> algebraic representation." What is the current thinking
>> on this? Is it in fact
>> possible to convert TFSs (here I assume that means lexical
>> entries & rules?) to
>> algebraic representation? Has this been done?
>> `easily' might be an exaggeration, but the code is in the
>> LKB, though it has to be parameterised for the grammar and
>> may not work with the current ERG. You can access it via the
>> menu on the trees, if I remember correctly. The small
>> mrscomp grammar is algebra-compliant, the ERG wasn't entirely
>> when I tested it.
>> In the spirit of keeping the discussion going without delays, I
>> haven't actually
>> played with this yet. But: accessible from the trees seems to
>> suggest that the
>> testing takes place over particular analyses of particular
>> inputs, and not directly
>> on the grammar as static code analysis. Is that right?
> I think one could use static analysis based on that code on a
> grammar which was very careful about not having pointers into the
> semantics other than those licensed by the algebra. Either no
> such pointers at all, or ones in which there was a clear locality,
> so it was never possible for information to sneak back into the
> semantics bypassing the official composition. Proving that can't
> happen with a grammar like the ERG is difficult.
> Would it be possible at least to detect pointers into the semantics,
> for hand-investigation?
Well, yes, you'd just have to traverse the graph, I guess. I think
there actually is some code that might do what is needed, related to the
check that ensures the semantics can be separated off from the syntax.
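As a rough illustration of what such a traversal could look like (this is invented here, not the LKB's actual check, and `CONT` is assumed to be the one licensed path to the semantics): represent a feature structure as nested dicts with reentrancy as object identity, collect every path reaching each node, and flag nodes that are reachable both under `CONT` and from somewhere outside it.

```python
# Minimal sketch (not LKB code) of detecting reentrancies that point from
# the syntax into the semantics.  Feature structures are nested dicts;
# reentrancy is shared object identity; "CONT" is the assumed licensed
# path to the semantics.

def collect(node, path=(), seen=None):
    """Map each dict node (by id) to the list of feature paths reaching it."""
    if seen is None:
        seen = {}
    if isinstance(node, dict):
        seen.setdefault(id(node), []).append(path)
        for feat, val in node.items():
            collect(val, path + (feat,), seen)
    return seen

def pointers_into_semantics(fs, sem_feature="CONT"):
    """Return paths reaching a semantic node without going through CONT."""
    paths_by_node = collect(fs)
    suspicious = []
    for paths in paths_by_node.values():
        in_sem = [p for p in paths if sem_feature in p]
        outside = [p for p in paths if sem_feature not in p]
        # A node shared between the semantics and anywhere else is a
        # candidate pointer that bypasses the official composition.
        if in_sem and outside:
            suspicious.extend(outside)
    return suspicious
```

This only reports candidates for hand-investigation, of course; it can't tell a licensed hook/slot pointer from a sneaky one without knowing which paths the algebra sanctions.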
>> Thanks again,
> you're welcome!
> All best,
> Emily M. Bender
> Professor, Department of Linguistics
> Check out CLMS on facebook! http://www.facebook.com/uwclma