[Fwd: Re: [developers] processing of lexical rules]

Emily M. Bender ebender at u.washington.edu
Thu Feb 17 01:50:52 CET 2005


Dear Ann & everyone else,

Here is a slightly delayed reply to Ann's recent (long) message.
I'm going to start with Choice 3, since I'd like to be able to
put my answer there in the common ground before answering the others:

> Choice 3 - how do we formalise the spelling part of this?  This is
> the bit I'm really not interested in - I think we should support
> alternatives to the current string unification but I don't want to
> implement them ...

The conclusion that Jeff Good and I (as well as the students
in my seminar last quarter) came to in thinking about this is
that morphophonology (the spelling part) and morphosyntax (how
the feature structures change in light of the affixes present)
should really be handled separately.  Rather than repeat the arguments
for that, I'm going to attach the "script" of the talk we gave on
it at the LSA to this message.  The slides for that talk can be
found here:

http://faculty.washington.edu/ebender/slides.html

In Jeff's apt turn of phrase, what we're essentially proposing
is to turn a language with complex, icky phonology into Japanese
or Turkish: i.e., simple concatenations of abstract morphemes.
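To make the division of labor concrete, here is a toy sketch (all names and the rule table are invented for illustration, not anything from the LSA talk): the morphophonology maps each surface form to a sequence of abstract morphemes, and only that sequence is visible to the morphosyntax.

```python
# Hypothetical sketch of the two-stage split: a morphophonological
# analyzer maps surface forms to sequences of abstract morphemes,
# so every language looks agglutinative to the morphosyntax.
# The segmentation table below is a toy stand-in for real
# morphophonological rules.

SEGMENTATIONS = {
    "grammarians": ["grammar", "-ian", "-PL"],
    "tried":       ["try", "-PAST"],
    "kitaplar":    ["kitap", "-PL"],   # Turkish-style: plain concatenation
}

def analyze(surface: str) -> list[str]:
    """Return the abstract morpheme sequence for a surface form."""
    return SEGMENTATIONS.get(surface, [surface])

print(analyze("tried"))        # ['try', '-PAST']
print(analyze("grammarians"))  # ['grammar', '-ian', '-PL']
```

The point of the sketch is just that the "icky" part (e.g. try + -PAST surfacing as "tried") is confined to the analyzer; the feature-structure side never sees surface spelling.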

> So, Choice 1: a) affixes as rules or b) affixes as lexical items?

I think that affixes as rules makes more sense linguistically (having
done the affixes as lexical items thing in JACY), but others differ.
How much work would it be to support both?

> So, Choice 2: a) morphemes as a new tokenisation or b) morphemes as partial 
> specification of a derivation tree?

Choice 1a seems to imply 2b, and as Ann points out, 2b is potentially
compatible with 1b as well.  So I'd say 2b here.  If someone really wants
2a, they can presumably tweak their external morphological analyzer
(or take it off the shelf as with ChaSen...) to present the morphemes
as the tokenization.
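One way to picture option 2b (all names here are hypothetical, just for illustration): instead of handing the parser a new token sequence, the morphological analysis is handed over as a partial derivation tree for the word, fixing which lexical rules apply and in what order.

```python
# Hypothetical sketch of option 2b: the morphology delivers a
# partial derivation tree over a single word, rather than a new
# tokenization (option 2a). Rule names are invented.

from dataclasses import dataclass

@dataclass
class Deriv:
    rule: str                 # lexical rule (affix) licensing this node
    daughter: "Deriv | str"   # embedded derivation, or the stem itself

# "grammarians" as a partial derivation: the parser must apply
# these lexical rules, in this order, over the stem.
partial = Deriv("plural-rule", Deriv("ian-rule", "grammar"))

def spine(d) -> list[str]:
    """Flatten the derivation spine, stem-first."""
    if isinstance(d, str):
        return [d]
    return spine(d.daughter) + [d.rule]

print(spine(partial))  # ['grammar', 'ian-rule', 'plural-rule']
```

Under 2a, by contrast, the analyzer would emit `grammar ian -PL` as chart tokens, and the bracketing would be up for grabs during parsing.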

> As I currently see it, Choice 2a allows for some options that can't be done 
> with 2b.  For instance, we could instantiate the chart with
> 
> `derivational' `grammar' `ian'
> 
> and bracket as 
> 
> (`derivational' `grammar') `ian'
> 
> Question 1a: Are there phenomena like this that people really want to deal 
> with?
> Question 1b: If not, should we claim it as a principled restriction that we 
> disallow this?!

My first reaction to that bracketing is that it seems counter to the
lexical integrity hypothesis.  But then I wonder about clitics ('s, others?).
Perhaps it would be useful to have the flexibility...  Although, perhaps
a more principled approach would be to have the morphology/morphophonology
recognize clitics and split them off.

> Question 2 (probably mostly to Emily): what about incorporation?  Could we 
> handle this on the full 2b strategy?

Yes, I think we could.  A typical Slave example, in your partial
derivation notation, would look like this:

(incorporation adverb 
               (aspect-marker (incorporation noun 
                                             (agreement-marker 
                                                (aspect-marker verb)))))

> Question 3: could we restrict the 2b strategy?  As far as
> compounding goes, could it be restricted to the bottom of the tree
> or does it need to be fully interleaved with affixation?  Are there
> any reasons to allow more than binary branching?

I don't think the incorporation facts would require more than
binary branching.  In Slave they clearly need to be interleaved with
other kinds of affixation.
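The binary-branching point can be checked mechanically on the Slave example above. In this sketch (representation invented for illustration), affix rules are unary and incorporation combines exactly one incorporated element with one partial verb, so no node exceeds two daughters:

```python
# Hypothetical encoding of the Slave partial derivation from the
# message, as nested tuples: (rule, daughter, ...). Incorporation
# nodes have two daughters; affix nodes have one.

slave = ("incorporation", "adverb",
         ("aspect-marker",
          ("incorporation", "noun",
           ("agreement-marker",
            ("aspect-marker", "verb")))))

def max_branching(node) -> int:
    """Widest branching factor anywhere in the derivation."""
    if isinstance(node, str):
        return 0
    _rule, *daughters = node
    return max([len(daughters)] + [max_branching(d) for d in daughters])

print(max_branching(slave))  # 2
```

So even with incorporation fully interleaved among the affixes, binary branching suffices for this case.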

Emily


-------------- next part --------------
A non-text attachment was scrubbed...
Name: script.pdf
Type: application/pdf
Size: 56092 bytes
Desc: not available
URL: <http://lists.delph-in.net/archives/developers/attachments/20050216/d49e9827/attachment.pdf>
