<div dir="ltr">I wound up putting the project I needed this for on hold for a little while, but I have recently been trying to get this working -- in particular, the approach of checking the chart for p-tokens with the same vertices as the candidate token and using that token if its orthography differs from the candidate's.<div>
<br></div><div>I've discovered two gotchas with this approach. The first is that the candidate token is not always downcased, so simply checking for differences in orthography will spuriously match downcased p-tokens. (Apparently some tokens in the derivation are not downcased for whatever reason -- the second word of a two-word all-capitalized proper name is one example I noticed.) I solved this by downcasing the token myself before doing the comparison.</div>
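A minimal sketch of this comparison, assuming a hypothetical PToken layout parsed from the :p-tokens field (the real field has more columns):

```python
from collections import namedtuple

# Hypothetical minimal stand-in for an entry parsed from :p-tokens.
PToken = namedtuple("PToken", "id start end form")

def recover_orthography(candidate, p_tokens):
    """If a p-token occupies the same chart cell (start/end vertex) as
    the candidate and its orthography differs from the downcased
    candidate, prefer its surface form; otherwise fall back."""
    # Downcase first: some derivation tokens are not downcased (e.g. the
    # second word of an all-capitalized proper name), so a plain
    # inequality test would match spuriously.
    lowered = candidate.form.lower()
    for tok in p_tokens:
        if (tok.start, tok.end) == (candidate.start, candidate.end) \
                and tok.form.lower() == lowered and tok.form != lowered:
            return tok.form
    return lowered

chart = [PToken(407, 0, 1, "Although"), PToken(324, 1, 2, "preliminary")]
cand = PToken(999, 0, 1, "although")
print(recover_orthography(cand, chart))  # -> Although
```

This only helps, of course, when some token in the same cell did preserve case, which leads to the second gotcha below.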
<div><br></div><div>The second is that, unfortunately, capitalization is not always preserved in the p-tokens. Some sentence-initial tokens in particular seem to contain only lower-case characters -- this one from DeepBank's 20003004 (wsj00a), for instance, where the i-input began with 'Although':</div>
<div><div><br></div><div>(407, 0, 1, <0:8>, 1, "although", 0, "null")</div><div>(324, 1, 2, <9:20>, 1, "preliminary", 0, "null")</div><div>(391, 1, 2, <9:20>, 1, "preliminary", 0, "null", "JJ" 1.0000)</div><div>(326, 2, 3, <21:29>, 1, "findings", 0, "null")</div><div>(392, 2, 3, <21:29>, 1, "findings", 0, "null", "NNS" 1.0000)</div><div>(328, 3, 4, <30:34>, 1, "were", 0, "null")</div><div>(393, 3, 4, <30:34>, 1, "were", 0, "null", "VBD" 1.0000)</div><div>(365, 4, 5, <35:43>, 1, "reported", 0, "null")</div><div>(382, 4, 5, <35:43>, 1, "reported", 0, "null", "VBN" 0.9416)</div><div>.......</div>
</div><div><br></div><div>Unless there are more gotchas I haven't noticed, this is pretty close, so maybe close enough is good enough (or I could just fudge it and uppercase the first token). But if anyone has any further ideas for improving this approach, they would be most welcome.</div>
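The "fudge it" fallback would be a one-liner; a sketch, assuming the leaves come back as a plain list of strings:

```python
def fudge_initial_cap(leaves):
    # Crude fallback for lost sentence-initial case: uppercase the first
    # character of the first leaf (e.g. "although" -> "Although").
    # This will of course be wrong for inputs that genuinely begin
    # lower-cased (e.g. "iPhones are...").
    if leaves and leaves[0][:1].islower():
        return [leaves[0][0].upper() + leaves[0][1:]] + leaves[1:]
    return leaves

print(fudge_initial_cap(["although", "preliminary", "findings"]))
# -> ['Although', 'preliminary', 'findings']
```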
<div><br></div><div><br></div><div>Cheers,</div><div>Ned</div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Feb 13, 2014 at 6:16 PM, Ned Letcher <span dir="ltr"><<a href="mailto:nletcher@gmail.com" target="_blank">nletcher@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Thanks everyone for the suggestions and input; it's been most helpful. I think that in my current setup it would be easier if I didn't have to invoke a separate tool, so it sounds like Angelina's approach of using the internal tokens and then comparing with the orthography of the tokens in the chart that occupy the same position -- while still somewhat fiddly -- might be the way to go. <div>
<br></div><div>Ned</div><div class="gmail_extra"><div><div><br><br><div class="gmail_quote">On Thu, Feb 6, 2014 at 8:41 PM, Stephan Oepen <span dir="ltr"><<a href="mailto:oe@ifi.uio.no" target="_blank">oe@ifi.uio.no</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><p dir="ltr">indeed, following PTB conventions, the splitting of 'constraint-based' or '1/2' happens in token mapping, and hence i believe working off internal tokens (where there is a one-to-one correspondence to leaf nodes of the derivation) should be less fiddly. recovering capitalization, i think, is the only challenge at this level, and looking in the same chart cell for another token that was not downcased (as angelina does) seems relatively straightforward to me.</p>
<p dir="ltr">ned, i forgot: you could also just use the dependency converter to obtain a token sequence corresponding to the derivation leaves. a little round-about, but probably easy to pull off. interested in instructions?</p>
<p dir="ltr">oe</p><div><div>
<div class="gmail_quote">On Feb 6, 2014 9:33 AM, "Woodley Packard" <<a href="mailto:sweaglesw@sweaglesw.org" target="_blank">sweaglesw@sweaglesw.org</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div style="word-wrap:break-word"><div>You're right, Bec, that I overlooked the question of where to put spaces. My suggestion also doesn't work for cases where the p-input tokens get split in half by token mapping (e.g. "the blue-colored dog"). I believe both problems are solvable, but it starts to get fiddly. Probably best to scratch that idea.</div>
<div><br></div><div>Woodley</div><br><div><div>On Feb 6, 2014, at 12:10 AM, Bec Dridan <<a href="mailto:bec.dridan@gmail.com" target="_blank">bec.dridan@gmail.com</a>> wrote:</div><br><blockquote type="cite"><div dir="ltr">
<div><div>Concatenating the p-input tokens will mostly get you what you want. I think you might run into issues with leaves containing spaces, though, if you always concatenate. You'll need some extra checking of the span between tokens, possibly just the immediately adjacent characters. I can still imagine some combinations of wiki markup, punctuation, and words-with-spaces that will cause problems, but I believe they would be rare.<br>
<br></div>I still think it could be useful to not downcase at the end of token mapping, but just at the time of lexicon lookup. It wouldn't solve all these problems, but it would retain useful information in a more accessible way.<br>
<br></div>bec<br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Feb 6, 2014 at 6:56 AM, Woodley Packard <span dir="ltr"><<a href="mailto:sweaglesw@sweaglesw.org" target="_blank">sweaglesw@sweaglesw.org</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><div>Hi Ned and Stephan,</div><div><br></div><div>Actually, I think you may want to look at the p-input field of the parse relation. These are the tokens that come out of REPP, i.e. the input to token mapping. There is no ambiguity at this point; the bad characters are already removed, and case is preserved. What I would suggest is to concatenate the strings of all tokens contained in the character offset you got from the derivation tree.</div>
<div><br></div><div>In the case of the example you referenced, the p-input field contains (among other tokens) the following:</div><div><br></div><div>(21, 20, 21, <137:144>, 1, "control", 0, "null", "NN" 1.0000)</div>
<div>(22, 21, 22, <146:147>, 1, ",", 0, "null", "," 1.0000)</div><div><br></div><div>All this headache is brought about by the extra wiki markup embedded in the input string, which IMHO is not English. If you put English in, taking the substring directly out of the input string will give you something more worth looking at :-)</div>
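A rough sketch of this concatenation (incorporating Bec's caveat about checking the span between tokens), assuming p-input tokens reduced to hypothetical (from, to, form) triples:

```python
def leaf_string(p_input, leaf_from, leaf_to, text=None):
    """Concatenate the forms of p-input tokens whose character spans
    fall inside a derivation leaf's <from:to> offsets.  Tokens here are
    hypothetical (from, to, form) triples."""
    parts, prev_to = [], None
    for start, end, form in sorted(p_input):
        if leaf_from <= start and end <= leaf_to:
            # When there is a gap between spans, consult the original
            # input (if available) to decide whether the gap held
            # whitespace or stripped markup like "]]".
            if prev_to is not None and text and " " in text[prev_to:start]:
                parts.append(" ")
            parts.append(form)
            prev_to = end
    return "".join(parts)

# "control" at <6:13> and "," at <15:16>, with "]]" stripped in between:
tokens = [(6, 13, "control"), (15, 16, ",")]
print(leaf_string(tokens, 6, 16, "robot control]],"))  # -> control,
```

The whitespace check is exactly the fiddly part: without access to the original string, a gap between spans is ambiguous between deleted markup and a genuine space.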
<span><font color="#888888"><div><br></div><div>-Woodley</div></font></span><div><div><br><div><div>On Feb 5, 2014, at 9:21 PM, Ned Letcher <<a href="mailto:nletcher@gmail.com" target="_blank">nletcher@gmail.com</a>> wrote:</div>
<br><blockquote type="cite"><div dir="ltr">What Woodley described is, I think, what's going on. It looks like the START/END offsets returned by the lkb:repp call are different from the +FROM/+TO offsets found in the derivation. The derivation I'm using is from the gold tree in the logon repository: i-id 10032820 in $LOGONROOT/lingo/erg/tsdb/gold/ws01/result.gz. (Also, for some reason I just get NIL when I try to evaluate that repp function call in my lisp buffer after loading logon.)<div>
<br></div><div>From comparing derivations and the relevant portions of the p-tokens field, it looks to me like using the +FORM feature of the token has the same effect as extracting the string from the p-tokens field for the token that was ultimately used? But as you say, this still leaves the issue of the correct casing. Angelina's workaround is a good suggestion, but it definitely feels like a hack. It seems like it would be desirable to keep track of the value of tokens after REPP normalization but before downcasing for lexicon lookup. I was talking about this with Bec also, and while her problem was slightly different in that she only needed features for ubertagging rather than the original surface form, she said she was also struggling with this limitation.</div>
<div><br></div><div>Ned</div><div><br></div><div><div>
<br></div><div><br></div></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Feb 6, 2014 at 11:26 AM, Woodley Packard <span dir="ltr"><<a href="mailto:sweaglesw@sweaglesw.org" target="_blank">sweaglesw@sweaglesw.org</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I believe what happened in this particular case is that "control" and the following "," punctuation token got combined, resulting in a contiguous CFROM/CTO span of 137 to 147, which includes not only "control" and "," but also the deleted text in the middle.<br>
<span><font color="#888888"><br>
-Woodley<br>
</font></span><div><br>
On Feb 5, 2014, at 2:24 PM, Stephan Oepen <<a href="mailto:oe@ifi.uio.no" target="_blank">oe@ifi.uio.no</a>> wrote:<br>
<br>
> hi ned,<br>
><br>
> the practical challenge you are facing is deeply interesting. i<br>
> believe both angelina (in conversion to bi-lexical dependencies)<br>
> and bec (working on ubertagging in PET) have looked at this.<br>
><br>
> the derivation trees include the identifiers of internal tokens<br>
> (an integer immediately preceding the token feature structure),<br>
> and these tokens you can retrieve from the :p-tokens field in<br>
> reasonably up-to-date [incr tsdb()] profiles. this will give you<br>
> the strings that were used for lexical lookup. capitalization is<br>
> lost at this point, more often than not, hence one needs to do<br>
> something approximative in addition to finding the token. for<br>
> all i recall, angelina compares the actual token to others that<br>
> have the same position in the chart (by start and end vertex,<br>
> as recorded in the :p-tokens format); if she finds one<br>
> whose orthography differs from the downcased string,<br>
> she uses that token instead. bec, on the other hand, i think<br>
> consults the +CASE feature in the token feature structure.<br>
><br>
> underlying all this, i suspect there is a question of what the<br>
> characterization of initial tokens really should be, e.g. when<br>
> we strip wiki markup at the REPP level. but i seem unable<br>
> to reproduce the particular example you give:<br>
><br>
> TSNLP(88): (setf string<br>
> "Artificial intelligence has successfully been used in a<br>
> wide range of fields including [[medical diagnosis]], [[stock<br>
> trading]], [[robot control]], [[law]], scientific discovery and<br>
> toys.")<br>
> "Artificial intelligence has successfully been used in a wide range of<br>
> fields including [[medical diagnosis]], [[stock trading]], [[robot<br>
> control]], [[law]], scientific discovery and toys."<br>
><br>
> TSNLP(89): (pprint (lkb::repp string :calls '(:xml :wiki :lgt :ascii<br>
> :quotes) :format :raw))<br>
> ...<br>
> #S(LKB::TOKEN :ID 20 :FORM "control" :STEM NIL :FROM 20 :TO 21 :START<br>
> 137 :END 144 :TAGS NIL :ERSATZ NIL)<br>
> ...<br>
><br>
> TSNLP(90): (subseq string 137 144)<br>
> "control"<br>
><br>
> i don't doubt the problem is real, but out of curiosity: how did<br>
> you produce your derivations?<br>
><br>
> all best, oe<br>
><br>
> On Wed, Feb 5, 2014 at 5:02 AM, Ned Letcher <<a href="mailto:nletcher@gmail.com" target="_blank">nletcher@gmail.com</a>> wrote:<br>
>> Hi all,<br>
>><br>
>> I'm trying to export DELPH-IN derivation trees for use in the Fangorn<br>
>> treebank querying tool (which uses PTB style trees for importing) and have<br>
>> run into a hiccup extracting the string to use for the leaves of the trees.<br>
>> Fangorn does not support storing the original input string alongside the<br>
>> derivation, with the string used for displaying the original sentence being<br>
>> reconstructed by concatenating the leaves of the tree together.<br>
>><br>
>> I've been populating the leaves of the exported PTB tree by extracting the<br>
>> relevant slice of the i-input string using the +FROM +TO offsets in the<br>
>> token information (if token mapping was used). One case I've found where<br>
>> this doesn't work so well (and there may be more), is where characters which<br>
>> have been stripped by REPP occur within a token, so these characters are<br>
>> then included in the slice. Wikipedia markup, for instance, results in these<br>
>> artefacts:<br>
>><br>
>> "Artificial intelligence has successfully been used in a wide range of<br>
>> fields including medical diagnosis]], stock trading]], robot control]],<br>
>> law]], scientific discovery and toys."<br>
>><br>
>> I also tried using the value of the +FORM feature, but it seems that this<br>
>> doesn't always preserve the casing of the original input string.<br>
>><br>
>> Does anyone have any ideas for combating this problem?<br>
>><br>
>> Ned<br>
>><br>
>> --<br>
>> <a href="http://nedned.net/" target="_blank">nedned.net</a><br>
><br>
><br>
><br>
> --<br>
> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br>
> +++ Universitetet i Oslo (IFI); Boks 1080 Blindern; 0316 Oslo; (+47) 2284 0125<br>
> +++ --- <a href="mailto:oe@ifi.uio.no" target="_blank">oe@ifi.uio.no</a>; <a href="mailto:stephan@oepen.net" target="_blank">stephan@oepen.net</a>; <a href="http://www.emmtee.net/oe/" target="_blank">http://www.emmtee.net/oe/</a> ---<br>
> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br>
<br>
</div></blockquote></div><br><br clear="all"><div><br></div>-- <br><a href="http://nedned.net/" target="_blank">nedned.net</a>
</div>
</blockquote></div><br></div></div></div></blockquote></div><br></div>
</blockquote></div><br></div></blockquote></div>
</div></div></blockquote></div><br><br clear="all"><div><br></div></div></div><span><font color="#888888">-- <br><a href="http://nedned.net" target="_blank">nedned.net</a>
</font></span></div></div>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><a href="http://nedned.net" target="_blank">nedned.net</a>
</div></div>