Jochen L. Leidner


leidner@acm.org

Journal articles

2004
Jochen L Leidner (2004)  Review of Pasca (2003), Open-Domain Question Answering from Large Text Collections   Journal of Logic, Language and Information 13(3): 373-376
Abstract: The book is structured as follows: Chapter 1 gives a brief introduction to the problem to be solved and to the question answering track of the TREC Conference, in which the proposed approach was evaluated. Chapter 2 outlines the approach to open-domain question answering advocated by Pasca. Chapter 3 describes a layered, relational approach to question processing. Chapter 4 is concerned with how to determine the expected answer type for a question under consideration. Chapter 5 deals with the passage retrieval back-end and the notion of loops to adjust query granularity dynamically. Chapter 6 describes answer extraction as a question-driven passage ranking task and compares a heuristic with a neural-network-based machine learning approach. Chapter 7 is concerned with the differences between searching an offline text collection and WWW question answering. Chapter 8 mentions related work and Chapter 9 concludes the book.

Conference papers

2004
Jochen L Leidner, Johan Bos, Tiphaine Dalmas, James R Curran, Stephen Clark, Colin J Bannard, Mark Steedman, Bonnie Webber (2004)  The QED Open-Domain Answer Retrieval System for TREC 2003   In: Proceedings of the Twelfth Text Retrieval Conference (TREC 2003) 595-599 Gaithersburg, MD
Abstract: This report describes a new open-domain answer retrieval system developed at the University of Edinburgh and gives results for the TREC-12 question answering track. Phrasal answers are identified by increasingly narrowing down the search space from a large text collection to a single phrase. The system uses document retrieval, query-based passage segmentation and ranking, semantic analysis from a wide-coverage parser, and a unification-like matching procedure to extract potential answers. A simple Web-based answer validation stage is also applied. The system is based on the Open Agent Architecture and has a parallel design so that multiple questions can be answered simultaneously on a Beowulf cluster.
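For illustration, a minimal sketch in Python of the narrowing-down strategy the abstract describes. Every component below is a deliberately naive stand-in (bag-of-words overlap in place of QED's retrieval, wide-coverage parsing and unification-like matching), and the example data is invented:

    def tokenize(text):
        return [t.strip(".,?!").lower() for t in text.split()]

    def overlap(question, text):
        return len(set(tokenize(question)) & set(tokenize(text)))

    def answer(question, collection):
        # 1. Document retrieval: keep documents sharing terms with the question.
        docs = [d for d in collection if overlap(question, d) > 0]
        # 2. Passage segmentation and ranking: here, sentences ranked by overlap.
        passages = [s for d in docs for s in d.split(". ")]
        passages.sort(key=lambda p: overlap(question, p), reverse=True)
        # 3. Answer extraction: return the top passage (QED instead matches
        #    semantic representations produced by a wide-coverage parser).
        return passages[0] if passages else None

    print(answer("Who wrote Hamlet?",
                 ["Shakespeare wrote Hamlet. It is a tragedy.",
                  "Edinburgh is in Scotland."]))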
2003
Jochen L Leidner, Chris Callison-Burch (2003)  Evaluating Question Answering Systems Using FAQ Answer Injection   In: Proceedings of the Sixth Computational Linguistics Research Colloquium (CLUK-6) 57-62 Edinburgh, UK
Abstract: Question answering (NLQA), in which a system retrieves from a document collection a textual fragment that represents the answer to a question, is an active field of research, but evaluations currently involve a large amount of manual effort. We propose a new evaluation scheme that inserts answers from Frequently Asked Questions collections (FAQs) into a document collection and measures the ability of a system to retrieve each injected answer given the corresponding question. We describe how the usefulness of the approach can be assessed and discuss its advantages and problems.
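The injection scheme can be sketched as follows; the trivial word-overlap retriever merely stands in for whatever system is under test, and all names and data are illustrative assumptions, not the paper's code:

    import random

    def word_overlap_qa(question, collection):
        # Stand-in QA system: return the document sharing the most words.
        q = set(question.lower().split())
        return max(collection, key=lambda doc: len(q & set(doc.lower().split())))

    def evaluate_by_injection(faq_pairs, collection, seed=0):
        rng = random.Random(seed)
        hits = 0
        for question, faq_answer in faq_pairs:
            corpus = list(collection)
            corpus.insert(rng.randrange(len(corpus) + 1), faq_answer)  # inject
            if word_overlap_qa(question, corpus) == faq_answer:
                hits += 1  # the system recovered the injected answer
        return hits / len(faq_pairs)

    faqs = [("How do I reset my password?",
             "To reset your password, click the link on the login screen.")]
    print(evaluate_by_injection(faqs, ["A document about the weather.",
                                       "Another document about trains."]))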
Tiphaine Dalmas, Jochen L Leidner, Bonnie Webber, Claire Grover, Johan Bos (2003)  Annotated Corpora for Reading Comprehension and Question Answering Evaluation   In: Proceedings of the Workshop on Question Answering held at the Tenth Annual Meeting of the European Chapter of the Association for Computational Linguistics 2003 (EACL’03) 13-19 Budapest, Hungary
Abstract: Recently, reading comprehension tests for students and adult language learners have received increased attention within the NLP community as a means to develop and evaluate robust question answering (NLQA) methods. We present our ongoing work on automatically creating richly annotated corpus resources for NLQA and on comparing automatic methods for answering questions against this data set. Starting with the CBC4Kids corpus, we have added XML annotation layers for tokenization, lemmatization, stemming, semantic classes, POS tags and best-ranking syntactic parses to support future experiments with semantic answer retrieval and inference. Using this resource, we have calculated a baseline for word-overlap based answer retrieval [Hirschman et al. 1999] on the CBC4Kids data and found that the method performs slightly better than it does on the REMEDIA corpus. We hope that our richly annotated version of the CBC4Kids corpus will become a standard resource, especially as a controlled environment for evaluating inference-based techniques.
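The word-overlap baseline of Hirschman et al. (1999) mentioned above fits in a few lines; the stopword list and tokenization below are simplifying assumptions of this sketch:

    STOPWORDS = {"the", "a", "an", "of", "in", "on", "is", "was", "what",
                 "who", "when", "where", "why", "how", "did", "do", "to"}

    def content_words(text):
        return {w.strip(".,?!").lower() for w in text.split()} - STOPWORDS

    def best_sentence(question, sentences):
        # Answer retrieval: the sentence sharing the most content words wins.
        q = content_words(question)
        return max(sentences, key=lambda s: len(q & content_words(s)))

    story = ["The school opened in 1995.", "It teaches music to children."]
    print(best_sentence("When did the school open?", story))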
Jochen L Leidner, Gail Sinclair, Bonnie Webber (2003)  Grounding Spatial Named Entities for Information Extraction and Question Answering   In: Proceedings of the Workshop on the Analysis of Geographic References held at the Joint Conference for Human Language Technology and the Annual Meeting of the North American Chapter of the Association for Computational Linguistics 2003 (HLT/NAACL’03) 31-38 Edmonton, Alberta, Canada
Abstract: The task of named entity annotation of unseen text has recently been successfully automated with near-human performance. The full task comprises identifying the scope of each (continuous) text span, its class (e.g. place name), and its grounding (i.e., its denotation with respect to the world or a model). The latter aspect has so far been neglected. We show how geo-spatial named entities can be grounded using geographic coordinates, and how the results can be visualized using off-the-shelf software. We use this to compare a “textual surrogate” of a newspaper story with a “visual surrogate” based on geographic coordinates.
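A minimal sketch of the grounding step, assuming a toy in-memory gazetteer (the entries and coordinates are approximate and purely illustrative):

    GAZETTEER = {
        "edmonton": [("Edmonton, Alberta, Canada", 53.55, -113.49),
                     ("Edmonton, London, UK", 51.62, -0.07)],
        "edinburgh": [("Edinburgh, Scotland, UK", 55.95, -3.19)],
    }

    def ground(toponym):
        # All candidate (place, lat, lon) readings for a recognized name.
        return GAZETTEER.get(toponym.lower(), [])

    print(ground("Edmonton"))  # two readings: the ambiguity still needs resolving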
Jochen L Leidner (2003)  Current Issues in Software Engineering for Natural Language Processing   In: Proceedings of the Workshop on Software Engineering and Architecture of Language Technology Systems (SEALTS) held at the Joint Conference for Human Language Technology and the Annual Meeting of the North American Chapter of the Association for Computational Linguistics 2003 (HLT/NAACL’03) 45-50 Edmonton, Alberta, Canada
Abstract: In Natural Language Processing (NLP), research results from software engineering and software technology have often been neglected. This paper describes some factors that add complexity to the task of engineering reusable NLP systems, beyond that of conventional software systems. Current work in the area of design patterns and composition languages is described and argued to be relevant to natural language processing. The benefits of NLP componentware and barriers to reuse are outlined, and the dichotomies “system versus experiment” and “toolkit versus framework” are discussed. It is argued that, in order to live up to its name, language engineering must not neglect component quality and architectural evaluation when reporting new NLP research.
Jochen L Leidner, Tiphaine Dalmas, Bonnie Webber, Johan Bos, Claire Grover (2003)  Automatic Multi-Layer Corpus Annotation for Evaluating Question Answering Methods: CBC4Kids   In: Proceedings of the Third Workshop on Linguistically Interpreted Corpora (LINC-3) held at the Tenth Annual Meeting of the European Chapter of the Association for Computational Linguistics 2003 (EACL’03) 39-46 Budapest, Hungary
Abstract: Reading comprehension tests are receiving increased attention within the NLP community as a controlled test-bed for developing, evaluating and comparing robust question answering (NLQA) methods. To support this, we have enriched the MITRE CBC4Kids corpus with multiple XML annotation layers recording the output of various tokenizers, lemmatizers, a stemmer, a semantic tagger, POS taggers and syntactic parsers. Using this resource, we have built a baseline NLQA system for word-overlap based answer retrieval.
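To illustrate the multi-layer idea, a toy representation of parallel annotation layers over one sentence; this data model is an assumption for exposition, not the corpus's actual XML schema:

    sentence = {
        "tokens": ["The", "children", "read", "stories"],
        "lemmas": ["the", "child", "read", "story"],
        "pos":    ["DT", "NNS", "VBD", "NNS"],
    }

    def layer_view(annotation, *layers):
        # Zip selected layers token-by-token, e.g. as features for retrieval.
        return list(zip(*(annotation[layer] for layer in layers)))

    print(layer_view(sentence, "tokens", "pos"))
    # [('The', 'DT'), ('children', 'NNS'), ('read', 'VBD'), ('stories', 'NNS')]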

Other

2004
Jochen L Leidner (2004)  Text and Space: A First Reference Corpus for Toponym Resolution Evaluation   Invited Talk, Department of Information Studies Seminar Series, University of Sheffield, Sheffield, UK
Abstract: Traditionally, named entity tagging comprises the sub-tasks of identifying a text span and classifying it, but this view ignores the relationship between the entities and the world. Spatial and temporal entities ground events in space-time, and this relationship is vital for applications such as question answering and event tracking. There is much recent work regarding the temporal dimension (Setzer and Gaizauskas, 2002), but no detailed study of the spatial dimension. I propose to investigate how spatial named entities (which are often referentially ambiguous) can be automatically resolved with respect to an extensional coordinate model (toponym resolution). To this end, various information sources, including linguistic cue patterns, co-occurrence information, discourse/positional information, world knowledge (such as size and population), as well as minimality heuristics (Leidner et al., 2003), can be combined. However, before we can embark on a comparative study of algorithms proposed in the past, we argue it is necessary to curate a reference resource for evaluating these methods. In partial analogy to the Word Sense Disambiguation (WSD) task, such a resource comprises a static gazetteer (geographic thesaurus) snapshot and a textual corpus in which LOCATIONs are marked up as such and enriched with latitude/longitude information. I report on the curation of the first reference corpus for the toponym resolution task (Leidner, 2004). In this synchronic corpus of present-day written English news, toponyms are marked up with geographic latitude/longitude coordinates. I briefly describe the construction of the reference gazetteer, the XML-based markup scheme TRML, the new Web-based annotation tool TAME, and the ongoing annotation work. Then I give a sketch of the big picture of which this research is but a small part: curators of digital libraries want to enable their collections for geographic browsing, analysts (e.g. in competitive marketing and intelligence) need maps to visualize events in the news, and we all want Web search engines to be location-aware so as to be able to find the nearest pizza takeaway to Portobello Street.
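One information source named above, the minimality heuristic, can be sketched as follows: among all combinations of candidate referents for the toponyms in a text, prefer the combination with the smallest geographic spread. The distances below are naive Euclidean over latitude/longitude, purely for illustration:

    from itertools import product

    def spread(points):
        # Sum of pairwise distances between chosen (lat, lon) referents.
        return sum(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
                   for i, a in enumerate(points) for b in points[i + 1:])

    def resolve_minimally(candidates):
        # candidates: one list of (lat, lon) readings per toponym in the text.
        return min(product(*candidates), key=spread)

    # "Perth" and "Dundee" in one story: the Scottish readings win on minimality.
    perth = [(56.40, -3.43), (-31.95, 115.86)]   # Perth, Scotland vs. Australia
    dundee = [(56.46, -2.97)]
    print(resolve_minimally([perth, dundee]))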
Tiphaine Dalmas, Jochen L Leidner, Bonnie Webber, Claire Grover, Johan Bos (2004)  Annotating CBC4Kids: A Corpus for Reading Comprehension and Question Answering Evaluation   http://www.inf.ed.ac.uk/publications/report/0204.html
Abstract: Reading comprehension tests are receiving increased attention within the NLP community as a controlled test-bed for developing, evaluating and comparing robust question answering (NLQA) methods. To support this, we have enriched the MITRE CBC4Kids corpus with multiple XML annotation layers recording the output of various tokenizers, lemmatizers, a stemmer, a semantic tagger, POS taggers and syntactic parsers. To demonstrate its use, we have built a baseline NLQA system for word-overlap based answer retrieval, NLQA evaluation and corpus browsing.
2003
Jochen L Leidner (2003)  Grounding Spatial Named Entities and Generating Visual Document Surrogates   Poster presented at the 2003 Informatics Jamboree  
Abstract: Two parts of the named entity (NE) annotation task for text have recently been automated with near-human performance: identifying the scope of each text span and its class (e.g. place name). The third part, grounding the result with respect to its denotation in the world or a model, is the focus of this project. We ground (geo-)spatial named entities using geographic coordinates, suggest a minimality-based place name resolution algorithm, and show that the results can be visualized using off-the-shelf software. Using this, visual surrogates (geographic maps communicating the “spatial aboutness” of a story) can be generated from Web newspaper stories.