hosted by publicationslist.org

Nigel Collier

National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan
collier@nii.ac.jp
I earned my MSc and PhD in Computational Linguistics at UMIST (now merged with the University of Manchester) and a BSc in Computer Science at Leeds University in the UK. As a doctoral student and Toshiba Fellow I worked on machine translation and knowledge acquisition. I came to the National Institute of Informatics in 2000 from the University of Tokyo, where I helped found the GENIA project. Since then I have been active in biomedical text mining - helping experts to access the information they need in large text collections. In recent years I have developed the BioCaster project - an experimental health surveillance system that uses the Web to discover early warnings about infectious diseases. Research interests include: Natural Language Processing (NLP), Text Mining for Biomedicine and Health Surveillance, Ontology Engineering and Knowledge Acquisition.

Journal articles

2010
Hutchatai Chanlekha, Ai Kawazoe, Nigel Collier (2010)  A framework for enhancing spatial and temporal granularity in report-based health surveillance systems.   BMC Med Inform Decis Mak 10: 01  
Abstract: BACKGROUND: Current public concern over the spread of infectious diseases has underscored the importance of health surveillance systems for the speedy detection of disease outbreaks. Several international report-based monitoring systems have been developed, including GPHIN, Argus, HealthMap, and BioCaster. A vital feature of these report-based systems is the geo-temporal encoding of outbreak-related textual data. Until now, automated systems have tended to use an ad-hoc strategy for processing geo-temporal information, normally involving the detection of locations that match pre-determined criteria, and the use of document publication dates as a proxy for disease event dates. Although these strategies appear to be effective enough for reporting events at the country and province levels, they may be less effective at discovering geo-temporal information at more detailed levels of granularity. In order to improve the capabilities of current Web-based health surveillance systems, we introduce the design for a novel scheme called spatiotemporal zoning. METHOD: The proposed scheme classifies news articles into zones according to the spatiotemporal characteristics of their content. In order to study the reliability of the annotation scheme, we analyzed the inter-annotator agreement among a group of human annotators for over 1000 reported events. Qualitative and quantitative evaluations were made on the results, including kappa and percentage agreement. RESULTS: The reliability evaluation of our scheme yielded very promising inter-annotator agreement: more than a 0.9 kappa and a 0.9 percentage agreement for event type annotation and temporal attribute annotation, respectively, with a slight degradation for the spatial attribute. However, for events indicating an outbreak situation, the annotators usually agreed only at the lowest level of location granularity. CONCLUSIONS: We developed and evaluated a novel spatiotemporal zoning annotation scheme. The results of the scheme evaluation indicate that our annotated corpus and the proposed annotation scheme are reliable and could be effectively used for developing an automatic system. Given the current advances in natural language processing techniques, including the availability of language resources and tools, we believe that a reliable automatic spatiotemporal zoning system can be achieved. In the next stage of this work, we plan to develop an automatic zoning system and evaluate its usability within an operational health surveillance system.
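The reliability figures in the abstract above are reported as kappa and percentage agreement. As a reference point, Cohen's kappa for two annotators takes only a few lines of stdlib Python; this is a generic sketch of the standard statistic, not the authors' evaluation code:

```python
from collections import Counter

def cohen_kappa(ann_a, ann_b):
    """Cohen's kappa: chance-corrected agreement between two annotators."""
    n = len(ann_a)
    # Observed agreement = percentage agreement.
    observed = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    # Expected chance agreement: product of each label's marginal
    # proportions, summed over all labels either annotator used.
    freq_a, freq_b = Counter(ann_a), Counter(ann_b)
    expected = sum((freq_a[lab] / n) * (freq_b[lab] / n)
                   for lab in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)
```

The `observed` value alone is the percentage agreement; kappa additionally discounts the agreement expected by chance, which is why the two 0.9 figures in the abstract are reported separately.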
Hutchatai Chanlekha, Nigel Collier (2010)  A methodology to enhance spatial understanding of disease outbreak events reported in news articles.   Int J Med Inform 79: 4. 284-296 Apr  
Abstract: PURPOSE: The emergence and re-emergence of disease outbreaks of international concern in the last several years has raised the importance of health surveillance systems that exploit the open media for their timely and precise detection of events. However, one of the key barriers faced by current event-based health surveillance systems is in identifying fine-grained terms for an outbreak's geographical location. In this article, we present a method to tackle this problem by associating each reported event with the most specific spatial information available in a news report. This would be useful not only for health surveillance systems, but also for other event-centered processing systems. METHODS: To develop an automated spatial attribute annotation system, we first created a gold standard corpus for training a machine learning model. Since the qualitative analysis of the data suggested that the event class might have an impact on the spatial attribute annotation, we also developed an event classification system to incorporate event class information into the spatial attribute annotation model. To automatically recognize the spatial attribute of events, several approaches were explored, ranging from a simple heuristic technique to a more sophisticated approach based on a state-of-the-art Conditional Random Fields (CRFs) model. Different feature sets were incorporated into the model and compared. RESULTS: The evaluations were conducted on 100 outbreak news articles. Spatial attribute recognition performance was evaluated based on three metrics: precision, recall and the harmonic mean of precision and recall (F-score). Among the three strategies proposed in this article, the CRF model appeared to be the most promising for spatial attribute recognition, with a best performance of 85.5% F-score (86.3% precision and 84.7% recall). CONCLUSION: We presented a methodology for associating each event in media outbreak reports with its spatial attribute at the finest level of granularity. Our goal has been to provide a means for enhancing the spatial understanding of outbreak-related events. Evaluation studies showed promising results for automatic spatial attribute annotation. In the future, we plan to explore more features, such as semantic correlation between words, that may be useful for the spatial attribute annotation task.
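The simple heuristic baseline that the abstract above contrasts with the CRF model can be sketched directly: associate an event with the most specific location mention found near it. The granularity scale and the mention encoding below are assumptions for illustration, not the paper's actual representation:

```python
# Assumed granularity ranks: higher = more specific.
GRANULARITY = {"country": 0, "province": 1, "city": 2, "district": 3}

def finest_location(mentions):
    """Pick the most specific location mention for an event.

    `mentions` is a list of (name, level) pairs, e.g. produced by a
    gazetteer lookup over the sentence containing the event trigger.
    Returns None when no location was found.
    """
    if not mentions:
        return None
    return max(mentions, key=lambda m: GRANULARITY[m[1]])
```

A learned model such as a CRF improves on this by using contextual features to decide which of several candidate locations actually belongs to the event, rather than always taking the finest-grained one.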
2009
Son Doan, Ai Kawazoe, Mike Conway, Nigel Collier (2009)  Towards role-based filtering of disease outbreak reports.   J Biomed Inform 42: 5. 773-780 Oct  
Abstract: This paper explores the role of named entities (NEs) in the classification of disease outbreak reports. In the annotation schema of BioCaster, a text mining system for public health protection, important concepts that reflect information about infectious diseases were conceptually analyzed with a formal ontological methodology and classified into types and roles. Types are specified as NE classes and roles are integrated into NEs as attributes, such as whether a chemical is being used as a therapy for some infectious disease. We focus on the roles of NEs and explore different ways to extract, combine and use them as features in a text classifier. In addition, we investigate the combination of roles with semantic categories of disease-related nouns and verbs. Experimental results using naïve Bayes and Support Vector Machine (SVM) algorithms show that: (1) roles in combination with NEs improve performance in text classification, and (2) roles in combination with semantic categories of noun and verb features contribute substantially to the improvement of text classification. Both these results were statistically significant compared to the baseline "raw text" representation. We discuss in detail the effects of roles on each NE and on semantic categories of noun and verb features in terms of accuracy, precision/recall and F-score measures for the text classification task.
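The idea of folding NE roles into the feature representation can be sketched with a small multinomial naïve Bayes over bag-of-words plus type:role features. The feature encoding and the toy data here are hypothetical illustrations of the approach, not the article's implementation or corpus:

```python
import math
from collections import Counter, defaultdict

def featurize(tokens, entities):
    """Bag-of-words plus named-entity type:role features.

    `entities` is a list of (ne_type, role) pairs, e.g.
    ("CHEMICAL", "therapy") -- a hypothetical encoding of the
    role attributes described in the abstract.
    """
    return list(tokens) + [f"{ne}:{role}" for ne, role in entities]

class NaiveBayes:
    """Minimal multinomial naive Bayes with add-one smoothing."""

    def fit(self, docs, labels):
        self.label_counts = Counter(labels)
        self.feat_counts = defaultdict(Counter)
        self.vocab = set()
        for feats, y in zip(docs, labels):
            self.feat_counts[y].update(feats)
            self.vocab.update(feats)
        return self

    def predict(self, feats):
        def score(y):
            total = sum(self.feat_counts[y].values())
            s = math.log(self.label_counts[y])
            for f in feats:  # add-one smoothed log-likelihoods
                s += math.log((self.feat_counts[y][f] + 1) /
                              (total + len(self.vocab)))
            return s
        return max(self.label_counts, key=score)
```

A real experiment would compare this against an SVM and evaluate with precision/recall and F-score, as the article does; the point of the sketch is only how role attributes become extra features alongside the raw words.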
Mike Conway, Son Doan, Ai Kawazoe, Nigel Collier (2009)  Classifying disease outbreak reports using n-grams and semantic features.   Int J Med Inform 78: 12. e47-e58 Dec  
Abstract: INTRODUCTION: This paper explores the benefits of using n-grams and semantic features for the classification of disease outbreak reports, in the context of the BioCaster disease outbreak report text mining system. A novel feature of this work is the use of a general purpose semantic tagger - the USAS tagger - to generate features. BACKGROUND: We outline the application context for this work (the BioCaster epidemiological text mining system), before going on to describe the experimental data used in our classification experiments (the 1000-document BioCaster corpus). FEATURE SETS: Three broad groups of features are used in this work: Named Entity based features, n-gram features, and features derived from the USAS semantic tagger. METHODOLOGY: Three standard machine learning algorithms - Naïve Bayes, the Support Vector Machine algorithm, and the C4.5 decision tree algorithm - were used for classifying the experimental data (that is, the BioCaster corpus). Feature selection was performed using the χ² (chi-squared) feature selection algorithm. Standard text classification performance metrics - Accuracy, Precision, Recall, Specificity and F-score - are reported. RESULTS: A feature representation composed of unigrams, bigrams, trigrams and features derived from a semantic tagger, in conjunction with the Naïve Bayes algorithm and feature selection, yielded the highest classification accuracy (and F-score). This result was statistically significant compared to a baseline unigram representation and to previous work on the same task. However, it was feature selection rather than semantic tagging that contributed most to the improved performance. CONCLUSION: This study has shown that for the classification of disease outbreak reports, a combination of bag-of-words, n-grams and semantic features, in conjunction with feature selection, increases classification accuracy at a statistically significant level compared to previous work in this domain.
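The χ² feature-selection step highlighted in the abstract above scores each candidate feature by its dependence on the class label via a 2×2 contingency table. A minimal sketch of that standard statistic (not the authors' implementation):

```python
def chi_squared(n11, n10, n01, n00):
    """Chi-squared statistic for a 2x2 feature/class contingency table.

    n11: in-class documents containing the feature,
    n10: out-of-class documents containing the feature,
    n01/n00: the same two counts for documents lacking the feature.
    Higher scores indicate stronger feature/class dependence.
    """
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n01) * (n11 + n10) * (n10 + n00) * (n01 + n00)
    return num / den if den else 0.0
```

Features are then ranked by this score and only the top-scoring ones are kept, which is what the conclusion credits for most of the accuracy improvement.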
2008
John McCrae, Nigel Collier (2008)  Synonym set extraction from the biomedical literature by lexical pattern discovery.   BMC Bioinformatics 9: 03  
Abstract: BACKGROUND: Although there are a large number of thesauri for the biomedical domain, many of them lack coverage of terms and their variant forms. Automatic thesaurus construction based on patterns was first suggested by Hearst [1], but it is still not clear how to automatically construct such patterns for different semantic relations and domains. In particular it is not certain which patterns are useful for capturing synonymy. The assumption that resources such as parsers exist is also a limiting factor for many languages, so it is desirable to find patterns that do not use syntactical analysis. Finally, to give a more consistent and applicable result it is desirable to use these patterns to form synonym sets in a sound way. RESULTS: We present a method that automatically generates regular expression patterns by expanding seed patterns in a heuristic search and then develops a feature vector based on the occurrence of term pairs in each developed pattern. This allows for a binary classification of term pairs as synonymous or non-synonymous. We then model this result as a probability graph to find synonym sets, which is equivalent to the well-studied problem of finding an optimal set cover. We achieved 73.2% precision and 29.7% recall by our method, outperforming hand-made resources such as MeSH and Wikipedia. CONCLUSION: We conclude that automatic methods can play a practical role in developing new thesauri or expanding on existing ones, and this can be done with only a small amount of training data and no need for resources such as parsers. We also conclude that the accuracy can be improved by grouping into synonym sets.
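The final grouping step described above is cast as set cover, which is NP-hard but has a classic greedy approximation. A generic sketch of that greedy step follows (the probability-graph construction that produces the candidate sets is omitted here):

```python
def greedy_set_cover(universe, candidate_sets):
    """Greedy approximation to minimum set cover.

    Repeatedly picks the candidate set covering the most still-uncovered
    elements -- the classic ln(n)-approximation algorithm. In the
    synonym-set setting, elements would be terms and candidates would
    be scored groups of putatively synonymous terms.
    """
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(candidate_sets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            break  # remaining elements cannot be covered
        cover.append(best)
        uncovered -= set(best)
    return cover
```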
Ai Kawazoe, Hutchatai Chanlekha, Mika Shigematsu, Nigel Collier (2008)  Structuring an event ontology for disease outbreak detection.   BMC Bioinformatics 9 Suppl 3: 04  
Abstract: BACKGROUND: This paper describes the design of an event ontology being developed for application in the machine understanding of infectious disease-related events reported in natural language text. This event ontology is designed to support timely detection of disease outbreaks and rapid judgment of their alerting status by 1) bridging a gap between the layman's language used in disease outbreak reports and public health experts' deep knowledge, and 2) making multilingual information available. CONSTRUCTION AND CONTENT: This event ontology integrates a model of experts' knowledge for disease surveillance with sets of linguistic expressions which denote disease-related events, and with formal definitions of events. In this ontology, rather general event classes, which are suitable for application to language-oriented tasks such as the recognition of event expressions, are placed at the upper level, and more specific events of interest to experts are at the lower level. Each class is related to other classes which represent participants of events, and linked with multilingual synonym sets and axioms. CONCLUSIONS: We consider that the design of the event ontology and the methodology introduced in this paper are applicable to other domains which require the integration of natural language information and machine support for experts to assess it. The first version of the ontology, with about 40 concepts, will be available in March 2008.
Nigel Collier, Son Doan, Ai Kawazoe, Reiko Matsuda Goodwin, Mike Conway, Yoshio Tateno, Quoc-Hung Ngo, Dinh Dien, Asanee Kawtrakul, Koichi Takeuchi, Mika Shigematsu, Kiyosu Taniguchi (2008)  BioCaster: detecting public health rumors with a Web-based text mining system.   Bioinformatics 24: 24. 2940-2941 Dec  
Abstract: SUMMARY: BioCaster is an ontology-based text mining system for detecting and tracking the distribution of infectious disease outbreaks from linguistic signals on the Web. The system continuously analyzes documents reported from over 1700 RSS feeds, classifies them for topical relevance and plots them onto a Google map using geocoded information. The background knowledge for bridging the gap between layman's terms and formal coding systems is contained in the freely available BioCaster ontology, which includes information in eight languages focused on the epidemiological role of pathogens as well as geographical locations with their latitudes/longitudes. The system consists of four main stages: topic classification, named entity recognition (NER), disease/location detection and event recognition. Higher order event analysis is used to detect more precisely specified warning signals that can then be notified to registered users via email alerts. Evaluation of the system for topic recognition and entity identification is conducted on a gold standard corpus of annotated news articles. AVAILABILITY: The BioCaster map and ontology are freely available via a web portal at http://www.biocaster.org.
2006
Yoko Mizuta, Anna Korhonen, Tony Mullen, Nigel Collier (2006)  Zone analysis in biology articles as a basis for information extraction.   Int J Med Inform 75: 6. 468-487 Jun  
Abstract: In the field of biomedicine, an overwhelming amount of experimental data has become available as a result of the high throughput of research in this domain. The amount of results reported has now grown beyond the limits of what can be managed by manual means. This makes it increasingly difficult for the researchers in this area to keep up with the latest developments. Information extraction (IE) in the biological domain aims to provide an effective automatic means to dynamically manage the information contained in archived journal articles and abstract collections and thus help researchers in their work. However, while considerable advances have been made in certain areas of IE, pinpointing and organizing factual information (such as experimental results) remains a challenge. In this paper we propose tackling this task by incorporating into IE information about rhetorical zones, i.e. classification of spans of text in terms of argumentation and intellectual attribution. As the first step towards this goal, we introduce a scheme for annotating biological texts for rhetorical zones and provide a qualitative and quantitative analysis of the data annotated according to this scheme. We also discuss our preliminary research on automatic zone analysis, and its incorporation into our IE framework.
Nigel Collier, Adeline Nazarenko, Robert Baud, Patrick Ruch (2006)  Recent advances in natural language processing for biomedical applications.   Int J Med Inform 75: 6. 413-417 Jun  
Abstract: We survey a set of recent advances in natural language processing applied to biomedical applications, which were presented in Geneva, Switzerland, in 2004 at an international workshop. While text mining applied to molecular biology and the biomedical literature can report several interesting achievements, we observe that studies applied to clinical content are still rare. In general, we argue that clinical corpora, including electronic patient records, must be made available to fill the gap between bioinformatics and medical informatics.
2005
Yacov Kogan, Nigel Collier, Serguei Pakhomov, Michael Krauthammer (2005)  Towards semantic role labeling & IE in the medical literature.   AMIA Annu Symp Proc 410-414  
Abstract: INTRODUCTION: In this work, we introduce the concept of semantic role labeling to the medical domain. We report first results of porting and adapting an existing resource, Propbank, to the medical field. Propbank is an adjunct to the Penn Treebank that provides semantic annotation of predicates and the roles played by their arguments. The main aim of this work is to assess the applicability of the Propbank frame files to predicates typically encountered in the medical literature. METHODS: We analyzed a target corpus of 610,100 abstracts, which was selected by searching for publication type "case reports". From this target corpus, we randomly selected 10,000 sample abstracts to estimate the predicate distribution, and matched the predicates from this sample to the predicates in Propbank. RESULTS: Of the 1998 unique verbs in our sample, 76% were represented in Propbank. This included the 40 most frequent verbs, which represented 49% of all predicate instances in our sample and which matched the Propbank usage in a study of representative sentences. We propose extensions to Propbank to handle medical predicates that are not adequately covered. CONCLUSION: We believe that semantic role labeling using Propbank is a valid approach to capture predicate relations in the medical literature.
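The coverage figures in the abstract above (76% of unique verbs, 49% of predicate instances for the top 40) amount to type and token coverage against a frame inventory. A hedged sketch of that counting with toy data (the verbs and frame list below are invented for illustration):

```python
from collections import Counter

def predicate_coverage(corpus_verbs, frame_lemmas):
    """Share of unique verbs (type coverage) and of verb instances
    (token coverage) that a frame inventory such as Propbank's frame
    files already covers.
    """
    counts = Counter(corpus_verbs)
    covered = {v for v in counts if v in frame_lemmas}
    type_cov = len(covered) / len(counts)
    token_cov = sum(counts[v] for v in covered) / sum(counts.values())
    return type_cov, token_cov
```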
Koichi Takeuchi, Nigel Collier (2005)  Bio-medical entity extraction using support vector machines.   Artif Intell Med 33: 2. 125-137 Feb  
Abstract: OBJECTIVE: Support vector machines (SVMs) have achieved state-of-the-art performance in several classification tasks. In this article we apply them to the identification and semantic annotation of scientific and technical terminology in the domain of molecular biology. This illustrates the extensibility of the traditional named entity task to special domains with large-scale terminologies such as those in medicine and related disciplines. METHODS AND MATERIALS: The foundation for the model is a sample of text annotated by a domain expert according to an ontology of concepts, properties and relations. The model then learns to annotate unseen terms in new texts and contexts. The results can be used for a variety of intelligent language processing applications. We illustrate the SVM's capabilities using a sample of 100 journal abstracts taken from the {human, blood cell, transcription factor} domain of MEDLINE. RESULTS: Approximately 3400 terms are annotated and the model performs at about 74% F-score on cross-validation tests. A detailed analysis based on empirical evidence shows the contribution of various feature sets to performance. CONCLUSION: Our experiments indicate a relationship between feature window size and the amount of training data, and that a combination of surface words, orthographic features and head-noun features achieves the best performance among the feature sets tested.
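The relationship between feature window size and training data noted in the conclusion above is easiest to see in the feature extractor itself: each token is classified from a symmetric window of surrounding surface words, so wider windows mean sparser features for a fixed corpus. A minimal sketch of such an encoding (the padding symbol and feature naming are assumptions, not the article's implementation):

```python
def window_features(tokens, i, size=2):
    """Context-window features for token i: the surface words at each
    offset in [-size, +size], padded at sentence boundaries. Features
    like these would be fed to a per-token SVM classifier.
    """
    feats = []
    for offset in range(-size, size + 1):
        j = i + offset
        word = tokens[j] if 0 <= j < len(tokens) else "<PAD>"
        feats.append(f"w[{offset}]={word}")
    return feats
```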
2004
Tuangthong Wattarujeekrit, Parantu K Shah, Nigel Collier (2004)  PASBio: predicate-argument structures for event extraction in molecular biology.   BMC Bioinformatics 5: Oct  
Abstract: BACKGROUND: The exploitation of information extraction (IE), a technology aiming to provide instances of structured representations from free-form text, has been rapidly growing within the molecular biology (MB) research community to keep track of the latest results reported in the literature. IE systems have traditionally used shallow syntactic patterns for matching facts in sentences, but such approaches appear inadequate to achieve high accuracy in MB event extraction due to complex sentence structure. A consensus is emerging in the IE community on the necessity of exploiting deeper knowledge structures, such as the relations between a verb and its arguments shown by predicate-argument structure (PAS). PAS is of interest because such structures typically correspond to events of interest and their participating entities. For this to be realized within IE, a key knowledge component is the definition of PAS frames. PAS frames for non-technical domains such as newswire are already being constructed in several projects such as PropBank, VerbNet, and FrameNet. Knowledge from PAS should enable more accurate applications in several areas where sentence understanding is required, like machine translation and text summarization. In this article, we explore the need to adapt PAS for the MB domain and specify PAS frames to support IE, as well as outlining the major issues that require consideration in their construction. RESULTS: We introduce PASBio by extending a model based on PropBank to the MB domain. The hypothesis we explore is that PAS holds the key for understanding relationships describing the roles of genes and gene products in mediating their biological functions. We chose predicates describing gene expression, molecular interactions and signal transduction events with the aim of covering a number of research areas in MB. Analysis was performed on sentences containing a set of verbal predicates from MEDLINE and full text journals. Results confirm the necessity to analyze PAS specifically for the MB domain. CONCLUSIONS: At present PASBio contains the analyzed PAS of over 30 verbs, publicly available on the Internet for use in advanced applications. In the future we aim to expand the knowledge base to cover more verbs and the nominal form of each predicate.
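A PropBank-style PAS frame of the kind PASBio defines can be represented as a predicate with numbered argument roles. The frame below for *express* is purely illustrative; PASBio's published frames should be consulted for the real role definitions:

```python
# Illustrative PropBank-style frame for a molecular-biology predicate.
# The role glosses are hypothetical, not PASBio's published frame.
FRAME_EXPRESS = {
    "predicate": "express",
    "roles": {
        "Arg0": "entity causing expression (e.g. a cell or tissue)",
        "Arg1": "gene or gene product expressed",
    },
}

def label_arguments(frame, spans):
    """Attach role glosses to extracted argument spans, keeping only
    the roles the frame actually defines."""
    return {role: (span, frame["roles"][role])
            for role, span in spans.items() if role in frame["roles"]}
```

In an IE pipeline, a sentence like "T cells express IL-2" would yield the spans, and the frame supplies the event structure those spans fill.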
Nigel Collier, Koichi Takeuchi (2004)  Comparison of character-level and part of speech features for name recognition in biomedical texts.   J Biomed Inform 37: 6. 423-435 Dec  
Abstract: The immense volume of data which is now available from experiments in molecular biology has led to an explosion in reported results, most of which are available only in unstructured text format. For this reason there has been great interest in the task of text mining to aid in fact extraction, document screening, citation analysis, and linkage with large gene and gene-product databases. In particular there has been an intensive investigation into the named entity (NE) task as a core technology in all of these tasks, which has been driven by the availability of high volume training sets such as the GENIA v3.02 corpus. Despite such large training sets, accuracy for biology NE has proven to be consistently far below the high levels of performance in the news domain, where F-scores above 90 are commonly reported, which can be considered near to human performance. We argue that it is crucial that more rigorous analysis of the factors that contribute to the model's performance be applied to discover where the underlying limitations are and what our future research direction should be. Our investigation in this paper reports on variations of two widely used feature types, part of speech (POS) tags and character-level orthographic features, and makes a comparison of how these variations influence performance. We base our experiments on a proven state-of-the-art model, support vector machines, using a high-quality subset of 100 annotated MEDLINE abstracts. Experiments reveal that the best performing features are orthographic features, with an F-score of 72.6. Although the Brill tagger trained in-domain on the GENIA v3.02p POS corpus gives the best overall performance of any POS tagger, at an F-score of 68.6, this is still significantly below the orthographic features. In combination these two feature types appear to interfere with each other and degrade performance slightly to an F-score of 72.3.
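The character-level orthographic features compared in the article above map each token to surface-shape classes, which is why they transfer well to unseen gene and protein names. An illustrative subset of such classes (the actual feature inventory used in the experiments differs):

```python
import re

def orthographic_features(token):
    """Character-level orthographic feature classes for a token.

    An illustrative subset: capitalization pattern, digit and dash
    presence, and a letters-then-digits shape common in gene names.
    """
    feats = []
    if token.isupper():
        feats.append("ALLCAPS")
    elif token[:1].isupper():
        feats.append("INITCAP")
    if any(c.isdigit() for c in token):
        feats.append("HASDIGIT")
    if "-" in token:
        feats.append("HASDASH")
    if re.fullmatch(r"[A-Za-z]+\d+", token):
        feats.append("LETTERSDIGITS")
    return feats
```

Tokens like "IL-2" and "p53" receive distinctive shape signatures even if the exact strings never appeared in training data.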