Chang-Tsun Li
Department of Computer Science
University of Warwick
Coventry CV4 7AL
United Kingdom
ctli@dcs.warwick.ac.uk
Chang-Tsun Li received the B.E. degree in electrical engineering from Chung-Cheng Institute of Technology (CCIT), National Defense University, Taiwan, in 1987, the M.S. degree in computer science from the U.S. Naval Postgraduate School, USA, in 1992, and the Ph.D. degree in computer science from the University of Warwick, UK, in 1998. He was an associate professor in the Department of Electrical Engineering at CCIT from 1999 to 2002 and a visiting professor in the Department of Computer Science at the U.S. Naval Postgraduate School in the second half of 2001. He is currently an associate professor in the Department of Computer Science at the University of Warwick, UK, the Editor-in-Chief of the International Journal of Digital Crime and Forensics, an editor of the International Journal of Imaging (IJI), and an associate editor of the International Journal of Applied Systemic Studies (IJASS) and the International Journal of Computer Sciences and Engineering Systems (IJCSE). He has been involved in the organisation of a number of international conferences and workshops and has also served as a member of the international program committees of several international conferences. His research interests include computational forensics, multimedia security, bioinformatics, image processing, pattern recognition, computer vision and content-based image retrieval.

Journal articles

2009
C -H Wei, Y Li, W Y Chau, C -T Li (2009)  Trademark Image Retrieval Using Synthetic Features for Describing Global Shape and Interior Structure   Pattern Recognition  
Abstract: A trademark image retrieval (TIR) system is proposed in this work to deal with the vast number of trademark images in the trademark registration system. The proposed approach commences with the extraction of edges using the Canny edge detector, performs a shape normalization procedure, and then extracts the global and local features. The global features capture the gross essence of the shapes while the local features describe the interior details of the trademarks. A two-component feature matching strategy is used to measure the similarity between the query and database images. The performance of the proposed algorithm is compared against four other algorithms.
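As an illustration of the pipeline this abstract outlines (edge extraction, then global and local features, then two-component matching), the following sketch uses OpenCV's Canny detector, with Hu moments standing in for the global shape descriptor; the feature choices and the 0.7/0.3 weighting are assumptions for illustration, not the descriptors used in the paper.

import cv2
import numpy as np

# Illustrative sketch only: Hu moments of the Canny edge map stand in for
# the paper's global shape features; the local descriptor and weighting
# are assumptions, not the published method.
def global_shape_feature(image_gray):
    edges = cv2.Canny(image_gray, 100, 200)   # edge extraction, as in the abstract
    return cv2.HuMoments(cv2.moments(edges)).flatten()

def two_component_distance(g_query, g_db, l_query, l_db, w=0.7):
    # two-component matching: weighted sum of global and local distances
    return w * np.linalg.norm(g_query - g_db) + (1 - w) * np.linalg.norm(l_query - l_db)

# Database trademarks would then be ranked by ascending distance to the query.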
C -T Li, Y Li, C -H Wei (2009)  Protection of Digital Mammograms on PACSs Using Data Hiding Techniques   International Journal of Digital Crime and Forensics 1: 1. 75-88 January  
Abstract: Picture archiving and communication systems (PACS) are typical information systems, which may be undermined by unauthorized users who gain illegal access. This paper proposes a role-based access control framework comprising two main components, a content-based steganographic module and a reversible watermarking module, to protect mammograms on PACSs. Within this framework, the content-based steganographic module hides patients' textual information in mammograms without changing the important details of the pictorial contents, and verifies the authenticity and integrity of the mammograms. The reversible watermarking module, capable of masking the contents of mammograms, prevents unauthorized users from viewing those contents. The scheme is compatible with mammogram transmission and storage on PACSs. Our experiments have demonstrated that the content-based steganographic method and reversible watermarking technique can effectively protect mammograms on PACSs.
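A minimal sketch of the two-module idea follows, with plain LSB embedding and key-driven XOR masking as simplified stand-ins for the paper's content-based steganographic and reversible watermarking modules; the function names and parameters are illustrative assumptions.

import numpy as np

def embed_text_lsb(pixels: np.ndarray, text: str) -> np.ndarray:
    # Stand-in for the content-based steganographic module: hide the
    # patient's textual information in the least significant bits.
    bits = np.unpackbits(np.frombuffer(text.encode(), dtype=np.uint8))
    out = pixels.flatten().copy()
    out[:bits.size] = (out[:bits.size] & 0xFE) | bits
    return out.reshape(pixels.shape)

def mask_image(pixels: np.ndarray, key: int) -> np.ndarray:
    # Stand-in for the masking watermark: a key-driven XOR is trivially
    # reversible, so only users holding `key` can view the mammogram.
    rng = np.random.default_rng(key)
    mask = rng.integers(0, 256, pixels.shape, dtype=np.uint8)
    return pixels ^ mask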
2008
Y Yuan, C -T Li (2008)  A Bayesian Random Field Approach for Integrative Large-Scale Regulatory Network Analysis   Journal of Integrative Bioinformatics 5: 2. August  
Abstract: We present a Bayes-Random Fields framework which is capable of integrating unlimited data sources for discovering the relevant network architecture of large-scale networks. The random field potential function is designed to impose a cluster constraint, teamed with a full Bayesian approach for incorporating heterogeneous data sets. The probabilistic nature of our framework facilitates robust analysis in order to minimize the influence of noise inherent in the data on the inferred structure in a seamless and coherent manner. This is later demonstrated in its applications to both large-scale synthetic data sets and Saccharomyces cerevisiae data sets. The analytical and experimental results reveal the varied characteristics of different types of data and reflect their discriminative ability in terms of identifying direct gene interactions.
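The generic form of the integration the abstract describes can be written as follows; the notation is illustrative, and the specific cluster-constraint potential used in the paper is not reproduced here.

% Each data source D_k contributes its own likelihood term, while the
% random-field potential U(G) imposes the cluster constraint on the
% candidate network structure G (illustrative notation only):
\[
P(G \mid D_{1}, \dots, D_{K}) \;\propto\; \Bigl[\prod_{k=1}^{K} P(D_{k} \mid G)\Bigr]\exp\bigl(-U(G)\bigr)
\]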
Y Yang, X Sun, H Yang, C -T Li (2008)  A Removable Visible Image Watermarking Algorithm in DCT Domain   Journal of Electronic Imaging  
Abstract: In this work, a removable visible watermarking scheme, which operates in DCT domain, is proposed for combating copyright piracy. First, the original watermark image is divided into 16×16 blocks and the preprocessed watermark to be embedded is generated by performing element-by-element matrix multiplication on the DCT coefficient matrix of each block and a key-based matrix. The intention of generating the preprocessed watermark is to guarantee the infeasibility of the illegal removal of the embedded watermark by unauthorized users. Then, adaptive scaling and embedding factors are computed for each block of the host image and the preprocessed watermark according to the features of the corresponding blocks to better match the HVS (human visual system) characteristics. Finally, the significant DCT coefficients of the preprocessed watermark are adaptively added to those of the host image to yield the watermarked image. The watermarking system is robust against compression to some extent. The performance of the proposed method is verified and the test results show that the introduced scheme succeeds in preventing the embedded watermark from illegal removal. Moreover, experimental results demonstrate that legally recovered images can achieve superior visual effects, and PSNR values of these images are greater than 50 dB.
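The per-block embedding step the abstract describes can be sketched as below; the scaling and embedding factors shown are placeholders, whereas the paper derives them adaptively from HVS-motivated block features.

import numpy as np
from scipy.fft import dctn, idctn

def embed_block(host_block, wm_block, key_matrix):
    # Sketch of one 16x16 block: key the watermark's DCT coefficients by
    # element-wise multiplication, then combine with the host coefficients.
    H = dctn(host_block, norm='ortho')
    W = dctn(wm_block, norm='ortho') * key_matrix    # keyed preprocessing
    alpha = 0.9 + 0.1 * np.std(host_block) / 128.0   # placeholder scaling factor
    beta = 1.0 - alpha                               # placeholder embedding factor
    return idctn(alpha * H + beta * W, norm='ortho')

# Applied independently to each 16x16 block of the host and watermark images.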
Y Yuan, C -T Li, R Wilson (2008)  Partial mixture model for tight clustering of gene expression time-course.   BMC Bioinformatics 9: June  
Abstract: BACKGROUND: Tight clustering arose recently from a desire to obtain tighter and potentially more informative clusters in gene expression studies. Scattered genes with relatively loose correlations should be excluded from the clusters. However, in the literature there is little work dedicated to this area of research. On the other hand, there has been extensive use of maximum likelihood techniques for model parameter estimation. By contrast, the minimum distance estimator has been largely ignored. RESULTS: In this paper we show the inherent robustness of the minimum distance estimator that makes it a powerful tool for parameter estimation in model-based time-course clustering. To apply minimum distance estimation, a partial mixture model that can naturally incorporate replicate information and allow scattered genes is formulated. We provide experimental results of simulated data fitting, where the minimum distance estimator demonstrates superior performance to the maximum likelihood estimator. Both biological and statistical validations are conducted on a simulated dataset and two real gene expression datasets. Our proposed partial regression clustering algorithm scores top in Gene Ontology driven evaluation, in comparison with four other popular clustering algorithms. CONCLUSION: For the first time, the partial mixture model is successfully extended to time-course data analysis. The robustness of our partial regression clustering algorithm proves the suitability of combining the partial mixture model and the minimum distance estimator in this field. We show that tight clustering is not only capable of generating a more profound understanding of the dataset under study, in accordance with established biological knowledge, but also presents interesting new hypotheses during interpretation of the clustering results. In particular, we provide biological evidence that scattered genes can be relevant and are interesting subjects for study, in contrast to prevailing opinion.
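For reference, the L2 minimum distance estimator the abstract refers to minimises the integrated squared error between the model density and the unknown true density; expanding the square leaves a criterion computable from the sample alone. The partial mixture form below is a sketch in notation of my own choosing.

% L2 minimum distance criterion (no estimate of the true density needed):
\[
\hat{\theta} \;=\; \arg\min_{\theta}\left[\int f_{\theta}(x)^{2}\,dx \;-\; \frac{2}{n}\sum_{i=1}^{n} f_{\theta}(x_{i})\right]
\]
% A partial mixture fits a single component with a free weight \pi \le 1,
\[
f_{\theta}(x) \;=\; \pi\,\phi(x;\mu,\sigma^{2}),
\]
% so scattered genes outside the tight cluster are simply left unmodelled.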
C -T Li, Y Yuan, R Wilson (2008)  An Unsupervised Conditional Random Fields Approach for Clustering Gene Expression Time Series.   Bioinformatics Aug  
Abstract: MOTIVATION: There is a growing interest in extracting statistical patterns from gene expression time series data, in which a key challenge is the development of stable and accurate probabilistic models. Currently popular models, however, would be computationally prohibitive unless some independence assumptions are made to describe large scale data. We propose an unsupervised conditional random fields model to overcome this problem by progressively infusing information into the labelling process through a small variable voting pool. RESULTS: An unsupervised conditional random fields model (CRF) is proposed for efficient analysis of gene expression time series and is successfully applied to gene class discovery and class prediction. The proposed model treats each time series as a random field and assigns an optimal cluster label to each time series, so as to partition the time series into clusters without a priori knowledge about the number of clusters and the initial centroids. Another advantage of the proposed method is the relaxation of independence assumptions. CONTACT: ctli@dcs.warwick.ac.uk.
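The voting-pool labelling idea can be sketched as follows; the pool size, the correlation-based similarity, and the update schedule are assumptions for illustration, not the model's actual potentials.

import numpy as np

def cluster_time_series(X, pool_size=5, iters=50, seed=0):
    # Sketch: each series starts as its own cluster and repeatedly adopts
    # the label of the most similar member of a small random voting pool,
    # so neither a cluster count nor initial centroids are fixed in advance.
    rng = np.random.default_rng(seed)
    n = len(X)
    labels = np.arange(n)
    for _ in range(iters):
        for i in rng.permutation(n):
            pool = rng.choice(n, size=pool_size, replace=False)
            sims = [np.corrcoef(X[i], X[j])[0, 1] for j in pool]
            labels[i] = labels[pool[int(np.argmax(sims))]]
    return labels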
2007
C -T Li, H Si (2007)  Wavelet-based Fragile Watermarking Scheme for Image Authentication   Journal of Electronic Imaging 16: 1. 013009-1 ~ 013009-9 Jan-March  
Abstract: In this work, we propose a novel fragile watermarking scheme in wavelet transform domain, which is sensitive to all kinds of manipulations and has the ability to localize the tampered regions. To achieve high transparency (i.e., low embedding distortion) while providing protection to all coefficients, the embedder involves all the coefficients within a hierarchical neighborhood of each sparsely selected watermarkable coefficient during the watermark embedding process. The way the non-watermarkable coefficients are involved in the embedding process is content-dependent and non-deterministic, which allows the proposed scheme to put up resistance to the so-called vector quantization attack, Holliman-Memon attack, collage attack and transplantation attack.
2006
C -T Li, Y Yuan (2006)  Digital Watermarking Scheme Exploiting Non-deterministic Dependence for Image Authentication   Optical Engineering 45: 12. 127001-1 ~ 127001- December  
Abstract: Watermarking schemes for authentication purposes are characterized by three factors, namely security, resolution of tamper localization, and embedding distortion. Since the requirements of high security, high localization resolution, and low distortion cannot be fulfilled simultaneously, the relative importance of a particular factor is application-dependent. Moreover, block-wise dependence is recognized as a key requirement for fragile watermarking schemes to thwart the Holliman-Memon counterfeiting attack. However, it has also been observed that deterministic dependence is still susceptible to the transplantation attack or even simple cover-up attacks. This work proposes a fragile watermarking scheme for image authentication which exploits non-deterministic dependence and provides users with the freedom to make trade-offs among the three factors according to the needs of their applications.
2005
C -T Li (2005)  Reversible Watermarking Scheme with Image-independent Embedding Capacity   IEE Proceedings - Vision, Image, and Signal Processing 152: 6. 779 - 786  
Abstract: Permanent distortion is one of the main drawbacks of all irreversible watermarking schemes. Attempts to recover the original signal after it has passed the authentication process began only a few years ago. Some common problems, such as salt-and-pepper artefacts due to intensity wraparound and low embedding capacity, can now be resolved. However, we point out in this work that some significant problems remain unsolved. Firstly, the embedding capacity is signal-dependent, i.e., capacity varies significantly depending on the nature of the host signal. The direct impact of this is compromised security for signals with low capacity; some signals may even be non-embeddable. Secondly, the well-recognized problem of block-wise dependence, which opens a security gap for the vector quantisation attack and transplantation attack, is seriously tackled in irreversible watermarking schemes but has not been addressed by researchers of reversible schemes. It is our intention in this work to propose a reversible watermarking scheme with near-constant signal-independent embedding capacity and immunity to the vector quantisation attack and transplantation attack.
2004
C -T Li (2004)  Digital Fragile Watermarking Scheme for Authentication of JPEG Images   IEE Proceedings - Vision, Image, and Signal Processing 151: 6. 460 - 466 December  
Abstract: It is a common practice in transform-domain fragile watermarking schemes for authentication purposes to watermark some selected transform coefficients so as to minimise embedding distortion. However, we point out in this work that leaving most of the coefficients unmarked results in a wide-open security gap for attacks to be mounted on them. A fragile watermarking scheme is proposed to implicitly watermark all the coefficients by registering the zero-valued coefficients with a key-generated binary sequence to create the watermark and involving the unwatermarkable coefficients during the embedding process of the embeddable ones. Non-deterministic dependence is established by involving some of the unwatermarkable coefficients selected according to the watermark from a 9-neighbourhood system in order to thwart different attacks such as cover-up, vector quantisation, and transplantation. Neither hashing nor cryptography is needed in establishing the non-deterministic dependence.
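A sketch of the registration idea (binding the zero-valued coefficients to a key-generated binary sequence) is given below; the embedding rule for the non-zero coefficients is omitted, and the particular construction shown is an illustrative assumption.

import numpy as np

def build_watermark(dct_coeffs: np.ndarray, key: int) -> np.ndarray:
    # Sketch: register the zero/non-zero pattern of the coefficients
    # against a key-generated binary sequence, so tampering with the
    # "unmarked" zero coefficients still disturbs the watermark.
    rng = np.random.default_rng(key)
    key_bits = rng.integers(0, 2, size=dct_coeffs.size).reshape(dct_coeffs.shape)
    zero_mask = (dct_coeffs == 0)
    return np.where(zero_mask, key_bits, 1 - key_bits)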
2003
C -T Li, F M Yang (2003)  One-dimensional Neighbourhood Forming Strategy for Fragile Watermarking   Journal of Electronic Imaging 12: 2. 284-291 April  
Abstract: It is recognized that block-wise dependence is a key requirement for fragile watermarking schemes to thwart vector quantization attack. It has also been proved that dependence with deterministic or limited context is susceptible to transplantation attack or even simple cover-up attacks. In this work, we point out that traditional nondeterministic block-wise dependence is still vulnerable to cropping attacks and propose a 1-D neighborhood forming strategy to tackle the problem. The proposed strategy is then implemented in our new fragile watermarking scheme, which does not resort to cryptography and requires no a priori knowledge about the image for verification. To watermark the underlying image, the gray scale of each pixel is adjusted by an imperceptible quantity according to the consistency between a key-dependent binary watermark bit and the parity of a bit stream converted from the gray scales of a secret neighborhood formed with the 1-D strategy. The watermark extraction process works exactly the same as the embedding process, and produces a difference map as output, indicating the authenticity and integrity of the image.
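The embedding rule can be sketched as follows, assuming for illustration that the bit stream is taken from the least significant bits of the secret 1-D neighbourhood; the actual neighbourhood formation and bit conversion in the paper are key-dependent.

import numpy as np

def embed_pixel(img, idx, neighbourhood, wm_bit):
    # Sketch: the neighbours' least significant bits stand in for the
    # converted bit stream; nudge this pixel's grey level so that its LSB,
    # XORed with the neighbourhood parity, matches the watermark bit.
    parity = int(np.bitwise_and(img.flat[neighbourhood], 1).sum() & 1)
    if (img.flat[idx] & 1) ^ parity != wm_bit:
        img.flat[idx] += 1 if img.flat[idx] < 255 else -1  # imperceptible adjustment
    return img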
C -T Li, R Chiao (2003)  Multiresolution Genetic Clustering Algorithm for Texture Segmentation   Image and Vision Computing 21: 11. 955-966 October  
Abstract: This work approaches the texture segmentation problem by incorporating a genetic algorithm and the K-means clustering method within a multiresolution structure. As the algorithm descends the multiresolution structure, the coarse segmentation results are propagated down to the lower levels so as to reduce the inherent class-position uncertainty and to improve the segmentation accuracy. The procedure is as follows. In the first step, a quad-tree structure of multiple resolutions is constructed. Sampling windows of different sizes are utilized to partition the underlying image into blocks at different resolution levels and texture features are extracted from each block. Based on the texture features, a hybrid genetic algorithm is employed to perform the segmentation. While the selection and mutation operators of the traditional genetic algorithm are adopted in this work, the crossover operator is replaced with the K-means clustering method. In the final step, the boundaries and the segmentation result of the current resolution level are propagated down to the next level to act as contextual constraints and the initial configuration of the next level, respectively.
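One generation of the hybrid algorithm might look like the sketch below, in which a single K-means reassignment pass takes the place of the crossover operator as the abstract describes; the fitness function, selection rule, and mutation rate are illustrative assumptions.

import numpy as np

def kmeans_step(features, labels, k):
    # One K-means pass used in place of crossover; assumes every one of
    # the k classes currently has at least one member.
    centroids = np.array([features[labels == c].mean(axis=0) for c in range(k)])
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def next_generation(population, features, k, fitness, rng, p_mut=0.01):
    # Fitness-proportional selection (assumes positive fitness scores),
    # K-means in place of crossover, then random-label mutation.
    scores = np.array([fitness(ind, features) for ind in population])
    probs = scores / scores.sum()
    selected = [population[i] for i in rng.choice(len(population), len(population), p=probs)]
    children = [kmeans_step(features, ind, k) for ind in selected]
    for child in children:
        flips = rng.random(child.size) < p_mut
        child[flips] = rng.integers(0, k, flips.sum())
    return children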
C -T Li (2003)  Multiresolution Image Segmentation Integrating Gibbs Sampler and Region Merging Algorithm   Signal Processing 83: 1. 67-78 January  
Abstract: This work approaches the texture segmentation problem by incorporating Gibbs sampler (i.e., the combination of Markov random fields and simulated annealing) and a region-merging process within a multiresolution structure with “high class resolution and low boundary resolution” at high levels and “low class resolution and high boundary resolution” at lower ones. As the algorithm descends the multiresolution structure, the coarse segmentation results are propagated down to the lower levels so as to reduce the inherent class-boundary uncertainty and to improve the segmentation accuracy. The computational complexity and frequent occurrences of over-segmentation of Gibbs sampler are addressed and the computationally and functionally effective region-merging process is included to allow Gibbs sampler to start its annealing schedule at relatively low pseudo-temperature and to guide the search trajectory away from local minima associated with over-segmented configurations.
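A single annealed Gibbs update for the MRF labelling can be sketched as follows; the data term, the Potts smoothness weight, and the temperature schedule are illustrative, and the region-merging process is not shown.

import numpy as np

def gibbs_sweep(labels, data_cost, beta, T, rng):
    # One sweep over interior sites: resample each site's label from its
    # conditional given the 4-neighbours, at pseudo-temperature T.
    # data_cost[y, x, c] is the (placeholder) cost of label c at site (y, x).
    h, w = labels.shape
    k = data_cost.shape[2]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [labels[y-1, x], labels[y+1, x], labels[y, x-1], labels[y, x+1]]
            energy = np.array([data_cost[y, x, c]
                               + beta * sum(c != n for n in neigh) for c in range(k)])
            p = np.exp(-energy / T)
            labels[y, x] = rng.choice(k, p=p / p.sum())
    return labels

# e.g. rng = np.random.default_rng(0); lower T gradually across sweeps.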
C -T Li, D C Lou, J L Liu (2003)  Image Integrity and Authenticity Verification Via Content-based Watermarks and a Public Key Cryptosystem   Journal of Chinese Institute of Electrical Engineering 10: 1. 99-106 February  
Abstract: A technique using the inherent feature map of the underlying image as the watermark is proposed in this work. First, on the transmitting side, a binary feature map is extracted as the watermark and partitioned into blocks. Secondly, to create a watermarked image, neighboring feature map/watermark blocks are blended and encrypted for insertion. On the receiving side, the feature map of the received image is extracted again and compared against the recovered watermark to verify integrity and authenticity. In addition to the capability of detecting geometric transformation, removal of original objects and addition of foreign objects, the proposed scheme is also capable of localizing tampering and detecting cropping without a priori knowledge about the image. This work is applicable in the areas of military imagery transmission, imaging of micro evidence at crime scenes, and medical image archiving.
R Wilson, C -T Li (2003)  A Class of Discrete Multiresolution Random Fields and its Application to Image Segmentation   IEEE Trans. on Pattern Analysis and Machine Intelligence 25: 1. 42-56 January  
Abstract: In this paper, a class of random field models, defined on a multiresolution array, is used in the segmentation of gray level and textured images. The novel feature of one form of the model is that it is able to segment images containing unknown numbers of regions, where there may be significant variation of properties within each region. The estimation algorithms used are stochastic but, because of the multiresolution representation, are computationally fast, requiring only a few iterations per pixel to converge to accurate results, with error rates of 1-2 percent across a range of image structures and textures. The addition of a simple boundary process gives accurate results even at low resolutions, and consequently at very low computational cost.
2001
C -T Li (2001)  An Approach to Reducing the Labeling Cost of Markov Random Fields within an Infinite Label Space   Signal Processing 81: 3. 609-620 March  
Abstract: This work proposes a novel idea, called SOIL, for reducing the computational complexity of the maximum a posteriori optimization problem using Markov random fields (MRFs). The local characteristics of MRFs are employed so that the search in a virtually infinite label space is confined to a small finite space. Globally, the number of labels allowed is as many as the number of image sites, while locally the optimal label is sampled from a space consisting of the labels assigned to the four neighbors plus a random one. Neither prior knowledge about the number of classes nor an estimation phase for the class number is required. The proposed method is applied to the problem of texture segmentation and the result is compared with those obtained from conventional methods.
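The label-space reduction can be sketched directly, as below; the energy function is a placeholder for the MRF posterior energy, and the update assumes an interior site.

import numpy as np

def soil_update(labels, y, x, energy, rng):
    # Sketch of the SOIL candidate set: the labels of the four neighbours
    # plus one label drawn at random from the global pool (as many labels
    # as image sites), instead of searching an unbounded label space.
    candidates = {labels[y-1, x], labels[y+1, x], labels[y, x-1], labels[y, x+1]}
    candidates.add(int(rng.integers(labels.size)))
    labels[y, x] = min(candidates, key=lambda c: energy(y, x, c))
    return labels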
Y Liu, X Sun, Y Liu, C -T Li  MIMIC-PPT: Mimicking-based Steganography for Microsoft PowerPoint Document   Information Technology Journal  
Abstract: Communications via Microsoft PowerPoint (PPT for short) documents are commonplace, so it is crucial to take advantage of PPT documents for information security and digital forensics. In this paper, we propose a new method of text steganography, called MIMIC-PPT, which combines a text mimicking technique with characteristics of PPT documents. Firstly, a dictionary and some sentence templates are automatically created by parsing the body text of a PPT document. Then, cryptographic information is converted into innocuous sentences by using the dictionary and the sentence templates. Finally, the sentences are written into the note pages of the PPT document. With MIMIC-PPT, there is no need for the communication parties to share the dictionary and sentence templates, while the efficiency and security are greatly improved.
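The mimicking step can be sketched as below, where secret bits index a dictionary to fill sentence templates; the template format, the 4-bit chunking, and the dictionary handling are simplified assumptions, and writing into the note pages is omitted.

def bits_to_sentences(bits, templates, dictionary):
    # Sketch: consume the secret bit string 4 bits at a time; each chunk
    # selects a dictionary word to fill a slot in the parsed templates,
    # e.g. template "The {} report covers {} topics."
    sentences, pos = [], 0
    for template in templates:
        words = []
        for _ in range(template.count('{}')):
            chunk = bits[pos:pos + 4] or '0000'
            words.append(dictionary[int(chunk, 2) % len(dictionary)])
            pos += 4
        sentences.append(template.format(*words))
    return sentences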