Bayesian learning with local support vector machines for cancer classification with gene expression data

Elena Marchiori (1) and Michèle Sebag (2)

(1) Department of Computer Science, Vrije Universiteit Amsterdam, The Netherlands
(2) Laboratoire de Recherche en Informatique, CNRS-INRIA, Université Paris-Sud, Orsay, France

Abstract. This paper describes a novel method for improving the classification performance of support vector machines (SVM) with recursive feature elimination (SVM-RFE) when applied to cancer classification with gene expression data. The method employs pairs of support vectors of a linear SVM-RFE classifier to generate a sequence of new SVM classifiers, called local support classifiers. This sequence is used in two Bayesian learning techniques: as an ensemble of classifiers in Optimal Bayes, and as attributes in Naive Bayes. The resulting classifiers are applied to four publicly available gene expression datasets from leukemia, ovarian, lymphoma, and colon cancer studies, respectively. The results indicate that the proposed approach significantly improves the predictive performance of the baseline SVM classifier, as well as its stability and robustness, with satisfactory results on all datasets. In particular, perfect classification is achieved on the leukemia and ovarian cancer datasets.

1 Introduction

This paper deals with tumor classification with gene expression data. Microarray technology provides a tool for estimating the expression of thousands of genes simultaneously. To this end, DNA arrays are used, consisting of a large number of DNA molecules spotted in a systematic order on a solid substrate. Depending on the size of each DNA spot on the array, DNA arrays are called microarrays when the diameter of the DNA spot is less than 250 microns, and macroarrays when the diameter is bigger than 300 microns. DNA microarrays contain thousands of individual DNA sequences printed in a high-density array on a glass microscope slide using a robotic instrument. The relative abundance of these spotted DNA sequences in two DNA or RNA samples may be assessed by monitoring the differential hybridization of the two samples to the sequences on the array. For mRNA samples, the two samples are reverse-transcribed into cDNA, labeled using different fluorescent dyes (the red-fluorescent dye Cy5 and the green-fluorescent dye Cy3), and mixed. After these samples are hybridized with the arrayed DNA probes, the slides are imaged using a scanner that makes fluorescence measurements for each dye. The log ratio between the intensities of the two dyes is used as the gene expression value (cf. [11]):

  expression(gene) = \log_2( \mathrm{int}(Cy5) / \mathrm{int}(Cy3) ),

where int(Cy5) and int(Cy3) are the intensities of the two fluorescent dyes.

Four main machine learning tasks are used to analyze DNA microarray data: clustering, e.g. for identifying tumor subtypes; classification, e.g. for tumor diagnosis; feature selection, for potential tumor biomarker identification; and gene regulatory network modeling. This paper deals with classification. Many machine learning techniques have been applied to classify gene expression data, including Fisher linear discriminant analysis [10], k-nearest neighbour [18], decision trees, multi-layer perceptrons [17, 25], support vector machines (SVM) [6, 13, 15, 21], and boosting and ensemble methods [14, 7, 23, 3, 9]. A recent comparison of classification and feature selection algorithms applied to tumor classification can be found in [7, 23].
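As a small illustration of the expression measure above, the sketch below computes log2 ratios from raw Cy5/Cy3 intensity arrays; the variable names and the toy intensities are illustrative placeholders, not data from the original study.

```python
import numpy as np

# Toy Cy5 (red) and Cy3 (green) intensities for five spotted genes.
int_cy5 = np.array([1200.0, 300.0, 800.0, 50.0, 640.0])
int_cy3 = np.array([600.0, 300.0, 200.0, 400.0, 640.0])

# expression(gene) = log2(int(Cy5) / int(Cy3))
expression = np.log2(int_cy5 / int_cy3)
print(expression)  # positive values: higher abundance in the Cy5-labeled sample
```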
This paper introduces a method that improves the predictive performance of a linear SVM with Recursive Feature Elimination (SVM-RFE) [15] on four gene expression datasets. The method is motivated by previous work on aggregation of classifiers [4, 5], where it is shown that gains in accuracy can be obtained by aggregating classifiers built from perturbed versions of the training set, for instance using bootstrapping. Applications of classifier aggregation to microarray data are described e.g. in [10, 7, 3, 9].

In this paper a novel approach is proposed for generating a sequence of classifiers from the support vectors of a baseline linear SVM-RFE classifier. Each pair of support vectors of the same class is used to generate an element of the sequence, called a local support classifier (lsc). Such a classifier is obtained by training SVM-RFE on data consisting of the two selected support vectors and all the support vectors of the other class. The sequence of lsc's provides an approximate description of the data distribution by means of a set of linear decision functions, one for each region of the input space in a small neighbourhood of two support vectors having equal class label.

We propose to use this sequence of classifiers in Bayesian learning (cf. [20]). The first technique applies Naive Bayes to the transformed data, where an example is mapped into the binary vector of its classification values. The resulting classifier is called Naive Bayes Local Support Classifier (NB-LSC). The second technique applies Optimal Bayes to the sequence of lsc classifiers. The resulting classifier is called Optimal Bayes Local Support Classifier (OB-LSC).

The two classifiers are applied to four publicly available datasets for cancer classification with gene expression data. The results show a significant improvement in predictive performance of OB-LSC over the baseline linear SVM-RFE classifier, and a gain in stability. In particular, on the leukemia and ovarian cancer datasets perfect classification is obtained, and on the other datasets the performance is comparable to the best published results we are aware of.

The rest of the paper is organized as follows. The next two sections describe the baseline and new methods. Section 4 contains a short description of the data. Section 5 reports the results of the experiments and discusses them. Finally, the paper ends with concluding considerations on research issues to be tackled in future work.

2 Support Vector Machines

This section briefly describes SVM-RFE, the local support classifier construction procedure, and the integration of the resulting classifier sequence into Naive Bayes and Optimal Bayes classification.

2.1 SVM

In linear SVM binary classification [24, 8], patterns of two classes are linearly separated by means of a maximum margin hyperplane, that is, the hyperplane that maximizes the sum of the distances between the hyperplane and its closest points of each of the two classes (the margin). When the classes are not linearly separable, a variant of SVM, called soft-margin SVM, is used. This SVM variant penalizes misclassification errors and employs a parameter (the soft-margin constant C) to control the cost of misclassification. Training a linear SVM classifier amounts to solving the following constrained optimization problem:

  \min_{w, b, \xi} \; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{m} \xi_i
  \quad \text{s.t.} \quad y_i (w \cdot x_i + b) \ge 1 - \xi_i, \; \xi_i \ge 0,

with one constraint for each training example x_i.
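The soft-margin problem above is what standard SVM solvers optimize. As a minimal sketch (not the authors' code), the snippet below trains a linear soft-margin SVM with scikit-learn on a synthetic expression matrix; the data X, the labels y, and the value C = 10 are illustrative placeholders.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic "expression" data: 40 samples x 200 genes, two classes (+1 / -1).
X = rng.normal(size=(40, 200))
y = np.array([1] * 20 + [-1] * 20)
X[y == 1, :5] += 1.5              # make the first 5 genes informative

clf = SVC(kernel="linear", C=10)  # soft-margin constant C, as used in the paper
clf.fit(X, y)

w = clf.coef_.ravel()             # weight vector w of the separating hyperplane
support = clf.support_            # indices of the support vectors (alpha_i != 0)
print("number of support vectors:", len(support))
print("genes with largest |w_i|:", np.argsort(np.abs(w))[::-1][:5])
```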
Usually the dual form of the optimization problem is solved:

  \min_{\alpha} \; \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{m} \alpha_i \alpha_j y_i y_j (x_i \cdot x_j) - \sum_{i=1}^{m} \alpha_i
  \quad \text{such that} \quad 0 \le \alpha_i \le C, \quad \sum_{i=1}^{m} \alpha_i y_i = 0.

Solving this problem requires O(m^2) storage and O(m^3) time. The resulting decision function f(x) = w \cdot x + b has weight vector w = \sum_{k=1}^{m} \alpha_k y_k x_k. Examples x_i for which \alpha_i \ne 0 are called support vectors, since they uniquely define the maximum margin hyperplane. Maximizing the margin allows one to minimize bounds on the generalization error. Because the size of the margin does not depend on the data dimension, SVMs are robust with respect to data with high input dimension. However, SVMs are sensitive to the presence of (potential) outliers (cf. [15] for an illustrative example), due to the regularization term penalizing misclassification (which depends on the choice of C).

2.2 SVM-RFE

The weights w_i provide information about feature relevance, where a larger weight size implies higher feature relevance. In this paper feature x_i is scored by the absolute value of w_i. Other scoring functions based on the feature weights are possible, e.g. w_i^2, which is used in the original SVM-RFE algorithm [15]. SVM-RFE is an iterative algorithm. Each iteration consists of the following two steps. First, the feature weights, obtained by training a linear SVM on the training set, are used in a scoring function for ranking features as described above. Next, the feature with minimum rank is removed from the data. In this way, a chain of feature subsets of decreasing size is obtained. In the original SVM-RFE algorithm one feature is discarded at each iteration. Other choices are suggested in [15], where at each iteration features with rank lower than a user-given threshold are removed. In general, the threshold influences the results of SVM-RFE [15]. In this paper we use a simple instance of SVM-RFE where the user specifies the number of features to be selected, 70% of the actual number of features are initially removed, and then 50% at each further iteration. These values are chosen by cross-validation applied to the training set.
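A minimal sketch of the SVM-RFE variant just described (score features by |w_i|, drop 70% of the features in the first iteration and 50% in each later one, until a requested number remains). This is an illustrative re-implementation under those assumptions, not the authors' code; `svm_rfe`, `n_select` and the data are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def svm_rfe(X, y, n_select, C=10):
    """Return the indices of the n_select features kept by the simple SVM-RFE scheme."""
    kept = np.arange(X.shape[1])
    first = True
    while len(kept) > n_select:
        clf = SVC(kernel="linear", C=C).fit(X[:, kept], y)
        scores = np.abs(clf.coef_.ravel())               # feature score = |w_i|
        frac = 0.7 if first else 0.5                     # 70% removed first, then 50%
        n_keep = max(n_select, int(round(len(kept) * (1 - frac))))
        kept = kept[np.argsort(scores)[::-1][:n_keep]]   # keep the highest-scoring genes
        first = False
    return kept

# Usage (with X, y as in the previous sketch):
# selected = svm_rfe(X, y, n_select=50)
```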
3 Local Support Classifiers

We propose to describe the distribution of the two classes by means of a sequence of classifiers generated from pairs of support vectors of SVM-RFE. Each of these classifiers, called a local support classifier (lsc), is obtained using data generated from two support vectors of the same class, and all support vectors of the other class. In this way, each classifier uses only a local region near the two selected support vectors when separating the two classes. Each classifier generated from two (distinct) support vectors of the same class provides an approximate description of the distribution of the other class given the two selected support vectors.

Before describing the procedure for constructing lsc's, some notation used throughout the paper is introduced. D denotes the training set; c denotes the classifier obtained by training a linear SVM on D; S_p and S_n denote the sets of positive and negative support vectors of c, respectively; Pair_p and Pair_n denote the sets of pairs of distinct elements of S_p and S_n, respectively. The following procedure, called LSC, takes as input one pair (s, s') in Pair_p and outputs a linear SVM classifier C_{s,s'} by means of the following two steps.

1. Let X_p = {s, s'}. Assign the positive class label to these examples.
2. Let C_{s,s'} be the classifier obtained by training a linear SVM on the data X_p ∪ S_n.

An analogous procedure is applied to generate C_{s,s'} from pairs (s, s') in Pair_n.

When applied to all pairs of support vectors in Pair_p and Pair_n, LSC produces a sequence of lsc's. Such a sequence of classifiers induces a data transformation, called seq_D, which maps an example x to the sequence seq_D(x) of class values C_{s,s'}(x), with (s, s') in Pair_p ∪ Pair_n. The construction of the sequence of lsc's requires computation that grows quadratically with the number of support vectors. However, this is not a severe problem, since the number of examples, and hence of support vectors, is small for this type of data. Furthermore, LSC is applied to each pair of support vectors independently, hence it can be executed in parallel.

3.1 Naive Bayes and Optimal Bayes Classification

Naive Bayes (NB) is based on the principle of assigning to a new example the most probable target value, given the attribute values of the example. In order to apply NB directly to the original gene expression data, gene values would need to be discretized, since NB assumes discrete-valued attributes. Examples transformed using seq_D contain binary attributes, hence discretization is not necessary. Let x be a new example and suppose seq_D(x) = (x_1, ..., x_N). First, the prior probabilities p_y of the two target values are estimated by means of the frequencies of positive and negative examples occurring in the training set D, respectively. Next, for each attribute value x_i, the probability P(x_i | y) of x_i given target value y is estimated as the frequency with which x_i occurs as the value of the i-th attribute among the examples of D with class value y. Finally, the classification of x is computed as the y that maximizes the product p_y \prod_{i=1}^{N} P(x_i | y). The resulting classifier is denoted by NB-LSC.

The Optimal Bayes (OB) classifier is based on the principle of maximizing the probability that a new example is classified correctly, given the available data, the classifiers, and prior probabilities over the classifiers. OB maps example x to the class y that maximizes the weighted sum

  \sum_{C_{s,s'}} w_{s,s'} \, I(C_{s,s'}(x) = y),

where w_{s,s'} is the accuracy of C_{s,s'} over D, and I is the indicator function, which returns 1 if the test contained in its argument is satisfied and 0 otherwise. The resulting classifier is denoted by OB-LSC.
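The following sketch shows one way the lsc sequence and the OB-LSC vote could be realized: one linear SVM per pair of same-class support vectors (paired with all opposite-class support vectors), with predictions combined by accuracy weights. It is an illustrative reading of the procedure, not the authors' implementation; the function names `build_lscs` and `ob_lsc_predict` are invented for the example. NB-LSC would instead feed the binary vectors seq_D(x) to a Naive Bayes classifier.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def build_lscs(X, y, C=10):
    """Train the baseline linear SVM on (X, y), then one local support classifier
    per pair of same-class support vectors, each paired with all opposite-class
    support vectors. Returns a list of (classifier, accuracy-over-D) pairs."""
    base = SVC(kernel="linear", C=C).fit(X, y)
    sv, sv_y = X[base.support_], y[base.support_]
    lscs = []
    for label in (+1, -1):
        same, other = sv[sv_y == label], sv[sv_y != label]
        for i, j in combinations(range(len(same)), 2):
            Xp = np.vstack([same[[i, j]], other])
            yp = np.array([label] * 2 + [-label] * len(other))
            clf = SVC(kernel="linear", C=C).fit(Xp, yp)
            lscs.append((clf, clf.score(X, y)))   # weight w_{s,s'} = accuracy on D
    return lscs

def ob_lsc_predict(lscs, x):
    """Optimal-Bayes-style vote: return the class with the largest accuracy-weighted sum."""
    votes = {+1: 0.0, -1: 0.0}
    for clf, w in lscs:
        votes[int(clf.predict(x.reshape(1, -1))[0])] += w
    return max(votes, key=votes.get)

# Usage (X, y with labels in {+1, -1}, e.g. after SVM-RFE gene selection):
# lscs = build_lscs(X[:, selected], y)
# prediction = ob_lsc_predict(lscs, X[0, selected])
```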
4 Datasets

There are several microarray datasets from published cancer gene expression studies, including leukemia, colon cancer, lymphoma, breast cancer, NCI60, ovarian cancer, and prostate cancer datasets. Among them, four publicly available datasets are used in this paper. The first and third datasets contain samples from two variants of the same disease; the second and last datasets consist of tumor and normal samples of the same tissue. Table 1 shows the input dimension and class sizes of the datasets. The following short description of the datasets is partly based on [7].

Table 1. Datasets description

  Name      Tot  Positive  Negative  Genes
  Colon     62   40        22        2000
  Leukemia  72   25        47        7129
  Lymphoma  47   24        23        —
  Ovarian   53   30        23        1536

4.1 Leukemia

The Leukemia dataset consists of 72 samples: 25 samples of acute myeloid leukemia (AML) and 47 samples of acute lymphoblastic leukemia (ALL). The gene expression measurements are taken from 63 bone marrow samples and 9 peripheral blood samples. Gene expression levels in these 72 samples are measured using high-density oligonucleotide microarrays [2]. Each sample contains 7129 gene expression levels.

4.2 Colon

The Colon dataset consists of 62 samples of colon epithelial cells taken from colon-cancer patients. Each sample contains 2000 gene expression levels. Although the original data consist of 6000 gene expression levels, 4000 of the 6000 were removed based on the confidence in the measured expression levels. 40 of the 62 samples are colon cancer samples and the remaining ones are normal samples. The samples are taken from tumors and from normal healthy parts of the colons of the same patients, and are measured using high-density oligonucleotide arrays [1].

4.3 Lymphoma

B-cell diffuse large cell lymphoma (B-DLCL) is a heterogeneous group of tumors, based on significant variations in morphology, clinical presentation, and response to treatment. Gene expression profiling has revealed two distinct tumor subtypes of B-DLCL: germinal center B cell-like DLCL and activated B cell-like DLCL [19]. The Lymphoma dataset consists of 24 samples of germinal center B-like DLCL and 23 samples of activated B-like DLCL.

4.4 Ovarian

Ovarian tissue from 30 patients with cancer and 23 patients without cancer was analyzed for mRNA expression using glass arrays spotted with 1536 gene clones. Attribute i of patient j is the measure of the mRNA expression of the i-th gene in that tissue sample, relative to control tissue, with a common control employed for all experiments [22].

5 Numerical Experiments

The two classifiers NB-LSC and OB-LSC, described in Section 3.1, are applied to the four gene expression datasets and compared with the baseline SVM-RFE algorithm. In all experiments the same value of the SVM parameter C = 10 is used, while the number of selected genes was set to 30 for the lymphoma dataset and to 50 for all other datasets. These values are chosen by means of cross-validation applied to the training set. Because of the small size of the datasets, Leave-One-Out Cross-Validation (LOOCV) is used to estimate the predictive performance of the algorithms [12].
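A minimal sketch of how the LOOCV estimates of sensitivity, specificity and accuracy could be computed (illustrative only; `make_classifier` stands for any of the pipelines above, e.g. SVM-RFE followed by OB-LSC, and is an assumed placeholder, as are the labels +1/-1).

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut

def loocv_metrics(X, y, make_classifier):
    """Leave-one-out estimates of sensitivity, specificity and accuracy.
    y is expected to contain +1 (positive) and -1 (negative) labels."""
    preds = np.empty_like(y)
    for train, test in LeaveOneOut().split(X):
        clf = make_classifier()
        clf.fit(X[train], y[train])
        preds[test] = clf.predict(X[test])
    pos, neg = (y == 1), (y == -1)
    sensitivity = np.mean(preds[pos] == 1)
    specificity = np.mean(preds[neg] == -1)
    accuracy = np.mean(preds == y)
    return sensitivity, specificity, accuracy

# Usage, e.g. with the plain linear SVM baseline:
# from sklearn.svm import SVC
# sens, spec, acc = loocv_metrics(X, y, lambda: SVC(kernel="linear", C=10))
```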
Table 2. Results of LOOCV: average sensitivity, specificity and accuracy (with standard deviation between brackets).

  Method    Dataset    Sensitivity      Specificity      Accuracy
  SVM-RFE   Colon      0.90 (0.3038)    1.00 (0.00)      — (0.2477)
  NB-LSC    Colon      0.75 (0.4385)    1.00 (0.00)      — (0.3708)
  OB-LSC    Colon      0.90 (0.3038)    1.00 (0.00)      — (0.2477)
  SVM-RFE   Leukemia   0.96 (0.20)      1.00 (0.00)      — (0.1179)
  NB-LSC    Leukemia   1.00 (0.00)      1.00 (0.00)      1.00 (0.00)
  OB-LSC    Leukemia   1.00 (0.00)      1.00 (0.00)      1.00 (0.00)
  SVM-RFE   Ovarian    — (0.4661)       — (0.2041)       — (0.3921)
  NB-LSC    Ovarian    1.00 (0.00)      1.00 (0.00)      1.00 (0.00)
  OB-LSC    Ovarian    1.00 (0.00)      1.00 (0.00)      1.00 (0.00)
  SVM-RFE   Lymphoma   — (0.4707)       — (0.4826)       —
  NB-LSC    Lymphoma   1.00 (0.00)      — (0.4826)       — (0.3955)
  OB-LSC    Lymphoma   1.00 (0.00)      — (0.3360)       — (0.2556)

Table 2 reports the results of LOOCV. They indicate a statistically significant improvement of OB-LSC over the baseline SVM-RFE classifier, and a gain in stability, indicated by lower standard deviation values. In particular, on the ovarian and leukemia datasets both NB-LSC and OB-LSC achieve perfect classification. Moreover, while the performance of SVM on the Lymphoma dataset is rather poor (possibly due to the fact that we did not scale the data), OB-LSC obtains results competitive with the best known results (see Table 3).

Table 3. Comparison of results with the best average accuracy reported in previous papers on tumor classification. The type of classifiers considered in each paper is given between brackets. An entry "-" means that the corresponding dataset has not been considered. The methods compared, over the Colon, Leukemia, Lymphoma and Ovarian datasets, are: Furey et al. (SVM); Li et al. 00 (logistic regression); Li et al. 01 (KNN); Ben-Dor et al. (quadratic SVM, 1NN, AdaBoost); Dudoit et al. (1NN, LDA, BoostCART); Nguyen et al. (logistic discriminant, QDA); Cho et al. (ensemble SVM, KNN); Liu et al. 04 (ensemble NN); Dettling et al. 03 (boosting); and OB-LSC.

Table 3 reports the results of OB-LSC and the best result among those contained in nine papers on tumor classification and feature selection using different machine learning methods [7]. Note that the results reported in this table have been obtained using different cross-validation methods, mainly by repeatedly and randomly partitioning the data into training and test sets using 70% and 30% of the data, respectively. Because the resulting estimate of predictive performance may be more biased than that of LOOCV [12], those results give only an indication for comparing the methods. Only the results on the colon dataset from Liu et al. 04 and Dettling et al. 03 [9, 3] are obtained using LOOCV. The methods proposed in these latter papers use boosting and bagging, respectively. The results they obtain seem comparable to those of OB-LSC. Overall, the results indicate that OB-LSC is competitive with the most recent classification techniques for this task, including non-linear methods.

6 Conclusion

This paper introduced an approach that improves the predictive performance and stability of linear SVM for tumor classification with gene expression data on four gene expression datasets. We conclude with two considerations on research issues still to be addressed. Our approach is at this stage still a heuristic, and needs further experimental and theoretical analysis. In particular, we intend to analyze how performance is related to the number of support vectors chosen to generate lsc's. Moreover, we intend to investigate the use of this approach for feature selection, for instance whether the generated lsc's can be used for ensemble feature ranking [16].

References

1. U. Alon, N. Barkai, D. A. Notterman, et al. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. PNAS, 96.
2. A. Ben-Dor, L. Bruhn, N. Friedman, et al. Tissue classification with gene expression profiles. Journal of Computational Biology, 7.
3. B. Liu, Q. Cui, T. Jiang, et al. A combinational feature selection and ensemble neural network method for classification of gene expression data. BMC Bioinformatics, 5(136).
4. L. Breiman. Bagging predictors. Machine Learning, 24(2).
5. L. Breiman. Arcing classifiers. The Annals of Statistics, 26(3).
6. M. Brown, W. Grundy, D. Lin, N. Cristianini, C. Sugnet, T. Furey, M. Ares Jr., and D. Haussler. Knowledge-based analysis of microarray gene expression data by using support vector machines. Proc. Natl. Acad. Sci., 97.
7. S. Cho and H. Won. Machine l