Links to some open-source support vector machine (SVM) libraries, with brief introductions
Reposted from: http://blog.csdn.net/carson2005/article/details/8586201
(1)LIBSVM: http://www.csie.ntu.edu.tw/~cjlin/libsvm/
LIBSVM is an integrated software package for support vector classification (C-SVC, nu-SVC), regression (epsilon-SVR, nu-SVR), and distribution estimation (one-class SVM). It supports multi-class classification.
Since version 2.8, it implements an SMO-type algorithm proposed in this paper:
R.-E. Fan, P.-H. Chen, and C.-J. Lin. Working set selection using second order information for training SVM. Journal of Machine Learning Research 6, 1889-1918, 2005. You can also find pseudo code there. (how to cite LIBSVM)
Our goal is to help users from other fields easily use SVM as a tool. LIBSVM provides a simple interface through which users can link it with their own programs. Main features of LIBSVM include:
- Different SVM formulations
- Efficient multi-class classification
- Cross validation for model selection
- Probability estimates
- Various kernels (including precomputed kernel matrix)
- Weighted SVM for unbalanced data
- Both C++ and Java sources
- GUI demonstrating SVM classification and regression
- Python, R, MATLAB, Perl, Ruby, Weka, Common LISP, CLISP, Haskell, LabVIEW, and PHP interfaces. C# .NET code and a CUDA extension are available. It is also included in some data mining environments: RapidMiner, PCP, and LIONsolver.
- Automatic model selection, which can generate a contour of cross-validation accuracy
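For readers coming from Python, a minimal sketch of these features via scikit-learn, whose SVC class is built on LIBSVM, might look like the following (the dataset here is synthetic, purely for illustration):

```python
# scikit-learn's SVC wraps LIBSVM, so C-SVC training, kernels, cross
# validation, and Platt-scaled probability estimates are all reachable
# from Python without touching the C API directly.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
clf = SVC(kernel="rbf", C=1.0, probability=True)   # C-SVC with an RBF kernel
scores = cross_val_score(clf, X, y, cv=5)          # cross validation for model selection
clf.fit(X, y)
proba = clf.predict_proba(X[:5])                   # probability estimates (like -b 1 in LIBSVM)
```

The command-line tools (svm-train, svm-predict) expose the same options; the Python route is simply the most convenient for quick experiments.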
LIBLINEAR (http://www.csie.ntu.edu.tw/~cjlin/liblinear/) is a linear classifier for data with millions of instances and features. It supports:
- L2-regularized classifiers: L2-loss linear SVM, L1-loss linear SVM, and logistic regression (LR)
- L1-regularized classifiers (after version 1.4): L2-loss linear SVM and logistic regression (LR)
- L2-regularized support vector regression (after version 1.9): L2-loss linear SVR and L1-loss linear SVR
Main features of LIBLINEAR include:
- Same data format as LIBSVM, our general-purpose SVM solver, and similar usage
- Multi-class classification: 1) one-vs-the-rest, 2) Crammer & Singer
- Cross validation for model selection
- Probability estimates (logistic regression only)
- Weights for unbalanced data
- MATLAB/Octave, Java, Python, Ruby interfaces
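As a sketch of the formulations above from Python: scikit-learn's LinearSVC and LogisticRegression (with solver="liblinear") call LIBLINEAR internally, so a quick way to try the L2-regularized linear SVM and the L1-regularized logistic regression is:

```python
# LinearSVC and LogisticRegression(solver="liblinear") are thin wrappers
# over LIBLINEAR; the data here is synthetic, for illustration only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
svm = LinearSVC(C=1.0).fit(X, y)                            # L2-regularized L2-loss linear SVM
lr = LogisticRegression(solver="liblinear", penalty="l1",
                        C=1.0).fit(X, y)                    # L1-regularized logistic regression
proba = lr.predict_proba(X[:3])                             # probability estimates (LR only)
```

Note that, as the feature list says, probability estimates are available for the logistic regression models but not for the linear SVMs.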
SVMlight (http://svmlight.joachims.org/) is an implementation of Vapnik's Support Vector Machine [Vapnik, 1995] for the problems of pattern recognition, regression, and learning a ranking function. The optimization algorithms used in SVMlight are described in [Joachims, 2002a] and [Joachims, 1999a]. The algorithm has scalable memory requirements and can handle problems with many thousands of support vectors efficiently.
The software also provides methods for assessing the generalization performance efficiently. It includes two efficient estimation methods for both error rate and precision/recall. XiAlpha-estimates [Joachims, 2002a, Joachims, 2000b] can be computed at essentially no computational expense, but they are conservatively biased. Leave-one-out testing provides almost unbiased estimates. SVMlight exploits the fact that the results of most leave-one-out tests (often more than 99%) are predetermined and need not be computed [Joachims, 2002a].
New in this version is an algorithm for learning ranking functions [Joachims, 2002c]. The goal is to learn a function from preference examples, so that it orders a new set of objects as accurately as possible. Such ranking problems naturally occur in applications like search engines and recommender systems.
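The reduction behind this kind of ranking learner can be sketched in a few lines: every preference "item i ranks above item j" becomes one binary training example on the difference vector x_i - x_j. Below, a plain least-squares fit stands in for the large-margin solver, purely to show the pairwise transformation; the relevance function is a made-up linear score:

```python
import numpy as np

# Pairwise reduction for learning to rank: each preference (i above j)
# yields the training vector x_i - x_j with target +1. A ranking SVM
# solves this pairwise problem with a large-margin objective; here a
# least-squares fit is used as a simple stand-in.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))               # 20 items, 2 features
rel = 2 * X[:, 0] + X[:, 1]                # hypothetical true relevance score

pairs = [X[i] - X[j]
         for i in range(len(X)) for j in range(len(X))
         if rel[i] > rel[j]]               # one row per preference
D = np.array(pairs)
t = np.ones(len(D))
w = np.linalg.lstsq(D, t, rcond=None)[0]   # stand-in for the SVM weight vector

# rank items by w.x and check how many preferences are respected
pairwise_acc = np.mean(D @ w > 0)
```

Ordering new objects then just means sorting them by the learned score w.x.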
Furthermore, this version includes an algorithm for training large-scale transductive SVMs. The algorithm proceeds by solving a sequence of optimization problems that lower-bound the solution, using a form of local search. A detailed description of the algorithm can be found in [Joachims, 1999c]. A similar transductive learner, which can be thought of as a transductive version of k-Nearest Neighbor, is the Spectral Graph Transducer.
SVMlight can also train SVMs with cost models (see [Morik et al., 1999]).
The code has been used on a large range of problems, including text classification [Joachims, 1999c] [Joachims, 1998a], image recognition tasks, bioinformatics, and medical applications. Many tasks have the property of sparse instance vectors. This implementation makes use of this property, which leads to a very compact and efficient representation.
SVMstruct is a Support Vector Machine (SVM) algorithm for predicting multivariate or structured outputs. It performs supervised learning by approximating a mapping
h: X --> Y using labeled training examples (x1,y1), ..., (xn,yn). Unlike regular SVMs, however, which consider only univariate predictions as in classification and regression, SVMstruct can predict complex objects y such as trees, sequences, or sets. Examples of problems with complex outputs are natural language parsing, sequence alignment in protein homology detection, and Markov models for part-of-speech tagging. The SVMstruct algorithm can also be used for linear-time training of binary and multi-class SVMs under the linear kernel [4].
The 1-slack cutting-plane algorithm implemented in SVMstruct V3.10 uses a new but equivalent formulation of the structural SVM quadratic program and is several orders of magnitude faster than prior methods. The algorithm is described in [5]. The n-slack algorithm of SVMstruct V2.50 is described in [1][2]. The SVMstruct implementation is based on the SVMlight quadratic optimizer.
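The prediction rule behind structured SVMs, h(x) = argmax over y of w . Phi(x, y) for a joint feature map Phi, can be sketched for the simplest structured case (plain multiclass, where y is just a label) as:

```python
import numpy as np

def joint_feature(x, y, n_classes):
    # Phi(x, y): copy x into the block belonging to class y -- the
    # standard multiclass joint feature map.
    phi = np.zeros(n_classes * x.size)
    phi[y * x.size:(y + 1) * x.size] = x
    return phi

def predict(w, x, n_classes):
    # h(x) = argmax over y of  w . Phi(x, y)
    scores = [w @ joint_feature(x, y, n_classes) for y in range(n_classes)]
    return int(np.argmax(scores))

# with this (hand-picked) w, class 0 scores x[0] and class 1 scores x[1]
w = np.array([1.0, 0.0, 0.0, 1.0])
label = predict(w, np.array([0.2, 0.9]), 2)
```

For genuinely structured outputs such as trees or sequences the argmax runs over an exponentially large set, so SVMstruct requires a problem-specific procedure (e.g. a parser or Viterbi decoding) to compute it; the snippet only illustrates the shape of the prediction rule.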
BSVM (http://www.csie.ntu.edu.tw/~cjlin/bsvm/) solves the following problems:
- One-vs.-one multi-class classification using a bound-constrained formulation
- Multi-class classification by solving a single optimization problem (again, a bounded formulation). See Section 3 of our comparison paper.
- Multi-class classification using Crammer and Singer's formulation. See Section 4 of our comparison paper.
- Regression using a bound-constrained formulation
- Multi-class classification using Crammer and Singer's formulation with squared hinge (L2) loss
The current implementation borrows the structure of LIBSVM, and similar options are adopted. For the bound-constrained formulations for classification and regression, BSVM uses a decomposition method. BSVM uses a simple working set selection which leads to faster convergence in difficult cases. A special implementation of the optimization solver TRON allows BSVM to stably identify bounded variables.
GPDT is a C++ software designed to train large-scale Support Vector Machines (SVMs) for binary classification in both scalar and distributed memory parallel environments. It uses a popular problem decomposition technique [1,2,4,6,7] to split the SVM quadratic programming (QP) problem into a sequence of smaller QP subproblems, each one being solved by a suitable gradient projection method (GPM). The currently implemented GPMs are the Generalized Variable Projection Method (GVPM) [3] and the Dai-Fletcher method (DFGPM) [5].
A few minor bugs fixed (see more details in the CHANGES file, also packaged with the source distribution).
[Last updated: February 7, 2007.]
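The gradient projection idea at the core of GPDT can be sketched on a simplified subproblem. The snippet below solves a box-constrained quadratic (the real SVM subproblem also carries a linear equality constraint, dropped here so the projection is a simple clip, and GPDT's GVPM and Dai-Fletcher methods use adaptive step lengths rather than the fixed step used here):

```python
import numpy as np

def box_projected_gradient(Q, e, C, steps=500):
    # Gradient projection for the box-constrained quadratic
    #   min 0.5 x'Qx - e'x   s.t.  0 <= x <= C.
    # Each iteration takes a gradient step and projects back onto
    # the box by clipping.
    L = np.linalg.eigvalsh(Q).max()          # Lipschitz constant of the gradient
    x = np.zeros_like(e)
    for _ in range(steps):
        grad = Q @ x - e                     # gradient of the objective
        x = np.clip(x - grad / L, 0.0, C)    # step, then project onto the box
    return x

# tiny sanity check: with Q = I the unconstrained optimum is x = e,
# so the box-constrained solution saturates at the upper bound C
x_opt = box_projected_gradient(np.eye(3), np.ones(3), C=0.5)
```

The decomposition technique then repeatedly extracts a small working set, solves such a subproblem with the gradient projection method, and updates the full iterate.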
LSVM is a fast technique for training support vector machines (SVMs), based on a simple iterative approach. For example, it has been used to classify a dataset with 2 million points and 10 features in only 34 minutes on a 400 MHz Pentium II. For more information, see the paper Lagrangian Support Vector Machines.
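The "simple iterative approach" is a short fixed-point iteration. The sketch below transcribes the MATLAB code accompanying the Lagrangian Support Vector Machines paper into NumPy; the toy dataset is made up for illustration:

```python
import numpy as np

def lsvm(A, d, nu=1.0, itmax=200, tol=1e-5):
    # Lagrangian SVM: iterate u <- Q^{-1}(e + ((Qu - e) - alpha*u)_+)
    # with Q = I/nu + HH' and H = D[A, -e], using the Sherman-Morrison-
    # Woodbury identity so each step is cheap when the feature count
    # n is small. Any 0 < alpha < 2/nu guarantees convergence.
    m, n = A.shape
    e = np.ones(m)
    H = d[:, None] * np.hstack([A, -np.ones((m, 1))])
    alpha = 1.9 / nu
    S = H @ np.linalg.inv(np.eye(n + 1) / nu + H.T @ H)
    u = nu * (e - S @ (H.T @ e))               # u = Q^{-1} e via SMW
    for _ in range(itmax):
        z = e + np.maximum(u / nu + H @ (H.T @ u) - alpha * u - e, 0.0)
        new_u = nu * (z - S @ (H.T @ z))       # Q^{-1} z via SMW
        done = np.linalg.norm(new_u - u) < tol
        u = new_u
        if done:
            break
    w = A.T @ (d * u)                          # separating plane: x.w = gamma
    gamma = -np.sum(d * u)
    return w, gamma

# toy usage: two separable clouds, classify by sign(x.w - gamma)
rng = np.random.default_rng(0)
A = np.vstack([rng.normal(2, 0.5, (50, 2)), rng.normal(-2, 0.5, (50, 2))])
d = np.array([1.0] * 50 + [-1.0] * 50)
w, gamma = lsvm(A, d, nu=10.0)
acc = np.mean(np.sign(A @ w - gamma) == d)
```

The only linear algebra beyond matrix-vector products is one (n+1) x (n+1) inverse, which is why the method scales to millions of points when n is small.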
SVMs are optimization-based tools for solving machine learning problems. For an introduction to SVMs, you may want to look at this tutorial.
The software is free for academic and research use. For commercial use, please contact Olvi Mangasarian or Dave Musicant.
Click here to download the software, which consists of MATLAB m-files.
If you publish any work based on LSVM, please cite both the software and the paper on which it is based. Here are recommended LaTeX bibliography entries:
@misc{lsvm,
author = "O.L. Mangasarian and D. R. Musicant",
title = {{LSVM Software:} Lagrangian Support Vector Machine Classification Software},
year = 2000,
institution = {Computer Sciences Department, University of Wisconsin, Madison},
note = { www.cs.wisc.edu/$\sim$musicant/lsvm/.}}
@techreport{mm:00,
author = "O. L. Mangasarian and David R. Musicant",
title = "Lagrangian Support Vector Machine Classification",
institution = "Data Mining Institute, Computer Sciences Department, University of Wisconsin",
month = {June},
year = 2000,
number = {00-06},
address = "Madison, Wisconsin",
note={ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/00-06.ps}}
For more information, contact:
Olvi L. Mangasarian
olvi@cs.wisc.edu
David R. Musicant
dmusican@carleton.edu
(14) ASVM: http://research.cs.wisc.edu/dmi/asvm/
ASVM is a fast technique for training linear support vector machines (SVMs), based on an active set approach which results in very fast running times. For example, it has been used to classify a dataset with 4 million points and 32 features in only 38 minutes on a 400 MHz Pentium II. For more information, see the paper Active Support Vector Machines.
SVMs are an optimization-based approach for solving machine learning problems. For an introduction to SVMs, you may want to look at this tutorial.
The software is free for academic use. For commercial use, please contact Dave Musicant.
Click here to download the software. The software consists of:
- A stand-alone executable to do training
- A stand-alone executable to do testing
- A mex file for use in the MATLAB environment
No additional software whatsoever is required to use these tools.
If you publish any work based on ASVM, please cite both the software and the paper on which it is based. Here are recommended LaTeX bibliography entries:
@misc{asvm,
author = "D. R. Musicant",
title = {{ASVM Software:} Active Set Support Vector Machine Classification Software},
year = 2000,
institution = {Computer Sciences Department, University of Wisconsin, Madison},
note = { www.cs.wisc.edu/$\sim$musicant/asvm/.}}
@techreport{mm:00,
author = "O. L. Mangasarian and David R. Musicant",
title = "Active Support Vector Machine Classification",
institution = "Data Mining Institute, Computer Sciences Department, University of Wisconsin",
month = {April},
year = 2000,
number = {00-04},
address = "Madison, Wisconsin",
note={ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/00-04.ps}}
For more information, contact:
David R. Musicant
dmusican@carleton.edu
(15) PSVM
Instead of a standard support vector machine that classifies points by assigning them to one of two disjoint half-spaces, PSVM classifies points by assigning them to the closest of two parallel planes. For more information, see the paper Proximal Support Vector Machines.
SVMs are an optimization-based approach for solving machine learning problems. For an introduction to SVMs, you may want to look at this tutorial.
The software is free for academic use. For commercial use, please contact Olvi Mangasarian.
Click here to download the software. The software consists of:
- A linear version of the PSVM
- A nonlinear version of the PSVM
The only software needed to run these programs is MATLAB (www.mathworks.com).
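A distinctive property of PSVM is that the linear version has a closed-form solution: training reduces to a single regularized least-squares solve, [w; gamma] = (I/nu + E'E)^{-1} E'De with E = [A, -e] and D = diag(d). A NumPy sketch of that formulation, on a made-up toy dataset:

```python
import numpy as np

def psvm_linear(A, d, nu=1.0):
    # Linear proximal SVM: one (n+1) x (n+1) linear solve gives the
    # plane parameters [w; gamma] = (I/nu + E'E)^{-1} E'De, where
    # E = [A, -e] and D = diag(d), so E'De simplifies to E'd.
    m, n = A.shape
    E = np.hstack([A, -np.ones((m, 1))])
    sol = np.linalg.solve(np.eye(n + 1) / nu + E.T @ E, E.T @ d)
    return sol[:-1], sol[-1]                   # w, gamma

# toy usage: two separable clouds, classify by sign(x.w - gamma)
rng = np.random.default_rng(0)
A = np.vstack([rng.normal(2, 0.5, (50, 2)), rng.normal(-2, 0.5, (50, 2))])
d = np.array([1.0] * 50 + [-1.0] * 50)
w, gamma = psvm_linear(A, d, nu=10.0)
acc = np.mean(np.sign(A @ w - gamma) == d)
```

The absence of any iteration is exactly what makes PSVM fast: the whole training step is one linear system whose size depends only on the feature count.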
(16) Linear SVM: http://linearsvm.com/
Linear SVM is an extremely fast machine learning (data mining) algorithm for solving multiclass classification problems on ultra-large data sets. It implements an original proprietary version of a cutting plane algorithm for designing a linear support vector machine. LinearSVM is a linearly scalable routine, meaning that it creates an SVM model in CPU time that scales linearly with the size of the training data set. The authors' comparisons with other known SVM models show superior performance when high accuracy is required, and they invite users to share LinearSVM's performance on their own data sets.