For a logistic regression problem, the log-likelihood function is

    l(theta) = sum_{i=1}^{m} [ y^(i) * log h(x^(i)) + (1 - y^(i)) * log(1 - h(x^(i))) ],

where h(x) = 1 / (1 + exp(-theta^T x)). The goal is to maximize l(theta). We can iterate with gradient ascent, which works like gradient descent except that it climbs toward a maximum instead of descending to a minimum:

    theta_j := theta_j + alpha * sum_{i=1}^{m} (y^(i) - h(x^(i))) * x_j^(i)
#include <iostream>
#include <cmath>
using namespace std;

// Sigmoid hypothesis h(x) = 1 / (1 + exp(-theta^T x)); x[0] = 1 is the bias feature.
double h(double* x, double* q)
{
    double z = q[0] * x[0] + q[1] * x[1] + q[2] * x[2] + q[3] * x[3];
    return 1 / (1 + exp(-z));
}

void classifier(double* pre)
{
    // Six training samples; the first column is the constant bias feature.
    double x[6][4] = {{1, 47, 76, 24},
                      {1, 46, 77, 23},
                      {1, 48, 74, 22},
                      {1, 34, 76, 21},
                      {1, 35, 75, 24},
                      {1, 34, 77, 25}};
    double y[] = {1, 1, 1, 0, 0, 0};
    double theta[] = {1, 1, 1, 1};   // initial parameters
    int i, j, k;
    double l = 0;
    for (i = 0; i < 10000; i++)
    {
        // Gradient ascent update: theta_j += alpha * sum_k (y_k - h(x_k)) * x_kj
        for (j = 0; j < 4; j++)
        {
            double sum = 0;
            for (k = 0; k < 6; k++)
            {
                sum += (y[k] - h(x[k], theta)) * x[k][j];
            }
            theta[j] += 0.001 * sum;   // learning rate alpha = 0.001
        }
        // Log-likelihood l(theta); it should increase toward 0 as training converges.
        l = 0;
        for (j = 0; j < 6; j++)
        {
            l += y[j] * log(h(x[j], theta)) + (1 - y[j]) * log(1 - h(x[j], theta));
        }
        // cout << l << endl;   // uncomment to watch convergence
    }
    cout << i << endl;            // number of iterations
    cout << h(pre, theta) << endl;   // predicted probability for the test sample
    cout << l << endl;            // final log-likelihood
}

int main(void)
{
    double pre[] = {1, 48, 74, 22};   // test sample (identical to the third training sample)
    classifier(pre);
    return 0;
}
In the experiment, one of the training samples was used as the test input, giving h(x) = 0.999984, i.e. an extremely high probability that it belongs to class 1. For any other test data, y can likewise be judged to be 0 or 1 according to the magnitude of h(x).