This article shows how to implement the k-nearest-neighbours (KNN) machine-learning algorithm with pandas. The content is concise and clearly organized; I hope it helps clear up any confusion, so let's study it together.
The details are as follows.
# coding: gbk
import pandas as pd
import numpy as np


def getdata(path):
    """Load the data set, split features from labels, and min-max normalize the features."""
    data = pd.read_csv(path, header=None, sep='\t')
    character = data.iloc[:, :-1]          # every column except the last holds a feature
    label = data.iloc[:, -1]               # the last column holds the label
    chara_max = character.max()
    chara_min = character.min()
    chara_range = chara_max - chara_min
    normal_chara = (character - chara_min) / chara_range   # min-max scale each feature to [0, 1]
    return normal_chara, label             # normalized features and labels


def knn(inX, normal_chara, label, k):
    """Classify the query vector inX with the k-nearest-neighbours rule."""
    data_sub = normal_chara - inX                   # difference between every sample and inX
    data_square = data_sub.applymap(np.square)      # element-wise square
    data_sum = data_square.sum(axis=1)              # sum over the feature axis
    data_sqrt = data_sum.map(np.sqrt)               # Euclidean distance to every sample
    dis_sort = data_sqrt.argsort()                  # sample positions ordered by distance
    k_label = label.iloc[dis_sort[:k]]              # labels of the k nearest samples
    label_sort = k_label.value_counts()             # vote: count each label among the k
    res_label = label_sort.index[0]                 # the most frequent label wins
    return res_label                                # KNN classification result
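The pandas version above has no driver code, so here is a minimal usage sketch. The file name sample.txt, the three-feature query point and the choice of k=3 are assumptions for illustration; the file is expected to be tab-separated with the label in the last column, which is what getdata() parses.

# Minimal usage sketch (file name, query values and k are assumed, not from the original).
# sample.txt: tab-separated, feature columns first, label in the last column.
normal_chara, label = getdata('sample.txt')

# The query must lie on the same normalized [0, 1] scale as the training features;
# here it is assumed to be given already normalized, with one value per feature column.
inX = pd.Series([0.5, 0.3, 0.8])

print(knn(inX, normal_chara, label, k=3))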
Machine learning: a basic KNN implementation with NumPy
# _*_ coding: utf-8 _*_
import numpy as np
import operator


def get_data(dataset):
    """Split a raw string array into float features and string labels."""
    x = dataset[:, :-1].astype(float)   # every column except the last holds a feature
    y = dataset[:, -1]                  # the last column holds the label
    return x, y


def knnclassifer(dataset, predict, k=3):
    """Return the majority label among the k training samples nearest to `predict`."""
    x, y = get_data(dataset)
    distance = np.sum((predict - x) ** 2, axis=1) ** 0.5   # Euclidean distance to every sample
    sorted_index = np.argsort(distance)                    # sample positions ordered by distance, e.g. [2 1 0 3 4]
    count_label = {}
    for i in range(k):
        label = y[sorted_index[i]]                          # label of the i-th nearest sample
        count_label[label] = count_label.get(label, 0) + 1  # vote for that label
    # sort the (label, count) pairs by count, largest count first
    sorted_count = sorted(count_label.items(), key=operator.itemgetter(1), reverse=True)
    return sorted_count[0][0]


if __name__ == '__main__':
    dataset = np.loadtxt("dataset.txt", dtype=str, delimiter=",")
    predict = [2, 2]
    label = knnclassifer(dataset, predict, 3)
    print(label)
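The contents of dataset.txt are not shown here, so below is a self-contained check that builds an equivalent array in memory; the sample values and labels are made up for illustration and mimic what np.loadtxt returns with dtype=str and delimiter=",".

# Made-up data set: two numeric feature columns followed by a string label per row.
sample = np.array([
    ["1.0", "1.1", "A"],
    ["1.0", "1.0", "A"],
    ["0.0", "0.0", "B"],
    ["0.1", "0.0", "B"],
    ["3.0", "3.2", "A"],
])
# The 3 rows nearest to [2, 2] are all labelled "A", so the vote returns "A".
print(knnclassifer(sample, [2, 2], k=3))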
That is all the content of this article on implementing the KNN machine-learning algorithm with pandas. Thank you for reading! I hope the material shared here gives you a solid starting point; to learn more, you are welcome to follow the 亿速云 industry news channel.