Decision Trees -- Based on the ID3 Algorithm


A decision tree can take an unfamiliar data set and extract a series of rules from it; this extraction process is itself a form of machine learning.

1. The First Problem to Solve

When constructing a decision tree, the first question we must answer is: which feature of the current data set plays the decisive role in partitioning the data into classes?

Information gain: information theory defines this quantity as the change in information before and after splitting the data set. The feature whose split yields the highest information gain is the best choice.

Information gain is quantified through entropy. How is entropy computed? For a data set whose samples fall into classes x_1, ..., x_n, where p(x_i) is the proportion of samples in class x_i, the Shannon entropy is

H = -sum_{i=1..n} p(x_i) * log2 p(x_i)

For example, with two classes in proportions 2/5 and 3/5, H = -(2/5)log2(2/5) - (3/5)log2(3/5) ≈ 0.971.

The corresponding code is as follows:

from math import log
import operator                          # used by majorityCnt (see below)

# Sample data set
def createDataSet():
    dataSet = [[1, 1, 'yes'],
               [1, 1, 'yes'],
               [1, 0, 'no'],
               [0, 1, 'no'],
               [0, 1, 'no']]
    labels = ['no surfacing', 'flippers']    # discrete feature names
    return dataSet, labels

# Compute the Shannon entropy of a data set
def calcShannonEnt(dataSet):
    numEntries = len(dataSet)
    labelCounts = {}
    for featVec in dataSet:                  # count the occurrences of each class label
        currentLabel = featVec[-1]
        if currentLabel not in labelCounts.keys():
            labelCounts[currentLabel] = 0
        labelCounts[currentLabel] += 1
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries
        shannonEnt -= prob * log(prob, 2)    # log base 2
    return shannonEnt

# Split the data set
# Input: data set, column index, value to match
# Returns: the matching rows with that column removed
def splitDataSet(dataSet, axis, value):
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            reducedFeatVec = featVec[:axis]  # chop out the axis used for splitting
            reducedFeatVec.extend(featVec[axis+1:])
            retDataSet.append(reducedFeatVec)
    return retDataSet
# e.g. splitting on column 0 with value 1 returns [[1, 'yes'], [1, 'yes'], [0, 'no']]

# Find the split with the best information gain
# Returns the index of the best feature to split on next
def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1        # the last column holds the class labels
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):             # iterate over all the features
        featList = [example[i] for example in dataSet]  # all values of this feature
        uniqueVals = set(featList)           # the set of unique values
        newEntropy = 0.0
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)
        infoGain = baseEntropy - newEntropy  # info gain, i.e. the reduction in entropy
        if infoGain > bestInfoGain:          # keep the best gain seen so far
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature                       # returns an integer index

These are all helper functions.
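A quick sanity check of the helpers on the sample data (the entropy value matches the worked example above):

dataSet, labels = createDataSet()
print(calcShannonEnt(dataSet))            # 0.9709505944546686 (2 'yes' vs 3 'no')
print(splitDataSet(dataSet, 0, 1))        # [[1, 'yes'], [1, 'yes'], [0, 'no']]
print(chooseBestFeatureToSplit(dataSet))  # 0, i.e. 'no surfacing' has the larger gain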



2. Building the Tree

# Build the tree recursively
def createTree(dataSet, labels):
    classList = [example[-1] for example in dataSet]
    if classList.count(classList[0]) == len(classList):
        return classList[0]              # stop splitting when all classes are equal
    if len(dataSet[0]) == 1:             # stop splitting when no more features remain
        return majorityCnt(classList)
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatLabel = labels[bestFeat]
    myTree = {bestFeatLabel: {}}
    del(labels[bestFeat])
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)
    for value in uniqueVals:
        subLabels = labels[:]            # copy labels so recursion doesn't clobber them
        myTree[bestFeatLabel][value] = createTree(splitDataSet(dataSet, bestFeat, value), subLabels)
    return myTree

# The returned tree looks like:
# {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}
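createTree falls back on majorityCnt when the features are exhausted but the class labels still disagree; that helper is not listed in this post. A minimal majority-vote version (and the reason operator was imported above) could look like this:

# Return the class label that occurs most often in classList
def majorityCnt(classList):
    classCount = {}
    for vote in classList:
        if vote not in classCount.keys():
            classCount[vote] = 0
        classCount[vote] += 1
    sortedClassCount = sorted(classCount.items(),
                              key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]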



3. Visualization

The tree's data structure looks like this, but the dictionary form is hard to read at a glance: {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}

So we render the tree graphically to aid understanding, mainly using the matplotlib library; a sketch follows.
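The original post does not reproduce its plotting code. Below is a minimal, self-contained sketch (the helper names numLeafs, plotNode, and plotTree are ours, not an established API) that draws the nested-dict tree with matplotlib annotations; branch value labels are omitted for brevity:

import matplotlib.pyplot as plt

def numLeafs(tree):
    # A leaf is anything that is not a nested dict (here: a class label string).
    if not isinstance(tree, dict):
        return 1
    root = next(iter(tree))
    return sum(numLeafs(child) for child in tree[root].values())

def plotNode(ax, text, xy, parentXY):
    # Draw one boxed node with an arrow back to its parent.
    ax.annotate(text, xy=parentXY, xytext=xy, ha='center', va='center',
                bbox=dict(boxstyle='round', fc='0.9'),
                arrowprops=dict(arrowstyle='<-'))

def plotTree(ax, tree, x0, x1, y, parentXY=None):
    # Allocate horizontal space to each subtree in proportion to its leaf count.
    root = next(iter(tree))
    xy = ((x0 + x1) / 2.0, y)
    plotNode(ax, root, xy, parentXY or xy)
    left = x0
    for value, child in tree[root].items():
        width = (x1 - x0) * numLeafs(child) / float(numLeafs(tree))
        if isinstance(child, dict):
            plotTree(ax, child, left, left + width, y - 0.2, xy)
        else:
            plotNode(ax, child, (left + width / 2.0, y - 0.2), xy)
        left += width

myTree = {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}
fig, ax = plt.subplots()
ax.axis('off')
plotTree(ax, myTree, 0.0, 1.0, 1.0)
plt.show()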



4. An Example: Bank Loan Applications

We are given a table of sample loan applications:

We want to learn a decision tree for loan approval from the training data, so that future applications can be classified: when a new customer applies, the tree uses the applicant's features to decide whether to approve the loan.

First, quantize the data:
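The quantized table appeared only as an image in the original post. As an assumption, the sketch below reconstructs it as the well-known loan-application sample table from Li Hang's "Statistical Learning Methods" (which yields exactly the tree shown next), encoded to match the test vector used later: Age 1 = young / 2 = middle-aged / 3 = old; Work and House 0 = no / 1 = yes; Credit 1 = fair / 2 = good / 3 = excellent.

# Assumed data (the source table is an image): each row is
# [Age, Work, House, Credit, class]
def createLoanDataSet():
    dataSet = [
        [1, 0, 0, 1, 'no'],  [1, 0, 0, 2, 'no'],  [1, 1, 0, 2, 'yes'],
        [1, 1, 1, 1, 'yes'], [1, 0, 0, 1, 'no'],
        [2, 0, 0, 1, 'no'],  [2, 0, 0, 2, 'no'],  [2, 1, 1, 2, 'yes'],
        [2, 0, 1, 3, 'yes'], [2, 0, 1, 3, 'yes'],
        [3, 0, 1, 3, 'yes'], [3, 0, 1, 2, 'yes'], [3, 1, 0, 2, 'yes'],
        [3, 1, 0, 3, 'yes'], [3, 0, 0, 1, 'no'],
    ]
    labels = ['Age', 'Work', 'House', 'Credit']
    return dataSet, labels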

Next, call the ID3 algorithm to generate the tree's data structure: {'House': {0: {'Work': {0: 'no', 1: 'yes'}}, 1: 'yes'}}
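Assuming the createLoanDataSet helper sketched above, the call looks like this; note that createTree deletes entries from the labels list it is given, so pass a copy if you still need the original:

dataSet, labels1 = createLoanDataSet()
myTree1 = createTree(dataSet, labels1[:])  # pass a copy: createTree mutates labels
print(myTree1)  # {'House': {0: {'Work': {0: 'no', 1: 'yes'}}, 1: 'yes'}}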

Finally, visualize the result:

Testing: when a new application comes in, the tree gives an answer immediately. For example, an applicant who is old, has no job, owns no house, and has excellent credit is rejected:

tt = trees.classify(myTree1, labels1, [3, 0, 0, 3])   # -> 'no'
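The classify routine itself is not listed in this post (the call above suggests it lives in the same trees module as the earlier code). A minimal recursive version consistent with that call:

def classify(inputTree, featLabels, testVec):
    firstStr = next(iter(inputTree))        # feature name at this node
    secondDict = inputTree[firstStr]
    featIndex = featLabels.index(firstStr)  # map feature name to its column
    valueOfFeat = secondDict[testVec[featIndex]]
    if isinstance(valueOfFeat, dict):       # internal node: keep descending
        return classify(valueOfFeat, featLabels, testVec)
    return valueOfFeat                      # leaf: the class label

With the loan tree above, classify(myTree1, labels1, [3, 0, 0, 3]) follows the House = 0 branch, then Work = 0, and returns 'no'.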

