May improve the learning process by adding new features to the training data, based on the decision trees of a previously trained model.
Details
This function was inspired by paragraph 3.1 of the following paper:
Practical Lessons from Predicting Clicks on Ads at Facebook
(Xinran He, Junfeng Pan, Ou Jin, Tianbing Xu, Bo Liu, Tao Xu, Yanxin Shi, Antoine Atallah, Ralf Herbrich, Stuart Bowers, Joaquin Quinonero Candela)
International Workshop on Data Mining for Online Advertising (ADKDD) - August 24, 2014
Extract explaining the method:
"我们发现,提升决策树是实现我们刚刚描述的非线性变换和元组变换的一种强大且非常方便的方式。我们将每棵独立的树视为一个类别特征,其取值是实例最终落入的叶节点的索引。我们对此类特征使用 1-of-K 编码。"
"例如,考虑图 1 中具有 2 个子树的提升树模型,其中第一个子树有 3 个叶节点,第二个有 2 个叶节点。如果一个实例在第一个子树中落入叶节点 2,在第二个子树中落入叶节点 1,则输入到线性分类器的总体输入将是二进制向量 [0, 1, 0, 1, 0]
,其中前 3 个条目对应于第一个子树的叶节点,后 2 个条目对应于第二个子树的叶节点。"
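To make the encoding concrete, here is a minimal R sketch that rebuilds the binary vector from the quoted example (the leaf counts and indices are hard-coded to match the description, not read from a real model):

# Two subtrees with 3 and 2 leaves; the instance falls into leaf 2 of the
# first subtree and leaf 1 of the second.
n_leaves <- c(3, 2)
leaf_idx <- c(2, 1)
encoded <- unlist(lapply(seq_along(n_leaves), function(i) {
  v <- integer(n_leaves[i])   # all-zero block for subtree i
  v[leaf_idx[i]] <- 1L        # mark the leaf the instance fell into
  v
}))
encoded  # 0 1 0 1 0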
...
"我们可以将基于提升决策树的变换理解为一种有监督的特征编码,它将实值向量转换为紧凑的二进制值向量。从根节点到叶节点的遍历表示对某些特征的应用规则。"
Examples
library(xgboost)

data(agaricus.train, package = "xgboost")
data(agaricus.test, package = "xgboost")
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
dtest <- with(agaricus.test, xgb.DMatrix(data, label = label, nthread = 2))
param <- list(max_depth = 2, learning_rate = 1, objective = "binary:logistic", nthread = 1)
nrounds <- 4
bst <- xgb.train(params = param, data = dtrain, nrounds = nrounds)
# Model accuracy without new features
accuracy.before <- sum((predict(bst, agaricus.test$data) >= 0.5) == agaricus.test$label) /
length(agaricus.test$label)
# Add one-hot encoded leaf index features from the trained model
new.features.train <- xgb.create.features(model = bst, agaricus.train$data)
new.features.test <- xgb.create.features(model = bst, agaricus.test$data)
# Learning with the new features
new.dtrain <- xgb.DMatrix(
data = new.features.train, label = agaricus.train$label, nthread = 1
)
new.dtest <- xgb.DMatrix(
data = new.features.test, label = agaricus.test$label, nthread = 1
)
bst <- xgb.train(params = param, data = new.dtrain, nrounds = nrounds)
# Model accuracy with new features
accuracy.after <- sum((predict(bst, new.dtest) >= 0.5) == agaricus.test$label) /
length(agaricus.test$label)
# Here the accuracy was already good and is now perfect.
cat(paste("The accuracy was", accuracy.before, "before adding leaf features and it is now",
accuracy.after, "!\n"))
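As a quick sanity check on the objects created above, the augmented matrix keeps all original columns and adds one column per leaf across the model's trees:

dim(agaricus.train$data)    # original number of features
dim(new.features.train)     # original features plus one column per leaf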