Alternating decision tree


An Alternating Decision Tree (ADTree) is a machine learning method for classification. The ADTree data structure and algorithm are a generalization of decision trees and have connections to boosting. ADTrees were introduced by Yoav Freund and Llew Mason[1].

Motivation

Original boosting algorithms typically combined either decision stumps or decision trees as weak hypotheses. Boosting decision stumps creates a set of <math>T</math> weighted weak hypotheses (where <math>T</math> is the number of boosting iterations), which can be visualized as a flat set of rules that remains readable for reasonable values of <math>T</math>. Boosting decision trees could result in a final combined classifier with thousands (or millions) of nodes for modest values of <math>T</math>. In both scenarios it is either difficult to visualize correlations in the final classifier or difficult to visualize it at all. Alternating decision trees provide a method for visualizing decision stumps in an ordered and logical way that demonstrates such correlations. In doing so, they simultaneously generalize decision trees and can be used to essentially grow boosted decision trees in parallel.

Description of the structure

The alternating decision tree structure consists of two kinds of nodes: decision nodes and prediction nodes. Decision nodes specify a predicate condition. Prediction nodes specify a value that is added to the score when they are reached; which prediction node is reached depends on the result of the parent decision node. Each decision node can be seen as the conjunction of a precondition (the decision node was reached, i.e. every decision node on the path to it evaluated in its favour) and the condition it specifies.
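
As a rough illustration (a minimal Python sketch; the class and function names are invented for this example and do not come from the original paper or from JBoost), the two kinds of nodes and the scoring rule can be written as:

<syntaxhighlight lang="python">
# Minimal sketch of the ADTree structure. Names are illustrative only.

class PredictionNode:
    def __init__(self, value, children=None):
        self.value = value              # contribution added to the score
        self.children = children or []  # decision nodes attached below this node

class DecisionNode:
    def __init__(self, condition, yes, no):
        self.condition = condition      # predicate: instance -> bool
        self.yes = yes                  # prediction node used when the condition holds
        self.no = no                    # prediction node used otherwise

def classify(root, instance):
    """Sum the prediction values of every prediction node that is reached."""
    score = root.value
    stack = list(root.children)
    while stack:
        decision = stack.pop()
        chosen = decision.yes if decision.condition(instance) else decision.no
        score += chosen.value
        stack.extend(chosen.children)   # only the chosen branch is explored further
    return score
</syntaxhighlight>

The sign of the returned score gives the classification and its magnitude a rough measure of confidence, as in the spam example below.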

Perhaps the easiest way to understand the interaction of decision and prediction nodes is through an example. The following example is taken from JBoost, performing boosting for 6 iterations on the spambase dataset (available from the UCI Machine Learning Repository). Positive examples indicate that a message is spam and negative examples indicate that it is not. During each iteration, a single node is added to the ADTree. The ADTree determined by the learning algorithm implemented in JBoost is:

An ADTree for 6 iterations on the Spambase dataset.

The tree construction algorithm is described below in the Description of the algorithm section. We now show how to interpret the tree once it has been constructed. We focus on one specific instance:

An instance to be classified
Feature                      Value
char_freq_bang               0.08
word_freq_hp                 0.4
capital_run_length_longest   4
char_freq_dollar             0
word_freq_remove             0.9
word_freq_george             0
Other features               ...

For this instance, we obtain a score that determines its classification. This score acts not only as a classification, but also as a measure of confidence. The actual order in which the ADTree nodes are evaluated will likely differ from the order in which they were created; that is, the node from iteration 4 can be evaluated before the node from iteration 1. There are constraints on this ordering (e.g. the node from iteration 2 must be evaluated before the node from iteration 5, which lies below it in the tree). In general, either breadth-first or depth-first evaluation will yield the correct interpretation.

The following table shows how the score is created (progressive score) for our above example instance:

Score for the above instance
Iteration           0        1                2               3             4               5        6
Instance values     N/A      .08 < .052 = n   .4 < .195 = n   0 < .01 = y   0 < 0.005 = y   N/A      .9 < .225 = n
Prediction          -0.093   0.74             -1.446          -0.38         0.176           0        1.66
Progressive score   -0.093   0.647            -0.799          -1.179        -1.003          -1.003   0.657
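
As a quick check (not part of the original JBoost output), the progressive score row is simply the running sum of the prediction row:

<syntaxhighlight lang="python">
# Running sum of the prediction values from the table above.
predictions = [-0.093, 0.74, -1.446, -0.38, 0.176, 0, 1.66]

score = 0.0
for p in predictions:
    score += p
    print(round(score, 3))   # -0.093, 0.647, -0.799, -1.179, -1.003, -1.003, 0.657
</syntaxhighlight>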


There are a few observations to make:

  • The final classification of the example is positive (0.657), meaning that the example is considered to be spam.
  • All nodes at depth 1 have their predicate evaluated and one of their prediction nodes contributes to the score. Thus a tree of depth 1 is equivalent to a set of boosted decision stumps.
  • If a decision node is not reached (the node from iteration 5 in the above example) then the node's predicate and subsequent prediction nodes will not be evaluated.

Description of the algorithm

The alternating decision tree learning algorithm is described in the original paper[1]. The general idea involves a few main concepts:

  • The root decision node's condition is always TRUE
  • The tree is grown iteratively. The total number of iterations is generally decided prior to starting the algorithm.
  • Each decision node (<math>c_2</math>) is selected by the algorithm based on how well it discriminates between positive and negative examples.
  • Once a decision node is created, its prediction nodes are assigned values determined by how well the decision node discriminates.


Before stating the algorithm, we first define some notation. Let <math>c</math> be a predicate; then

  • <math>W_+(c)</math> is the weight of all positively labeled examples that satisfy <math>c</math>
  • <math>W_-(c)</math> is the weight of all negatively labeled examples that satisfy <math>c</math>
  • <math>W(c)</math> is the weight of all examples that satisfy <math>c</math>
  • We call <math>c</math> a precondition when it is a conjunction of previous base conditions and negations of previous base conditions
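
A small sketch of this bookkeeping (illustrative Python; the function names are invented here) assumes the current weights are stored in a list parallel to the examples and labels:

<syntaxhighlight lang="python">
# Weighted counts over the examples that satisfy a predicate c.
# examples[j] is example j, y[j] in {-1, +1} its label, w[j] its current weight.

def W_plus(c, examples, y, w):
    return sum(wj for xj, yj, wj in zip(examples, y, w) if yj == +1 and c(xj))

def W_minus(c, examples, y, w):
    return sum(wj for xj, yj, wj in zip(examples, y, w) if yj == -1 and c(xj))

def W(c, examples, y, w):
    return W_plus(c, examples, y, w) + W_minus(c, examples, y, w)
</syntaxhighlight>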


The exact algorithm is:

INPUT: <math>m</math> examples and labels <math>(x_1,y_1),\ldots,(x_m,y_m)</math>

Set the weight of all examples to <math>W_1(x_j) = 1/m</math>

Set the margin of all examples to <math>r_1(x_j) = 0</math>

The root decision node is always <math>c=</math>TRUE, with a single prediction node

<math>a=\frac{1}{2}\ln\frac{W_+(\textrm{TRUE})}{W_-(\textrm{TRUE})}</math>
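
For example, with purely illustrative numbers, if <math>W_+(\textrm{TRUE})=0.6</math> and <math>W_-(\textrm{TRUE})=0.4</math>, the root prediction value is <math>a=\frac{1}{2}\ln\frac{0.6}{0.4}\approx 0.20</math>.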

For <math>t = 1, \ldots, T</math> do:

  • Let <math>c_1 \in P_t</math> be a precondition (that is, the node being created can be reached via <math>c_1</math>) and <math>c_2</math> be a base condition (the test at the new decision node). The pair is selected by the algorithm based on how well it discriminates between positive and negative examples: the original ADTree algorithm minimizes the criterion <math>2 \left( \sqrt{W_+(c_1\wedge c_2) W_-(c_1 \wedge c_2)} + \sqrt{W_+(c_1\wedge \neg c_2) W_-(c_1 \wedge \neg c_2)} \right) + W(\neg c_1)</math> over all choices of <math>c_1</math> and <math>c_2</math>.
  • Once a decision node is created, the prediction nodes are determined by <math>a=\frac{1}{2}\ln\frac{W_+(c_1\wedge c_2)}{W_-(c_1 \wedge c_2)}</math> and <math>b=\frac{1}{2}\ln\frac{W_+(c_1\wedge \neg c_2)}{W_-(c_1 \wedge \neg c_2)}</math>
  • Add the conditions <math>c_1 \wedge c_2</math> and <math>c_1 \wedge \neg c_2</math> to the set of possible preconditions <math>P_{t+1}</math>
  • Update the weights: <math>W_{t+1}(x_j) = W_{t}(x_j)e^{-r_t(x_j)y_j}</math>, where <math>r_t(x_j)</math> is the prediction (<math>a</math>, <math>b</math>, or <math>0</math>) that the rule added at iteration <math>t</math> assigns to <math>x_j</math>
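
The sketch below puts these steps together in Python. It is a minimal illustration under simplifying assumptions, not the algorithm as implemented in JBoost: base conditions are restricted to single-feature threshold tests, the search over <math>(c_1, c_2)</math> is exhaustive, no smoothing is added inside the logarithms (so every weighted count is assumed to be nonzero), and all names are invented for this example.

<syntaxhighlight lang="python">
import math

# Minimal ADTree training sketch following the steps above.
# X: list of feature vectors (lists of floats); y: labels in {-1, +1}.
# Base conditions are threshold tests "x[f] < theta". Names are illustrative only.

def train_adtree(X, y, T):
    m = len(X)
    w = [1.0 / m] * m                                    # W_1(x_j) = 1/m

    def W_plus(pred):
        return sum(wj for xj, yj, wj in zip(X, y, w) if yj == +1 and pred(xj))

    def W_minus(pred):
        return sum(wj for xj, yj, wj in zip(X, y, w) if yj == -1 and pred(xj))

    true_pred = lambda x: True
    a0 = 0.5 * math.log(W_plus(true_pred) / W_minus(true_pred))
    rules = [(true_pred, true_pred, a0, 0.0)]            # (precondition, condition, a, b)
    preconditions = [true_pred]                          # P_1 = {TRUE}

    # Candidate base conditions: thresholds midway between consecutive feature values.
    candidates = []
    for f in range(len(X[0])):
        values = sorted(set(x[f] for x in X))
        for v1, v2 in zip(values, values[1:]):
            candidates.append(lambda x, f=f, t=(v1 + v2) / 2.0: x[f] < t)

    for t in range(T):
        # Pick the (precondition, condition) pair minimizing the Z criterion.
        best = None
        for c1 in preconditions:
            for c2 in candidates:
                both = lambda x, c1=c1, c2=c2: c1(x) and c2(x)
                other = lambda x, c1=c1, c2=c2: c1(x) and not c2(x)
                z = (2.0 * (math.sqrt(W_plus(both) * W_minus(both))
                            + math.sqrt(W_plus(other) * W_minus(other)))
                     + sum(wj for xj, wj in zip(X, w) if not c1(xj)))  # + W(not c1)
                if best is None or z < best[0]:
                    best = (z, c1, c2, both, other)
        _, c1, c2, both, other = best

        # Prediction values for the two new prediction nodes.
        a = 0.5 * math.log(W_plus(both) / W_minus(both))
        b = 0.5 * math.log(W_plus(other) / W_minus(other))
        rules.append((c1, c2, a, b))
        preconditions.extend([both, other])              # P_{t+1}

        # r_t(x): prediction of the new rule (a, b, or 0 when c1 is not satisfied).
        def r(x):
            if not c1(x):
                return 0.0
            return a if c2(x) else b
        w = [wj * math.exp(-r(xj) * yj) for xj, yj, wj in zip(X, y, w)]

    return rules

def adtree_score(rules, x):
    """Score an instance by summing the contributions of all rules it reaches."""
    score = 0.0
    for c1, c2, a, b in rules:
        if c1(x):
            score += a if c2(x) else b
    return score
</syntaxhighlight>

Calling adtree_score on an instance reproduces the kind of cumulative scoring shown in the spam example above; the sign of the result gives the predicted class.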

Empirical results

Figure 6 in the original paper[1] demonstrates that ADTrees are typically as robust as boosted decision trees and boosted decision stumps.

References

  1. Yoav Freund and Llew Mason. The Alternating Decision Tree Learning Algorithm. In Proceedings of the 16th International Conference on Machine Learning, pages 124–133, 1999.