
Decision Tree Analysis in SPSS: CHAID and CRT Classification Trees


📸 Decision Tree (Classify → Tree) menu in SPSS

What Is a Decision Tree?

A decision tree partitions data into successively smaller subgroups using a series of splits on predictor variables (binary splits in CRT; possibly multiway splits in CHAID), building a tree-shaped classification or prediction model. Each internal node represents a split rule; each leaf node represents a predicted outcome class. Decision trees are highly interpretable: even non-statisticians can follow the logic from root to leaf.
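That interpretability is easiest to see by writing a tree out by hand: each internal node is just an if-test, each leaf a predicted class. A minimal sketch (the variables `crp` and `age` and the thresholds are purely illustrative, not taken from any real model):

```python
def classify(patient):
    """A hand-written two-level decision tree: internal nodes are
    split rules, leaves are predicted classes. Thresholds are
    illustrative only, not fitted to data."""
    if patient["crp"] > 10.0:        # root-node split
        if patient["age"] > 65:      # second-level split
            return "high risk"
        return "moderate risk"
    return "low risk"

print(classify({"crp": 14.2, "age": 70}))  # high risk
print(classify({"crp": 2.0, "age": 70}))   # low risk
```

Algorithms like CHAID and CRT automate exactly this: they choose which variable to test at each node and where to cut it.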

CHAID vs. CRT

CHAID (Chi-squared Automatic Interaction Detection) selects splits using chi-square tests of independence, can produce multiway splits, and handles categorical predictors naturally; the resulting trees tend to be wide and easy to read. CRT (Classification and Regression Trees, also known as CART) produces strictly binary splits chosen to maximize impurity reduction (Gini impurity for classification, variance for regression) and supports pruning and surrogate splits for missing data.

Running in SPSS

Step 1: Analyze → Classify → Tree.
Step 2: Move the dependent variable into the Dependent Variable box and all candidate predictors into the Independent Variables list.
Step 3: Choose a Growing Method (CHAID or CRT) and set the Maximum Tree Depth (5 is a good starting point).
Step 4: Validation: Cross-validation (10-fold) or holdout sample (70/30 split) → OK.
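The two validation options in Step 4 are easy to mirror outside SPSS when you want to reproduce the partitioning logic. A stdlib-only sketch of a 70/30 holdout and 10-fold cross-validation index assignment (the sample size 270 matches the classification table below; the seed is arbitrary):

```python
import random

def holdout_split(n, train_frac=0.7, seed=42):
    """70/30 holdout: shuffle row indices once, cut at the fraction."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    cut = int(n * train_frac)
    return idx[:cut], idx[cut:]

def kfold(n, k=10, seed=42):
    """10-fold CV: every index lands in exactly one test fold."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

train, test = holdout_split(270)
print(len(train), len(test))               # 189 81
folds = kfold(270)
print(len(folds), sum(len(f) for f in folds))  # 10 270
```

Cross-validation reuses every case for both growing and testing, so it is preferred for smaller samples; a holdout sample gives a single untouched test set and is simpler to report.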
|                  | Predicted: No | Predicted: Yes | % Correct |
|------------------|--------------:|---------------:|----------:|
| Observed: No     | 142           | 18             | 88.8%     |
| Observed: Yes    | 21            | 89             | 80.9%     |
| Overall accuracy |               |                | 85.5%     |

📸 Classification matrix — 85.5% overall accuracy
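The summary percentages follow directly from the four cell counts in the matrix. A quick check, using the standard confusion-matrix formulas:

```python
# Cell counts from the classification matrix above
tn, fp = 142, 18   # observed No:  correctly / incorrectly classified
fn, tp = 21, 89    # observed Yes: incorrectly / correctly classified

specificity = tn / (tn + fp)                # % correct among observed No
sensitivity = tp / (tp + fn)                # % correct among observed Yes
accuracy = (tp + tn) / (tp + tn + fp + fn)  # overall % correct

print(f"specificity = {specificity:.1%}")   # 88.8%
print(f"sensitivity = {sensitivity:.1%}")   # 80.9%
print(f"accuracy    = {accuracy:.1%}")
```

Note that which row counts as "sensitivity" depends on which category you treat as the positive class; here Yes is the positive outcome.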

APA Reporting

A CHAID decision tree analysis achieved 85.5% overall classification accuracy under cross-validation, with a specificity of 88.8% and a sensitivity of 80.9%. The root-node predictor was CRP level, χ² = 42.3, p < .001, which provided the strongest separation between outcome groups.

Professional Statistical Analysis Consulting

Let's run your analyses together with SPSS, GraphPad, and R.

WhatsApp Contact →