Decision trees improve predictive accuracy by leveraging a hierarchical structure that recursively splits the data into increasingly homogeneous subsets based on feature values. Breaking the problem into simple decision rules lets the model capture complex patterns and feature interactions, though a fully grown tree tends to memorize noise, so good generalization on unseen data depends on controlling the tree's complexity. Decision trees also require little data preprocessing: splits depend only on the ordering of feature values, so normalization and scaling are unnecessary, and some implementations (such as CART with surrogate splits) can handle missing values without imputation, though others still require it. Finally, the splitting process performs implicit feature selection, highlighting the most influential predictors; this aids interpretability, and overfitting can be reduced by pruning a single tree or by combining many trees in ensemble methods like Random Forests or Gradient Boosting.
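
The sketch below illustrates these points with scikit-learn, assuming it is installed; the synthetic dataset and hyperparameters are illustrative choices, not prescriptions. It compares an unpruned tree, a depth-limited (pruned) tree, and a Random Forest, then prints the pruned tree's feature importances to show the implicit feature selection.

```python
# Illustrative sketch: the dataset, depth limit, and ensemble size are
# arbitrary choices for demonstration, not tuned recommendations.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data: 10 features, only 4 of which carry signal.
X, y = make_classification(
    n_samples=1000, n_features=10, n_informative=4, random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unpruned tree fits the training data closely and tends to overfit;
# limiting depth (a simple form of pruning) usually generalizes better.
deep_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
pruned_tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(
    X_train, y_train
)

# An ensemble averages many decorrelated trees, reducing variance further.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(
    X_train, y_train
)

print("deep tree  :", deep_tree.score(X_test, y_test))
print("pruned tree:", pruned_tree.score(X_test, y_test))
print("forest     :", forest.score(X_test, y_test))

# The splitting process doubles as feature selection: importances reflect
# how much each feature reduced impurity across the tree's splits, so the
# uninformative features should receive importances near zero.
print("importances:", pruned_tree.feature_importances_.round(3))
```

Note that no scaling or normalization is applied before fitting: because each split compares a single feature against a threshold, tree-based models are unaffected by monotonic transformations of the inputs.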