SigmaWay Blog

SigmaWay Blog aggregates original and third-party content for site users. It features articles on Process Improvement, Lean Six Sigma, Analytics, Market Intelligence, Training, IT Services, and the industries SigmaWay serves.

Random forests: a collection of Decision trees!

In the literal sense, a forest is an area full of trees. Likewise, in the technical sense, a Random Forest is essentially a collection of Decision Trees. Both are supervised classification algorithms, so which one is better to use?

A Decision Tree is built on the entire data set, using all of the features/variables, whereas a Random Forest (as the name suggests) randomly selects observations/rows and a subset of features/variables to build several decision trees, and then aggregates their results. Each tree "votes" for a class, and the class receiving the most votes is the "winner", i.e. the predicted class (for regression, the trees' predictions are averaged instead).
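To make the bootstrap-and-vote idea concrete, here is a minimal sketch in plain Python (standard library only). The "tree" here is deliberately a toy stump that just predicts the majority class of its own bootstrap sample; a real Random Forest (e.g. scikit-learn's RandomForestClassifier) would grow each tree by splitting on randomly chosen features.

```python
import random
from collections import Counter

def bootstrap_sample(rows, rng):
    """Draw a sample of the same size as `rows`, with replacement."""
    return [rng.choice(rows) for _ in rows]

def train_stump(rows):
    """Toy stand-in for a decision tree: predict the majority class
    of the training sample, ignoring the features entirely."""
    majority, _ = Counter(label for _, label in rows).most_common(1)[0]
    return lambda x: majority

def random_forest(rows, n_trees=25, seed=0):
    """Train n_trees toy trees, each on its own bootstrap sample,
    and combine them by majority vote."""
    rng = random.Random(seed)
    trees = [train_stump(bootstrap_sample(rows, rng)) for _ in range(n_trees)]
    def predict(x):
        votes = Counter(tree(x) for tree in trees)  # each tree "votes"
        return votes.most_common(1)[0][0]           # majority wins
    return predict

# Toy data: (feature, label) pairs where class "A" dominates 8 to 3.
data = [(i, "A") for i in range(8)] + [(i, "B") for i in range(3)]
predict = random_forest(data)
print(predict(5))  # "A" — the majority class wins the vote
```

Because each stump is trained on a different resampling of the data, the ensemble's answer depends on no single sample, which is the core of the bagging idea.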

A Decision Tree is comparatively easy to interpret and visualize, works well on large datasets, and can handle categorical as well as numerical data. However, greedily picking the locally optimal split at each node does not guarantee a globally optimal tree, and decision trees are also vulnerable to overfitting.

Random Forests come to our rescue in such situations. Since each tree is trained on a different sample and the results are aggregated, they are more robust than individual decision trees. Random Forests are a stronger modelling technique than Decision Trees.
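The robustness claim is essentially the statistics of averaging: if each overfit tree is roughly right on average but individually noisy, averaging many of them shrinks the variance (for n independent estimates, the standard deviation drops by a factor of 1/sqrt(n); real trees are correlated, so the actual reduction is smaller). A quick illustrative simulation, with Gaussian noise standing in for a single tree's error:

```python
import random
import statistics

rng = random.Random(42)

def noisy_estimate():
    """Stand-in for one overfit tree: unbiased (true value 1.0),
    but with high variance."""
    return 1.0 + rng.gauss(0, 1)

# 1000 single "trees" vs 1000 ensembles of 25 averaged "trees".
singles = [noisy_estimate() for _ in range(1000)]
ensembles = [statistics.mean(noisy_estimate() for _ in range(25))
             for _ in range(1000)]

print(statistics.stdev(singles))    # close to 1.0
print(statistics.stdev(ensembles))  # close to 1/sqrt(25) = 0.2
```

The averaged estimates scatter far less around the true value, which is why an ensemble generalizes better than any one of its overfit members.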

Read more at:

Tuesday, 21 September 2021