Random Forest Classifier
A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The sub-sample size is always the same as the original input sample size, but the samples are drawn with replacement if bootstrap is True (default).
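This node's configuration mirrors scikit-learn's RandomForestClassifier (see the note at the end of this page). As a rough illustration, the following minimal sketch fits the estimator directly; the dataset and parameter values are made up for the example and are not the node's defaults.

```python
# Minimal sketch: fitting scikit-learn's RandomForestClassifier directly.
# The synthetic data and parameter values are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(
    n_estimators=100,  # number of trees in the forest
    criterion="gini",  # split quality measure ("gini" or "entropy")
    bootstrap=True,    # each tree is built on a bootstrap sample of the data
    random_state=0,
)
clf.fit(X_train, y_train)
print(clf.predict(X_test[:5]))     # class predictions for a few rows
print(clf.score(X_test, y_test))   # mean accuracy on the held-out data
```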
- Output ports:
- model: Model
- Configuration:
- n_estimators
- The number of trees in the forest. Changed in version 0.20: the default value of n_estimators will change from 10 in version 0.20 to 100 in version 0.22.
- criterion
- The function to measure the quality of a split. Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain. Note: this parameter is tree-specific.
- bootstrap
- Whether bootstrap samples are used when building trees. If False, the whole dataset is used to build each tree.
- oob_score
- Whether to use out-of-bag samples to estimate the generalization accuracy. A usage sketch follows this list.
- n_jobs
- The number of jobs to run in parallel for both fit and predict. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See n_jobs for more details.
- max_features
- The number of features to consider when looking for the best split (see the sketch after this list):
- If int, then consider max_features features at each split.
- If float, then max_features is a fraction and int(max_features * n_features) features are considered at each split.
- If “auto”, then max_features=sqrt(n_features).
- If “sqrt”, then max_features=sqrt(n_features) (same as “auto”).
- If “log2”, then max_features=log2(n_features).
- If None, then max_features=n_features.
- Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires effectively inspecting more than max_features features.
- max_depth
- The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
- min_samples_split
- The minimum number of samples required to split an internal node (see the sketch after this list):
- If int, then consider min_samples_split as the minimum number.
- If float, then min_samples_split is a fraction and ceil(min_samples_split * n_samples) are the minimum number of samples for each split.
- Changed in version 0.18: Added float values for fractions.
- min_samples_leaf
- The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.
- If int, then consider min_samples_leaf as the minimum number.
- If float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) are the minimum number of samples for each node.
- Changed in version 0.18: Added float values for fractions.
- min_weight_fraction_leaf
- The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.
- max_leaf_nodes
- Grow trees with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. If None then unlimited number of leaf nodes.
- min_impurity_split
- Threshold for early stopping in tree growth. A node will split if its impurity is above the threshold, otherwise it is a leaf. Deprecated since version 0.19: min_impurity_split has been deprecated in favor of min_impurity_decrease in 0.19. The default value of min_impurity_split will change from 1e-7 to 0 in 0.23 and it will be removed in 0.25. Use min_impurity_decrease instead.
- min_impurity_decrease
- A node will be split if this split induces a decrease of the impurity greater than or equal to this value. The weighted impurity decrease equation is the following (a worked example follows this list):
- N_t / N * (impurity - N_t_R / N_t * right_impurity - N_t_L / N_t * left_impurity)
- where N is the total number of samples, N_t is the number of samples at the current node, N_t_L is the number of samples in the left child, and N_t_R is the number of samples in the right child. N, N_t, N_t_R and N_t_L all refer to the weighted sum if sample_weight is passed. New in version 0.19.
- random_state
- If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
- warm_start
- When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble; otherwise, just fit a whole new forest. See warm_start. A usage sketch follows this list.
 
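Several options above (max_features, min_samples_split, min_samples_leaf) accept either an int, read as an absolute count, or a float, read as a fraction. The following sketch shows how the fractional and named forms map to counts, assuming an illustrative training set of 1000 samples and 20 features (these numbers are not defaults of this node):

```python
import math

# Illustrative sizes, not taken from the node's defaults.
n_samples, n_features = 1000, 20

# max_features: a float is a fraction of the feature count;
# "sqrt"/"auto" and "log2" are derived from n_features.
print(int(0.3 * n_features))         # max_features=0.3    -> 6 features per split
print(int(math.sqrt(n_features)))    # max_features="sqrt" -> 4
print(int(math.log2(n_features)))    # max_features="log2" -> 4

# min_samples_split / min_samples_leaf: a float is a fraction of n_samples,
# rounded up with ceil.
print(math.ceil(0.01 * n_samples))   # min_samples_split=0.01 -> 10 samples
print(math.ceil(0.005 * n_samples))  # min_samples_leaf=0.005 -> 5 samples
```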
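The min_impurity_decrease formula can be checked by hand. Below is a worked instance with made-up node sizes and impurities; since no sample_weight is involved, plain sample counts stand in for the weighted sums:

```python
# Hypothetical node: 200 of 1000 training samples reach it, impurity 0.5.
# A candidate split sends 120 samples left (impurity 0.2) and 80 right (0.3).
N, N_t = 1000, 200
N_t_L, N_t_R = 120, 80
impurity, left_impurity, right_impurity = 0.5, 0.2, 0.3

decrease = N_t / N * (impurity
                      - N_t_R / N_t * right_impurity
                      - N_t_L / N_t * left_impurity)
print(round(decrease, 6))  # 0.052 -> split made only if min_impurity_decrease <= 0.052
```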
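Finally, a sketch of warm_start and oob_score used together on scikit-learn's estimator: the forest is grown in two steps and the out-of-bag accuracy estimate is read back after each fit. The data and sizes are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=0)

# bootstrap=True is required for out-of-bag estimates; warm_start allows
# adding trees to an already fitted forest instead of refitting from scratch.
clf = RandomForestClassifier(n_estimators=50, oob_score=True,
                             warm_start=True, bootstrap=True, random_state=0)
clf.fit(X, y)
print(len(clf.estimators_), clf.oob_score_)   # 50 trees, OOB accuracy estimate

clf.n_estimators = 100   # request 50 additional trees
clf.fit(X, y)            # only the new trees are fitted
print(len(clf.estimators_), clf.oob_score_)   # 100 trees, updated OOB estimate
```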
Some of the docstrings for this module have been automatically extracted from the scikit-learn library and are covered by their respective licenses.
