Random Forest Classifier¶
A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The sub-sample size is always the same as the original input sample size, but the samples are drawn with replacement if bootstrap is True (default).
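As a rough end-to-end illustration of this behaviour, the following minimal sketch (assuming scikit-learn is installed; the synthetic data and parameter values are arbitrary) fits a forest with bootstrapping enabled and obtains averaged predictions via predict and predict_proba:

    # Minimal sketch: fit a random forest on toy data (assumes scikit-learn is available).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Arbitrary synthetic data, for illustration only.
    X, y = make_classification(n_samples=300, n_features=10, random_state=0)

    # bootstrap=True (the default): each tree is fit on a bootstrap sample of the training set.
    clf = RandomForestClassifier(n_estimators=100, bootstrap=True, random_state=0)
    clf.fit(X, y)

    # Predictions average the probabilistic predictions of the individual trees.
    print(clf.predict(X[:5]))
    print(clf.predict_proba(X[:5]))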
Documentation¶
Attributes¶
- classes_
The class labels (single output problem), or a list of arrays of class labels (multi-output problem).
- feature_importances_
The impurity-based feature importances. The higher, the more important the feature. The importance of a feature is computed as the (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance.
Warning: impurity-based feature importances can be misleading for high cardinality features (many unique values). See sklearn.inspection.permutation_importance() as an alternative; a short sketch reading these fitted attributes appears after this list.
- n_classes_
The number of classes (single output problem), or a list containing the number of classes for each output (multi-output problem).
- n_features_
The number of features when fit is performed.
- n_outputs_
The number of outputs when fit is performed.
- oob_decision_function_
Decision function computed with out-of-bag estimate on the training set. If n_estimators is small it might be possible that a data point was never left out during the bootstrap. In this case, oob_decision_function_ might contain NaN. This attribute exists only when oob_score is True.
- oob_score_
Score of the training dataset obtained using an out-of-bag estimate. This attribute exists only when oob_score is True.
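As a minimal sketch of how these fitted attributes can be read (assuming scikit-learn; the data and parameter values are illustrative only), the example below fits a forest with oob_score=True and prints the attributes listed above, together with permutation_importance as the suggested alternative to the impurity-based importances:

    # Sketch of reading the fitted attributes described above (assumes scikit-learn).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = make_classification(n_samples=300, n_features=10, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0)
    clf.fit(X, y)

    print(clf.classes_)                    # class labels
    print(clf.n_classes_)                  # number of classes
    print(clf.n_outputs_)                  # number of outputs
    print(clf.feature_importances_)        # impurity-based (Gini) importances
    print(clf.oob_score_)                  # out-of-bag accuracy estimate
    print(clf.oob_decision_function_[:5])  # out-of-bag class probabilities

    # Permutation importance, the suggested alternative for high-cardinality features.
    result = permutation_importance(clf, X, y, n_repeats=5, random_state=0)
    print(result.importances_mean)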
Definition¶
Output ports¶
- model model
Model
Configuration¶
- Bootstrap (bootstrap)
Whether bootstrap samples are used when building trees. If False, the whole dataset is used to build each tree.
- Split quality criterion (criterion)
The function to measure the quality of a split. Supported criteria are “gini” for the Gini impurity and “log_loss” and “entropy” both for the Shannon information gain, see tree_mathematical_formulation. Note: This parameter is tree-specific.
- Maximum tree depth (max_depth)
The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
- Maximum number of features (max_features)
The number of features to consider when looking for the best split:
If int, then consider max_features features at each split.
If float, then max_features is a fraction and max(1, int(max_features * n_features_in_)) features are considered at each split.
If “sqrt”, then max_features=sqrt(n_features).
If “log2”, then max_features=log2(n_features).
If None, then max_features=n_features.
Changed in version 1.1: The default of max_features changed from “auto” to “sqrt”.
Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires effectively inspecting more than max_features features.
- Maximum leaf nodes (max_leaf_nodes)
Grow trees with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. If None then unlimited number of leaf nodes.
- Minimum impurity decrease (min_impurity_decrease)
A node will be split if this split induces a decrease of the impurity greater than or equal to this value.
The weighted impurity decrease equation is the following:
N_t / N * (impurity - N_t_R / N_t * right_impurity - N_t_L / N_t * left_impurity)
where N is the total number of samples, N_t is the number of samples at the current node, N_t_L is the number of samples in the left child, and N_t_R is the number of samples in the right child. N, N_t, N_t_R and N_t_L all refer to the weighted sum, if sample_weight is passed. A worked numeric illustration of this formula appears after this list.
Added in version 0.19.
- Growth threshold (min_impurity_split)
(no description)
- Minimum number of samples for leaf node (min_samples_leaf)
The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.
If int, then consider min_samples_leaf as the minimum number.
If float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) are the minimum number of samples for each node.
Changed in version 0.18: Added float values for fractions.
- Minimum samples for split (min_samples_split)
The minimum number of samples required to split an internal node:
If int, then consider min_samples_split as the minimum number.
If float, then min_samples_split is a fraction and ceil(min_samples_split * n_samples) are the minimum number of samples for each split.
Changed in version 0.18: Added float values for fractions.
- Minimum leaf weight fraction (min_weight_fraction_leaf)
The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.
- Trees in forest (n_estimators)
The number of trees in the forest.
Changed in version 0.22: The default value of n_estimators changed from 10 to 100.
- Number of jobs (n_jobs)
The number of jobs to run in parallel.
fit(), predict(), decision_path() and apply() are all parallelized over the trees. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
- Use out-of-bag samples (oob_score)
Whether to use out-of-bag samples to estimate the generalization score. By default, accuracy_score() is used. Provide a callable with signature metric(y_true, y_pred) to use a custom metric. Only available if bootstrap=True. A configuration sketch using this option appears after this list.
- Random Seed (random_state)
Controls both the randomness of the bootstrapping of the samples used when building trees (if bootstrap=True) and the sampling of the features to consider when looking for the best split at each node (if max_features < n_features). See random_state for details.
- Warm start (warm_start)
When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble, otherwise, just fit a whole new forest. See warm_start and tree_ensemble_warm_start for details.
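To make the min_impurity_decrease formula above concrete, here is a small worked example; the sample counts and impurity values are made up purely for illustration:

    # Hypothetical numbers, chosen only to illustrate the weighted impurity decrease formula.
    N = 100                # total number of (weighted) samples
    N_t = 40               # samples at the current node
    N_t_L = 25             # samples in the left child
    N_t_R = 15             # samples in the right child
    impurity = 0.48        # impurity of the current node
    left_impurity = 0.30   # impurity of the left child
    right_impurity = 0.20  # impurity of the right child

    decrease = N_t / N * (impurity
                          - N_t_R / N_t * right_impurity
                          - N_t_L / N_t * left_impurity)
    print(decrease)  # ~0.087; the node is split only if this is >= min_impurity_decrease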
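The following sketch shows one way the configuration options above map onto scikit-learn's RandomForestClassifier constructor (assuming scikit-learn is installed; every value is an arbitrary example, not a recommendation), including the out-of-bag score mentioned under oob_score:

    # Sketch of configuring the options described above (assumes scikit-learn;
    # all values are arbitrary examples, not recommendations).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    clf = RandomForestClassifier(
        n_estimators=200,           # Trees in forest
        criterion="gini",           # Split quality criterion
        max_depth=None,             # Maximum tree depth (expand until leaves are pure)
        max_features="sqrt",        # Maximum number of features per split
        min_samples_split=2,        # Minimum samples for split
        min_samples_leaf=1,         # Minimum number of samples for leaf node
        min_impurity_decrease=0.0,  # Minimum impurity decrease
        bootstrap=True,             # Bootstrap samples
        oob_score=True,             # Use out-of-bag samples for the generalization estimate
        n_jobs=-1,                  # Number of jobs: use all processors
        random_state=0,             # Random seed
    )
    clf.fit(X, y)
    print(clf.oob_score_)           # out-of-bag generalization estimate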
Implementation¶
- class node_RandomForestClassifier.RandomForestClassifier