Random Forest Classifier¶
Some of the docstrings for this module have been automatically extracted from the scikit-learn library and are covered by their respective licenses.
class node_RandomForestClassifier.RandomForestClassifier[source]¶
- A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The sub-sample size is always the same as the original input sample size, but the samples are drawn with replacement if bootstrap is True (default). A minimal usage sketch follows the configuration list.
- Configuration:
- n_estimators - The number of trees in the forest.
- criterion - The function to measure the quality of a split. Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain. Note: this parameter is tree-specific. 
- bootstrap - Whether bootstrap samples are used when building trees. 
- oob_score - Whether to use out-of-bag samples to estimate the generalization accuracy. 
- n_jobs - The number of jobs to run in parallel for both fit and predict. If -1, then the number of jobs is set to the number of cores. 
- max_features - The number of features to consider when looking for the best split:
  - If int, then consider max_features features at each split.
  - If float, then max_features is a percentage and int(max_features * n_features) features are considered at each split.
  - If “auto”, then max_features=sqrt(n_features).
  - If “sqrt”, then max_features=sqrt(n_features) (same as “auto”).
  - If “log2”, then max_features=log2(n_features).
  - If None, then max_features=n_features.
  - Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires effectively inspecting more than max_features features.
- max_depth - The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain fewer than min_samples_split samples.
- min_samples_split - The minimum number of samples required to split an internal node:
  - If int, then consider min_samples_split as the minimum number.
  - If float, then min_samples_split is a percentage and ceil(min_samples_split * n_samples) is the minimum number of samples for each split.
  - Changed in version 0.18: Added float values for percentages.
- min_samples_leaf - The minimum number of samples required to be at a leaf node:
  - If int, then consider min_samples_leaf as the minimum number.
  - If float, then min_samples_leaf is a percentage and ceil(min_samples_leaf * n_samples) is the minimum number of samples for each node.
  - Changed in version 0.18: Added float values for percentages.
- min_weight_fraction_leaf - The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided. 
- max_leaf_nodes - Grow trees with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. If None, then unlimited number of leaf nodes.
- min_impurity_decrease - A node will be split if this split induces a decrease of the impurity greater than or equal to this value. The weighted impurity decrease equation is the following:
  N_t / N * (impurity - N_t_R / N_t * right_impurity - N_t_L / N_t * left_impurity)
  where N is the total number of samples, N_t is the number of samples at the current node, N_t_L is the number of samples in the left child, and N_t_R is the number of samples in the right child. N, N_t, N_t_R and N_t_L all refer to the weighted sum if sample_weight is passed. For example, with N=100, splitting a node that has N_t=40 and impurity 0.5 into children with N_t_L=30 (impurity 0.3) and N_t_R=10 (impurity 0.0) yields a decrease of 40/100 * (0.5 - 10/40 * 0.0 - 30/40 * 0.3) = 0.11. New in version 0.19.
- min_impurity_split - Threshold for early stopping in tree growth. A node will split if its impurity is above the threshold, otherwise it is a leaf. Deprecated since version 0.19: min_impurity_split has been deprecated in favor of min_impurity_decrease in 0.19 and will be removed in 0.21. Use min_impurity_decrease instead.
- random_state - If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
- warm_start - When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble; otherwise, just fit a whole new forest.
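A minimal usage sketch of the options above, assuming (as the extracted docstrings suggest) that this node wraps sklearn.ensemble.RandomForestClassifier; the toy dataset and parameter values are illustrative, not recommended defaults:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for real training data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

clf = RandomForestClassifier(
    n_estimators=100,     # number of trees in the forest
    criterion="gini",     # split quality measure
    max_features="sqrt",  # consider sqrt(n_features) features per split
    max_depth=None,       # grow until leaves are pure
    min_samples_leaf=1,   # minimum samples required at a leaf
    bootstrap=True,       # draw sub-samples with replacement
    oob_score=True,       # estimate generalization from out-of-bag samples
    n_jobs=-1,            # use all cores for fit and predict
    random_state=0,       # seed for reproducibility
)
clf.fit(X, y)
print(clf.oob_score_)  # out-of-bag accuracy estimate
```

With bootstrap=True and oob_score=True, each tree trains on a bootstrap sample and the rows it never saw provide a built-in validation estimate, with no separate hold-out split.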
- Attributes (a short inspection sketch follows this list):
- classes_ - The class labels (single output problem), or a list of arrays of class labels (multi-output problem).
- feature_importances_ - The feature importances (the higher, the more important the feature). 
- n_classes_ - The number of classes (single output problem), or a list containing the number of classes for each output (multi-output problem). 
- n_features_ - The number of features when fit is performed.
- n_outputs_ - The number of outputs when fit is performed.
- oob_score_ - Score of the training dataset obtained using an out-of-bag estimate. 
- oob_decision_function_ - Decision function computed with out-of-bag estimate on the training set. If n_estimators is small it might be possible that a data point was never left out during the bootstrap. In this case, oob_decision_function_ might contain NaN. 
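Continuing the sketch above, the fitted attributes can be read directly off the estimator. Note that n_features_ matches the scikit-learn version these docstrings were extracted from; recent releases renamed it to n_features_in_:

```python
import numpy as np

print(clf.classes_)              # class labels (single-output problem)
print(clf.n_classes_)            # number of classes
print(clf.n_features_)           # features seen during fit (n_features_in_ in newer scikit-learn)
print(clf.n_outputs_)            # number of outputs
print(clf.feature_importances_)  # higher value = more important feature

# Out-of-bag results exist only because oob_score=True was set above.
print(clf.oob_score_)                              # OOB accuracy estimate
print(np.isnan(clf.oob_decision_function_).any())  # True if some sample was never out-of-bag
```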
- Inputs:
- Outputs:
  - model : model - Model
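The node's output is the trained model; downstream it behaves like any fitted scikit-learn classifier. A brief continuation of the sketch, with X[:5] standing in for unseen data:

```python
X_new = X[:5]                    # stand-in for unseen samples
print(clf.predict(X_new))        # majority-vote class labels
print(clf.predict_proba(X_new))  # per-class probabilities averaged over trees
```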