Multi-Layer Perceptron Classifier

[Figure: neural network diagram (neuralnetwork.svg)]

Multi-layer perceptron classifier

Documentation

Attributes

classes_

Class labels for each output.

coefs_

The ith element in the list represents the weight matrix corresponding to layer i.

intercepts_

The ith element in the list represents the bias vector corresponding to layer i + 1.

loss_

The current loss computed with the loss function.

n_iter_

The number of iterations the solver has run.

n_layers_

Number of layers.

n_outputs_

Number of outputs.

out_activation_

Name of the output activation function.
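
The attributes above mirror scikit-learn's MLPClassifier, which this node wraps. A minimal sketch inspecting them on a fitted estimator, using the library directly (the bundled iris dataset stands in for real input; the assumption is that the node exposes the same fitted estimator):

    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPClassifier

    X, y = load_iris(return_X_y=True)
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
    clf.fit(X, y)

    print(clf.classes_)         # class labels, e.g. [0 1 2]
    print(clf.coefs_[0].shape)  # weight matrix of layer 0: (n_features, 10)
    print(clf.n_layers_)        # 3: input + one hidden + output
    print(clf.out_activation_)  # 'softmax' for multiclass targets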

Definition

Output ports

model

Model

Configuration

Activation Function (activation)

Activation function for the hidden layer. Each option is illustrated in the sketch after this list.

  • ‘identity’, no-op activation, useful to implement a linear bottleneck, returns f(x) = x.

  • ‘logistic’, the logistic sigmoid function, returns f(x) = 1 / (1 + exp(-x)).

  • ‘tanh’, the hyperbolic tan function, returns f(x) = tanh(x).

  • ‘relu’, the rectified linear unit function, returns f(x) = max(0, x).
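
For reference, the four options written out in NumPy (a sketch, not the node's internal implementation):

    import numpy as np

    def identity(x):
        return x                          # f(x) = x

    def logistic(x):
        return 1.0 / (1.0 + np.exp(-x))   # sigmoid, output in (0, 1)

    def tanh(x):
        return np.tanh(x)                 # output in (-1, 1)

    def relu(x):
        return np.maximum(0.0, x)         # negative inputs clamped to zero

    x = np.array([-2.0, 0.0, 2.0])
    print(relu(x))                        # [0. 0. 2.]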

Alpha (alpha)

Strength of the L2 regularization term. The L2 regularization term is divided by the sample size when added to the loss.
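
Concretely, the penalty added to the loss looks like the following sketch (assuming scikit-learn's convention of a 1/2 factor on the sum of squared weights):

    import numpy as np

    def l2_penalty(coefs, alpha, n_samples):
        # sum of squared weights across all layers, scaled by alpha
        # and divided by the sample size as described above
        squared = sum(np.sum(W ** 2) for W in coefs)
        return 0.5 * alpha * squared / n_samples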

Batch size (batch_size)

Size of minibatches for stochastic optimizers. If the solver is ‘lbfgs’, the classifier will not use minibatches. When set to ‘auto’, batch_size=min(200, n_samples).

First moment vector decay rate (beta_1)

Exponential decay rate for estimates of first moment vector in adam, should be in [0, 1). Only used when solver=’adam’.

Second moment vector decay rate (beta_2)

Exponential decay rate for estimates of second moment vector in adam, should be in [0, 1). Only used when solver=’adam’.

Use early stopping (early_stopping)

Whether to use early stopping to terminate training when the validation score is not improving. If set to True, 10% of the training data is automatically set aside as a validation set, and training terminates when the validation score is not improving by at least tol for n_iter_no_change consecutive epochs. The split is stratified, except in a multilabel setting. If early stopping is False, training stops when the training loss does not improve by more than tol for n_iter_no_change consecutive passes over the training set. Only effective when solver=’sgd’ or ‘adam’.
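
A sketch of enabling early stopping together with the related parameters described further down (validation_fraction, n_iter_no_change, tol):

    from sklearn.neural_network import MLPClassifier

    clf = MLPClassifier(
        solver='adam',
        early_stopping=True,      # hold out part of the training data
        validation_fraction=0.1,  # 10% used as the validation set
        n_iter_no_change=10,      # stop after 10 stagnant epochs
        tol=1e-4,                 # minimum improvement that counts
        max_iter=1000,
    )
    # clf.fit(X, y) now stops as soon as the validation score stalls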

Numerical stability (epsilon)

Value for numerical stability in adam. Only used when solver=’adam’.
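
To make the roles of beta_1, beta_2 and epsilon concrete, a single Adam update can be sketched in NumPy (simplified from Kingma & Ba; not the node's internal code):

    import numpy as np

    def adam_step(w, grad, m, v, t,
                  lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-8):
        m = beta_1 * m + (1 - beta_1) * grad        # first moment estimate
        v = beta_2 * v + (1 - beta_2) * grad ** 2   # second moment estimate
        m_hat = m / (1 - beta_1 ** t)               # bias correction
        v_hat = v / (1 - beta_2 ** t)
        # epsilon keeps the denominator away from zero
        w = w - lr * m_hat / (np.sqrt(v_hat) + epsilon)
        return w, m, v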

Hidden layer number and sizes (hidden_layer_sizes)

The ith element represents the number of neurons in the ith hidden layer.
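
For example, a network with two hidden layers of 100 and 50 neurons:

    from sklearn.neural_network import MLPClassifier

    # two hidden layers: 100 neurons, then 50
    clf = MLPClassifier(hidden_layer_sizes=(100, 50))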

Learning rate (learning_rate)

Learning rate schedule for weight updates.

  • ‘constant’ is a constant learning rate given by ‘learning_rate_init’.

  • ‘invscaling’ gradually decreases the learning rate at each time step ‘t’ using an inverse scaling exponent of ‘power_t’: effective_learning_rate = learning_rate_init / pow(t, power_t).

  • ‘adaptive’ keeps the learning rate constant to ‘learning_rate_init’ as long as training loss keeps decreasing. Each time two consecutive epochs fail to decrease training loss by at least tol, or fail to increase validation score by at least tol if ‘early_stopping’ is on, the current learning rate is divided by 5.

Only used when solver=’sgd’.
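
The ‘invscaling’ schedule from the list above can be written out directly (power_t is described below; t is the step counter):

    def effective_learning_rate(learning_rate_init, t, power_t=0.5):
        # inverse-scaling schedule used when learning_rate='invscaling'
        return learning_rate_init / pow(t, power_t)

    print(effective_learning_rate(0.001, t=100))  # 0.001 / 100**0.5 = 1e-4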

Initial learning rate (learning_rate_init)

The initial learning rate used. It controls the step-size in updating the weights. Only used when solver=’sgd’ or ‘adam’.

Maximum iterations (max_iter)

Maximum number of iterations. The solver iterates until convergence (determined by ‘tol’) or this number of iterations. For stochastic solvers (‘sgd’, ‘adam’), note that this determines the number of epochs (how many times each data point will be used), not the number of gradient steps.

Momentum (momentum)

Momentum for gradient descent update. Should be between 0 and 1. Only used when solver=’sgd’.

Max iterations without loss improvement (n_iter_no_change)

Maximum number of consecutive epochs allowed without at least tol improvement. Only effective when solver=’sgd’ or ‘adam’.

New in version 0.20.

Use Nesterov’s momentum (nesterovs_momentum)

Whether to use Nesterov’s momentum. Only used when solver=’sgd’ and momentum > 0.
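
A simplified NumPy sketch of the momentum update, showing how nesterovs_momentum changes the step (not the node's internal code):

    import numpy as np

    def momentum_step(w, grad, velocity, lr=0.1, momentum=0.9, nesterov=True):
        # accumulate a decaying sum of past gradient steps
        velocity = momentum * velocity - lr * grad
        if nesterov:
            # Nesterov: re-apply the momentum correction on top of the velocity
            w = w + momentum * velocity - lr * grad
        else:
            w = w + velocity
        return w, velocity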

Inverse scaling learning rate exponent (power_t)

The exponent for inverse scaling learning rate. It is used in updating effective learning rate when the learning_rate is set to ‘invscaling’. Only used when solver=’sgd’.

Random seed (random_state)

Determines random number generation for weight and bias initialization, the train-test split if early stopping is used, and batch sampling when solver=’sgd’ or ‘adam’. Pass an int for reproducible results across multiple function calls.
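
Passing the same integer seed makes repeated runs identical, as in this quick sketch:

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPClassifier

    X, y = load_iris(return_X_y=True)
    a = MLPClassifier(random_state=42, max_iter=300).fit(X, y)
    b = MLPClassifier(random_state=42, max_iter=300).fit(X, y)
    print(np.allclose(a.coefs_[0], b.coefs_[0]))  # True: same init, same batches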

Shuffle samples (shuffle)

Whether to shuffle samples in each iteration. Only used when solver=’sgd’ or ‘adam’.

Solver (solver)

The solver for weight optimization.

  • ‘lbfgs’ is an optimizer in the family of quasi-Newton methods.

  • ‘sgd’ refers to stochastic gradient descent.

  • ‘adam’ refers to a stochastic gradient-based optimizer proposed by Diederik Kingma and Jimmy Ba.

Note: The default solver ‘adam’ works pretty well on relatively large datasets (with thousands of training samples or more) in terms of both training time and validation score. For small datasets, however, ‘lbfgs’ can converge faster and perform better.
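
A sketch that compares the two recommendations on a small dataset:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    X, y = load_iris(return_X_y=True)
    for solver in ('lbfgs', 'adam'):
        clf = MLPClassifier(solver=solver, max_iter=1000, random_state=0)
        scores = cross_val_score(clf, X, y, cv=5)
        print(solver, scores.mean())  # 'lbfgs' often wins on small data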

Tolerance (tol)

Tolerance for the optimization. When the loss or score is not improving by at least tol for n_iter_no_change consecutive iterations, unless learning_rate is set to ‘adaptive’, convergence is considered to be reached and training stops.

Validation fraction (validation_fraction)

The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True.

Warm start (warm_start)

When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See warm_start.
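
A sketch of incremental training with warm_start, where each fit call continues from the previous weights:

    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPClassifier

    X, y = load_iris(return_X_y=True)
    clf = MLPClassifier(warm_start=True, max_iter=50, random_state=0)
    for _ in range(4):
        clf.fit(X, y)      # continues training instead of re-initializing
        print(clf.loss_)   # the loss keeps decreasing across calls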

Implementation

class node_MLPClassifier.MLPClassifier