K-means Clustering¶
Clusters data by trying to separate samples into n groups of equal variance.
Configuration:
n_clusters
The number of clusters to form as well as the number of centroids to generate.
max_iter
Maximum number of iterations of the k-means algorithm for a single run.
n_init
Number of times the k-means algorithm will be run with different centroid seeds. The final result will be the best output of n_init consecutive runs in terms of inertia.
init
Method for initialization, defaults to ‘k-means++’:
‘k-means++’ : selects initial cluster centers for k-means clustering in a smart way to speed up convergence. See section Notes in k_init for more details.
‘random’: choose k observations (rows) at random from data for the initial centroids.
If an ndarray is passed, it should be of shape (n_clusters, n_features) and gives the initial centers.
algorithm
K-means algorithm to use. The classical EM-style algorithm is “full”. The “elkan” variation is more efficient by using the triangle inequality, but currently doesn’t support sparse data. “auto” chooses “elkan” for dense data and “full” for sparse data.
precompute_distances
Precompute distances (faster but takes more memory).
‘auto’ : do not precompute distances if n_samples * n_clusters > 12 million. This corresponds to about 100MB overhead per job using double precision.
True : always precompute distances
False : never precompute distances
tol
Relative tolerance with regard to inertia to declare convergence.
n_jobs
The number of jobs to use for the computation. This works by computing each of the n_init runs in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See n_jobs for more details.
random_state
Determines random number generation for centroid initialization. Use an int to make the randomness deterministic. See random_state.
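As a quick illustration, the configuration options above map directly onto scikit-learn's `sklearn.cluster.KMeans` estimator, from which these docstrings were extracted. A minimal sketch (the toy data is invented for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

# Small made-up dataset with two visually separated groups.
X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0],
              [8.0, 8.0], [1.0, 0.6], [9.0, 11.0]])

km = KMeans(
    n_clusters=2,      # number of clusters and centroids to form
    init="k-means++",  # smart seeding; "random" or an ndarray also work
    n_init=10,         # keep the best of 10 seeded runs, by inertia
    max_iter=300,      # cap on iterations for a single run
    tol=1e-4,          # relative inertia tolerance for convergence
    random_state=0,    # deterministic centroid initialization
)
km.fit(X)
print(km.labels_)  # one cluster index per sample
```

Note that `precompute_distances` and `n_jobs` are omitted here, as recent scikit-learn releases have deprecated them.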
Attributes:
cluster_centers_
Coordinates of cluster centers. If the algorithm stops before fully converging (see tol and max_iter), these will not be consistent with labels_.
labels_
Labels of each point
inertia_
Sum of squared distances of samples to their closest cluster center.
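After fitting, the attributes listed above are available on the fitted estimator. A short sketch, assuming scikit-learn and a made-up two-blob dataset:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two tight, well-separated groups of points.
X = np.array([[0.0, 0.0], [0.2, 0.1], [10.0, 10.0], [10.1, 9.9]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(km.cluster_centers_)  # coordinates of the two centers, shape (2, 2)
print(km.labels_)           # cluster index for each of the 4 points
print(km.inertia_)          # sum of squared distances to closest center
```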
Input ports:
Output ports:
- model : model
- Model
Configuration:
- n_clusters (n_clusters)
- The number of clusters to form as well as the number of centroids to generate.
- max_iter (max_iter)
- Maximum number of iterations of the k-means algorithm for a single run.
- n_init (n_init)
- Number of times the k-means algorithm will be run with different centroid seeds. The final result will be the best output of n_init consecutive runs in terms of inertia.
- init (init)
- Method for initialization, defaults to ‘k-means++’:
‘k-means++’ : selects initial cluster centers for k-means clustering in a smart way to speed up convergence. See section Notes in k_init for more details.
‘random’: choose k observations (rows) at random from data for the initial centroids.
If an ndarray is passed, it should be of shape (n_clusters, n_features) and gives the initial centers.
- algorithm (algorithm)
- K-means algorithm to use. The classical EM-style algorithm is “full”. The “elkan” variation is more efficient by using the triangle inequality, but currently doesn’t support sparse data. “auto” chooses “elkan” for dense data and “full” for sparse data.
- precompute_distances (precompute_distances)
- Precompute distances (faster but takes more memory).
‘auto’ : do not precompute distances if n_samples * n_clusters > 12 million. This corresponds to about 100MB overhead per job using double precision.
True : always precompute distances
False : never precompute distances
- tol (tol)
- Relative tolerance with regard to inertia to declare convergence.
- n_jobs (n_jobs)
- The number of jobs to use for the computation. This works by computing each of the n_init runs in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See n_jobs for more details.
- random_state (random_state)
- Determines random number generation for centroid initialization. Use an int to make the randomness deterministic. See random_state.
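The ndarray form of init mentioned above can be sketched as follows: explicit initial centers of shape (n_clusters, n_features) are passed instead of a seeding strategy (scikit-learn assumed; the data and centers are invented for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])  # shape (n_clusters, n_features)

# With explicit centers there is nothing to reseed, so n_init=1.
km = KMeans(n_clusters=2, init=centers, n_init=1, max_iter=10).fit(X)
print(km.cluster_centers_)
```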
Some of the docstrings for this module have been automatically extracted from the scikit-learn library and are covered by their respective licenses.