Direct methods: consist of optimizing a criterion, such as the within-cluster sum of squares or the average silhouette; the corresponding methods are named the elbow and silhouette methods, respectively. Statistical testing methods: consist of comparing evidence against a null hypothesis, as the gap statistic does.

Common methods

The elbow method: The method consists of plotting the explained variation as a function of the number of clusters and picking the elbow of the curve as the number of clusters to use.
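The elbow curve can be sketched with a toy implementation. Everything below — the minimal Lloyd's-algorithm `kmeans`, the `wcss` helper, and the synthetic three-blob dataset — is illustrative, not a library API:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def kmeans(points, k, iters=50, seed=0):
    """Minimal Lloyd's algorithm; returns (centers, labels)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center.
        labels = [min(range(k), key=lambda j: dist2(p, centers[j]))
                  for p in points]
        # Update step: move each center to the mean of its members.
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return centers, labels

def wcss(points, k, restarts=30):
    """Best within-cluster sum of squares over several random restarts."""
    best = float("inf")
    for seed in range(restarts):
        centers, labels = kmeans(points, k, seed=seed)
        best = min(best, sum(dist2(p, centers[l])
                             for p, l in zip(points, labels)))
    return best

# Three well-separated blobs of 9 points each.
points = [(cx + dx, cy + dy)
          for cx, cy in [(0, 0), (10, 0), (5, 9)]
          for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
curve = {k: wcss(points, k) for k in range(1, 6)}
# The drop in WCSS is large up to k = 3, then flattens: that bend
# ("elbow") suggests using three clusters.
```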

The optimization of the silhouette coefficient: run a parameter-optimization loop that retrains k-means with a different k at each iteration and keeps the k with the highest average silhouette coefficient.
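The criterion being optimized can be sketched in a few lines; `mean_silhouette` and the toy dataset below are illustrative assumptions, not a library API:

```python
import math

def mean_silhouette(points, labels):
    """Average of s(i) = (b - a) / max(a, b), where a is the mean distance
    from point i to its own cluster and b the mean distance to the
    nearest other cluster."""
    clusters = set(labels)
    total = 0.0
    for i, p in enumerate(points):
        own = [math.dist(p, q) for j, q in enumerate(points)
               if labels[j] == labels[i] and j != i]
        a = sum(own) / len(own)
        b = min(sum(math.dist(p, q) for j, q in enumerate(points)
                    if labels[j] == c) / labels.count(c)
                for c in clusters if c != labels[i])
        total += (b - a) / max(a, b)
    return total / len(points)

# Nine points in three tight groups along a line.
points = [(x, 0.0) for x in (0, 0.1, 0.2, 5, 5.1, 5.2, 10, 10.1, 10.2)]
three = [0, 0, 0, 1, 1, 1, 2, 2, 2]   # the natural k = 3 partition
two = [0, 0, 0, 0, 0, 0, 1, 1, 1]     # forces the first two groups together
# The k = 3 labeling scores higher, so the loop would select k = 3.
```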

The gap statistic: compare the total within-cluster dispersion to its expectation under an appropriate null reference distribution, and choose the k for which the observed dispersion falls farthest below that expectation (Tibshirani et al., 2001).
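A simplified sketch of this idea follows. It uses a toy 1-D `kmeans_w`, a uniform reference over the data's range, and simply picks the k with the largest gap rather than applying Tibshirani et al.'s one-standard-error rule; all names and data are illustrative:

```python
import math
import random

def kmeans_w(xs, k, restarts=20, iters=25):
    """Best (smallest) within-cluster sum of squares for 1-D k-means."""
    best = float("inf")
    for seed in range(restarts):
        centers = random.Random(seed).sample(xs, k)
        for _ in range(iters):
            labels = [min(range(k), key=lambda j: (x - centers[j]) ** 2)
                      for x in xs]
            for j in range(k):
                m = [x for x, l in zip(xs, labels) if l == j]
                if m:
                    centers[j] = sum(m) / len(m)
        best = min(best, sum((x - centers[l]) ** 2
                             for x, l in zip(xs, labels)))
    return best

def gap(xs, k, B=20):
    """Gap(k): expected log(W_k) under a uniform reference minus log(W_k)."""
    rng = random.Random(42)          # same reference draws for every k
    lo, hi = min(xs), max(xs)
    ref = [math.log(kmeans_w([rng.uniform(lo, hi) for _ in xs], k))
           for _ in range(B)]
    return sum(ref) / B - math.log(kmeans_w(xs, k))

xs = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2, 10.0, 10.1, 10.2]  # three tight groups
gaps = {k: gap(xs, k) for k in (2, 3, 4)}
# The k with the largest gap is the estimated number of clusters.
```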

K-means centroids are the cluster means, and outliers bias the mean, dragging centroids toward them. To mitigate this, we can remove outliers before applying k-means clustering.
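One robust way to do such pre-filtering is a median-absolute-deviation cutoff; `mad_filter` and its data below are an illustrative sketch, not a standard API:

```python
def mad_filter(xs, thresh=3.5):
    """Drop values whose modified z-score, based on the median absolute
    deviation (MAD), exceeds thresh. Robust: unlike mean/std, the median
    is not itself dragged by the outliers we are trying to detect.
    Assumes MAD > 0 (i.e., not all values are identical)."""
    med = sorted(xs)[len(xs) // 2]
    dev = [abs(x - med) for x in xs]
    mad = sorted(dev)[len(dev) // 2]
    return [x for x, d in zip(xs, dev) if 0.6745 * d / mad <= thresh]

xs = [1.0, 1.2, 0.9, 1.1, 50.0]   # 50.0 drags the mean up to ~10.8
cleaned = mad_filter(xs)          # the outlier is removed before k-means
```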

The algorithm can be stopped when the cluster centers no longer change significantly between iterations. This can be measured using a distance metric, such as the Euclidean distance, between the old and new cluster centers.
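A minimal sketch of this stopping rule, assuming a toy 1-D k-means (`kmeans_1d` and its data are illustrative):

```python
import random

def kmeans_1d(xs, k, tol=1e-6, seed=1):
    """1-D k-means that stops once no center moves more than tol."""
    centers = random.Random(seed).sample(xs, k)
    iterations = 0
    while True:
        iterations += 1
        # Assign each point to its nearest center.
        labels = [min(range(k), key=lambda j: abs(x - centers[j]))
                  for x in xs]
        # Recompute centers as cluster means.
        new = []
        for j in range(k):
            members = [x for x, l in zip(xs, labels) if l == j]
            new.append(sum(members) / len(members) if members else centers[j])
        # Largest distance any center moved this iteration.
        shift = max(abs(c, ) if False else abs(c - n) for c, n in zip(centers, new))
        centers = new
        if shift <= tol:              # centers stopped moving: converged
            break
    return centers, iterations

xs = [0.0, 0.2, 0.4, 9.0, 9.2, 9.4]   # two obvious groups
centers, iterations = kmeans_1d(xs, k=2)
```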

K-means clustering is suitable when the clusters are centroid-based, i.e., compact groups that form around a center. When they are not, algorithms such as hierarchical clustering or DBSCAN are better suited. Here is an example: suppose we have a dataset of crime incidents in a city, and we want to identify areas where crimes are more likely to occur. The dataset contains the location of each incident, the type of crime, and the time and date of the incident. In this situation, DBSCAN would be better suited than k-means for two reasons.

First, DBSCAN is a density-based clustering algorithm, so it identifies clusters as regions of higher density in the data. For crime incidents, we are more interested in areas with a higher concentration of crime than in areas that merely share similar characteristics.

Second, DBSCAN does not require the user to specify the number of clusters in advance, unlike k-means, where the user must choose the number of clusters before running the algorithm. With crime incidents, it may be difficult to know in advance how many clusters there are or what size they should be.
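A minimal DBSCAN sketch illustrates both points: it finds the dense regions on its own and flags an isolated point as noise, with no k supplied. The `dbscan` function and the toy points below are illustrative, not a library API:

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN; returns one label per point: a cluster id, or -1
    for noise."""
    labels = [None] * len(points)

    def neighbors(i):
        px, py = points[i]
        return [j for j, (qx, qy) in enumerate(points)
                if (px - qx) ** 2 + (py - qy) ** 2 <= eps * eps]

    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nb = neighbors(i)
        if len(nb) < min_pts:
            labels[i] = -1            # noise (may later become a border point)
            continue
        labels[i] = cid               # i is a core point: start a cluster
        queue = [j for j in nb if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid       # border point reached from a core point
                continue
            if labels[j] is not None:
                continue
            labels[j] = cid
            nb_j = neighbors(j)
            if len(nb_j) >= min_pts:  # j is itself a core point: expand
                queue.extend(m for m in nb_j
                             if labels[m] is None or labels[m] == -1)
        cid += 1
    return labels

pts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5),   # dense region A
       (8, 8), (8, 8.5), (8.5, 8), (8.5, 8.5),   # dense region B
       (4, 4)]                                   # isolated incident
labels = dbscan(pts, eps=1.0, min_pts=3)
# Two clusters are found automatically; the isolated point is noise (-1).
```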

Hard clustering: one data point can belong to exactly one cluster (e.g., k-means). Soft clustering: one data point can belong to multiple clusters, with a degree of membership in each. We should use soft clustering when we want to know how similar an item is to each of a number of given groups.
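Soft membership degrees can be sketched with the fuzzy c-means membership formula; `soft_memberships` below is an illustrative toy for 1-D points with fixed centers, not a library API:

```python
def soft_memberships(x, centers, m=2.0):
    """Fuzzy c-means style membership degrees of a 1-D point x in each
    cluster; the degrees are positive and sum to 1. m > 1 is the
    fuzzifier (m = 2 is the common default)."""
    d = [abs(x - c) for c in centers]
    if any(di == 0 for di in d):       # x sits exactly on a center
        return [1.0 if di == 0 else 0.0 for di in d]
    w = [1.0 / di ** (2 / (m - 1)) for di in d]
    s = sum(w)
    return [wi / s for wi in w]

u = soft_memberships(2.0, [0.0, 10.0])
# u[0] is large but u[1] is not zero: the point belongs mostly, not
# exclusively, to the first cluster. A hard assignment would discard
# that second-cluster similarity entirely.
```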