# Stop Thinking, Just Do!

Sung-Soo Kim's Blog

# 1 Introduction

• What is Clustering?
• Clustering is the classification of data objects into similarity groups (clusters) according to a defined distance measure.
• It is used in many fields, such as machine learning, data mining, pattern recognition, image analysis, genomics, systems biology, etc.
• Machine learning typically regards data clustering as a form of unsupervised learning.
• Why Clustering and Data Mining in R?
• Efficient data structures and functions for clustering
• Reproducible and programmable
• Comprehensive set of clustering and machine learning libraries
• Integration with many other data analysis tools

# 2 Data Preprocessing

## 2.1 Data Transformations

Choice depends on data set!

• Center and standardize
1. Center: subtract from each value the mean of the corresponding vector
2. Standardize: divide by the standard deviation
• Result: Mean = 0 and STDEV = 1
• Center and scale with the ‘scale()’ function (a minimal sketch follows this list)
1. Center: subtract from each value the mean of the corresponding vector
2. Scale: divide the centered vector by its root mean square (rms):

$x_{rms} = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}{x_{i}{^2}}}$

• Result: Mean = 0 and STDEV = 1
• Log transformation
• Rank transformation: replace measured values by ranks
• No transformation
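
A minimal sketch of the centering, standardizing, and rank transformations above, using only base R (the exercises in Section 4 apply ‘scale()’ row-wise via transposition; here it is applied column-wise):

## Center and standardize the columns of a random matrix
m <- matrix(rnorm(40), 10, 4)
ms <- scale(m, center=TRUE, scale=TRUE) # subtract column means, divide by column rms
round(colMeans(ms), 10) # all ~0 after centering
apply(ms, 2, sd) # all 1 after standardizing
## Rank transformation: replace measured values by ranks (column-wise)
mr <- apply(m, 2, rank)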

## 2.2 Distance Methods

List of most common ones!

• Euclidean distance for two profiles X and Y:  $d(X,Y) = \sqrt{\sum_{i=1}^{n}{(x_i - y_i)^2}}$
• Disadvantages: not scale invariant, not for negative correlations
• Maximum, Manhattan, Canberra, binary, Minkowski, …
• Correlation-based distance: 1-r
• Pearson correlation coefficient (PCC):  $r = \frac{n\sum_{i=1}^{n}{x_i y_i} - \sum_{i=1}^{n}{x_i} \sum_{i=1}^{n}{y_i}}{\sqrt{\left(n\sum_{i=1}^{n}{x_i^2} - \left(\sum_{i=1}^{n}{x_i}\right)^2\right)\left(n\sum_{i=1}^{n}{y_i^2} - \left(\sum_{i=1}^{n}{y_i}\right)^2\right)}}$
• Disadvantage: outlier sensitive
• Spearman correlation coefficient (SCC)
• Same calculation as PCC but with ranked values!

There are many more distance measures.

• If the distances among items are quantifiable, then clustering is possible.
• Choose the most accurate and meaningful distance measure for a given field of application.
• If uncertain, choose several distance measures and compare the results, as in the sketch below.
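
A minimal sketch computing a few of the measures above on the same random data; the last line illustrates that the Spearman coefficient is simply the PCC applied to ranked values:

y <- matrix(rnorm(25), 5, 5)
d_euc <- dist(y, method="euclidean") # scale-dependent
d_man <- dist(y, method="manhattan") # alternative built-in method of dist()
d_cor <- as.dist(1 - cor(t(y), method="pearson")) # correlation-based distance: 1-r
## SCC equals PCC computed on ranks
all.equal(cor(y[1,], y[2,], method="spearman"), cor(rank(y[1,]), rank(y[2,])))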

# 3 Clustering Algorithms

## 3.1 Hierarchical Clustering

### 3.1.1 Overview of algorithm

1. Identify the two clusters (items) with the closest distance
2. Join them into a new cluster
3. Compute the distance of the new cluster to the remaining clusters (items), then repeat steps 1-3 until all items are joined in a single tree (a minimal sketch follows)
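
A minimal sketch of this loop using base R’s ‘hclust()’, whose output records each join:

y <- matrix(rnorm(20), 5, 4)
hr <- hclust(dist(y)) # runs the join loop until one tree remains
hr$merge # row i: the two clusters/items joined at step i (negative = original item)
hr$height # distance at which each join occurred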

#### 3.1.1.1 Hierarchical Clustering with Heatmap

• A heatmap is a color-coded table. To visually identify patterns, the rows and columns of a heatmap are often sorted by hierarchical clustering trees.
• In the case of gene expression data, the row tree usually represents the genes, the column tree the treatments, and the colors in the heat table represent the intensities or ratios of the underlying gene expression data set.

### 3.1.2 Hierarchical Clustering Approaches

1. Agglomerative approach (bottom-up)
• R functions: ‘hclust()’ and ‘agnes()’
2. Divisive approach (top-down)
• R function: ‘diana()’ (both approaches are sketched below)
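
A minimal sketch running both approaches from the ‘cluster’ package on the same distance matrix (the resulting trees are often similar but need not be identical):

library(cluster)
d <- dist(matrix(rnorm(40), 10, 4))
hr_agg <- agnes(d, method="complete") # bottom-up: start from single items
hr_div <- diana(d) # top-down: start from one all-encompassing cluster
par(mfrow=c(1, 2)); plot(hr_agg, which.plots=2); plot(hr_div, which.plots=2)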

### 3.1.3 Tree Cutting to Obtain Discrete Clusters

1. Node height in tree
2. Number of clusters
3. Search tree nodes by distance cutoff (the first two strategies are sketched below)
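
A minimal sketch of the first two cutting strategies with ‘cutree()’; it assumes ‘hr’ is an existing ‘hclust’ result, as in the examples below:

mycl_h <- cutree(hr, h=max(hr$height)/1.5) # cut at a node-height cutoff
mycl_k <- cutree(hr, k=4) # cut to obtain exactly 4 clusters
table(mycl_h); table(mycl_k) # cluster sizes under each strategy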

### 3.1.4 Examples

#### 3.1.4.1 Using ‘hclust’ and ‘heatmap.2’

library(gplots)
y <- matrix(rnorm(500), 100, 5, dimnames=list(paste("g", 1:100, sep=""), paste("t", 1:5, sep="")))
heatmap.2(y) # Shortcut to final result


#### 3.1.4.2 Stepwise Approach with Tree Cutting

## Row- and column-wise clustering
hr <- hclust(as.dist(1-cor(t(y), method="pearson")), method="complete")
hc <- hclust(as.dist(1-cor(y, method="spearman")), method="complete")
## Tree cutting
mycl <- cutree(hr, h=max(hr$height)/1.5)
mycolhc <- rainbow(length(unique(mycl)), start=0.1, end=0.9)
mycolhc <- mycolhc[as.vector(mycl)]
## Plot heatmap
mycol <- colorpanel(40, "darkblue", "yellow", "white") # or try redgreen(75)
heatmap.2(y, Rowv=as.dendrogram(hr), Colv=as.dendrogram(hc), col=mycol, scale="row", density.info="none", trace="none", RowSideColors=mycolhc)


## 3.2 K-Means Clustering

### 3.2.1 Overview of algorithm

1. Choose the number of clusters, k
2. Randomly assign items to the k clusters
3. Calculate a new centroid for each of the k clusters
4. Calculate the distance of all items to the k centroids
5. Assign items to their closest centroid
6. Repeat until cluster assignments are stable

### 3.2.2 Examples

km <- kmeans(t(scale(t(y))), 3)
km$cluster

##   g1   g2   g3   g4   g5   g6   g7   g8   g9  g10  g11  g12  g13  g14  g15  g16  g17  g18  g19  g20
##    1    1    2    3    1    2    2    2    3    2    2    3    3    3    3    2    3    2    3    3
##  g21  g22  g23  g24  g25  g26  g27  g28  g29  g30  g31  g32  g33  g34  g35  g36  g37  g38  g39  g40
##    3    2    2    3    3    1    3    2    1    2    2    2    1    2    1    3    3    1    3    3
##  g41  g42  g43  g44  g45  g46  g47  g48  g49  g50  g51  g52  g53  g54  g55  g56  g57  g58  g59  g60
##    1    1    1    3    2    2    2    3    2    3    3    1    2    3    2    3    3    3    3    1
##  g61  g62  g63  g64  g65  g66  g67  g68  g69  g70  g71  g72  g73  g74  g75  g76  g77  g78  g79  g80
##    3    3    2    1    3    1    3    1    2    2    2    2    3    3    2    3    2    1    1    2
##  g81  g82  g83  g84  g85  g86  g87  g88  g89  g90  g91  g92  g93  g94  g95  g96  g97  g98  g99 g100
##    1    2    2    3    3    2    2    3    3    2    2    1    1    2    3    3    1    1    1    3
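
Beyond the cluster assignments, the ‘kmeans()’ result also stores the fitted centroids and a compactness measure, which can help when comparing different values of k; a minimal sketch:

km$centers # one centroid (row) per cluster
km$tot.withinss # total within-cluster sum of squares; lower = tighter clusters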


## 3.3 Fuzzy C-Means Clustering

• In contrast to strict (hard) clustering approaches, fuzzy (soft) clustering methods allow multiple cluster memberships of the clustered items (Hathaway, Bezdek, and Pal 1996).
• This is commonly achieved by assigning to each item a weight of belonging to each cluster.
• Thus, items at the edge of a cluster may belong to it to a lesser degree than items at its center.
• Typically, each item has as many coefficients (weights) as there are clusters, and these weights sum to one for each item.

### 3.3.1 Examples

#### 3.3.1.1 Fuzzy Clustering with ‘fanny’

library(cluster) # Loads the cluster library.
fannyy <- fanny(y, k=4, metric = "euclidean", memb.exp = 1.2)
round(fannyy$membership, 2)[1:4,]

##    [,1] [,2] [,3] [,4]
## g1 0.85 0.06 0.01 0.08
## g2 0.10 0.69 0.16 0.05
## g3 0.02 0.01 0.95 0.02
## g4 0.01 0.02 0.01 0.96

fannyy$clustering

##   g1   g2   g3   g4   g5   g6   g7   g8   g9  g10  g11  g12  g13  g14  g15  g16  g17  g18  g19  g20
##    1    2    3    4    2    3    1    1    2    3    3    2    2    4    4    1    2    3    4    4
##  g21  g22  g23  g24  g25  g26  g27  g28  g29  g30  g31  g32  g33  g34  g35  g36  g37  g38  g39  g40
##    4    3    3    2    4    4    4    3    1    3    3    3    1    3    4    2    4    2    2    2
##  g41  g42  g43  g44  g45  g46  g47  g48  g49  g50  g51  g52  g53  g54  g55  g56  g57  g58  g59  g60
##    1    2    3    2    1    3    3    2    1    2    1    2    1    2    3    4    2    2    2    1
##  g61  g62  g63  g64  g65  g66  g67  g68  g69  g70  g71  g72  g73  g74  g75  g76  g77  g78  g79  g80
##    4    4    1    2    2    4    2    4    3    3    4    3    2    2    1    2    2    2    1    1
##  g81  g82  g83  g84  g85  g86  g87  g88  g89  g90  g91  g92  g93  g94  g95  g96  g97  g98  g99 g100
##    3    3    3    2    4    3    3    4    4    3    3    3    2    3    4    1    1    1    2    4

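As a quick check of the weight property described above, the membership coefficients of each item should sum to one:

rowSums(fannyy$membership)[1:4] # each row sums to 1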

## 3.4 Principal Component Analysis (PCA)

Principal component analysis (PCA) is a data reduction technique that simplifies multidimensional data sets to 2 or 3 dimensions for plotting purposes and visual variance analysis.

### 3.4.1 Basic Steps

• Center (and standardize) data
• First principal component axis
• Passes through the centroid of the data cloud
• The distance of each point to that line is minimized, so the axis runs along the maximum variation of the data cloud
• Second principal component axis
• Orthogonal to first principal component
• Along maximum variation in the data
• First PCA axis becomes x-axis and second PCA axis y-axis
• Continue process until the necessary number of principal components is obtained

### 3.4.2 Example

pca <- prcomp(y, scale=T)
summary(pca) # Prints variance summary for all principal components

## Importance of components:
##                           PC1    PC2    PC3    PC4    PC5
## Standard deviation     1.1165 1.0418 1.0237 0.9311 0.8679
## Proportion of Variance 0.2493 0.2170 0.2096 0.1734 0.1506
## Cumulative Proportion  0.2493 0.4664 0.6760 0.8494 1.0000

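The ‘Proportion of Variance’ row above can be reproduced directly from the component standard deviations returned by ‘prcomp()’; a minimal sketch:

pve <- pca$sdev^2 / sum(pca$sdev^2) # proportion of variance per component
round(cumsum(pve) * 100, 1) # cumulative percentage explained by PC1..PCn
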
plot(pca$x, pch=20, col="blue", type="n") # To plot dots, drop type="n"
text(pca$x, rownames(pca$x), cex=0.8)


The first and second principal components explain x% of the variance in the data.

## 3.5 Multidimensional Scaling (MDS)

• Alternative dimensionality reduction approach
• Represents distances in 2D or 3D space
• Starts from a distance matrix (PCA uses the data points themselves)

### 3.5.1 Example

The following example performs MDS analysis with ‘cmdscale’ on the geographic distances among European cities.

loc <- cmdscale(eurodist)
plot(loc[,1], -loc[,2], type="n", xlab="", ylab="", main="cmdscale(eurodist)")
text(loc[,1], -loc[,2], rownames(loc), cex=0.8)


## 3.6 Biclustering

Finds subgroups of rows and columns in a matrix that are as similar as possible to each other and as different as possible from the remaining data points.

(Figure: unclustered matrix compared with its biclustered form)

## 3.7 Similarity Measures for Clusters

• Compare the numbers of identical and unique item pairs appearing in two cluster sets.
• This is achieved by counting the number of item pairs found in both clustering sets (a), as well as the pairs appearing only in the first (b) or only in the second (c) set.
• With these counts a similarity coefficient, such as the Jaccard index, can be computed. The latter is defined as the size of the intersect divided by the size of the union of two sample sets: a/(a+b+c).
• Applied to partitioning results, the Jaccard index measures how frequently pairs of items are joined together in two clustering data sets and how often pairs are observed only in one set.
• Related coefficients are the Rand index and the adjusted Rand index. These indices also consider the number of pairs (d) that are not joined together in any of the clusters in both sets.

### 3.7.1 Examples

#### 3.7.1.1 Jaccard index for cluster sets

The following imports the ‘cindex()’ function and computes the Jaccard index for two sample cluster results.

source("http://faculty.ucr.edu/~tgirke/Documents/R_BioCond/My_R_Scripts/clusterIndex.R")
library(cluster)
y <- matrix(rnorm(5000), 1000, 5, dimnames=list(paste("g", 1:1000, sep=""), paste("t", 1:5, sep="")))
clarax <- clara(y, 49); clV1 <- clarax$clustering
clarax <- clara(y, 50); clV2 <- clarax$clustering
ci <- cindex(clV1=clV1, clV2=clV2, self=FALSE, minSZ=1, method="jaccard")
ci[2:3] # Returns Jaccard index and variables used to compute it

## $variables
##    a    b    c
## 9245 4238 3951
##
## $Jaccard_Index
## [1] 0.5302856


#### 3.7.1.2 Clustering cluster sets with the Jaccard index

The following example shows how one can cluster entire cluster result sets. First, 10 sample cluster results are created with Clara using k-values from 3 to 12. The results are stored as named clustering vectors in a list object. Then a nested sapply loop is used to generate a similarity matrix of Jaccard indices for the clustering results. After converting the result into a distance matrix, hierarchical clustering is performed with ‘hclust’.

clVlist <- lapply(3:12, function(x) clara(y[1:30, ], k=x)$clustering); names(clVlist) <- paste("k", "=", 3:12)
d <- sapply(names(clVlist), function(x) sapply(names(clVlist), function(y) cindex(clV1=clVlist[[y]], clV2=clVlist[[x]], method="jaccard")[[3]]))
hv <- hclust(as.dist(1-d))
plot(as.dendrogram(hv), edgePar=list(col=3, lwd=4), horiz=T, main="Similarities of 10 Clara Clustering Results for k: 3-12")
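
To connect the ‘cindex()’ output in Section 3.7.1.1 with the definition a/(a+b+c), the reported index can be recomputed by hand from the pair counts:

a <- 9245; b <- 4238; c <- 3951 # pair counts from the cindex() output above
a / (a + b + c) # 0.5302856, matching $Jaccard_Index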


• Remember: there are many additional clustering algorithms.
• Additional details can be found in the Clustering Section of the R/Bioconductor Manual.

# 4 Clustering Exercises

## 4.1 Data Preprocessing

### 4.1.1 Scaling

## Sample data set
set.seed(1410)
y <- matrix(rnorm(50), 10, 5, dimnames=list(paste("g", 1:10, sep=""),
paste("t", 1:5, sep="")))
dim(y)

## [1] 10  5

## Scaling
yscaled <- t(scale(t(y))) # Centers and scales y row-wise
apply(yscaled, 1, sd)

##  g1  g2  g3  g4  g5  g6  g7  g8  g9 g10
##   1   1   1   1   1   1   1   1   1   1


## 4.2 Distance Matrices

### 4.2.1 Euclidean distance matrix

dist(y[1:4,], method = "euclidean")

##          g1       g2       g3
## g2 4.793697
## g3 4.932658 6.354978
## g4 4.033789 4.788508 1.671968
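
The first entry of this matrix can be reproduced by applying the Euclidean distance formula from Section 2.2 directly:

sqrt(sum((y["g1",] - y["g2",])^2)) # 4.793697, matching the dist() output above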


### 4.2.2 Correlation-based distance matrix

Correlation matrix

c <- cor(t(y), method="pearson")
as.matrix(c)[1:4,1:4]

##             g1         g2          g3         g4
## g1  1.00000000 -0.2965885 -0.00206139 -0.4042011
## g2 -0.29658847  1.0000000 -0.91661118 -0.4512912
## g3 -0.00206139 -0.9166112  1.00000000  0.7435892
## g4 -0.40420112 -0.4512912  0.74358925  1.0000000


Correlation-based distance matrix

d <- as.dist(1-c)
as.matrix(d)[1:4,1:4]

##          g1       g2        g3        g4
## g1 0.000000 1.296588 1.0020614 1.4042011
## g2 1.296588 0.000000 1.9166112 1.4512912
## g3 1.002061 1.916611 0.0000000 0.2564108
## g4 1.404201 1.451291 0.2564108 0.0000000


## 4.3 Hierarchical Clustering with ‘hclust’

Hierarchical clustering with complete linkage and basic tree plotting

hr <- hclust(d, method = "complete", members=NULL)
names(hr)

## [1] "merge"       "height"      "order"       "labels"      "method"      "call"
## [7] "dist.method"

par(mfrow = c(1, 2)); plot(hr, hang = 0.1); plot(hr, hang = -1)


### 4.3.1 Tree plotting I

plot(as.dendrogram(hr), edgePar=list(col=3, lwd=4), horiz=T)