sklearn agglomerative clustering linkage matrix

Tags: python, scikit-learn, cluster-analysis, dendrogram

32 votes


I'm trying to draw a complete-link scipy.cluster.hierarchy.dendrogram, and I found that scipy.cluster.hierarchy.linkage is slower than sklearn.AgglomerativeClustering.

However, sklearn.AgglomerativeClustering doesn't return the distances between clusters or the number of original observations, which scipy.cluster.hierarchy.dendrogram needs. Is there a way to obtain them?


Answers

11 votes (accepted answer)


I made a script to do this without modifying sklearn and without recursive functions. Before using it, note that:

  • Merge distance can sometimes decrease with respect to the children's merge distances. I added three ways to handle those cases: take the max, do nothing, or increase with the l2 norm. The l2-norm logic has not been verified yet. Please check for yourself which suits you best (a toy illustration follows this list).
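To make the three modes concrete, here is a toy illustration (my own addition, with made-up numbers, using the same formulas as in the function below):

    # Hypothetical numbers: the current merge sits at centroid distance d,
    # while its two children were themselves merged at larger distances.
    d, c1Dist, c2Dist = 3.0, 5.0, 4.0

    d_actual = d                                  # 'actual': 3.0, the dendrogram shows an inversion
    d_max = max(d, c1Dist, c2Dist)                # 'max':    5.0, a parent never sits below its children
    d_l2 = (d**2 + c1Dist**2 + c2Dist**2) ** 0.5  # 'l2':     ~7.07, children inflate the parent distance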

Import the packages:

    from sklearn.cluster import AgglomerativeClustering
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import dendrogram

Function to compute weights and distances:

    def get_distances(X, model, mode='l2'):
        distances = []
        weights = []
        children = model.children_
        dims = (X.shape[1], 1)
        distCache = {}
        weightCache = {}
        for childs in children:
            c1 = X[childs[0]].reshape(dims)
            c2 = X[childs[1]].reshape(dims)
            c1Dist = 0
            c1W = 1
            c2Dist = 0
            c2W = 1
            if childs[0] in distCache.keys():
                c1Dist = distCache[childs[0]]
                c1W = weightCache[childs[0]]
            if childs[1] in distCache.keys():
                c2Dist = distCache[childs[1]]
                c2W = weightCache[childs[1]]
            d = np.linalg.norm(c1 - c2)
            cc = ((c1W * c1) + (c2W * c2)) / (c1W + c2W)

            X = np.vstack((X, cc.T))

            newChild_id = X.shape[0] - 1

            # How to deal with a higher-level cluster merge with lower distance:
            if mode == 'l2':  # Increase the higher-level cluster size using an l2 norm
                added_dist = (c1Dist**2 + c2Dist**2)**0.5
                dNew = (d**2 + added_dist**2)**0.5
            elif mode == 'max':  # If the previous clusters had a higher distance, use that one
                dNew = max(d, c1Dist, c2Dist)
            elif mode == 'actual':  # Plot the actual distance
                dNew = d

            wNew = (c1W + c2W)
            distCache[newChild_id] = dNew
            weightCache[newChild_id] = wNew

            distances.append(dNew)
            weights.append(wNew)
        return distances, weights

Make sample data of 2 clusters with 2 subclusters:

    # Make 4 distributions, two of which form a bigger cluster
    X1_1 = np.random.randn(25, 2) + [8, 1.5]
    X1_2 = np.random.randn(25, 2) + [8, -1.5]
    X2_1 = np.random.randn(25, 2) - [8, 3]
    X2_2 = np.random.randn(25, 2) - [8, -3]

    # Merge the four distributions
    X = np.vstack([X1_1, X1_2, X2_1, X2_2])

    # Plot the clusters
    colors = ['r']*25 + ['b']*25 + ['g']*25 + ['y']*25
    plt.scatter(X[:, 0], X[:, 1], c=colors)

Sample data: (scatter plot of the four clusters)

Fit the clustering model:

    model = AgglomerativeClustering(n_clusters=2, linkage="ward")
    model.fit(X)

Call the function to find the distances, and pass them to the dendrogram:

    distance, weight = get_distances(X, model)
    linkage_matrix = np.column_stack([model.children_, distance, weight]).astype(float)
    plt.figure(figsize=(20, 10))
    dendrogram(linkage_matrix)
    plt.show()

Output dendrogram: (dendrogram image)
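As a side note (my addition, not part of the original answer): newer scikit-learn releases (0.22 and later) can compute the merge distances themselves, so the recomputation above can be skipped. A minimal sketch, assuming such a version is installed:

    # Requires scikit-learn >= 0.22: with distance_threshold=0 and
    # n_clusters=None the full tree is built and distances_ is populated.
    model = AgglomerativeClustering(distance_threshold=0, n_clusters=None,
                                    linkage="ward").fit(X)

    # Rebuild the sample count per merge from children_
    counts = np.zeros(model.children_.shape[0])
    n_samples = len(model.labels_)
    for i, merge in enumerate(model.children_):
        for child in merge:
            counts[i] += 1 if child < n_samples else counts[child - n_samples]

    linkage_matrix = np.column_stack(
        [model.children_, model.distances_, counts]).astype(float)
    dendrogram(linkage_matrix)
    plt.show()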

15 votes

It's possible, but it isn't pretty. It requires (at a minimum) a small rewrite of AgglomerativeClustering.fit (source). The difficulty is that the method requires a number of imports, so it ends up getting a bit nasty looking. To add in this feature:

  1. Insert the following line after line 748:

    kwargs['return_distance'] = True

  2. Replace line 752 with:

    self.children_, self.n_components_, self.n_leaves_, parents, self.distance =

This will give you a new attribute, distance, that you can easily call.
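With that attribute in hand, a scipy-style linkage matrix can be assembled and handed to dendrogram. A minimal sketch (my addition, not part of the original answer; clustering stands for a fitted instance of the patched class, and the sample-count column is rebuilt from children_):

    import numpy as np
    from scipy.cluster.hierarchy import dendrogram

    # Column order expected by scipy: [idx1, idx2, distance, sample_count]
    n_samples = clustering.n_leaves_
    counts = np.zeros(len(clustering.children_))
    for i, (a, b) in enumerate(clustering.children_):
        counts[i] = ((1 if a < n_samples else counts[a - n_samples]) +
                     (1 if b < n_samples else counts[b - n_samples]))
    Z = np.column_stack([clustering.children_, clustering.distance,
                         counts]).astype(float)
    dendrogram(Z)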

A couple things to note:

  1. When doing this, I ran into this issue about the check_array function on line 711. This can be fixed by using check_arrays (from sklearn.utils.validation import check_arrays). You can modify that line to become X = check_arrays(X)[0]. This appears to be a bug (I still have this issue on the most recent version of scikit-learn).

  2. Depending on which version of sklearn.cluster.hierarchical.linkage_tree you have, you may also need to modify it to be the one provided in the source.

To make things easier for everyone, here is the full code that you will need to use:

    from heapq import heapify, heappop, heappush, heappushpop
    import warnings
    import sys

    import numpy as np
    from scipy import sparse

    from sklearn.base import BaseEstimator, ClusterMixin
    from sklearn.externals.joblib import Memory
    from sklearn.externals import six
    # paired_distances / pairwise_distances are used below but were missing
    # from the original listing:
    from sklearn.metrics.pairwise import paired_distances, pairwise_distances
    from sklearn.utils.validation import check_arrays
    from sklearn.utils.sparsetools import connected_components
    from sklearn.cluster import _hierarchical
    from sklearn.cluster.hierarchical import ward_tree
    from sklearn.cluster._feature_agglomeration import AgglomerationTransform
    from sklearn.utils.fast_dict import IntFloatDict


    def _fix_connectivity(X, connectivity, n_components=None,
                          affinity="euclidean"):
        """
        Fixes the connectivity matrix
            - copies it
            - makes it symmetric
            - converts it to LIL if necessary
            - completes it if necessary
        """
        n_samples = X.shape[0]
        if (connectivity.shape[0] != n_samples or
            connectivity.shape[1] != n_samples):
            raise ValueError('Wrong shape for connectivity matrix: %s '
                             'when X is %s' % (connectivity.shape, X.shape))

        # Make the connectivity matrix symmetric:
        connectivity = connectivity + connectivity.T

        # Convert connectivity matrix to LIL
        if not sparse.isspmatrix_lil(connectivity):
            if not sparse.isspmatrix(connectivity):
                connectivity = sparse.lil_matrix(connectivity)
            else:
                connectivity = connectivity.tolil()

        # Compute the number of nodes
        n_components, labels = connected_components(connectivity)

        if n_components > 1:
            warnings.warn("the number of connected components of the "
                          "connectivity matrix is %d > 1. Completing it to avoid "
                          "stopping the tree early." % n_components,
                          stacklevel=2)
            # XXX: Can we do without completing the matrix?
            for i in xrange(n_components):
                idx_i = np.where(labels == i)[0]
                Xi = X[idx_i]
                for j in xrange(i):
                    idx_j = np.where(labels == j)[0]
                    Xj = X[idx_j]
                    D = pairwise_distances(Xi, Xj, metric=affinity)
                    ii, jj = np.where(D == np.min(D))
                    ii = ii[0]
                    jj = jj[0]
                    connectivity[idx_i[ii], idx_j[jj]] = True
                    connectivity[idx_j[jj], idx_i[ii]] = True

        return connectivity, n_components


    # average and complete linkage
    def linkage_tree(X, connectivity=None, n_components=None,
                     n_clusters=None, linkage='complete', affinity="euclidean",
                     return_distance=False):
        """Linkage agglomerative clustering based on a Feature matrix.

        The inertia matrix uses a Heapq-based representation.

        This is the structured version, that takes into account some topological
        structure between samples.

        Parameters
        ----------
        X : array, shape (n_samples, n_features)
            feature matrix representing n_samples samples to be clustered
        connectivity : sparse matrix (optional).
            connectivity matrix. Defines for each sample the neighboring samples
            following a given structure of the data. The matrix is assumed to
            be symmetric and only the upper triangular half is used.
            Default is None, i.e, the Ward algorithm is unstructured.
        n_components : int (optional)
            Number of connected components. If None the number of connected
            components is estimated from the connectivity matrix.
            NOTE: This parameter is now directly determined from the
            connectivity matrix and will be removed in 0.18
        n_clusters : int (optional)
            Stop early the construction of the tree at n_clusters. This is
            useful to decrease computation time if the number of clusters is
            not small compared to the number of samples. In this case, the
            complete tree is not computed, thus the 'children' output is of
            limited use, and the 'parents' output should rather be used.
            This option is valid only when specifying a connectivity matrix.
        linkage : {"average", "complete"}, optional, default: "complete"
            Which linkage criteria to use. The linkage criterion determines which
            distance to use between sets of observation.
                - average uses the average of the distances of each observation of
                  the two sets
                - complete or maximum linkage uses the maximum distances between
                  all observations of the two sets.
        affinity : string or callable, optional, default: "euclidean".
            which metric to use. Can be "euclidean", "manhattan", or any
            distance known to paired distance (see metric.pairwise)
        return_distance : bool, default False
            whether or not to return the distances between the clusters.

        Returns
        -------
        children : 2D array, shape (n_nodes-1, 2)
            The children of each non-leaf node. Values less than `n_samples`
            correspond to leaves of the tree which are the original samples.
            A node `i` greater than or equal to `n_samples` is a non-leaf
            node and has children `children_[i - n_samples]`. Alternatively
            at the i-th iteration, children[i][0] and children[i][1]
            are merged to form node `n_samples + i`
        n_components : int
            The number of connected components in the graph.
        n_leaves : int
            The number of leaves in the tree.
        parents : 1D array, shape (n_nodes, ) or None
            The parent of each node. Only returned when a connectivity matrix
            is specified, elsewhere 'None' is returned.
        distances : ndarray, shape (n_nodes-1,)
            Returned when return_distance is set to True.
            distances[i] refers to the distance between children[i][0] and
            children[i][1] when they are merged.

        See also
        --------
        ward_tree : hierarchical clustering with ward linkage
        """
        X = np.asarray(X)
        if X.ndim == 1:
            X = np.reshape(X, (-1, 1))
        n_samples, n_features = X.shape

        linkage_choices = {'complete': _hierarchical.max_merge,
                           'average': _hierarchical.average_merge,
                           }
        try:
            join_func = linkage_choices[linkage]
        except KeyError:
            raise ValueError(
                'Unknown linkage option, linkage should be one '
                'of %s, but %s was given' % (linkage_choices.keys(), linkage))

        if connectivity is None:
            from scipy.cluster import hierarchy  # imports PIL

            if n_clusters is not None:
                warnings.warn('Partial build of the tree is implemented '
                              'only for structured clustering (i.e. with '
                              'explicit connectivity). The algorithm '
                              'will build the full tree and only '
                              'retain the lower branches required '
                              'for the specified number of clusters',
                              stacklevel=2)

            if affinity == 'precomputed':
                # for the linkage function of hierarchy to work on precomputed
                # data, provide as first argument an ndarray of the shape returned
                # by pdist: it is a flat array containing the upper triangular of
                # the distance matrix.
                i, j = np.triu_indices(X.shape[0], k=1)
                X = X[i, j]
            elif affinity == 'l2':
                # Translate to something understood by scipy
                affinity = 'euclidean'
            elif affinity in ('l1', 'manhattan'):
                affinity = 'cityblock'
            elif callable(affinity):
                X = affinity(X)
                i, j = np.triu_indices(X.shape[0], k=1)
                X = X[i, j]
            out = hierarchy.linkage(X, method=linkage, metric=affinity)
            children_ = out[:, :2].astype(np.int)

            if return_distance:
                distances = out[:, 2]
                return children_, 1, n_samples, None, distances
            return children_, 1, n_samples, None

        if n_components is not None:
            warnings.warn(
                "n_components is now directly calculated from the connectivity "
                "matrix and will be removed in 0.18",
                DeprecationWarning)
        connectivity, n_components = _fix_connectivity(X, connectivity)

        connectivity = connectivity.tocoo()
        # Put the diagonal to zero
        diag_mask = (connectivity.row != connectivity.col)
        connectivity.row = connectivity.row[diag_mask]
        connectivity.col = connectivity.col[diag_mask]
        connectivity.data = connectivity.data[diag_mask]
        del diag_mask

        if affinity == 'precomputed':
            distances = X[connectivity.row, connectivity.col]
        else:
            # FIXME We compute all the distances, while we could have only computed
            # the "interesting" distances
            distances = paired_distances(X[connectivity.row],
                                         X[connectivity.col],
                                         metric=affinity)
        connectivity.data = distances

        if n_clusters is None:
            n_nodes = 2 * n_samples - 1
        else:
            assert n_clusters <= n_samples
            n_nodes = 2 * n_samples - n_clusters

        if return_distance:
            distances = np.empty(n_nodes - n_samples)
        # create inertia heap and connection matrix
        A = np.empty(n_nodes, dtype=object)
        inertia = list()

        # LIL seems to the best format to access the rows quickly,
        # without the numpy overhead of slicing CSR indices and data.
        connectivity = connectivity.tolil()
        # We are storing the graph in a list of IntFloatDict
        for ind, (data, row) in enumerate(zip(connectivity.data,
                                              connectivity.rows)):
            A[ind] = IntFloatDict(np.asarray(row, dtype=np.intp),
                                  np.asarray(data, dtype=np.float64))
            # We keep only the upper triangular for the heap
            # Generator expressions are faster than arrays on the following
            inertia.extend(_hierarchical.WeightedEdge(d, ind, r)
                           for r, d in zip(row, data) if r < ind)
        del connectivity

        heapify(inertia)

        # prepare the main fields
        parent = np.arange(n_nodes, dtype=np.intp)
        used_node = np.ones(n_nodes, dtype=np.intp)
        children = []

        # recursive merge loop
        for k in xrange(n_samples, n_nodes):
            # identify the merge
            while True:
                edge = heappop(inertia)
                if used_node[edge.a] and used_node[edge.b]:
                    break
            i = edge.a
            j = edge.b

            if return_distance:
                # store distances
                distances[k - n_samples] = edge.weight

            parent[i] = parent[j] = k
            children.append((i, j))
            # Keep track of the number of elements per cluster
            n_i = used_node[i]
            n_j = used_node[j]
            used_node[k] = n_i + n_j
            used_node[i] = used_node[j] = False

            # update the structure matrix A and the inertia matrix
            # a clever 'min', or 'max' operation between A[i] and A[j]
            coord_col = join_func(A[i], A[j], used_node, n_i, n_j)
            for l, d in coord_col:
                A[l].append(k, d)
                # Here we use the information from coord_col (containing the
                # distances) to update the heap
                heappush(inertia, _hierarchical.WeightedEdge(d, k, l))
            A[k] = coord_col
            # Clear A[i] and A[j] to save memory
            A[i] = A[j] = 0

        # Separate leaves in children (empty lists up to now)
        n_leaves = n_samples

        # return numpy array for efficient caching
        children = np.array(children)[:, ::-1]

        if return_distance:
            return children, n_components, n_leaves, parent, distances
        return children, n_components, n_leaves, parent


    # Matching names to tree-building strategies
    def _complete_linkage(*args, **kwargs):
        kwargs['linkage'] = 'complete'
        return linkage_tree(*args, **kwargs)


    def _average_linkage(*args, **kwargs):
        kwargs['linkage'] = 'average'
        return linkage_tree(*args, **kwargs)


    _TREE_BUILDERS = dict(
        ward=ward_tree,
        complete=_complete_linkage,
        average=_average_linkage,
        )


    def _hc_cut(n_clusters, children, n_leaves):
        """Function cutting the ward tree for a given number of clusters.

        Parameters
        ----------
        n_clusters : int or ndarray
            The number of clusters to form.
        children : list of pairs. Length of n_nodes
            The children of each non-leaf node. Values less than `n_samples` refer
            to leaves of the tree. A greater value `i` indicates a node with
            children `children[i - n_samples]`.
        n_leaves : int
            Number of leaves of the tree.

        Returns
        -------
        labels : array [n_samples]
            cluster labels for each point
        """
        if n_clusters > n_leaves:
            raise ValueError('Cannot extract more clusters than samples: '
                             '%s clusters were given for a tree with %s leaves.'
                             % (n_clusters, n_leaves))
        # In this function, we store nodes as a heap to avoid recomputing
        # the max of the nodes: the first element is always the smallest
        # We use negated indices as heaps work on smallest elements, and we
        # are interested in largest elements
        # children[-1] is the root of the tree
        nodes = [-(max(children[-1]) + 1)]
        for i in xrange(n_clusters - 1):
            # As we have a heap, nodes[0] is the smallest element
            these_children = children[-nodes[0] - n_leaves]
            # Insert the 2 children and remove the largest node
            heappush(nodes, -these_children[0])
            heappushpop(nodes, -these_children[1])
        label = np.zeros(n_leaves, dtype=np.intp)
        for i, node in enumerate(nodes):
            label[_hierarchical._hc_get_descendent(-node, children, n_leaves)] = i
        return label


    class AgglomerativeClustering(BaseEstimator, ClusterMixin):
        """
        Agglomerative Clustering

        Recursively merges the pair of clusters that minimally increases
        a given linkage distance.

        Parameters
        ----------
        n_clusters : int, default=2
            The number of clusters to find.
        connectivity : array-like or callable, optional
            Connectivity matrix. Defines for each sample the neighboring
            samples following a given structure of the data.
            This can be a connectivity matrix itself or a callable that transforms
            the data into a connectivity matrix, such as derived from
            kneighbors_graph. Default is None, i.e, the
            hierarchical clustering algorithm is unstructured.
        affinity : string or callable, default: "euclidean"
            Metric used to compute the linkage. Can be "euclidean", "l1", "l2",
            "manhattan", "cosine", or 'precomputed'.
            If linkage is "ward", only "euclidean" is accepted.
        memory : Instance of joblib.Memory or string (optional)
            Used to cache the output of the computation of the tree.
            By default, no caching is done. If a string is given, it is the
            path to the caching directory.
        n_components : int (optional)
            Number of connected components. If None the number of connected
            components is estimated from the connectivity matrix.
            NOTE: This parameter is now directly determined from the connectivity
            matrix and will be removed in 0.18
        compute_full_tree : bool or 'auto' (optional)
            Stop early the construction of the tree at n_clusters. This is
            useful to decrease computation time if the number of clusters is
            not small compared to the number of samples. This option is
            useful only when specifying a connectivity matrix. Note also that
            when varying the number of clusters and using caching, it may
            be advantageous to compute the full tree.
        linkage : {"ward", "complete", "average"}, optional, default: "ward"
            Which linkage criterion to use. The linkage criterion determines which
            distance to use between sets of observation. The algorithm will merge
            the pairs of cluster that minimize this criterion.
            - ward minimizes the variance of the clusters being merged.
            - average uses the average of the distances of each observation of
              the two sets.
            - complete or maximum linkage uses the maximum distances between
              all observations of the two sets.
        pooling_func : callable, default=np.mean
            This combines the values of agglomerated features into a single
            value, and should accept an array of shape [M, N] and the keyword
            argument ``axis=1``, and reduce it to an array of size [M].

        Attributes
        ----------
        labels_ : array [n_samples]
            cluster labels for each point
        n_leaves_ : int
            Number of leaves in the hierarchical tree.
        n_components_ : int
            The estimated number of connected components in the graph.
        children_ : array-like, shape (n_nodes-1, 2)
            The children of each non-leaf node. Values less than `n_samples`
            correspond to leaves of the tree which are the original samples.
            A node `i` greater than or equal to `n_samples` is a non-leaf
            node and has children `children_[i - n_samples]`. Alternatively
            at the i-th iteration, children[i][0] and children[i][1]
            are merged to form node `n_samples + i`
        """

        def __init__(self, n_clusters=2, affinity="euclidean",
                     memory=Memory(cachedir=None, verbose=0),
                     connectivity=None, n_components=None,
                     compute_full_tree='auto', linkage='ward',
                     pooling_func=np.mean):
            self.n_clusters = n_clusters
            self.memory = memory
            self.n_components = n_components
            self.connectivity = connectivity
            self.compute_full_tree = compute_full_tree
            self.linkage = linkage
            self.affinity = affinity
            self.pooling_func = pooling_func

        def fit(self, X, y=None):
            """Fit the hierarchical clustering on the data

            Parameters
            ----------
            X : array-like, shape = [n_samples, n_features]
                The samples a.k.a. observations.

            Returns
            -------
            self
            """
            X = check_arrays(X)[0]
            memory = self.memory
            if isinstance(memory, six.string_types):
                memory = Memory(cachedir=memory, verbose=0)

            if self.linkage == "ward" and self.affinity != "euclidean":
                raise ValueError("%s was provided as affinity. Ward can only "
                                 "work with euclidean distances." %
                                 (self.affinity, ))

            if self.linkage not in _TREE_BUILDERS:
                raise ValueError("Unknown linkage type %s."
                                 "Valid options are %s" % (self.linkage,
                                                           _TREE_BUILDERS.keys()))
            tree_builder = _TREE_BUILDERS[self.linkage]

            connectivity = self.connectivity
            if self.connectivity is not None:
                if callable(self.connectivity):
                    connectivity = self.connectivity(X)
                connectivity = check_arrays(
                    connectivity, accept_sparse=['csr', 'coo', 'lil'])

            n_samples = len(X)
            compute_full_tree = self.compute_full_tree
            if self.connectivity is None:
                compute_full_tree = True
            if compute_full_tree == 'auto':
                # Early stopping is likely to give a speed up only for
                # a large number of clusters. The actual threshold
                # implemented here is heuristic
                compute_full_tree = self.n_clusters < max(100, .02 * n_samples)
            n_clusters = self.n_clusters
            if compute_full_tree:
                n_clusters = None

            # Construct the tree
            kwargs = {}
            kwargs['return_distance'] = True
            if self.linkage != 'ward':
                kwargs['linkage'] = self.linkage
                kwargs['affinity'] = self.affinity
            self.children_, self.n_components_, self.n_leaves_, parents, \
                self.distance = memory.cache(tree_builder)(
                    X, connectivity,
                    n_components=self.n_components,
                    n_clusters=n_clusters,
                    **kwargs)
            # Cut the tree
            if compute_full_tree:
                self.labels_ = _hc_cut(self.n_clusters, self.children_,
                                       self.n_leaves_)
            else:
                labels = _hierarchical.hc_get_heads(parents, copy=False)
                # copy to avoid holding a reference on the original array
                labels = np.copy(labels[:n_samples])
                # Reassign cluster numbers
                self.labels_ = np.searchsorted(np.unique(labels), labels)
            return self

Below is a simple example showing how to use the modified AgglomerativeClustering class:

    import numpy as np
    # Make sure to use the new one!!! Assuming the patched code above was
    # saved as AgglomerativeClustering.py:
    from AgglomerativeClustering import AgglomerativeClustering

    d = np.array(
        [
            [1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]
        ]
    )

    clustering = AgglomerativeClustering(n_clusters=2, compute_full_tree=True,
                                         affinity='euclidean', linkage='complete')
    clustering.fit(d)
    print clustering.distance

That example has the following output:

    [  5.19615242  10.39230485]

This can then be compared to a scipy.cluster.hierarchy.linkage implementation:

    import numpy as np
    from scipy.cluster.hierarchy import linkage

    d = np.array(
        [
            [1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]
        ]
    )
    print linkage(d, 'complete')

Output:

    [[  1.           2.           5.19615242   2.        ]
     [  0.           3.          10.39230485   3.        ]]
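As a quick sanity check on those numbers (my aside, in plain NumPy): with complete linkage, rows 1 and 2 merge first at sqrt(27), and row 0 then joins them at the larger of its two distances, sqrt(108):

    import numpy as np

    a, b, c = np.array([1, 2, 3]), np.array([4, 5, 6]), np.array([7, 8, 9])
    print np.linalg.norm(b - c)         # 5.19615242... = sqrt(27), the first merge
    print max(np.linalg.norm(a - b),    # complete linkage: row 0 joins {1, 2} at the
              np.linalg.norm(a - c))    # maximum distance, 10.39230485... = sqrt(108)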

Just for kicks I decided to follow up on your statement about performance:

    # Assuming the patched code above was saved as AgglomerativeClustering.py:
    from AgglomerativeClustering import AgglomerativeClustering
    from scipy.cluster.hierarchy import linkage
    import numpy as np
    import time

    l = 1000; iters = 50
    d = [np.random.random(100) for _ in xrange(1000)]

    t = time.time()
    for _ in xrange(iters):
        clustering = AgglomerativeClustering(n_clusters=l-1,
            affinity='euclidean', linkage='complete')
        clustering.fit(d)
    scikit_time = (time.time() - t) / iters
    print 'scikit-learn Time: {0}s'.format(scikit_time)

    t = time.time()
    for _ in xrange(iters):
        linkage(d, 'complete')
    scipy_time = (time.time() - t) / iters
    print 'SciPy Time: {0}s'.format(scipy_time)

    print 'scikit-learn Speedup: {0}'.format(scipy_time / scikit_time)

This gave me the following results:

    scikit-learn Time: 0.566560001373s
    SciPy Time: 0.497740001678s
    scikit-learn Speedup: 0.878530077083

According to this, SciPy's implementation takes about 0.88x the execution time of the scikit-learn implementation, i.e. SciPy's implementation is about 1.14x faster. It should be noted that:

  1. I modified the original scikit-learn implementation

  2. I only did a small number of iterations

  3. I only tested a small number of test cases (both cluster size as well as number of items per dimension should be tested)

  4. I ran SciPy second, so it had the advantage of obtaining more cache hits on the source data

  5. The two methods don't exactly do the same thing.

With all of that in mind, you should really evaluate which method performs better for your specific application. There are also functional reasons to go with one implementation over the other.
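As an aside (my addition, not the original author's): the standard-library timeit module gives more robust numbers than a hand-rolled loop. A sketch for the SciPy side, under the same data shape as above:

    import timeit

    # Best-of-5 timing of scipy's linkage; each measurement runs 10 calls.
    best = min(timeit.repeat(
        "linkage(d, 'complete')",
        setup="from scipy.cluster.hierarchy import linkage; "
              "import numpy as np; d = np.random.random((1000, 100))",
        repeat=5, number=10)) / 10
    print 'best SciPy time per call: {0:.4f}s'.format(best)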

8 votes

Update: I recommend this solution - https://stackoverflow.com/a/47769506/1333621 - if you found my attempt useful, please examine Arjun's solution and reconsider your vote.

You will need to generate a "linkage matrix" from the children_ array, where every row of the linkage matrix has the format [idx1, idx2, distance, sample_count].

This is not meant to be a paste-and-run solution (I'm not keeping track of what I needed to import), but it should be pretty clear anyway.

Here is one way to generate the required structure Z and visualize the result.

X is your n_samples x n_features input data.

Cluster:

    # Imports assumed throughout (the original answer did not track them):
    import numpy as np
    import matplotlib.pyplot as plt
    import sklearn.cluster
    from sklearn import metrics

    agg_cluster = sklearn.cluster.AgglomerativeClustering(n_clusters=n)  # n = desired number of clusters
    agg_labels = agg_cluster.fit_predict(X)

Some empty data structures:

    Z = []  # should really call this cluster dict
    node_dict = {}
    n_samples = len(X)
    leaf_count = n_samples  # added: indices below this are original samples; the function below needs it

Write a recursive function to gather all the leaf nodes associated with a given cluster, and compute the distances and centroid positions:

    def get_all_children(k, verbose=False):
        i, j = agg_cluster.children_[k]

        if k in node_dict:
            return node_dict[k]['children']

        if i < leaf_count:
            left = [i]
        else:
            # read the AgglomerativeClustering doc. to see why I select i-n_samples
            left = get_all_children(i - n_samples)

        if j < leaf_count:
            right = [j]
        else:
            right = get_all_children(j - n_samples)

        if verbose:
            print(k, i, j, left, right)
        left_pos = np.mean([X[ii] for ii in left], axis=0)
        right_pos = np.mean([X[ii] for ii in right], axis=0)

        # this assumes that agg_cluster used euclidean distances
        dist = metrics.pairwise_distances([left_pos, right_pos], metric='euclidean')[0, 1]

        all_children = [x for y in [left, right] for x in y]
        pos = np.mean([X[ii] for ii in all_children], axis=0)

        # store the results to speed up any additional or recursive evaluations
        node_dict[k] = {'top_child': [i, j], 'children': all_children,
                        'pos': pos, 'dist': dist, 'node_i': k + n_samples}
        return all_children

Заполните <код> node_dict и генерировать <код> Z - с расстоянием и n_samples на узел

 <код> for k,x in enumerate(agg_cluster.children_):        get_all_children(k,verbose=False)  # Every row in the linkage matrix has the format [idx1, idx2, distance, sample_count]. Z = [[v['top_child'][0],v['top_child'][1],v['dist'],len(v['children'])] for k,v in node_dict.iteritems()] # create a version with log scaled distances for easier visualization Z_log =[[v['top_child'][0],v['top_child'][1],np.log(1.0+v['dist']),len(v['children'])] for k,v in node_dict.iteritems()]   

Сюжет его с помощью Scipy Dendrogram

 <код>    from scipy.cluster import hierarchy    plt.figure()    dn = hierarchy.dendrogram(Z_log,p=4,truncate_mode='level')    plt.show()   

dendrogram

Будьте разочарованы тем, насколько непрозрачна эта визуализация и желает, чтобы вы могли интерактивно свернуть на большие кластеры и изучить направленные (не скалярные) расстояния между центром) :( - Может быть, существует решение Bokeh?

Ссылки

http: //docs.scipy .org / doc / scipy / Ссылка / сгенерировано / scipy.cluster.hierarchy.dendrogram.html

https://joernhees.de/blog/2015/08/26/scipy-hierarchical-clustering-and-dendrogram-tustorial/#slecting-a-distance -Подтверждение-определение - число кластеров

 

Update: I recommend this solution - https://stackoverflow.com/a/47769506/1333621. If you found my attempt useful, please examine Arjun's solution and reconsider your vote.

You will need to generate a "linkage matrix" from the children_ array, where every row in the linkage matrix has the format [idx1, idx2, distance, sample_count].
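As a hedged illustration of that row format (the numbers below are made up for this explanation, not produced by a fitted model): with three observations, row i of the matrix creates a new cluster with id n_samples + i, so the second row can refer back to the cluster created by the first:

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram

# [idx1, idx2, distance, sample_count] per merge; ids >= n_samples (here 3)
# refer to clusters created by earlier rows (row i -> id n_samples + i).
Z = np.array([[0, 1, 0.5, 2],    # merge observations 0 and 1 -> cluster 3
              [3, 2, 1.2, 3]],   # merge cluster 3 with observation 2
             dtype=float)

dendrogram(Z)
plt.show()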

This is not meant to be a paste-and-run solution; I didn't keep track of everything I needed to import, but it should be pretty clear anyway (a best guess at those imports follows below).
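For convenience, here is a best guess at the imports the snippets below rely on; treat it as an assumption reconstructed from the code, not something the original answer specified:

import numpy as np
import matplotlib.pyplot as plt
import sklearn.cluster
from sklearn import metrics          # used for metrics.pairwise_distances
from scipy.cluster import hierarchy  # used for hierarchy.dendrogram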

Here is one way to generate the required structure Z and visualize the result

X is your n_samples x n_features input data

cluster

agg_cluster = sklearn.cluster.AgglomerativeClustering(n_clusters=n)  # n = desired number of clusters
agg_labels = agg_cluster.fit_predict(X)

some empty data structures

Z = []
node_dict = {}  # should really call this cluster dict
n_samples = len(X)

write a recursive function that gathers all the leaf nodes associated with a given cluster and computes distances and centroid positions

def get_all_children(k, verbose=False):
    i, j = agg_cluster.children_[k]

    if k in node_dict:
        return node_dict[k]['children']

    # indices below n_samples are leaves; larger ids refer to merged clusters
    if i < n_samples:
        left = [i]
    else:
        # read the AgglomerativeClustering doc. to see why I select i - n_samples
        left = get_all_children(i - n_samples)

    if j < n_samples:
        right = [j]
    else:
        right = get_all_children(j - n_samples)

    if verbose:
        print(k, i, j, left, right)
    left_pos = np.mean([X[ii] for ii in left], axis=0)
    right_pos = np.mean([X[ii] for ii in right], axis=0)

    # this assumes that agg_cluster used euclidean distances
    dist = metrics.pairwise_distances([left_pos, right_pos], metric='euclidean')[0, 1]

    all_children = [x for y in [left, right] for x in y]
    pos = np.mean([X[ii] for ii in all_children], axis=0)

    # store the results to speed up any additional or recursive evaluations
    node_dict[k] = {'top_child': [i, j], 'children': all_children,
                    'pos': pos, 'dist': dist, 'node_i': k + n_samples}
    return all_children

populate node_dict and generate Z - with distance and n_samples per node

for k, x in enumerate(agg_cluster.children_):
    get_all_children(k, verbose=False)

# Every row in the linkage matrix has the format [idx1, idx2, distance, sample_count].
Z = [[v['top_child'][0], v['top_child'][1], v['dist'], len(v['children'])]
     for k, v in node_dict.items()]
# create a version with log-scaled distances for easier visualization
Z_log = [[v['top_child'][0], v['top_child'][1], np.log(1.0 + v['dist']), len(v['children'])]
         for k, v in node_dict.items()]

plot it using scipy dendrogram

from scipy.cluster import hierarchy

plt.figure()
dn = hierarchy.dendrogram(Z_log, p=4, truncate_mode='level')
plt.show()

[dendrogram image]

Be disappointed by how opaque this visualization is, and wish you could interactively drill down into larger clusters and examine directional (not scalar) distances between centroids :( - maybe a Bokeh solution exists?

References

http://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.dendrogram.html

https://joernhees.de/blog/2015/08/26/scipy-hierarchical-clustering-and-dendrogram-tutorial/#Selecting-a-Distance-Cut-Off-aka-Determining-the-Number-of-Clusters

 
 
     
     
1
 
vote


 

I think the official sklearn example for AgglomerativeClustering would be helpful.

Plot Hierarchical Clustering Dendrogram:

import numpy as np

from matplotlib import pyplot as plt
from scipy.cluster.hierarchy import dendrogram
from sklearn.datasets import load_iris
from sklearn.cluster import AgglomerativeClustering


def plot_dendrogram(model, **kwargs):
    # Create linkage matrix and then plot the dendrogram

    # create the counts of samples under each node
    counts = np.zeros(model.children_.shape[0])
    n_samples = len(model.labels_)
    for i, merge in enumerate(model.children_):
        current_count = 0
        for child_idx in merge:
            if child_idx < n_samples:
                current_count += 1  # leaf node
            else:
                current_count += counts[child_idx - n_samples]
        counts[i] = current_count

    linkage_matrix = np.column_stack([model.children_, model.distances_,
                                      counts]).astype(float)

    # Plot the corresponding dendrogram
    dendrogram(linkage_matrix, **kwargs)


iris = load_iris()
X = iris.data

# setting distance_threshold=0 ensures we compute the full tree.
model = AgglomerativeClustering(distance_threshold=0, n_clusters=None)

model = model.fit(X)
plt.title('Hierarchical Clustering Dendrogram')
# plot the top three levels of the dendrogram
plot_dendrogram(model, truncate_mode='level', p=3)
plt.xlabel("Number of points in node (or index of point if no parenthesis).")
plt.show()

NB: this solution relies on the distances_ attribute, which is only set when AgglomerativeClustering is called with the distance_threshold parameter.
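To see the difference in practice, this small check (written for this explanation, not part of the sklearn example) shows when distances_ is available:

from sklearn.datasets import load_iris
from sklearn.cluster import AgglomerativeClustering

X = load_iris().data

# full tree with distances: distance_threshold set, n_clusters=None
with_threshold = AgglomerativeClustering(distance_threshold=0, n_clusters=None).fit(X)
print(hasattr(with_threshold, 'distances_'))   # True

# fixed number of clusters: distances are not stored by default
with_n_clusters = AgglomerativeClustering(n_clusters=3).fit(X)
print(hasattr(with_n_clusters, 'distances_'))  # False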

 
 
0
 
vote


 

I ran into the same problem when setting n_clusters. I think the problem is that if you set n_clusters, the distances don't get evaluated. If you set n_clusters=None and set a distance_threshold instead, it works with the code provided in the sklearn example above. I understand that this probably won't help in your situation, but I hope a fix is underway.
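If you still need a fixed number of flat clusters on top of the full tree, one possible workaround (my own sketch, not from this answer) is to fit with distance_threshold=0 so distances_ is populated, rebuild the linkage matrix as in the example above, and cut the tree with SciPy's fcluster; the resulting partition should correspond to what n_clusters=3 would have produced:

import numpy as np
from scipy.cluster.hierarchy import fcluster
from sklearn.datasets import load_iris
from sklearn.cluster import AgglomerativeClustering

X = load_iris().data
model = AgglomerativeClustering(distance_threshold=0, n_clusters=None).fit(X)

# rebuild the [idx1, idx2, distance, sample_count] linkage matrix
counts = np.zeros(model.children_.shape[0])
n_samples = len(model.labels_)
for i, merge in enumerate(model.children_):
    counts[i] = sum(1 if child < n_samples else counts[child - n_samples]
                    for child in merge)
Z = np.column_stack([model.children_, model.distances_, counts]).astype(float)

# cut the tree into 3 flat clusters, analogous to fitting with n_clusters=3
labels = fcluster(Z, t=3, criterion='maxclust')
print(labels[:10])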

 
 
