Rao Kotagiri’s research while affiliated with University of Melbourne and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping.

Publications (4)


Modelling Zeros in Blockmodelling
  • Chapter

May 2022 · 6 Reads · Lecture Notes in Computer Science

[...] · Rao Kotagiri

Blockmodelling is the process of determining community structure in a graph. Real graphs contain noise and so it is up to the blockmodelling method to allow for this noise and reconstruct the most likely role memberships and role relationships. Relationships are encoded in a graph using the absence and presence of edges. Two objects are considered similar if they each have edges to a third object. However, the information provided by missing edges is ambiguous and therefore can be measured in different ways. In this article, we examine the effect of the choice of block metric on blockmodelling accuracy and find that data relationships can be position based or set based. We hypothesise that this is due to the data containing either Hamming noise or Jaccard noise. Experiments performed on simulated data show that when no noise is present, the accuracy is independent of the choice of metric. But when noise is introduced, high accuracy results are obtained when the choice of metric matches the type of noise.
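The abstract's distinction between position-based (Hamming) and set-based (Jaccard) metrics over graph edges can be illustrated with a minimal sketch. The function names below are illustrative, not from the paper; the key point is that Jaccard ignores positions where both adjacency rows are 0, while Hamming counts every differing position:

```python
import numpy as np

def hamming_distance(a, b):
    """Position-based: counts positions where the two rows differ,
    treating shared absent edges the same as shared present edges."""
    return int(np.sum(a != b))

def jaccard_distance(a, b):
    """Set-based: compares only the edge sets; positions where both
    rows are 0 (shared missing edges) carry no weight."""
    union = int(np.sum((a == 1) | (b == 1)))
    if union == 0:
        return 0.0
    intersection = int(np.sum((a == 1) & (b == 1)))
    return 1.0 - intersection / union

# Adjacency rows of two nodes in a small 6-node graph.
u = np.array([1, 0, 1, 0, 0, 0])
v = np.array([1, 1, 0, 0, 0, 0])

print(hamming_distance(u, v))  # 2 (positions 1 and 2 differ)
print(jaccard_distance(u, v))  # 1 - 1/3 ≈ 0.667
```

Note that adding extra isolated nodes (all-zero columns) leaves the Jaccard distance unchanged but not the normalised Hamming picture, which is why the two metrics respond differently to different noise types.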


Figures and tables: Fig. 2, execution time w.r.t. different sampling; Table 2, basic description of outlier detection methods; Fig. 3, performance w.r.t. different sampling; Fig. 4, performance w.r.t. different boundary.
Density Biased Sampling with Locality Sensitive Hashing for Outlier Detection: 19th International Conference, Dubai, United Arab Emirates, November 12-15, 2018, Proceedings, Part II
  • Chapter
  • Full-text available

November 2018 · 538 Reads · 2 Citations · Lecture Notes in Computer Science

Outlier or anomaly detection is one of the major challenges in big data analytics, since unusual but insightful patterns are often hidden in massive data sets such as sensing data and social networks. Sampling techniques have been a focus for outlier detection to address scalability on big data. Recent work has shown that uniform random sampling with an ensemble can boost outlier detection performance. However, uniform sampling assumes that all points are of equal importance, which usually fails to hold for outlier detection because some points are more sensitive to sampling than others. Thus, it is necessary and promising to utilise the density information of points to reflect their importance for sampling-based detection. In this paper, we formally investigate density biased sampling for outlier detection, and propose a novel density biased sampling approach. To attain scalable density estimation, we use Locality Sensitive Hashing (LSH) for counting the nearest neighbours of a point. Extensive experiments on both synthetic and real-world data sets show that our approach significantly outperforms existing outlier detection methods based on uniform sampling.
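The idea in the abstract can be sketched in a few lines: use p-stable LSH bucket counts as a cheap density proxy, then sample points with probability inversely proportional to that density so sparse (outlier-prone) regions are over-represented. This is a simplified illustration under assumed parameters, not the paper's exact algorithm; `lsh_bucket_counts` and `density_biased_sample` are illustrative names:

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_bucket_counts(X, n_projections=4, bucket_width=1.0):
    """Approximate local density: hash each point with random Gaussian
    projections (p-stable LSH) and count how many points share its
    bucket. Dense regions yield large bucket counts."""
    n, d = X.shape
    A = rng.normal(size=(d, n_projections))
    b = rng.uniform(0, bucket_width, size=n_projections)
    keys = np.floor((X @ A + b) / bucket_width).astype(int)
    # counts[inverse] gives, for each point, the size of its bucket.
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    return counts[inverse]

def density_biased_sample(X, sample_size):
    """Sample points with probability inversely proportional to their
    estimated density, biasing the sample toward sparse regions."""
    density = lsh_bucket_counts(X).astype(float)
    weights = 1.0 / density
    probs = weights / weights.sum()
    idx = rng.choice(len(X), size=sample_size, replace=False, p=probs)
    return X[idx]

# A dense cluster plus a few scattered (outlier-like) points.
X = np.vstack([rng.normal(0, 0.1, size=(200, 2)),
               rng.uniform(-5, 5, size=(10, 2))])
S = density_biased_sample(X, 50)
print(S.shape)  # (50, 2)
```

An ensemble detector would then be run on several such samples; the point of the density bias is that the scattered points survive sampling far more often than under uniform sampling.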


Figures: Fig. 2, accuracy at different compression rates; Fig. 3, accuracy at different parameters for ISOLET, where l is the number of hash functions, w is the bucket width, and |C| is the average number of centroids computed.
Large Scale Metric Learning

July 2016 · 251 Reads · 1 Citation

Many machine learning and pattern recognition algorithms rely heavily on good distance metrics to achieve competitive performance. While distance metrics can be learned, the computational expense of doing so is currently infeasible on large datasets. In this paper, we propose two efficient and effective approaches for selecting the training dataset, using Locality-Sensitive Hashing (LSH) with discriminative information and with K-Means clustering inside LSH buckets, to accelerate metric learning. Our methods yield a speedup factor of (N/C)^2, where N is the training set size and C ≪ N is the user-selected compressed set size, achieving a quadratic speedup to metric learning that is often realised as an improvement of one to two or more orders of magnitude. For example, our generic filter approach enables the currently fastest Large Margin Nearest Neighbor (LMNN) implementation to learn metrics on one million samples in 6.8 minutes, down from 5.4 hours (a 48x speedup). LMNN and similar state-of-the-art methods use tree data structures to speed up nearest-neighbour queries, an advantage that degrades at higher dimensions; our approach does not share this limitation.
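The compression step behind the (N/C)^2 speedup can be sketched as follows. This is a deliberately simplified version: the paper combines LSH with K-Means and discriminative information, while the sketch below just replaces each LSH bucket with its centroid so that a metric learner trains on |C| centroids instead of N points. All names and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def lsh_compress(X, n_projections=3, bucket_width=2.0):
    """Compress the training set: hash points into LSH buckets via
    random Gaussian projections, then replace each bucket with its
    centroid. A downstream metric learner sees |C| centroids instead
    of N points, cutting its pairwise work by roughly (N/C)^2."""
    d = X.shape[1]
    A = rng.normal(size=(d, n_projections))
    keys = np.floor(X @ A / bucket_width).astype(int)
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    centroids = np.zeros((len(uniq), d))
    for j in range(len(uniq)):
        centroids[j] = X[inverse == j].mean(axis=0)
    return centroids

X = rng.normal(size=(10000, 5))
C = lsh_compress(X)
print(len(C), "centroids from", len(X), "points")
```

Because nearby points tend to collide in the same bucket, the centroids preserve the coarse geometry of the data; any LMNN-style learner can then be run unchanged on the compressed set.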


Fig. 2: Example of a warping path, which determines the cost of aligning two sequences in order to determine similarity. Sequence A has length n = 7 and B has length m = 8.
Fast Trajectory Clustering using Hashing Methods

July 2016 · 866 Reads · 15 Citations

There has been an explosion in the usage of trajectory data. Clustering is one of the simplest and most powerful approaches for knowledge discovery from trajectories. To produce meaningful clusters, well-defined metrics are required to capture the essence of similarity between trajectories. One such distance function is Dynamic Time Warping (DTW), which aligns two trajectories in order to determine similarity, and which has been widely accepted as a very good distance measure for trajectory data. However, trajectory clustering is expensive due to the complexity of such similarity functions: DTW has a computational cost of O(n^2), where n is the average length of a trajectory. In this paper, we propose the use of hashing techniques based on Distance-Based Hashing (DBH) and Locality Sensitive Hashing (LSH) to produce approximate clusters and speed up the clustering process.
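The O(n^2) cost the abstract refers to comes from DTW's dynamic-programming recurrence, which this standard textbook implementation (not the paper's code) makes concrete for 1-D sequences:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m) dynamic-programming DTW between two 1-D
    sequences. dp[i, j] holds the minimal cost of aligning the
    prefixes a[:i] and b[:j]."""
    n, m = len(a), len(b)
    dp = np.full((n + 1, m + 1), np.inf)
    dp[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i, j] = cost + min(dp[i - 1, j],      # stretch a
                                  dp[i, j - 1],      # stretch b
                                  dp[i - 1, j - 1])  # step both
    return dp[n, m]

a = [1.0, 2.0, 3.0, 2.0, 1.0]
b = [1.0, 1.0, 2.0, 3.0, 2.0, 1.0]  # a with its first point repeated
print(dtw_distance(a, b))  # 0.0: b is a time-warped copy of a
```

The filled table is exactly the warping-path grid shown in the paper's Fig. 2 (n = 7, m = 8 there); hashing schemes such as DBH/LSH avoid evaluating this quadratic recurrence for most trajectory pairs.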

Citations (1)


... Gudmundsson et al. [6] proposed a sub-trajectory clustering approach based on Fréchet distance, considering a trajectory as a directed curve in 2D. Dynamic Time Warping (DTW) was used by Sanchez et al. [7] for fast trajectory similarity. They proposed hashing techniques named Distance-Based Hashing (DBH) and Locality Sensitive Hashing (LSH) by clustering the trajectories using the k-means algorithm. ...

Reference:

Which Way to Go - Finding Frequent Trajectories Through Clustering
Fast Trajectory Clustering using Hashing Methods