# Unsupervised Learning - Science topic

Explore the latest questions and answers in Unsupervised Learning, and find Unsupervised Learning experts.

Questions related to Unsupervised Learning

I have an idea of these models:

1. RNN Embedding

2. RNN with pack padded sequence

3. FastText

4. Bi-LSTM

5. CNN

6. CNN-LSTM

7. BERT Transformer

I am looking for models apart from these.

I have recently been studying clustering quality metrics such as Normalized Mutual Information and the Fowlkes-Mallows score.

Both of these metrics summarize the quality of the entire clustering. I am wondering whether there is a standard way, or a variant of the metrics above, to measure the quality of a single cluster or a single class. The basic idea is that even if the overall clustering looks good, if some particular cluster is problematic, the metric should still give a warning.

PS: I am not looking for any intrinsic methods. More precisely, let's assume that what I have is, for each data point x_i belonging to dataset X, a ground-truth class mapping x_i -> y_i and a clustering x_i -> z_i, where y_i and z_i indicate membership and don't necessarily have the same cardinality. Besides, I would like to further assume there is no distance measure d(x_i, x_j) defined.
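One simple answer, sketched here under the setting described (only the mappings y_i and z_i, no distances): build the per-cluster contingency counts and report, for each cluster, its size, majority-class purity, and a B-cubed-style precision averaged over its members. The function name and report fields are my own choices, not a standard API.

```python
from collections import Counter

def per_cluster_report(y, z):
    """For each cluster, report its size, majority-class purity, and a
    B-cubed-style precision averaged over its members, so one bad cluster
    stands out even when the overall score looks fine."""
    clusters = {}
    for yi, zi in zip(y, z):
        clusters.setdefault(zi, []).append(yi)
    report = {}
    for zi, members in clusters.items():
        counts = Counter(members)
        n = len(members)
        purity = counts.most_common(1)[0][1] / n
        # B-cubed precision of a member = fraction of its cluster sharing its class
        bcubed_prec = sum(counts[yi] / n for yi in members) / n
        report[zi] = {"size": n, "purity": purity, "bcubed_precision": bcubed_prec}
    return report
```

The same idea works per class (swap y and z) to get a per-class recall; scanning the report for low-purity clusters gives exactly the "warning" behavior asked about.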

Suppose I have collected data regarding, say, food preferences from multiple sources and merged them. How can I decide what kind of clustering to do if I want to find related preferences?

Should I go for k-means, hierarchical, density-based clustering, etc.?

Is there any process of selecting the clustering technique?

On an online website, some users may create multiple fake accounts to promote (like/comment on) their own comments/posts; for example, on Instagram, to make their comment appear at the top of the list of comments.

This action is called Sockpuppetry. https://en.wikipedia.org/wiki/Sockpuppet_(Internet)

What are some general algorithms in unsupervised learning to detect these users/behaviors?
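One common unsupervised signal for sockpuppets is near-identical activity: two accounts that like almost exactly the same set of posts. A minimal sketch of that idea (the threshold, the `likes` structure, and the function names are illustrative assumptions, not a published algorithm):

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def suspicious_pairs(likes, threshold=0.8, min_items=3):
    """likes: dict user -> set of items they liked/upvoted.
    Flags account pairs whose liked-item sets overlap almost completely,
    a common sockpuppet signature. Purely unsupervised: no labels needed."""
    flagged = []
    for u, v in combinations(sorted(likes), 2):
        if len(likes[u]) >= min_items and len(likes[v]) >= min_items:
            s = jaccard(likes[u], likes[v])
            if s >= threshold:
                flagged.append((u, v, s))
    return flagged
```

In practice this is usually combined with other unsupervised tools: graph clustering on the user-interaction graph, and anomaly detectors (e.g. isolation forests) on per-account features such as account age, posting bursts, and IP/device overlap.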

While I am intrigued by the fact that unsupervised learning algorithms don't require labels yet compute astounding results, I wonder what the stopping point in AI is. When do we know that machine 'learning' is out of our hands and that we can't decode what we originally created?

Is there some method to know what our algorithm is learning and on what basis?

The use of cascaded neural networks for the inverse design of metamaterials and nanophotonics can effectively alleviate the problems caused by one-to-many mappings, but the intermediate layer of the cascaded network is trained without supervision, and an effective method is needed to limit the output range of the intermediate layer.

UDA (https://github.com/google-research/uda) can achieve good accuracy with only 20 training examples on text classification.

But I find it hard to reproduce this result on my own dataset.

So I want to know why UDA works, and what the most important factor is for reproducing the result.
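As far as I understand the UDA paper, the core ingredient is the consistency loss on unlabeled data: the model's (sharpened, gradient-stopped) prediction on an original example supervises its prediction on a strongly augmented copy, via a KL divergence. A numpy sketch of just that loss term (temperature value and names are assumptions; the real implementation also uses TSA scheduling and confidence masking, which are often what makes or breaks reproduction):

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def uda_consistency_loss(logits_orig, logits_aug, temperature=0.4):
    """KL(p_orig_sharpened || p_aug): the model's fixed prediction on the
    original unlabeled example supervises its prediction on the augmented copy.
    Sharpening (temperature < 1) is part of UDA's recipe."""
    p = softmax(logits_orig / temperature)   # target; no gradient flows here in UDA
    q = softmax(logits_aug)
    return float(np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)))
```

Augmentation quality (back-translation strength, TF-IDF word replacement) is the other factor the authors emphasize, so a weak augmenter on a new dataset is a plausible reason for the gap you see.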

Supervised learning is the basis of deep learning, but human and animal learning is largely unsupervised. To make deep learning more effective in human life, we need to discover deep learning approaches that handle unsupervised learning. How much progress has been made in this direction so far?

I have an input dataset given as a 5x100 matrix, where 5 is the number of variables and 100 is the number of samples. I also have a target dataset given as a 1x100 matrix of continuous values. I want to design a model from the input and target data using a deep learning method. How can I enter my data (input and target) into this toolbox? Is it similar to the neural fitting (nftool) toolbox?
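The toolbox itself is MATLAB-specific, but the key data-entry point is the same everywhere: check whether the tool expects samples as rows or as columns. A hedged illustration in Python with scikit-learn (where samples must be rows, so the 5x100 matrix is transposed; the synthetic target is just for demonstration):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 100))       # 5 variables x 100 samples, as in the question
# illustrative continuous 1 x 100 target (a made-up linear rule plus noise)
t = 2 * X[0] - X[3] + 0.01 * rng.normal(size=100)

# scikit-learn (like nftool's underlying fitnet) expects samples x features:
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X.T, t)                   # X.T is 100 x 5
print(model.predict(X.T).shape)     # one prediction per sample
```

In MATLAB's nftool/fitnet the convention is the opposite (columns are samples), so there the 5x100 input and 1x100 target can typically be passed as-is.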

Hi.

I'm dealing with clustering of data where the resulting clusters are, in general, non-spherical. Some of them are not convex.

What are the best internal metrics for evaluating this kind of cluster?

I know the silhouette index is very commonly used for evaluating the result of a clustering process. However, it seems that the silhouette index is biased towards spherical clusters.

Normalized Mutual Information (NMI) and B-cubed (B3) are extrinsic clustering evaluation metrics used when each instance (sample) has exactly one label.

What are the equivalent metrics when each instance (sample) has more than one label?

For example, in the first image we see [apple, orange, pear]; in the second image, [orange, lime, lemon]; in the third image, [apple]; and in the fourth image, [orange]. Then putting the first and fourth images in one cluster is good, while putting the third and fourth images in one cluster is bad.

Application: many popular datasets for object detection or image segmentation have multiple labels per image. If we use such data for classification (not detection and not segmentation), we have multiple labels for each image.

Note: my task is unsupervised clustering, not supervised classification. I know that for supervised classification we can use top-5 or top-10 scores, but I do not know what the equivalent would be in unsupervised clustering.
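One pragmatic option, directly matching the "good pair / bad pair" rule stated above, is a pair-counting score: a same-cluster pair counts as good if the two label sets intersect. This is a sketch of that idea (my own construction, not a named standard metric; generalized Rand/B-cubed variants for overlapping labels exist in the literature and follow the same spirit):

```python
from itertools import combinations

def multilabel_pair_score(labels, clusters):
    """labels: list of label-sets, one per image; clusters: cluster id per image.
    A same-cluster pair is 'good' if the two label sets share at least one label.
    Returns the fraction of same-cluster pairs that are good (1.0 = perfect)."""
    good = total = 0
    for i, j in combinations(range(len(labels)), 2):
        if clusters[i] == clusters[j]:
            total += 1
            good += bool(labels[i] & labels[j])
    return good / total if total else 1.0
```

On the fruit example above (images 1, 3, 4 in one cluster, image 2 alone), pairs (1,3) and (1,4) are good but (3,4) is bad, giving 2/3.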

Dear researchers,

let's gather data regarding the coronavirus.

This could be used for analysis in a second step.

My first ideas:

1. Create predictive models

2. Search for similarities with Unsupervised Learning

3. Use Explainable AI for new insights.

What are your ideas?

Where did you find data?

It's an era of deep learning, and especially of unsupervised deep learning, which basically revolves around ART, SOM, and autoencoders. Now the question arises: which issues do deep-learning-based methods address that are not properly handled by traditional unsupervised learning techniques?

Can AI learn from processes instead of data? The question is valid for both supervised and unsupervised learning. If so, are there algorithms or approaches for learning from process executions?

Hello, I'm a biologist interested in machine learning application in genomic data; specifically,

**I'm trying to apply clustering techniques to differential gene expression data.** I started by understanding the basics of unsupervised learning and clustering algorithms on random datasets, but now I need to apply some of those algorithms (k-means, PAM, CLARA, SOM, DBSCAN...) to differential gene expression data and, honestly, I don't know where to begin, so

**I'd be grateful if someone could recommend some tutorials or textbooks, or give me some tips.** Thank you for your time!

PS: I'm mainly using the **R language**, but **Python** tutorials are also OK for me.

I work on graph-based knowledge representation. I would like to know how we can apply deep learning to Resource Description Framework (RDF) data, and what we can infer in this way. Thanks in advance for your help!

Hi,

I am pursuing a PhD, and my area of work is pattern recognition by machine learning. Because of my topic, I have covered supervised and unsupervised learning (deep learning) during my PhD. I have completed all my research work and am waiting to submit the thesis; I hope to complete it within 3 years. I have published 5 articles (4 conference papers and 1 Scopus-indexed journal paper) and have 5 unpublished articles.

Could you suggest what options I could pursue after completing my PhD, and why those options would be good in your view (based on my profile)?

Thank you for your time.

What are the links in their definitions? How do you interconnect them? What are their similarities or differences? ...

I would be grateful if you could reply by referring to valid scientific literature sources.

I have PCAP files captured from network traffic. What should be done so that the PCAP files can be processed with machine learning tools? What steps are needed so that the data can be analyzed with one of the unsupervised methods? Does the data have to be converted to CSV format?
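The usual pipeline is PCAP -> per-packet/per-flow feature rows (often CSV) -> unsupervised model. Tools like tshark or scapy do the heavy lifting, but the classic libpcap format is simple enough that even a minimal stdlib parser can extract basic features; a sketch (timestamps and captured lengths only, assuming the classic format, not pcapng):

```python
import struct

def pcap_packets(data: bytes):
    """Minimal parser for the classic libpcap format: yields
    (timestamp, captured_length) per packet -- a starting point for
    building per-packet feature rows (e.g. to dump to CSV)."""
    magic = struct.unpack("<I", data[:4])[0]
    endian = "<" if magic == 0xA1B2C3D4 else ">"   # magic number fixes byte order
    offset = 24                                    # global header is 24 bytes
    while offset + 16 <= len(data):                # each record header is 16 bytes
        ts_sec, ts_usec, incl_len, _ = struct.unpack(
            endian + "IIII", data[offset:offset + 16])
        yield ts_sec + ts_usec / 1e6, incl_len
        offset += 16 + incl_len
```

From rows like these (plus protocol fields parsed from each packet's bytes) you can aggregate per-flow statistics and feed them to clustering or anomaly detection; CSV is convenient but not mandatory.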

Suppose we have users and, for each user: user_id, user_name, user_job_title, user_skills, user_workExperience. I need to cluster the users into groups based on their skills and work experience (long text data). I have been searching for how to cluster text data but still haven't found a good step-by-step example to follow. Given the data I have, I think I should use an unsupervised approach (as my data is not labeled). I found that I can use k-means or hierarchical clustering, but I'm stuck on how to find K, the number of clusters for k-means. Also, I don't know the best way to prepare the long text before fitting it to the clustering algorithm. Any idea or example would be much appreciated. Thanks in advance.
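A common minimal pipeline for exactly this setting: vectorize the text with TF-IDF, then run k-means for a range of K and keep the K with the best silhouette score. A toy sketch (the six example "profiles" are invented; on real data you would add stop-word removal, lowercasing, and possibly sentence embeddings instead of TF-IDF):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

docs = ["python java developer", "java spring backend developer",
        "nurse hospital patient care", "patient care clinical nurse",
        "accountant tax audit finance", "finance audit tax reporting"]

X = TfidfVectorizer().fit_transform(docs)     # long text -> sparse vectors

best_k, best_s = None, -1.0
for k in range(2, 5):                         # scan a small range of K
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    s = silhouette_score(X, labels)           # higher = better-separated clusters
    if s > best_s:
        best_k, best_s = k, s
print(best_k)
```

The elbow method on inertia is the other common way to pick K; hierarchical clustering sidesteps the choice by letting you cut the dendrogram afterwards.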

Which method extracts better features for unsupervised learning: PCA or an autoencoder?
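A useful framing: a purely linear autoencoder can at best span the same subspace as PCA, so PCA's reconstruction error is the baseline a (nonlinear) autoencoder has to beat to justify its extra cost. A small numpy sketch of that baseline, with PCA computed via SVD (random data here, so the numbers are only illustrative):

```python
import numpy as np

def pca_reconstruction_error(X, k):
    """PCA via SVD: project onto the top-k components, reconstruct, and
    return the mean squared reconstruction error."""
    Xc = X - X.mean(axis=0)                   # PCA assumes centered data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    X_hat = (Xc @ Vt[:k].T) @ Vt[:k]          # encode then decode
    return float(np.mean((Xc - X_hat) ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
errors = [pca_reconstruction_error(X, k) for k in (1, 5, 10)]
print(errors)                                 # decreases as k grows; ~0 at k = 10
```

If the data lies on a nonlinear manifold, a nonlinear autoencoder can beat this baseline substantially; if not, PCA is cheaper, deterministic, and just as good.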

I am presently working on unsupervised learning for text classification.

The data is entered by end users in the business domain and can be on varying subjects.

Any new subject can be triggered at any point in time; hence, continuous learning is required to create new clusters/classes based on the entered text.

I thus want to avoid any seed values such as density/epsilon/number of clusters, etc.

Is there any known algorithm that finds the number of clusters and clusters the data incrementally? (So far I have tried Gaussian mixtures along with other basic clustering algorithms: k-means, DBSCAN, etc.)

Hello,

Has anyone already worked on unsupervised image segmentation? If so, please give me your suggestions. I am using an autoencoder for unsupervised image segmentation, and someone suggested I use normalized cuts to segment the image. Is there any algorithm other than normalized cuts? Also, please suggest some reconstruction losses that are efficient to use.

Thanks in advance,

Dhanunjaya, Mitta

Hello,

Has anyone already worked with an MR image dataset? If so, is there any model to remove motion artifacts from MR images when present? What should we do if we have an MR image with motion artifacts? Please give me your suggestions on whether it is possible to remove artifacts once the scan has been produced.

Thanks in advance,

Dhanunjaya, Mitta

Hello,

I want to know how reinforcement learning differs from supervised and unsupervised learning. There is a reinforcement learning technique called Q-learning; could anybody explain the working concept of the Q-learning method? Looking forward to useful comments on this.

Thanks
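The working concept of Q-learning fits in a few lines: keep a table Q(s, a) of expected discounted return, and after each transition nudge Q(s, a) toward the observed reward plus the discounted best value of the next state. A minimal sketch on a made-up 5-state corridor (all constants are illustrative choices):

```python
import random

# Tiny corridor MDP: states 0..4, actions 0 = left, 1 = right, reward 1 at state 4.
N_STATES, GOAL = 5, 4
alpha, gamma, eps = 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]
random.seed(0)

def choose(s):
    """Epsilon-greedy action selection with random tie-breaking."""
    if random.random() < eps or Q[s][0] == Q[s][1]:
        return random.randrange(2)
    return 0 if Q[s][0] > Q[s][1] else 1

for _ in range(500):                       # episodes
    s = 0
    for _ in range(100):                   # step cap per episode
        a = choose(s)
        s2 = max(0, min(GOAL, s + (1 if a == 1 else -1)))
        r = 1.0 if s2 == GOAL else 0.0
        done = s2 == GOAL
        # core Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) * (not done) - Q[s][a])
        s = s2
        if done:
            break

policy = [q.index(max(q)) for q in Q[:GOAL]]
print(policy)                              # learned greedy policy: always go right
```

The contrast with the other paradigms: supervised learning gets the correct answer per example, unsupervised learning gets no feedback at all, and Q-learning gets only delayed scalar rewards from its own actions.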

I'm a newbie in the field of deep reinforcement learning, with a background in linear algebra, calculus, probability, and data structures and algorithms. I have 2+ years of software development experience. As an undergrad, I worked on tracking live objects from a camera using C++ and OpenCV. Currently, I'm intrigued by the work being done at Berkeley DeepDrive (https://deepdrive.berkeley.edu/project/deep-reinforcement-learning).

How do I gain the knowledge to build a theoretical model of a self-driving car? What courses should I take? What projects should I do?

For one of my studies, I designed an unsupervised predictive clustering model, and I am now searching for modification steps and post-processing to use that clustering model for classification in a reliable way.

In MATLAB, clustering data using kmeans can be achieved as shown below:

L = kmeans(X,k,Name,Value)
where L contains the cluster index for each data point.

That is, if I have 307 data points, I get a 307 x 1 array L holding the index of each data point.

However, while using SOM for clustering, I discovered that to get the indices you use the code snippet below:

net = selforgmap([dimension1 dimension2]);

% Train the Network

[net,tr] = train(net,X);

%get indices

L = vec2ind(net(X))';

For a network with 5 x 5 dimensions, it returns L as an array with dimension 25 x 1 instead of 307 x 1.

For a network with 10 x 10 dimensions, it returns L as an array with dimension 100 x 1 instead of 307 x 1.

What am I doing wrong?

**Or, to put it simply: how do I compute the class vectors of each of the training inputs?**
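One thing worth checking in the MATLAB code: the network treats columns as samples, so X must be features x 307; if it is oriented the other way, `vec2ind(net(X))'` returns one index per unit instead of one per sample. The underlying computation is just "assign each sample to its best-matching unit"; a numpy sketch of that step, with random stand-ins for the trained prototypes:

```python
import numpy as np

def som_cluster_indices(X, prototypes):
    """X: (n_samples, n_features); prototypes: (n_units, n_features),
    e.g. 25 units for a 5x5 map. Returns one winning-unit index per sample:
    an (n_samples,) vector, the analogue of MATLAB's vec2ind(net(X))'."""
    # squared distance of every sample to every prototype
    d2 = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(307, 4))          # 307 data points
protos = rng.normal(size=(25, 4))      # a trained 5x5 map (random here, for shape)
print(som_cluster_indices(X, protos).shape)   # (307,)
```

With the trained map's weight vectors as `prototypes`, this yields exactly the 307 x 1 class-vector assignment asked for.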

**1.** What is the best approach to assign weights to queries?

**2.** Is it conventional to use the same weights for the entire query set?

Thanks

I'm new to Matlab, and I'm wondering if someone can help me get started with a machine learning task.

I would like to perform linear discriminant analysis (LDA) or support vector machine (SVM) classification on my small dataset: a matrix of 8 features (attributes) extracted from an ECG signal. The task is binary classification into a preictal state (class 1) and an interictal state (class 2).

In Matlab, I found the Classification Learner app, which enables using different kinds of classifiers, including SVM, but I don't know whether I can use my input data to train the classifier in this app, and I'm not sure how to start. Do you have any idea about this app? Please help!

Could anybody tell me how to use word embeddings to expand an initial query after getting the most similar words for each query term?

I'm having a concrete problem I'm trying to solve but I'm not sure in which direction I should go.

- Goal: Identify formation of a soccer team based on a static positional data (x,y coordinates of each player) frame
- Input: Dataframe with player positions + possible other features
- Output: Formation for the given frame
- Limited, predefined formations (5-10) like 5-3-2 (5 defenders, 3 midfielders, 2 strikers)
- Possible to manually label a few examples per formation

I already tried k-means clustering on single frames, considering only the x-axis to identify defense, midfield, and offense players; this works OK but fails in some situations.

Since I don't have (many) labels, I'm looking for unsupervised neural network architectures (like self-organizing maps) which might solve this problem better than simple k-means clustering on single frames.

I'm looking for an architecture that could utilize the additional information I have about the problem (the number and type of formations, a few already-labeled frames, etc.).
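For reference, the x-axis k-means baseline described above can be written in a few lines; the synthetic 5-3-2 frame below is invented to show the shape of the computation (real frames would need the ordering of lines handled per attacking direction):

```python
import numpy as np
from sklearn.cluster import KMeans

def formation_from_positions(xy, n_lines=3):
    """xy: (10, 2) outfield-player coordinates, attacking to the right.
    Cluster on x only into defense/midfield/attack lines, then count
    players per line and format e.g. '5-3-2'."""
    x = np.asarray(xy)[:, :1]
    km = KMeans(n_clusters=n_lines, n_init=10, random_state=0).fit(x)
    order = np.argsort(km.cluster_centers_.ravel())      # back line first
    counts = [int((km.labels_ == c).sum()) for c in order]
    return "-".join(map(str, counts))

# synthetic frame: 5 defenders, 3 midfielders, 2 strikers
xy = np.array([[10, y] for y in range(5)] +
              [[40, y] for y in range(3)] +
              [[70, y] for y in range(2)], dtype=float)
print(formation_from_positions(xy))   # 5-3-2
```

One way to use the extra prior knowledge without a neural network: treat each of the 5-10 predefined formations as a template of ideal positions and assign a frame to the template with the cheapest player-to-slot matching (Hungarian algorithm); the few labeled frames then serve to calibrate the templates.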

I applied supervised and unsupervised learning algorithms to the dataset available in the UCI repository. I further want to know whether I can find a dataset based on the location of the user, the history of previous transactions, and the time span between two consecutive transactions.

I have a dataset which contains both normal and abnormal (counter data) behavior.

I did not use English but one of the under-resourced languages of Africa. The challenge is testing the unsupervised learning.

I am looking for a way to test/evaluate this model.

Please refer me to links and tutorials about testing/evaluating unsupervised learning.

Nowadays there are plenty of core technologies for TC (text classification). Among all the ML approaches, which one would you suggest for training models for a new language and a vertical domain (like sports, politics, or economy)?

Dear all respectful researchers,

I am working on a structured biomedical dataset that contains many data-type inconsistencies, outliers, and missing values (instances) across seven independent variables (attributes). I am considering pre-processing methods such as data standardization and imputation to address the issues mentioned above. However, there are two versions of these pre-processing methods: supervised and unsupervised.

My main two questions regarding the common practice are:

1. Should I apply an unsupervised discretization method to the dataset to solve the data-type issue when I subsequently conduct cluster analysis using the k-means algorithm?

2. After completing the clustering task above, should I apply a supervised discretization method to the same dataset when I train a model for a classification task using supervised machine learning algorithms?

Thank you for your information and experience.

In top journal papers, there are many works on accelerometer signals. Most of them follow these steps:

1. Handling signals of different lengths --- never mentioned in any paper

2. Pre-processing (filtering out noise) --- optional

3. Signal segmentation

4. Feature extraction

5. Supervised (or) unsupervised learning

Nevertheless, none of the papers mention the technique used to handle signals of different lengths, for example a variation from 600 s to 13,200 s (with the same sampling rate, 100 Hz). Since such missing information can lead to inaccurate comparisons, I'm surprised that top journals didn't give importance to this issue. I would like to know the most commonly used technique for handling signals of different lengths; please don't consider the sampling rate, since all signals share the same rate.
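In my experience the usual (if rarely stated) answer is fixed-length windowing during the segmentation step: every recording, whatever its duration, becomes a set of equal-sized windows, and features are extracted per window. A sketch for the durations mentioned above (window and hop lengths are illustrative choices):

```python
import numpy as np

def segment(signal, fs=100, win_s=10, hop_s=5):
    """Split a 1-D signal of arbitrary duration into fixed-length windows
    (win_s seconds long, advancing hop_s seconds), so recordings of any
    length yield comparable fixed-size inputs."""
    win, hop = int(win_s * fs), int(hop_s * fs)
    n = 1 + max(0, (len(signal) - win) // hop)
    return np.stack([signal[i * hop: i * hop + win] for i in range(n)])

short = np.zeros(600 * 100)        # a 600 s recording at 100 Hz
long_ = np.zeros(13200 * 100)      # a 13,200 s recording at 100 Hz
print(segment(short).shape, segment(long_).shape)
```

Per-recording predictions are then typically obtained by aggregating (voting or averaging) over the recording's windows, which removes the length mismatch from the comparison entirely.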

With respect to unsupervised learning, such as clustering: are there metrics to evaluate the performance of unsupervised learning, as there are for supervised learning?

The deep multilayer perceptron is based on supervised learning, while the deep belief network is based on unsupervised learning. Looking at the malware detection scenario, which method would be best?

Hi,

Please, when should I opt for unsupervised learning, and what are the benefits of unsupervised learning techniques?

Thanks & Best wishes

Osman

I have implemented 3 bootstrapping models for learning product details, then compared them with several different performance measures and found out which model learned best. Now I want to do something like optimization/ensembling (is this possible with the models' results?), or please suggest some other simple process to conclude my work. Moreover, the work was performed in an unsupervised manner, so please help me improve my models' results (e.g., TP, TN, FP, FN, or the learned product details). Thanks in advance.

As the basic concepts used in association rule learning are related to conditional probability and the ratio to independence, I was wondering whether Correspondence Analysis has been used for this in the literature. I understand the main motivation in association rule learning is efficiency in CPU time and memory usage, but these days SVD (Singular Value Decomposition) is pretty fast, and some algorithms are very sparing in memory usage.

Is there any way to compare the accuracy or cost of these two methods with each other: SVM and k-means clustering?

Kindly tell me: I am using a dataset with a binary class attribute, such as student result, "Pass" or "Fail". My data has 80% pass students and 20% fail, and this ratio is realistic because it is observed in real life. However, because a classifier tends to lean towards the majority class, I think the data needs to be balanced. The question is: in this case, should 50/50 balancing be considered right even though it is unrealistic, or should it be 60/40, or something else?

The system should use the context of the item to select relevant data (PCA), then use k-means for clustering, and finally use IBCF to generate top-N recommendations. I need the detailed algorithm for this task (PCA + k-means + IBCF).

In the training phase, I applied the k-means clustering algorithm to a dataset (k=10). I want to apply a decision tree to each cluster and generate a separate model for each cluster.

In the testing phase, I want to compare each test instance with the centroid of each cluster and use the appropriate model to classify the instance.

Is there a way I can achieve this in WEKA or the WEKA API? Any link/resource would be appreciated.
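I can't speak to the WEKA API specifics, but the pipeline itself (cluster, fit one tree per cluster, route test instances by nearest centroid) is small enough to sketch; here in Python with scikit-learn, using iris and k=3 purely to keep the toy readable (the question's k=10 works identically):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
k = 3   # the question uses k=10; 3 keeps this toy example readable

km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
# one decision tree per cluster, trained only on that cluster's instances
trees = {c: DecisionTreeClassifier(random_state=0).fit(
             X[km.labels_ == c], y[km.labels_ == c])
         for c in range(k)}

def classify(x):
    """Route the instance to the nearest centroid's model, as described."""
    c = int(np.argmin(((km.cluster_centers_ - x) ** 2).sum(axis=1)))
    return int(trees[c].predict(x.reshape(1, -1))[0])

preds = np.array([classify(x) for x in X])
print((preds == y).mean())
```

In WEKA the closest built-in analogue I know of is chaining SimpleKMeans with per-cluster classifiers programmatically via the API, since the GUI has no single "cluster-then-classify" component.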

Hello dear members, does anyone have ideas or information on feature selection? I would be very grateful for them.

I get the number of components for my dataset using BIC, but I want to know whether the silhouette coefficient is the right option to validate my results.
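Using the silhouette as a second opinion on a BIC-chosen component count is reasonable, with the caveat that BIC judges the probabilistic fit while the silhouette judges geometric separation, so they can legitimately disagree on overlapping components. A sketch of the combined check on synthetic blobs (the data and the range of k are invented for illustration):

```python
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from sklearn.mixture import GaussianMixture

# four well-separated synthetic clusters
X, _ = make_blobs(n_samples=300, centers=[[0, 0], [8, 0], [0, 8], [8, 8]],
                  cluster_std=1.0, random_state=0)

# BIC: lower is better, so keep the k that minimizes it
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(2, 8)}
k_bic = min(bics, key=bics.get)

# silhouette on the hard assignments as an independent sanity check
labels = GaussianMixture(n_components=k_bic, random_state=0).fit_predict(X)
sil = silhouette_score(X, labels)
print(k_bic, sil)
```

If both criteria point to the same k, that is decent evidence; if they diverge, inspecting the clusters (or adding a stability check across random restarts) is usually more informative than trusting either score alone.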

Thanks!

Thanks to Prof. Erkki Oja it is well known that a neuron using simple Hebbian Learning (including a weight decay) learns to extract the first principal component of the input data.

However, I'd like to validate my intuitions about how this generalizes to a Hebbian network which makes use of competitive learning/lateral inhibition between neurons within a layer.

So, considering I have a competitive neural network model with multiple Hebbian neurons (arranged in a layer) I would assume that the neurons roughly learn to differentiate along the first principal component.

Could anybody please confirm or reject this supposition, and/or point me to literature on the topic? Most sources only consider single or chained (Sanger's rule) Hebbian neurons.
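For what it's worth: with hard winner-take-all competition the usual outcome is not that all neurons differentiate along the first principal component, but that they partition the input space (vector quantization, as in SOM/competitive learning); the PC1-extraction result holds for the single Oja neuron. A numpy sketch of that single-neuron baseline, on invented data whose first principal component lies along the x-axis:

```python
import numpy as np

rng = np.random.default_rng(0)
# 2-D data with std 3 along x and std 1 along y, so PC1 = the x-axis
X = rng.normal(size=(2000, 2)) * np.array([3.0, 1.0])

w = rng.normal(size=2)
lr = 0.001
for _ in range(20):                  # a few passes over the data
    for x in X:
        y = w @ x
        w += lr * y * (x - y * w)    # Oja's rule: Hebbian term + implicit decay

w /= np.linalg.norm(w)
print(w)                             # close to +/- [1, 0], the first PC
```

With soft competition (lateral inhibition rather than hard WTA), intermediate regimes exist, which may be the generalization you have in mind; Kohonen's and Földiák's work on decorrelating Hebbian networks are the standard references I would check.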

Hi there, I am trying to perform clustering on some multivariate continuous numerical data. I am wondering whether anyone has tried doing this in R using deep learning algorithms? So far I have only found autoencoders as the deep unsupervised learning option.

I'm developing a detector in Matlab that searches for road signs, with the camera mounted on a moving vehicle. A HoG cascade is first used to detect road sign candidates, as you can see in the attached image, but it produces lots of misdetections because, as far as I can tell, a candidate simply has to look like a rectangle. So I trained a HoG-SVM classifier to detect the arrow signs, "<" and ">", within the road sign. The classifier detects the arrow signs using a sliding window of fixed size. The problem I'm facing is that the camera is moving, so the arrow signs get larger as the road sign gets closer to the camera (vehicle). Now I'd like to do a multi-scale search with the SVM classifier, but I couldn't find any functions for that... Any help would be welcome!

Including examples, and also the applicable fields in technology.

I have the City of Edmonton property assessment data. The values were mostly strings, so I basically mapped them to numbers (e.g., each neighbourhood is now represented by a number from 1 to 391).

I'm trying unsupervised learning to see what I can learn, but the data seem to be incorrect: the plots aren't normal; they are mostly straight vertical lines. Should I proceed with supervised learning and hope for the best? Or is it just that the features aren't good enough (not as good as the Boston housing prices dataset), or that my encoding of neighbourhoods as numbers, and of residential-or-not as 1's and 0's, is to blame?

Hello. I am reasonably new to programming in general, so I'm not looking for detailed advice (code examples). I'm currently learning Python so would prefer answers to my question that are possible with Python (although I'll happily consider other suggestions that don't work with Python, too).

I have two separate numerical datasets. Specific trends in the first dataset cause a change in the second dataset some minutes, hours, days, or weeks later.

I would like to develop a program that will teach itself what these patterns are, with the ultimate goal of being able to predict changes in the second dataset.

Specifically, this would need to perform unsupervised learning, not supervised learning, as I hope it will be able to detect patterns in the datasets that I am not already aware of.

Could anyone please recommend appropriate tools to use or any good introductory books/websites/tutorials?

Thank you in advance for your advice. :)

When an algorithm such as a genetic algorithm has converged to a solution with a certain minimum value of the merit function, how do we know that the solution obtained is the global minimum?

Hi all,

I am wondering which machine learning model would fit my problem. I am not sure whether "online sequence prediction" is the correct term for it. Basically, I would like to train a model that can predict the label of the current instance based on the features of this instance and also its preceding labels.

For example, my training data would be sequences like this:

instance1 -> instance2 -> instance3 -> instance4 -> instance5

And for these instances, I have the features **f_i** and the label **l_i** for each instance.

Now for the test data, the sequence of instances is fed to the model one by one.

For instance1, the model would predict the label **l_1** from the fact that instance1 is first in the sequence and from its features **f_1**. Then, for instance2, the model should predict the label **l_2** from the features **f_2** and the previous label **l_1**.

So the model would feel a little like a CRF, but during testing I do not have the whole sequence; I need to predict each item of the sequence as it is fed into the system. Any idea what the right model for this task would be? Thanks in advance!
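One simple fit for this setting is any per-step classifier whose feature vector includes the previous label; at test time you feed in your own previous prediction, exactly matching the one-by-one constraint. A toy sketch with logistic regression (the data generator is invented so that the label genuinely depends on both f_i and l_{i-1}; RNNs or MEMMs are the heavier-duty versions of the same idea):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_seq(n=200):
    """Toy sequence where label l_i depends on feature f_i AND on l_{i-1}."""
    f, l, prev = [], [], 0
    for _ in range(n):
        x = rng.normal()
        lab = int(x + 0.8 * prev + 0.1 * rng.normal() > 0.4)
        f.append([x]); l.append(lab); prev = lab
    return np.array(f), np.array(l)

f, l = make_seq()
prev_label = np.concatenate([[0], l[:-1]])        # l_{i-1} as an extra feature
clf = LogisticRegression().fit(np.hstack([f, prev_label[:, None]]), l)

# online prediction: instances arrive one by one; reuse our own last prediction
f_test, l_test = make_seq(100)
prev, preds = 0, []
for x in f_test:
    pred = int(clf.predict([[x[0], prev]])[0])
    preds.append(pred); prev = pred
acc = float(np.mean(np.array(preds) == l_test))
print(acc)
```

The known weakness (shared with MEMMs) is error propagation through the predicted label chain; scheduled sampling or beam search over recent steps are the usual mitigations.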

I am trying to build a training model using an **unlabeled** dataset, and am therefore looking for unsupervised learning algorithms. However, the data contain some **categorical** features, so conventional clustering methods, such as k-means, cannot be applied. Are there other *clustering algorithms which can be used for categorical data*?

The localization and autonomous navigation of robots in unknown environments is still a developing area. What is the best technique developed so far? What are the advantages and disadvantages of unsupervised learning techniques for navigation?

I am looking for a combination of supervised & unsupervised learning algorithms that would best analyse accelerometer and GPS data from mobile devices for gait recognition.

I am working on a project that involves segregating citation metadata into separate parts such as author, title, volume, page numbers, and other useful information. Recently, I came across an article (link provided below) which seems to offer the most efficient solution to my problem, so I would like to implement it. However, being new to machine learning, I don't know where I should begin.

Questions:

1) What tools/programming languages/libraries are preferable for implementing the logic given in the article?

2) How can I implement it?

ARTICLE TITLE :FLUX-CiM: Flexible Unsupervised Extraction of Citation Metadata

ARTICLE DOI : 10.1145/1255175.1255219

E-learning is a reality now. What will be the fate of e-learning in higher education? Will it survive?

Classification techniques can be categorized either into supervised and unsupervised learning, or into linear, non-linear, and decision tree classification. Kindly help categorize the above-mentioned classification techniques into these categories (supervised, unsupervised, linear, and non-linear).

In respect to the Reggio Emilia Approach as a product of Italy and the professionals who have worked so hard to provide such an informative model for the rest of the world.

U. S. schools continue to utilize teacher-directed, highly structured, and assessment-oriented instruction, even for very young children.

How can we seek more balance in our classrooms? Through strategies demonstrated by the Reggio Emilia municipal schools: more child-directed learning, in-depth project work, larger chunks of time for children to explore and ask questions, having parents become more integral to our classrooms, and coming to value the learning process more than the final product or outcome?

Hello,

I'm learning about unsupervised learning, and I would like to see a practical example of it in Matlab to get a better understanding.

Can anyone direct me to any sample available online?

Hi guys ,

I've used a structural distance-based measure to compute the similarity between each pair of nodes in an undirected graph. Hence, I calculated a distance matrix D such that the distance value Dij is simply the shortest path between node i and node j. However, the obtained distance values are absolute (i.e., 5, 19, 3, etc.), and I'd like to normalize them such that 0 <= Dij <= 1.

The normalized distance values must finally be converted to similarity values S such that Sij = 1 - Dij.

Can anyone guide me to an appropriate function for normalizing absolute distances?
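Two standard choices, shown on a made-up 3-node distance matrix: divide by the maximum distance (preserves ratios, requires finite distances), or map through 1/(1+d) (handles unbounded or infinite distances from disconnected components gracefully):

```python
import numpy as np

D = np.array([[0, 5, 19],
              [5, 0, 3],
              [19, 3, 0]], dtype=float)

# Option 1: divide by the largest distance, then S = 1 - Dn as required
Dn = D / D.max()
S = 1.0 - Dn                  # S in [0, 1], S_ii = 1, farthest pair gets S = 0

# Option 2: 1/(1+d) maps [0, inf) smoothly into (0, 1], already a similarity
S2 = 1.0 / (1.0 + D)

print(S)
```

Option 1 matches the Sij = 1 - Dij requirement stated above exactly; option 2 is preferable if the graph may be disconnected (infinite shortest paths would break max-normalization).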

I'm using the SOM tool in SAS Enterprise Miner to cluster a dataset of 666 records. If I change the order of the observations, the SOM algorithm produces completely different clusters. I can't tell whether it is a convergence problem or whether I did something wrong in setting the options.

Did anyone get similar problems?

I want to create sentiment analysis polarity (positive, negative) based on multiple-choice questions.

What is the best algorithm to use? And can you recommend a good article on doing this?

Thanks

In unsupervised learning, how can agglomerative hierarchical clustering be implemented, together with an evaluation of the result?

Hi all

I know this replacement is made because of the complexity of spiking-neuron computations and because many supervised learning algorithms use gradient-based methods, so it is difficult to use such complex neuron models. Here I have two questions:

1) If we use a simple model (like the Izhikevich model), do we still have to use such a substitution?

2) Is this replacement needed only for supervised learning algorithms, or is it also necessary in unsupervised learning? Considering that in unsupervised learning there is no gradient and no back-propagation (if I am thinking about it correctly!).

please help me.

Thanks in advance for your replies.

I am currently looking at various methods like transfer spectral clustering, self-taught clustering, etc., and was wondering whether someone with expertise in these methods could provide some more intuition about them.

Hi all, I am attempting to measure magnification (see for instance www.mitpressjournals.org/doi/abs/10.1162/089976606775093918) in practical experiments with neural gas applied to geometric inference.

I found a clear description of how this measurement can be done in practice in http://www.researchgate.net/publication/6307269_Explicit_magnification_control_of_self-organizing_maps_for_forbidden_data/file/9fcfd5086521b0d268.pdf (see appendix).

Does anyone have any suggestions or comments about the measurement method?

How can we collect a dataset similar to the KDD Cup 99 dataset in a real environment? We want to check the performance of an unsupervised anomaly detection algorithm by collecting a real dataset.

I've been reading about the Fisher-Jenks natural breaks algorithm, which its author describes as an 'image segmentation' algorithm, while a method like ISODATA is described as an image classification method. Is there a difference between these two terms? Why can't the Fisher-Jenks algorithm be considered an image classification method as well?

I'm looking for a method for unsupervised classification of big data with an unknown number of clusters. Can you suggest a robust method? Is there any Matlab toolbox dedicated to this purpose?

Let's say I have decomposed a matrix of terms and documents with SVD: X = U S V^T, where U and V are orthogonal matrices and S is diagonal. I am not sure how I can interpret this in a scatter plot. Explanations in terms of 2 dimensions would be highly appreciated.
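The usual 2-D interpretation (as in latent semantic analysis) is to keep only the two largest singular values: each document gets the point formed by its two entries of S V^T, and each term the corresponding two scaled columns of U; nearby points then indicate similar documents or terms. A sketch on a made-up 4-term x 4-document matrix with two obvious topics:

```python
import numpy as np

# toy term-document matrix A (rows = terms, columns = documents)
A = np.array([[2, 1, 0, 0],      # "gene"
              [1, 2, 0, 0],      # "protein"
              [0, 0, 3, 1],      # "graph"
              [0, 0, 1, 3]],     # "vertex"
             dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # A = U @ diag(s) @ Vt

# 2-D coordinates for the scatter plot:
doc_xy = (np.diag(s[:2]) @ Vt[:2]).T    # one (x, y) point per document
term_xy = U[:, :2] * s[:2]              # one (x, y) point per term

print(doc_xy.round(2))
```

In the plot, documents 1-2 land together on one axis and documents 3-4 on the other, mirroring the two topics; the axes themselves are the two dominant latent "concepts" (with arbitrary sign, since singular vectors are only defined up to sign).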

I'm using a feedforward neural network with 100 inputs, 10 units in the hidden layer, and one output. I train the network several times using the same training data and the same architecture/settings with random initialization. I understand that there will be differences in the weights produced each time and that no two trained networks will be identical, but what can I try in order to produce networks that are more consistent across training runs, given identical data?

I'm implementing a baseline system that uses a Vocabulary Tree, i.e. a BoF model based on HKM for images classification. Currently I'm obtaining low quality recognition due to the poor quality of the quantization structure resulting from the HKM.

I'm obtaining 100,000 words in my final vocabulary with a tree of depth 6 and branching factor 10, where the theoretical number of words is 10^6 = 1,000,000.

In several papers, like those of Zisserman related to large-scale landmark recognition, the authors claim to be using a 1,000,000-word vocabulary, something I find difficult to understand, since this number is theoretical and in practice there is no guarantee of reaching it.

Am I misunderstanding something? If not, what should I do to increase the vocabulary size, apart from using more descriptors to train the tree?

PS: my only hint so far is to use a different seeding algorithm for cluster initialization, like k-means++ or Gonzalez's algorithm.

I am a bit confused by "number of clusters" and "number of seeds" in the k-means clustering algorithm. Kindly provide an example to clarify this point. What is the effect if we change either?
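A short way to see the difference: the number of clusters (K) is how many groups you ask for, while the seeds are the random initial centroids of a single run; restarting from several seeds and keeping the best run guards against a bad initialization. Illustrated with scikit-learn on synthetic blobs (sklearn's `n_init` plays the "number of seeds" role; the data is invented):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=[[0, 0], [8, 0], [4, 7]],
                  cluster_std=0.8, random_state=0)

# Changing K changes how many groups you get; inertia always drops as K grows,
# which is why K is chosen with elbow/silhouette methods, not by minimizing inertia.
km3 = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
km5 = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
print(km3.inertia_, km5.inertia_)

# Changing the seeds changes only the starting centroids; n_init=10 restarts
# from 10 seeds and keeps the best, so it can never do worse than one seed.
km_one_seed = KMeans(n_clusters=3, n_init=1, random_state=42).fit(X)
print(km_one_seed.inertia_)
```

So: K changes *what question* you ask of the data; the seeds change only *how reliably* a given K is answered.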

Given two groups of data (the blue & red lines in the figure), what is the most efficient unsupervised classifier that can locate the blue line in the figure?