Unsupervised Learning - Science topic
Questions related to Unsupervised Learning
We have carried out the clustering of a dataset using the K-means algorithm.
We are not sure which parameters we should calculate to validate how good or bad the resulting clusters are.
Is accuracy a good parameter for clustering validation?
Please guide us.
Thank you.
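For reference: accuracy requires ground-truth labels, so it is not an internal clustering metric (with labels, a label-matching measure such as the Adjusted Rand Index is the usual choice). Without labels, the standard internal indices can be computed directly; a minimal scikit-learn sketch, where k=5 is an assumed cluster count:

from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score, davies_bouldin_score,
                             calinski_harabasz_score)

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
print(silhouette_score(X, labels))         # higher is better, in [-1, 1]
print(davies_bouldin_score(X, labels))     # lower is better
print(calinski_harabasz_score(X, labels))  # higher is better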
Nowadays, semi-supervised and unsupervised learning are popular research domains, but many challenges remain. How can we overcome these challenges in the case of unsupervised learning in medical imaging?
Unsupervised learning means to "explain" complex or unknown data in a way that is objective, independent yet relevant.
For example, shop owners try always to learn new ways of increasing the profit by watching the passers-by in search for new ideas – an idea such as street attraction during vacation times, a need for a quick service in rush hours, offering help to window shoppers, etc. A fresh and objective look on unfiltered data is the key for useful indications that can be acted upon.
Many practical ideas can be found methodically by analyzing available unlabeled data.
Unlike supervised learning, which pre-assumes much of the information about the variables, in unsupervised learning one acquires new understanding of the variables as well as cluster formation, alerts and further insights.
I have a dataset like this:
users  T1           T2         …  Tn
1      [1, 2, 1.5]  [1, 3, 3]  …  [2, 2, 6]
2      [1, 5, 1.5]  [1, 3, 4]  …  [2, 8, 6]
…
n      [1, 5, 7.5]  [5, 3, 4]  …  [2, 9, 6]
Each list describes a distinct incident that changes over time.
My aim is to find distinct incidents that might happen to users over time.
I thought of feeding the full dataset to clustering algorithms, but I would appreciate your advice on the best algorithms to fit such a 2D dataset, or the best approach to follow in solving this problem.
I want to detect changes in time-series data using an RNN-LSTM. I know how to do it with supervised learning, but I am unable to do the same with unsupervised learning. In this case, I have time-series data for the electricity consumption of a customer, with distorted patterns after a certain date. Before that date, the consumption patterns seem normal (no major change); after it, consumption decreased sharply compared to the previous patterns. I want to detect the change on the day the pattern stops looking normal. I am searching for a way forward for my problem.
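One common unsupervised route is an LSTM autoencoder trained only on windows from the known-normal period: windows whose reconstruction error jumps afterwards mark the change date. A minimal Keras sketch; the window length, layer sizes, threshold rule, and the names consumption / split_idx (your series and the end of the presumed-normal prefix) are all assumptions to adapt:

import numpy as np
from tensorflow.keras import layers, models

WIN = 24  # assumed window length, e.g. 24 hourly readings

def make_windows(series, win=WIN):
    # overlapping windows shaped (n, win, 1)
    return np.array([series[i:i + win] for i in range(len(series) - win)])[..., None]

inp = layers.Input(shape=(WIN, 1))
z = layers.LSTM(32)(inp)                               # encoder
z = layers.RepeatVector(WIN)(z)
z = layers.LSTM(32, return_sequences=True)(z)          # decoder
out = layers.TimeDistributed(layers.Dense(1))(z)
ae = models.Model(inp, out)
ae.compile(optimizer="adam", loss="mse")

normal = make_windows(consumption[:split_idx])         # train on the normal period only
ae.fit(normal, normal, epochs=20, batch_size=32, verbose=0)

all_win = make_windows(consumption)
err = np.mean((ae.predict(all_win) - all_win) ** 2, axis=(1, 2))
thr = err[:len(normal)].mean() + 3 * err[:len(normal)].std()  # assumed 3-sigma rule
change_points = np.where(err > thr)[0]                 # first sustained crossing = change day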
Hello,
I'm working on a clustering analysis and would be curious whether anyone has ideas about how to deal with nested categorical variables.
Normally I would calculate a distance/dissimilarity matrix (Gower when some variables are categorical), and then feed this to a clustering algorithm of choice. Now what happens when some categorical variables are nested?
Fictitious example
Suppose we measure characteristics of water samples such as turbidity, temperature, dissolved gases, and the presence/absence of 50 chemical compounds in the water.
* presence/absence of chemical compounds can be treated as 50 separate binary/categorical variables
* but what if these chemicals belong to 4 groups of compounds?
Thoughts
We could simply add an additional categorical variable "group" and, for more complex nesting, "subgroup", "subsubgroup"... OK, but as far as I understand, Gower distance is a bit like Manhattan distance in that it calculates a distance for each variable and then combines them with weights. But part of the information will be redundant, and even more so with more levels of nesting. I was wondering whether anyone has come up with something that deals with this specifically. Maybe some form of weighting of the variables?
Looking forward to your inputs!
Mick
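For what it's worth, a sketch of that weighting idea with the Python gower package, assuming its weight argument is available in your version (the 0.25 down-weighting of member-level variables is arbitrary, just to illustrate compensating for the redundancy introduced by added group variables):

import numpy as np
import gower

# df: turbidity, temperature, ..., 50 binary compound columns, plus added "group" columns
weights = np.ones(df.shape[1])
compound_cols = [c for c in df.columns if c.startswith("compound_")]  # hypothetical names
for c in compound_cols:
    weights[df.columns.get_loc(c)] = 0.25      # assumed down-weight for nested members

dist = gower.gower_matrix(df, weight=weights)  # feed to the clustering algorithm of choice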
I want to detect anomalies in streaming data. Using an FFT or DWT history, is it possible to detect anomalies on the fly (online)? It would help a lot if anybody could suggest some related resources.
Thanks.
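One simple online baseline (a sketch, not a full streaming system; the window size, history length and threshold are assumptions): keep a rolling window, compare each window's FFT magnitude spectrum against a running history of spectra, and flag large deviations.

import numpy as np
from collections import deque

WIN = 256          # assumed window size (samples)
HISTORY = 50       # number of recent spectra treated as "normal"
THRESH = 3.0       # assumed z-score threshold

buf = deque(maxlen=WIN)
spectra = deque(maxlen=HISTORY)

def on_sample(x):
    # call once per incoming sample; True means "anomalous window"
    buf.append(x)
    if len(buf) < WIN:
        return False
    spec = np.abs(np.fft.rfft(np.asarray(buf)))
    if len(spectra) < HISTORY:
        spectra.append(spec)
        return False
    mu = np.mean(spectra, axis=0)
    sd = np.std(spectra, axis=0) + 1e-9
    score = np.max(np.abs(spec - mu) / sd)    # worst-band z-score
    spectra.append(spec)
    return score > THRESH

A DWT-based variant is analogous: replace the spectrum with per-band wavelet energies.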
Hi everybody,
I would like to do part of speech tagging in an unsupervised manner, what are the potential solutions?
In my experience, pre-trained models are not suitable for unsupervised tasks. In particular, in deep clustering, models initialized from pre-trained weights often give worse results than models trained without pre-training.
What is the scientific reason for this?
Why are the learned representations in pre-trained models not suitable for clustering?
Hello everyone,
Could you recommend papers, books or websites about unsupervised neural networks?
Thank you for your attention and valuable support.
Regards,
Cecilia-Irene Loeza-Mejía
I can't find the dataset "blood" mentioned in this paper:
with 778 rows; however, I found it in the repository:
with 748 rows. I am wondering whether this is just a typo? Otherwise, it would be a pleasure if you could give me the link to the dataset.

I'm working on image classification using ResNet. I have two datasets to train my model:
- 30,000 labeled images (260 classes);
- 30,000 unlabeled images.
The images contain digits and letters, so technically I should be able to classify 260 classes (combinations of 26 letters and 10 digits).
So I was wondering whether there is any unsupervised or semi-supervised model that can help me label my images.
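One common semi-supervised baseline is pseudo-labeling: train on the labeled set, predict on the unlabeled set, and promote only high-confidence predictions to labels. A framework-agnostic sketch, where model stands for your ResNet wrapped in a fit/predict_proba interface and the 0.95 cut-off is an assumption:

import numpy as np

model.fit(X_labeled, y_labeled)

proba = model.predict_proba(X_unlabeled)    # shape (30000, 260)
conf = proba.max(axis=1)
pseudo = proba.argmax(axis=1)

keep = conf > 0.95                          # assumed confidence threshold
X_aug = np.concatenate([X_labeled, X_unlabeled[keep]])
y_aug = np.concatenate([y_labeled, pseudo[keep]])

model.fit(X_aug, y_aug)                     # retrain; optionally repeat a few rounds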
Which method is best in the case of supervised and unsupervised learning, and also for deep learning? Which method is most used for modeling? Please suggest, with examples.
Hi. I'm working on a classification problem using deep learning, so I need to train on 512x512 images, but when I train, my algorithm shows an out-of-memory error. I want to know how much memory is needed to train on 512x512 images in MATLAB.
I have been looking into self-supervised methods in computer vision, for example one which looks at three consecutive frames of video, tasks a network with predicting the third frame, and uses the original as the ground truth / supervisory signal.
This type of pretext task for self-supervision is a cross between context-based and semantic-label-based pretext tasks.
Lane line detection in many ways is approached as a subset of the semantic segmentation task.
I am wondering if there is any way to come up with a pretext task that is specific to lane line detection?
I have seen self-supervision used in the lane fitting task,
but this is applied after the lane segments have been identified.
I am trying to solve the Wi-Fi offloading decision-making problem using classification and clustering of known and unknown traffic, respectively, in a given mobile network, based on bi-flows of packets in the network.
Hello,
Has anyone already worked with MR image datasets? If so, is there any model to remove motion artifacts from MR images when present? What should we do if we have an MR image with motion artifacts? Please give me your suggestions on whether it is possible to remove artifacts once the scan has been produced.
Thanks in advance,
Dhanunjaya, Mitta
Can unsupervised learning be used for RSSI-based indoor localization?
I run algorithms like HDBSCAN and MeanShift and compare them with K-means and Gaussian mixture models. The first two algorithms do not return cluster labels for some elements of the dataset. My question is: how do I correctly compare such different algorithms? Do I have to remove elements without cluster labels from the dataset before evaluating performance metrics? My metrics are the Silhouette score, the Davies-Bouldin score, and the Calinski-Harabasz index.
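One common convention (a judgment call rather than a settled standard): these algorithms mark unassigned points with the label -1, so you can mask them out before computing the internal indices, and report the noise fraction alongside the scores so the comparison with K-means stays fair. A sketch:

import numpy as np
from sklearn.metrics import (silhouette_score, davies_bouldin_score,
                             calinski_harabasz_score)

labels = clusterer.fit_predict(X)        # e.g. hdbscan.HDBSCAN()
mask = labels != -1                      # -1 marks noise / unassigned points
print("noise fraction:", 1 - mask.mean())
print(silhouette_score(X[mask], labels[mask]))
print(davies_bouldin_score(X[mask], labels[mask]))
print(calinski_harabasz_score(X[mask], labels[mask]))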
Hi,
In the field of EEG, is it possible to do feature selection in unsupervised learning? If so, kindly mention some methods. Also, is unsupervised classification of EEG possible without feature extraction? Kindly share your valuable suggestions.
Thanks,
Thenmozhi
Hello,
When using a Deep Belief Network (DBN) for image classification, how can we choose the number of hidden layers (the number of Restricted Boltzmann Machines) and the number of nodes (RBM size) in a hidden layer (for example, with an input size of 19,200 pixels)?
I will be grateful for any help you can provide :)
Wafa
I am conducting exploratory research about users on the Ethereum blockchain (I obtain the data from BigQuery), and I would like to cluster the users, mostly by transactional features, for persona/archetype development.
However, the data is not normally distributed: many of the variables have a power-law distribution, and some have no clear distribution pattern. It is very likely that I will want to include more than five variables.
Besides the question of which algorithm fits best: is it reasonable to normalize all variables (towards a more normal distribution) and to perform a z-transformation?
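A common recipe for heavy-tailed transactional features (a sketch; whether a log transform suits each variable is an assumption to check per column): apply a log or estimated power transform first, then z-score, so that distance-based clustering is not dominated by the tails.

import numpy as np
from sklearn.preprocessing import StandardScaler, PowerTransformer

# X: users x transactional features (non-negative counts/amounts assumed)
X_log = np.log1p(X)                           # tames power-law tails
X_z = StandardScaler().fit_transform(X_log)   # z-transformation

# alternative: Yeo-Johnson fits a per-variable power transform, incl. zeros/negatives
X_yj = PowerTransformer(method="yeo-johnson").fit_transform(X)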
I am working on the development of a multi-species volume allometric equation from diameter (DBH). I would like to form groups of species with the largest possible volume-diameter correlations through a supervised classification (according to the functional traits of the species, phytogeographic area, etc.) and an unsupervised classification (using clustering algorithms). Between HAC, K-means and Random Forest, what should I choose? (Obviously I will try all the algorithms and pick the best fit.) Thank you for your suggestions and your help.
I have already read about a method of evaluation for unsupervised anomaly detection using excess-mass and mass-volume curves (https://www.researchgate.net/publication/304859477), but was wondering if there are other possibilities.
Dear community, I need your help. I'm training a model to classify sleep stages. After extracting features from my signal, I collected the features (X) in a DataFrame of shape (335, 48), and the labels (y) with shape (335,).
This is my code:
from tensorflow.keras import activations, losses, models, optimizers
from tensorflow.keras.layers import (Input, Convolution1D, MaxPool1D, SpatialDropout1D,
                                     GlobalMaxPool1D, Dropout, Dense)

def get_base_model():
    inp = Input(shape=(335, 48))
    img_1 = Convolution1D(16, kernel_size=5, activation=activations.relu, padding="valid")(inp)
    img_1 = Convolution1D(16, kernel_size=5, activation=activations.relu, padding="valid")(img_1)
    img_1 = MaxPool1D(pool_size=2)(img_1)
    img_1 = SpatialDropout1D(rate=0.01)(img_1)
    img_1 = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = MaxPool1D(pool_size=2)(img_1)
    img_1 = SpatialDropout1D(rate=0.01)(img_1)
    img_1 = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = MaxPool1D(pool_size=2)(img_1)
    img_1 = SpatialDropout1D(rate=0.01)(img_1)
    img_1 = Convolution1D(256, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = Convolution1D(256, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = GlobalMaxPool1D()(img_1)
    img_1 = Dropout(rate=0.01)(img_1)
    dense_1 = Dropout(0.01)(Dense(64, activation=activations.relu, name="dense_1")(img_1))
    base_model = models.Model(inputs=inp, outputs=dense_1)
    opt = optimizers.Adam(0.001)
    base_model.compile(optimizer=opt, loss=losses.sparse_categorical_crossentropy, metrics=['acc'])
    base_model.summary()  # was model.summary(), which is undefined inside this function
    return base_model
model = get_base_model()
model.fit(X, y)  # fit must come before evaluate
test_loss, test_acc = model.evaluate(Xtest, ytest, verbose=0)
print('\nTest accuracy:', test_acc)
I got the error: Input 0 is incompatible with layer model_16: expected shape=(None, 335, 48), found shape=(None, 48)
This picture gives an idea of my data shape:
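The shapes explain the error: Input(shape=(335, 48)) tells Keras that every single sample is a 335 x 48 matrix, while model.fit(X, y) with X of shape (335, 48) delivers 335 samples of shape (48,), hence "found shape=(None, 48)". A hedged sketch of one fix, treating each sample's 48 features as a length-48 sequence with one channel; the stack is shortened and uses padding="same" because the original chain of "valid" convolutions and poolings would shrink a 48-step sequence below zero, and a softmax head is added because sparse_categorical_crossentropy expects class probabilities (n_classes is an assumption read from your labels):

import numpy as np
from tensorflow.keras import activations, losses, models, optimizers
from tensorflow.keras.layers import Input, Convolution1D, MaxPool1D, GlobalMaxPool1D, Dense

n_classes = len(np.unique(y))            # number of sleep stages in y
X3 = np.asarray(X).reshape((-1, 48, 1))  # (335, 48, 1): samples, steps, channels

inp = Input(shape=(48, 1))               # per-sample shape; the batch axis stays implicit
z = Convolution1D(16, kernel_size=5, activation=activations.relu, padding="same")(inp)
z = MaxPool1D(pool_size=2)(z)
z = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="same")(z)
z = GlobalMaxPool1D()(z)
z = Dense(64, activation=activations.relu)(z)
out = Dense(n_classes, activation="softmax")(z)  # head required by the chosen loss

model = models.Model(inputs=inp, outputs=out)
model.compile(optimizer=optimizers.Adam(0.001),
              loss=losses.sparse_categorical_crossentropy, metrics=["acc"])
model.fit(X3, y, epochs=20, validation_split=0.2)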

I have an input dataset as a 5x100 matrix, where 5 is the number of variables and 100 the number of samples. I also have a target dataset as a 1x100 matrix of continuous numbers. I want to design a model from the input and target data using a deep learning method. How can I enter my data (input and target) into this toolbox? Is it similar to the neural fitting (nftool) toolbox?
I need to integrate a root-finding algorithm into a neural network. To perform backpropagation, the algorithm must be differentiable. Is there a root-finding method/algorithm compatible with a neural network, i.e. differentiable? I want to use a learned function that, based on an equation, performs root finding to provide the target for the cost function. So I need a root-finding algorithm (if there is any) that is compatible with automatic differentiation during backpropagation.
The root of the following equation would be the target and L would be the learned function:
D_1 L(q(k-1), q(k)) + D_2 L(q(k), q(k+1)) = 0
where D_i denotes the derivative with respect to the i-th argument of L.
Another way might be to use unsupervised learning to learn L based on the previous equation. Any hint? Thank you.
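One practical route (a sketch, assuming PyTorch; the toy f and iteration count are assumptions): a fixed number of unrolled Newton iterations is automatically differentiable, since every step is plain tensor arithmetic, so gradients flow from the found root back into any parameters inside the learned function. The implicit function theorem gives a cheaper alternative, differentiating only the converged fixed point.

import torch

def newton_root(f, x0, iters=30):
    # unrolled Newton iterations: each step is ordinary autograd-traceable
    # arithmetic, so gradients flow through the returned root into any
    # parameters that appear inside f
    x = x0
    for _ in range(iters):
        fx = f(x)
        (dfx,) = torch.autograd.grad(fx.sum(), x, create_graph=True)
        x = x - fx / dfx
    return x

# toy check: the root of x^2 - a is sqrt(a), and d root / d a = 1 / (2 sqrt(a))
a = torch.tensor(4.0, requires_grad=True)
x0 = torch.tensor(3.0, requires_grad=True)
root = newton_root(lambda x: x ** 2 - a, x0)
root.backward()
print(root.item(), a.grad.item())   # ~2.0 and ~0.25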
First, I have a theory in mind:
Imagine the wiring in our minds that connects the neurons to our visual cortex. In image sensors, we have a defined array of sensors, so we can directly transform the sensor outputs into data.
But what about our minds? I guess there is no exact predefined wiring inside them. I think no one can guarantee that cell X_n is wired exactly to input Y_n. Rather, as we grow, our minds learn how to relate the inputs and build a correct image. Hence, babies have no solid vision until their minds build a correct engine.
Now, if my guess is right, what applications could we design and build based on this principle? In our designs and applications, we usually know the inputs exactly. So where could we deploy such technology?
My question regards Convolutional Neural Networks (CNNs):
How can we do unsupervised learning with a CNN to identify similar regions in any organ of the human body using two medical imaging modalities?
I have the following models in mind:
1. RNN embedding
2. RNN with packed padded sequences
3. FastText
4. Bi-LSTM
5. CNN
6. CNN-LSTM
7. BERT Transformer
I am looking for models apart from these.
I am currently studying clustering quality metrics like Normalized Mutual Information and the Fowlkes-Mallows score.
Both metrics summarize the quality of the entire clustering. I am wondering whether there is a standard way, or a variant of the metrics above, to measure the quality of a particular cluster or a particular class. The basic idea is that even if the overall score looks good, the metric should still give a warning when some particular cluster is problematic.
PS: I am not looking for intrinsic methods. More precisely, assume that for each data point x_i in the dataset X there is a ground-truth class mapping x_i -> y_i and a clustering x_i -> z_i, where y_i and z_i indicate memberships and don't necessarily have the same cardinality. I would further assume that no distance measure d(x_i, x_j) is defined.
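One simple extrinsic, per-cluster diagnostic (a sketch of majority-class purity from the contingency table, not a standard named metric): for each cluster, compute the fraction of its points that share the cluster's majority ground-truth class, and warn on low values.

import numpy as np
from sklearn.metrics.cluster import contingency_matrix

def per_cluster_purity(y_true, y_pred):
    # contingency_matrix: rows = ground-truth classes, columns = clusters
    C = contingency_matrix(y_true, y_pred)
    return C.max(axis=0) / C.sum(axis=0)   # majority-class share per cluster

purity = per_cluster_purity(y, z)          # y, z as defined in the question
suspicious = np.where(purity < 0.5)[0]     # 0.5 is an assumed warning threshold

A per-class recall diagnostic is the symmetric computation on the rows of the same table.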
Suppose I have collected data regarding, say, food preferences from multiple sources and merged them.
How can I decide what kind of clustering to do if I want to find related preferences?
Should I go for K-means, hierarchical, density-based, etc.?
Is there a process for selecting the clustering technique?
On an online website, some users may create multiple fake accounts to promote (like/comment on) their own comments/posts, for example on Instagram, to push their comment to the top of the list of comments.
This action is called Sockpuppetry. https://en.wikipedia.org/wiki/Sockpuppet_(Internet)
What are some general algorithms in unsupervised learning to detect these users/behaviors?
While I am intrigued by the fact that unsupervised learning algorithms compute astounding results without requiring labels, I wonder: what is the stopping point in AI? How do we know when machine 'learning' is out of our hands and we can no longer decode what we originally created?
Is there some method to know what our algorithm is learning, and on what basis?
The use of cascaded neural networks for the inverse design of metamaterials and nanophotonics can effectively alleviate the problems caused by one-to-many mappings, but the intermediate layer of the cascaded network is trained without supervision, so an effective method is needed to limit the output range of the intermediate layer.
UDA (https://github.com/google-research/uda) can achieve good accuracy with only 20 training examples on text classification.
But I find it hard to reproduce this result on my own dataset.
So I want to know why UDA works, and what is most important for reproducing the result.
Supervised learning is the basis of deep learning, but human and animal learning is largely unsupervised. To make deep learning more effective in human life, we need to discover approaches that use deep learning to handle unsupervised learning. How much progress has been made in this direction so far?
Can AI learn from processes instead of data? The question is valid for both supervised and unsupervised learning. If so, are there algorithms or approaches for learning from process executions?
Hi.
I'm dealing with clustering of data where the resulting clusters are, in general, non-spherical; some of them are not convex.
What are the best internal metrics for evaluating these kinds of clusters?
I know the silhouette index is very common for evaluating the result of a clustering process; however, it seems to be biased towards spherical clusters.
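One density-aware alternative is DBCV (Density-Based Clustering Validation, Moulavi et al. 2014), which is designed for arbitrarily shaped clusters. If I recall correctly, the hdbscan package ships an implementation; a sketch assuming Euclidean features:

import numpy as np
import hdbscan
from hdbscan.validity import validity_index

labels = hdbscan.HDBSCAN(min_cluster_size=10).fit_predict(X)
score = validity_index(X.astype(np.float64), labels)  # DBCV; higher is better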
Normalized Mutual Information (NMI) and B3 are used for extrinsic clustering evaluation metrics when each instance (sample) has only one label.
What are the equivalent metrics when each instance (sample) has more than one label?
For example, in the first image we see [apple, orange, pear]; in the second, [orange, lime, lemon]; in the third, [apple]; and in the fourth, [orange]. Then putting the first and fourth images in one cluster is good, while putting the third and fourth images in one cluster is bad.
Application: many popular datasets for object detection or image segmentation have multiple labels per image. If we use such data for classification (not detection and not segmentation), we have multiple labels per image.
Note: my task is unsupervised clustering, not supervised classification. I know that for supervised classification we can use top-5 or top-10 scores, but I do not know what the equivalent is in unsupervised clustering.
Dear researchers,
let's gather data regarding the coronavirus.
This could be used for analysis in a second step.
My first ideas:
1. Create predictive models
2. Search for similarities with Unsupervised Learning
3. Use Explainable AI for new insights.
What are your ideas?
Where did you find data?
Hello, I'm a biologist interested in machine learning applications to genomic data; specifically, I'm trying to apply clustering techniques to differential gene expression data.
I started by understanding the basics of unsupervised learning and clustering algorithms on random datasets, but now I need to apply some of those algorithms (k-means, PAM, CLARA, SOM, DBSCAN...) to differential gene expression data and, honestly, I don't know where to begin, so I'd be grateful if someone could recommend some tutorials or textbooks, or give me some tips.
Thank you for your time!
PS: I'm mainly using the R language, but Python tutorials are also OK for me.
I work on graph-based knowledge representation. I would like to know how we can apply deep learning to Resource Description Framework (RDF) data, and what we can infer this way. Thanks in advance for your help!
Hi,
I am pursuing a PhD and my area of work is pattern recognition using machine learning. I have covered supervised and unsupervised learning (deep learning) during my PhD because of my topic. I have completed all my research work and am waiting to submit my thesis; I hope to complete it within 3 years. I have published 5 articles (4 conference papers and 1 Scopus-indexed journal paper) and have 5 unpublished articles.
Could you suggest what options I could pursue after completing my PhD, and why those options would be good in your view (based on my profile)?
Thank you for your time.
What are the links between their definitions? How are they interconnected? What are their similarities and differences? ...
I would be grateful if you could reply with references to valid scientific literature sources.
I have PCAP files captured from network traffic. What should be done so that PCAP files can be processed with machine learning tools? What steps are needed so that the data can be analyzed with one of the unsupervised methods? Does the data have to be converted to CSV format?
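CSV is not mandatory, but most ML tooling expects a tabular feature matrix, so the usual first step is per-packet or per-flow feature extraction. A minimal per-packet sketch with scapy (capture.pcap and the chosen columns are assumptions; per-flow aggregation would follow the same pattern):

import csv
from scapy.all import rdpcap, IP, TCP, UDP

rows = []
for pkt in rdpcap("capture.pcap"):       # hypothetical file name
    if IP not in pkt:
        continue
    has_ports = TCP in pkt or UDP in pkt
    rows.append({
        "time": float(pkt.time),
        "length": len(pkt),
        "src": pkt[IP].src,
        "dst": pkt[IP].dst,
        "proto": pkt[IP].proto,
        "sport": pkt.sport if has_ports else None,
        "dport": pkt.dport if has_ports else None,
    })

with open("features.csv", "w", newline="") as f:
    w = csv.DictWriter(f, fieldnames=rows[0].keys())
    w.writeheader()
    w.writerows(rows)

From there, scale the numeric columns and hand the matrix to any unsupervised method (clustering, isolation forest, etc.).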
Suppose we have users and, for each user: user_id, user_name, user_job title, user_skills, user_workExperience. I need to cluster the users based on their skills and work experience (long text data) and put them into groups. I have been searching for how to cluster text data but still haven't found a good step-by-step example. Based on the data I have, I think I should use an unsupervised approach (as my data is not labeled). I found that I can use K-means or hierarchical clustering, but I'm stuck on how to find K, the number of clusters for K-means. Also, I don't know the best way to prepare the long text before feeding it to the clustering algorithm. Any idea or example would be very much appreciated. Thanks in advance.
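A standard baseline for both sub-problems (a sketch; TF-IDF, the column names and the k range are assumptions, and sentence-embedding models are a common upgrade for long text): vectorize the concatenated skill/experience text, then scan k and keep the silhouette-best value.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

texts = (df["user_skills"] + " " + df["user_workExperience"]).tolist()  # hypothetical columns
X = TfidfVectorizer(stop_words="english", max_features=5000).fit_transform(texts)

best_k, best_s = None, -1.0
for k in range(2, 15):                    # assumed search range
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    s = silhouette_score(X, labels)
    if s > best_s:
        best_k, best_s = k, s
print(best_k, best_s)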
Which method extracts better features for unsupervised learning: PCA or an autoencoder?
I am presently working on unsupervised learning for text classification.
The data is entered by end users in the business domain and can be on varying subjects.
A new subject can appear at any point in time, hence continuous learning is required to create new clusters/classes based on the entered text.
I therefore want to avoid seed values such as density, epsilon, number of clusters, etc.
Is there any known algorithm that finds the number of clusters and clusters the data incrementally? (So far I have tried a Gaussian measure along with basic clustering algorithms: k-means, DBSCAN, etc.)
Hello,
Has anyone already worked on unsupervised image segmentation? If so, please give me your suggestions. I am using an autoencoder for unsupervised image segmentation, and someone suggested using Normalized Cut to segment the image. Is there any algorithm other than Normalized Cut? Also, please suggest some reconstruction loss functions that are efficient to use.
Thanks in advance,
Dhanunjaya, Mitta
Hello,
I want to know how reinforcement learning differs from supervised and unsupervised learning. There is a reinforcement learning technique called Q-learning; could anybody explain the working concept of the Q-learning method? Looking forward to useful comments on this.
Thanks
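In brief, Q-learning keeps a table Q(s, a) of expected discounted returns and, after each transition (s, a, r, s'), nudges Q(s, a) toward r + gamma * max_a' Q(s', a'). It needs no labels (unlike supervised learning) but does need a reward signal (unlike unsupervised learning). A toy tabular sketch, where the env object and its methods are hypothetical stand-ins for your environment:

import numpy as np

n_states, n_actions = 16, 4
alpha, gamma, eps = 0.1, 0.99, 0.1        # learning rate, discount, exploration rate
Q = np.zeros((n_states, n_actions))

for episode in range(5000):
    s = env.reset()                        # hypothetical environment API
    done = False
    while not done:
        # epsilon-greedy: explore sometimes, otherwise act greedily
        a = np.random.randint(n_actions) if np.random.rand() < eps else int(np.argmax(Q[s]))
        s2, r, done = env.step(a)
        # core update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s, a])
        s = s2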
I'm a newbie in the field of deep reinforcement learning, with a background in linear algebra, calculus, probability, and data structures and algorithms. I have 2+ years of software development experience. As an undergrad, I worked on tracking live objects from a camera using C++ and OpenCV. Currently, I'm intrigued by the work being done at Berkeley DeepDrive (https://deepdrive.berkeley.edu/project/deep-reinforcement-learning). How do I gain the knowledge to build a theoretical model of a self-driving car? What courses should I take? What projects should I do?
For one of my studies, I designed an unsupervised predictive clustering model, and I am now searching for modification steps and post-processing that would let me use that clustering model for classification in a reliable way.
In MATLAB, clustering data using kmeans can be achieved as shown below:
L = kmeans(X,k,Name,Value)
where L contains the cluster index of each data point.
This implies that if I have 307 data points, I should get a 307 x 1 array (L) holding the index of each data point.
However, while using a SOM for clustering,
I discovered that to get the indices you use the code snippet below:
net = selforgmap([dimension1 dimension2]);
% Train the Network
[net,tr] = train(net,X);
%get indices
L = vec2ind(net(X))';
For a network with 5 x 5 dimensions,
it returns L as an array of dimension 25 x 1 instead of 307 x 1.
For a network with 10 x 10 dimensions,
it returns L as an array of dimension 100 x 1 instead of 307 x 1.
What am I doing wrong?
Or, to put it simply: how do I compute the class vector for each of the training inputs?
I'm new to Matlab, and I'm wondering if someone can help me get started with a machine learning task.
I would like to perform linear discriminant analysis (LDA) or support vector machine (SVM) classification on my small dataset (a matrix of features extracted from ECG signals, with 8 features/attributes). The task is binary classification into a preictal state (class 1) and an interictal state (class 2).
In Matlab, I found the Classification Learner app, which enables the use of different kinds of classifiers, including SVM, but I don't know whether I can use the input data I have to train a classifier in this app. I'm not sure how to start. Do you have any ideas about this app? Please help!
I'm having a concrete problem I'm trying to solve but I'm not sure in which direction I should go.
- Goal: Identify formation of a soccer team based on a static positional data (x,y coordinates of each player) frame
- Input: Dataframe with player positions + possible other features
- Output: Formation for the given frame
- Limited, predefined formations (5-10), like 5-3-2 (5 defenders, 3 midfielders, 2 strikers)
- Possible to manually label a few examples per formation
I already tried k-means clustering on single frames, only considering the x-axis, to identify defense, midfield and offense players; this works OK but fails in some situations.
Since I don't have (many) labels, I'm looking for unsupervised neural network architectures (like self-organizing maps) which might solve this problem better than simple k-means clustering on single frames.
I'm looking for an architecture that could utilize the additional information I have about the problem (the number and types of formations, a few already labeled frames, ...).
I applied supervised and unsupervised learning algorithms to the dataset available in the UCI repository. I would like to know whether I can also find a dataset based on the location of the user, the history of previous transactions, and the time span between two consecutive transactions.
I have a dataset which contains both normal and abnormal (counter data) behavior.
I did not use English but one of the under-resourced languages of Africa. The challenge is the testing of the unsupervised learning.
I am looking for a way to test/evaluate this model.
Please refer me to links and tutorials about testing/evaluating unsupervised learning.
Nowadays there are plenty of core technologies for TC (text classification). Among all the ML approaches, which one would you suggest for training models for a new language and a vertical domain (like sports, politics or the economy)?
Dear respected researchers,
I am working on a structured biomedical dataset that contains many data-type inconsistencies, outliers and missing values (instances) across seven independent variables (attributes). I am considering pre-processing methods such as data standardization and imputation to address the issues mentioned above. However, these pre-processing methods come in two versions: supervised and unsupervised.
My two main questions regarding common practice are:
1. Should I perform an unsupervised discretization method on the dataset to solve the data-type issue when I subsequently conduct a cluster analysis using the k-means algorithm?
2. After completing the clustering task above, should I perform a supervised discretization method on the same dataset when I train a model for a classification task using supervised machine learning algorithms?
Thank you for your information and experience.
In top journal papers, a lot of work is carried out on accelerometer signals. Most of it follows these steps:
1. Handling signals of different lengths (never mentioned in any paper)
2. Pre-processing (filtering out noise) - optional
3. Signal segmentation
4. Feature extraction
5. Supervised (or) unsupervised learning
Nevertheless, none of the papers mention the technique they use to handle signals of different lengths, for example ranging from 600 s to 13,200 s (with the same sampling rate of 100 Hz). Since such missing information can lead to inaccurate comparisons, I'm surprised that top journals don't give importance to this issue. I would like to know the best, and most commonly used, technique for handling signals of different lengths; please disregard the sampling rate, since all signals share the same one.
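The most common convention in this literature, as far as I can tell (a sketch; the window and overlap values are assumptions), is to cut every recording into fixed-length, possibly overlapping windows, so that signal length only changes the number of windows, not the feature dimension:

import numpy as np

FS = 100                 # sampling rate (Hz)
WIN = 5 * FS             # assumed 5 s windows
STEP = WIN // 2          # assumed 50% overlap

def segment(signal):
    # cut a 1-D signal of any length into fixed-length windows
    return np.stack([signal[i:i + WIN]
                     for i in range(0, len(signal) - WIN + 1, STEP)])

# a 600 s and a 13,200 s recording yield different window counts
# but identical per-window shape, so extracted features stay comparable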
With respect to unsupervised learning, such as clustering: are there metrics to evaluate the performance of unsupervised learning, as there are for supervised learning?
The Deep Learning Multilayer Perceptron is based on supervised learning, while the Deep Belief Network is based on unsupervised learning. Looking at the malware detection situation, which method would be best?
Hi,
Please, when should I opt for unsupervised learning, and what are the benefits of unsupervised learning techniques?
Thanks & Best wishes
Osman
I have implemented 3 bootstrapping models for learning product details, then compared them with several different performance measures and found out which model learned best. Now I want to do something like optimization/ensembling (is this possible with the models' results?), or please suggest some other simple way to conclude my work. Moreover, the work was performed in an unsupervised manner. So please help me improve my models' results (e.g. TP, TN, FP, FN, or the learned product details). Thanks in advance.
As the basic concepts used in association rule learning relate to conditional probability and the ratio to independence, I was wondering whether Correspondence Analysis has been used in the literature for this purpose. I understand the main motivation in association rule learning is efficiency in CPU time and memory usage, but these days SVD (Singular Value Decomposition) is pretty fast, and some algorithms are very frugal in memory usage.
Is there any way to compare the accuracy or cost of these two methods with each other: SVM and K-means clustering?
Kindly advise: I am using a dataset with a binary class attribute, Student Result, taking the values "Pass" or "Fail". 80% of the students pass and 20% fail, which reflects reality, since the same ratio is observed in real life. However, because a classifier tends to lean towards the majority class, I think the data needs to be balanced. The question is whether, in this case, 50/50 balancing would be considered right even though it is unrealistic, or whether 60/40 or some other ratio would be better.
The system should use the context of the item to select relevant data (PCA), then use k-means for clustering, and finally use IBCF to generate top-n recommendations. I need the detailed algorithm for this task (PCA + K-means + IBCF).
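A hedged outline of that pipeline in scikit-learn terms (all dimensions, k, and the similarity choice are assumptions; IBCF here is plain cosine item-item collaborative filtering within each cluster, and R / item_ctx are hypothetical names for your rating and item-context matrices):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# R: user-item rating matrix; item_ctx: item context-feature matrix
Z = PCA(n_components=10).fit_transform(item_ctx)               # step 1: compress context
item_cluster = KMeans(n_clusters=8, n_init=10).fit_predict(Z)  # step 2: cluster items

def top_n(user_idx, n=10):
    # step 3: item-based CF restricted to each item's cluster
    scores = np.zeros(R.shape[1])
    for c in np.unique(item_cluster):
        idx = np.where(item_cluster == c)[0]
        sim = cosine_similarity(R[:, idx].T)                   # item-item similarity
        scores[idx] = sim @ R[user_idx, idx]
    scores[R[user_idx] > 0] = -np.inf                          # hide already-rated items
    return np.argsort(scores)[::-1][:n]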
At the training phase, I applied the k-means clustering algorithm to a dataset (k=10). I want to apply a decision tree to each cluster and generate a separate performance model for each cluster.
At the testing phase, I want to compare each test instance with the centroid of each cluster and use the appropriate model to classify the instance.
Is there a way I can achieve this in WEKA or the WEKA API? Any link/resource will be appreciated.
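I am not sure of a ready-made WEKA meta-classifier for this, but the scheme itself is easy to sanity-check outside WEKA; a scikit-learn sketch of the same cluster-then-classify idea (not WEKA code, just the logic you would reproduce through the WEKA API):

from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

km = KMeans(n_clusters=10, n_init=10).fit(X_train)
trees = {}
for c in range(10):
    m = km.labels_ == c
    trees[c] = DecisionTreeClassifier().fit(X_train[m], y_train[m])

# testing: route each instance to the tree of its nearest centroid
clusters = km.predict(X_test)
y_pred = [trees[c].predict(x.reshape(1, -1))[0] for c, x in zip(clusters, X_test)]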
I get the number of components for my dataset using BIC, but I want to know whether the silhouette coefficient method is the right option to validate my results.
Thanks!
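Both criteria are cheap to compute side by side, so one hedged sanity check is to confirm that BIC and the silhouette point to a similar number of components; they measure different things (model fit vs. geometric separation), so they may legitimately disagree. A sketch with an assumed search range:

import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

for k in range(2, 11):                                  # assumed range of components
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    labels = gm.predict(X)
    print(k, gm.bic(X), silhouette_score(X, labels))    # lower BIC / higher silhouette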