
# KNN - Science method

Explore the latest questions and answers in KNN, and find KNN experts.

Questions related to KNN

I am getting a much lower piezoelectric coefficient (d33) for Li-doped KNN ceramics than reported in the literature at the MPB. What could be the reason?

I'm running a binomial GWR in both MGWR and GWR4, and sometimes models get a high number of neighbors, sometimes even close to N-1. Is that a problem?

Should I interpret the number of neighbors relative to number of observations as a metric of model quality?

Also, should I be worried when GWR requires a large number of neighbors to have inversible matrices?

Hi, I have been working on some Natural Language Processing research, and my **dataset has several duplicate records.** I wonder: **should I delete those duplicate records to increase the performance of the algorithms on test data?** I'm not sure whether **duplication has a positive or negative impact on the test or training data**. I found some contradictory answers online, which left me confused! For reference, I'm using ML algorithms such as **Decision Tree, KNN, Random Forest, Logistic Regression, and MNB**, as well as DL algorithms such as **CNN and RNN**.

I made KNN nanofibers using the following recipe: TCD 5 cm, voltage 11 kV, flow rate 0.75 ml/h, at around 22.6 °C and 16% humidity.

But the rainy season is coming, and the conditions have changed to 28 °C and 45% humidity.

When I tried to electrospin the same KNN solution under the same electrospinning conditions, the nanofibers solidified early and hung from the collector, as seen in the picture.

I tried changing the voltage to 7-11 kV and the TCD to 3-5 cm, but the results were the same.

How can I solve this problem? I can't wait until the rainy season is over, and I can't change the solution's concentration.

Hello Friends,

I am applying ML algorithms (DT, RF, ANN, SVM, KNN, etc.) in Python to my dataset, whose features and target variable are continuous data. For example, when I use DecisionTreeRegressor I get an R² of 0.977. However, I am interested in deploying classification metrics like the confusion matrix, accuracy score, etc. For this, I converted the continuous target values into categorical ones. Now when I apply DecisionTreeClassifier, I get an accuracy of 1.0, which I think indicates overfitting. I then applied normality checks and correlation techniques (Spearman), but the accuracy remains the same.

My question is: am I right to convert the numeric data into categorical data?

Secondly, if both a regressor and a classifier are used on the same dataset, will the accuracy change?

Need your valuable suggestions, please.

For details, please see the attached files.

Thanks for the time
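Converting a continuous target into classes is legitimate, but regression and classification scores are not comparable: with wide bins, even a mediocre regressor lands most points in the right bin, so an accuracy near 1.0 after binning does not by itself prove overfitting. A minimal sketch of the binning step with scikit-learn (the values and bin count are illustrative assumptions, not from the question):

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

# Toy continuous target; in practice this would be the dataset's target column.
y = np.array([0.2, 1.5, 3.1, 4.8, 2.2, 0.9]).reshape(-1, 1)

# Three equal-width bins -> class labels 0, 1, 2
binner = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="uniform")
y_class = binner.fit_transform(y).ravel().astype(int)
print(y_class)  # -> [0 0 1 2 1 0]
```

After binning, compare the classifier against a dummy baseline (e.g. `DummyClassifier`) rather than against the regressor's R².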

As a student who wants to design a chip for processing CNN algorithms, I ask my question. If we want to design a NN accelerator architecture with RISC V for a custom ASIC or FPGA, what problems or algorithms do we aim to accelerate? It is clear to accelerate the MAC (Multiply - Accumulate) procedures with parallelism and other methods, but aiming for MLPs or CNNs makes a considerable difference in the architecture.

From what I have read and searched, CNNs are mostly for image processing, so anything involving images is usually related to CNNs. Is it an acceptable idea to design an architecture that accelerates MLP networks? For MLP acceleration, which hardware should I additionally work on? Or is it better to focus on CNNs, understand them, and work on them more?

I am trying to do this as my undergrad research topic: a better kNN algorithm. My model DOES require training, so the "training-free" advantage of traditional kNN is not applicable here. What are some other advantages of kNN?

Hello to everyone. I am trying to use a KNN analysis to fix minPts in the DBSCAN clustering algorithm. My dataset is composed of only 4 variables and 935 observations. I have found that if k = 5 (no. of variables + 1), the output of DBSCAN is 2 clusters: one of 911 observations and one of 8 observations. If I use a larger k, such as sqrt(no. of observations) as suggested in many papers, I get 909 observations in a single cluster and the rest are classified as noise points.

Both could be plausible results, but their meanings are fundamentally different. How can I get rid of this arbitrary choice of minPts, and hence of k?

Thanks!
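One common way to make the choice less arbitrary is to read eps off a sorted k-distance plot for each candidate minPts, rather than fixing minPts in isolation. A sketch on synthetic stand-in data (the quantile used as an "elbow" proxy is an assumption; normally you inspect the plot visually):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(935, 4))  # stand-in for the 4-variable dataset

k = 5  # minPts candidate (no. of variables + 1)
nn = NearestNeighbors(n_neighbors=k).fit(X)
dist, _ = nn.kneighbors(X)       # dist[:, -1] = distance to the k-th neighbour
k_dist = np.sort(dist[:, -1])    # the sorted k-distance curve

# eps is usually read at the curve's elbow; a crude proxy is a high quantile
eps_guess = np.quantile(k_dist, 0.95)
print(round(float(eps_guess), 3))
```

Comparing the resulting clusterings across several (minPts, eps) pairs with a stability or silhouette criterion is a more defensible choice than either fixed rule alone.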

I have received comments from reviewers on a paper about applying ML techniques (RF, KNN, Nnet, etc.) to data with 384 samples. One of the reviewers is extremely concerned about using machine learning techniques on 384 samples, as this seemed very small to him/her. If there is any paper where ML is used on samples of 350 or fewer, it would be greatly helpful to me. I could cite it as evidence that ML can be used on relatively small samples.

TIA.

I need MATLAB code for the above-mentioned models. Where could I find it?

I want to implement a machine learning algorithm into a MIDI music dataset such as KNN,SVM,Decision Trees. Any implementation idea on python or MATLAB? Thank you so much in advance.

Meta-heuristics can be used in feature selection as wrapper methods. Many works use the KNN classifier to evaluate the selected subsets. But if I want to apply feature selection to regression models, what kinds of machine learning algorithms can I use in MATLAB?

I am working on binary classification. I have 2k tweets in my dataset. I applied NB, SVM, LR, DT, RF, and KNN, and got the best result with LR. How do I justify this result? How can I explain why logistic regression works well here?

Please help me by providing code to solve the following problem:

Problem: "Semantic segmentation of humans and vehicles in images".

The following information is given for solving this problem:

Experimental study:

- Using a machine learning model: SVM, KNN, or another model
- Using a deep learning model:
  - either semi-DL: ResNet, VGG, Inception (GoogLeNet), or others
  - or fully DL: YOLO, U-Net, the CNN family (CNN, R-CNN, Faster R-CNN), or others
- Evaluation of the two models in the learning phase
- Evaluation of both models with test data
- Exploration, description, and analysis of the results obtained (confusion matrix, specificity, accuracy, FNR)

For a dataset with 5 categorical classes, what would be the best classifiers?

The size of the data is (670, 49).

I have prepared a relaxor ferroelectric (KNN-BZZ solid solution) bulk sample. During P-E loop measurements, we reached a saturation electric field of about 100 kV/cm before dielectric breakdown. However, people have reported saturation fields of about 350 kV/cm. How is this possible? I thought it might be related to the density of the bulk samples, but we also used a cold isostatic press and the results are the same.

I am currently trying to classify images using a HOG descriptor and a KNN classifier in C++.

The size of the feature vectors obtained with HOG depends on the dimensions of my images, and all of them have different dimensions!

How can I make the size of the feature vector independent of the image dimensions (without adapting the number of cells and blocks), or how can I use the KNN classifier with feature vectors of different sizes?
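The standard fix is to warp every image to one canonical detection window before computing HOG (Dalal and Triggs used a 64×128 window), so the descriptor length is constant by construction. A dependency-free sketch of the resizing idea (in C++ you would call `cv::resize`; the window size here is an assumption):

```python
import numpy as np

def resize_nn(img, shape):
    """Nearest-neighbour resize (a minimal stand-in for cv::resize)."""
    r = (np.arange(shape[0]) * img.shape[0] / shape[0]).astype(int)
    c = (np.arange(shape[1]) * img.shape[1] / shape[1]).astype(int)
    return img[np.ix_(r, c)]

# Any input size maps to the same canonical window, so the HOG descriptor
# computed afterwards always has the same length.
a = resize_nn(np.random.rand(200, 150), (128, 64))
b = resize_nn(np.random.rand(90, 333), (128, 64))
print(a.shape == b.shape)  # True
```

kNN with variable-length vectors would instead need a distance defined on sequences (e.g. DTW), which is far more expensive; resizing to a fixed window is the usual choice.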

I am performing a KNN classification on a sample with n = 5 and m = 200 (n: number of features, m: number of samples). To find the optimum value of K, I calculated an error rate for each K (1 to 50). However, I found that the error-rate vector, as well as the classification_report, are not fixed; they change with each run.
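The run-to-run variation typically comes from the random train/test split (kNN itself is deterministic apart from tie-breaking); pinning `random_state` makes the error curve reproducible. A minimal sketch on synthetic data with the same shapes (the data itself is an assumption):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))          # m = 200 samples, n = 5 features
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Pinning random_state fixes the split, and hence the whole error curve
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

errs = []
for k in range(1, 51):
    model = KNeighborsClassifier(n_neighbors=k).fit(Xtr, ytr)
    errs.append(1 - model.score(Xte, yte))
print(len(errs))  # 50 error-rate values, identical on every run
```

For a K choice that is less dependent on one particular split, averaging the error over cross-validation folds is more reliable than a single train/test split.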

If the dataset has mixed (numerical, nominal, and binary) features, then to classify such data we need to define a new measure able to handle all three types.

Generally, a combined approach is widely used for this case.

If the new measure is a combination of different measures, it is very difficult to satisfy the triangle inequality, so it can be a similarity measure or a distance, but not a metric.

My question is:

Is it possible to use such a measure with KNN, given that KNN is commonly assumed to require a metric distance?

In general:

Can I use KNN with a non-metric measure for classifying the data?
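Brute-force kNN only sorts dissimilarities, so the triangle inequality is not actually required, and scikit-learn accepts a user-defined callable for exactly this case. A sketch with a hypothetical Gower-style mixed measure (the column coding, the range scaling, and the toy data are all assumptions):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Columns: [numeric, binary, nominal-coded]. This combined dissimilarity need
# not satisfy the triangle inequality; brute-force kNN still works because it
# only ranks the dissimilarities.
def mixed_dissim(a, b):
    num = abs(a[0] - b[0]) / 10.0   # numeric part, range-scaled (range of 10 assumed)
    bin_ = float(a[1] != b[1])      # binary mismatch
    nom = float(a[2] != b[2])       # nominal mismatch
    return (num + bin_ + nom) / 3.0

X = np.array([[1.0, 0, 0], [2.0, 0, 0], [9.0, 1, 2], [8.5, 1, 2]])
y = np.array([0, 0, 1, 1])

knn = KNeighborsClassifier(n_neighbors=1, metric=mixed_dissim, algorithm="brute")
knn.fit(X, y)
print(knn.predict([[1.5, 0, 0]]))  # -> [0]
```

The price is that tree-based neighbour indexes (KD-tree, ball tree) cannot be used, since their pruning does rely on metric properties; for moderate dataset sizes brute force is fine.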

I am a beginner with neural networks.

I have 6 classes (1-6) with 8 samples in each class, i.e., 48 time series in all.

The features are the peaks obtained after plotting the data.

Which classification method can I use here?

And how can I do this?

- This is the data
- Each Class has 8 columns
- And each column contains 1000 values
- The feature for categorizing here is the peaks after we plot each column
- I made an attempt, but it was not successful
- Can it be converted to SVM?

Hello dear friends

I need to run the **K-nearest neighbor (KNN)** model to predict flood-risk areas using **R software**. Does anyone have code for this model in R? Thanks in advance for your cooperation.

Hello,

I'm currently working on my MSc thesis. One of my tasks is to classify a crash dataset with different methods in R.

The dataset has 4 labels (fatal, major injury, minor injury, and PDO) and 12 predictors. The classes have an equal number of rows (about 600 rows each). I applied classification with ANN, SVM, KNN, CART, and random forest methods. But unfortunately, all methods give me very low kappa and overall accuracy. I tried using just some of the predictors, tuning the parameters, and both scaling and not scaling the data. Unfortunately, nothing changed. My data is attached, and I'd appreciate anyone's opinion on this problem.

Thank you.

Hello everyone,

I am trying to optimize a pulsed laser deposition process for the growth of (K0.5Na0.5)NbO3 piezoelectric perovskite on Pt(111), and I've been struggling to figure out the origin of a phenomenon I see in the film morphology: the film presents a predominant (100) texture but displays what look like unoriented cubic crystallites or terraces (see picture).

Has anyone seen anything like this before and know what it could be due to and/or how to remove these defects?

Thanks in advance.

Hello everyone;

I'm preparing my master's thesis on influenza prediction using deep learning. The data I have is the rate of dangerous cases and suspicious ones per week and per region around the country.

So far I have implemented Random Forest, KNN, DNN, LSTM, CNN, CNN-LSTM, and a Deep Belief Network. I concluded that I am dealing with time-series forecasting, so I used the window method to make my problem supervised, with window_size = 3, 2, and 1. Calculating the r2_score, I got 10 regions under 70% (which I have read is the acceptable threshold).

So I'm writing this question hoping to find a solution, an idea, or another deep learning technique, perhaps a special architecture of one of the techniques above, to improve my predictions in these regions.

Thank you in advance.

(You will find attached some pictures of the regions that I want to improve and my LSTM model.)

I am developing an Android application for human activity recognition. The classification results are satisfactory with classifiers like KNN and Decision Tree. But do you think I should implement a Convolutional Neural Network on the mobile phone to recognize the activities? I am sure it would increase the accuracy, but what about the computational complexity and resource utilization?

I will be using different types of algorithms, like SVM, KNN, Naive Bayes, and Random Forest. How will I understand when to use which algorithm to increase crop productivity?

I would like to know which method, SVD or KNN, will yield better prediction accuracy in recommendation systems. Has anyone done a comparative study that I can refer to?

I am working on PSO for feature selection. I use the KNN algorithm with 10-fold cross-validation for evaluation. Before I used 10-fold CV the algorithm was quite cheap, with no high computational cost, but after switching to 10-fold CV the code runs very slowly, sometimes for days. Is there a problem with how I am performing the 10-fold CV? I have attached my code here. Please help me! Thanks a million.

I am looking for articles related to the optimization of SVM, KNN, ensemble, and decision tree classifiers.

For a thesis, I am researching different classifier algorithms to be able to detect people from top-down view. This way it should be possible to detect people and count them in a real time video feed. I am keeping the research pre-neural network era.

Another thing is that I am completely new to Object detection and tracking + I am new to machine learning.

I have learned a lot about background subtraction and HOG + SVM (or the SVC classifier), and I have also learned about Haar features and the Haar classifier.

But now my real question is: why are feature extraction algorithms often used with one specific classifier (like HOG with SVC, and not with KNN or random forest)? I have not been able to figure this out.

P.S. I'm stuck with the practical part of my thesis, so I could use some guidance; if you want to give me some, please contact me.

Hi, I am trying to solve the problem of an imbalanced dataset using SMOTE in text classification, while using TfidfTransformer and k-fold cross-validation. I want to solve this problem in Python. It has actually taken me over two weeks, and I couldn't find any clear and easy way to solve it.

Do you have any suggestion where exactly to look?

After implementing SMOTE, is it normal to get different accuracy results on the dataset?
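Some run-to-run variation is normal, but the critical point is that the oversampler and the TF-IDF vectorizer must both be fitted inside each training fold, never on data that fold will test on. A sketch of that fold structure using scikit-learn only (random duplication stands in for SMOTE; with imbalanced-learn installed, `SMOTE().fit_resample` replaces that step, and its `Pipeline` automates the whole pattern):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.utils import resample

texts = ["spam offer now", "buy cheap spam", "meeting at noon",
         "lunch tomorrow", "project deadline", "see you at noon"]
y = np.array([1, 1, 0, 0, 0, 0])  # imbalanced toy labels

skf = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
scores = []
for tr, te in skf.split(texts, y):
    # 1) fit TF-IDF on the training fold only (no leakage into the test fold)
    vec = TfidfVectorizer().fit([texts[i] for i in tr])
    Xtr = vec.transform([texts[i] for i in tr])
    Xte = vec.transform([texts[i] for i in te])
    ytr = y[tr]
    # 2) oversample ONLY the training fold (swap in imblearn's SMOTE here)
    minority = np.where(ytr == 1)[0]
    extra = resample(minority, n_samples=len(ytr) - 2 * len(minority),
                     random_state=0)
    idx = np.concatenate([np.arange(len(ytr)), extra])
    clf = LogisticRegression().fit(Xtr[idx], ytr[idx])
    scores.append(clf.score(Xte, y[te]))
print(len(scores))  # one score per fold
```

Oversampling before the split is the most common cause of both leakage and unstable scores in this setup.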

~specifically using horizontal quartz tube furnace

I have a dataset with 750 cases and five features, with an unbalanced distribution of the target class. I applied SMOTE to address the imbalance, then applied ANN, Naive Bayes, LR, KNN, and Random Forest. I used cross-validation to evaluate model performance and learning curves (error rate as a function of sample size) to evaluate overfitting. I got an AUC of 89-92% and an accuracy of 85-88%.

I got a reply from a reviewer saying that the sample size is not enough and that learning curves are not evidence against overfitting!

- Actually, I tried to make KNN-xBTO nanoparticles by the solid-state reaction technique, firing the precursors at high temperature (1200 °C) for 2 h at a heating rate of 5 °C (also 10 °C) per minute inside an alumina crucible.
- After firing, the as-prepared final product becomes solidified and almost impossible to remove from the bottom surface of the crucible.
- Can anyone please suggest what to do in order to make this kind of nanocomposite by solid-state reaction?

I am doing my thesis on baby cry detection. I built models with CNN and KNN; the CNN has a training accuracy of 99% and a test accuracy of 98%, and the KNN has a training accuracy of 98% and a test accuracy of 98%.

Please suggest which of the two algorithms I should choose, and why.

Hello,

I have a historical time series of 72-year monthly inflows. I need to generate, say 100, synthetic scenarios using the historical data.

Simple resampling (by reordering annual blocks of inflows) is not the goal and not accepted. I want synthetic scenarios to have different monthly values, but all summing up to the same value of the annual inflow as in the historical one (e.g. if in the historical data 2015 inflow is 2400 MCM, I want the synthetic scenarios to have the same value for 2015 inflow but with different monthly values). Surely these synthetic scenarios must satisfy some statistical metrics such as autocorrelation etc.

Does anyone know of an existing code/script for this, preferably written in Excel, Python, or MATLAB? I've heard of ARMA, KNN, etc., but none suits my purpose.

Thanks
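One simple construction (an assumption offered for illustration, not an established named method) is to resample each year's monthly shares from a Dirichlet distribution centred on the historical pattern; the annual total is then preserved exactly by construction. Statistical properties such as monthly autocorrelation still have to be checked afterwards and the concentration parameter tuned accordingly:

```python
import numpy as np

rng = np.random.default_rng(0)
historical = rng.uniform(50, 350, size=(72, 12))  # stand-in for 72x12 monthly inflows

def synth_year(months, concentration=50.0, rng=rng):
    """Draw new monthly shares from a Dirichlet centred on the historical
    pattern; the annual total is preserved exactly."""
    total = months.sum()
    shares = months / total
    new_shares = rng.dirichlet(concentration * shares)
    return total * new_shares

scenario = np.array([synth_year(yr) for yr in historical])
print(np.allclose(scenario.sum(axis=1), historical.sum(axis=1)))  # True
```

Larger `concentration` values keep the synthetic months closer to the historical pattern; smaller values add more variability.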

I am working on a dataset in which almost every feature has missing values. I want to impute the missing values with the KNN method. But since KNN works on distance metrics, it is advised to normalize the dataset before using it. I am using the scikit-learn library for this.

But how can I perform normalization with missing values?
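In scikit-learn this particular chicken-and-egg problem is already handled: `StandardScaler` computes its statistics with NaNs ignored, so scaling can run before `KNNImputer`, which then measures its (NaN-aware) Euclidean distances on the scaled features. A minimal sketch with toy values (the numbers are placeholders):

```python
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 200.0],
              [2.0, np.nan],
              [np.nan, 180.0],
              [4.0, 260.0]])

# StandardScaler ignores NaNs when computing mean/std, so it can run BEFORE
# imputation; KNNImputer then finds neighbours on the scaled features.
pipe = make_pipeline(StandardScaler(), KNNImputer(n_neighbors=2))
X_imputed = pipe.fit_transform(X)
print(np.isnan(X_imputed).any())  # False -> all gaps filled
```

Putting both steps in one pipeline also guarantees the same scaling is reused at test time.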

My KNN model has 80% accuracy and 85% precision, and my SVC has 79% accuracy and 85% precision. What is the reason behind this? And are both of these models stable enough to use?

I have applied a cascade object detector (traincascadedetector), KNN, feature matching, and geometric transform estimation in MATLAB, OpenCV, and Python.

Can anyone suggest another method to detect the symbol?

I am using the KNN algorithm with the sklearn library for authentication purposes.

When I train with one training sample per user I get 99% AUC, but when I increase the number of training samples per user, the AUC decreases.

Can someone explain what is going on?

I am trying to fabricate KNN thin films via RF magnetron sputtering using a lab-made target. However, there is a great discrepancy in the literature regarding the sintering temperature of the target, ranging from 650 °C to 1100 °C. Bulk KNN ceramics are usually sintered above 1000 °C. A target sintered at 650 °C has 50% density, whereas it is also stated in the literature that targets should be dense. I am trying to find out the reason for the low-temperature sintering of targets.

One thing I have noticed is that the target sintered at the higher temperature fractured due to plasma impingement, whereas the low-temperature-sintered target was intact after usage. Could this be the reason?

Hi,

I would like to get suggestions for any paper/reference that discusses the KD-tree with the fixed-radius algorithm.

I found numerous references, but most of them are extensions or improvements of the KD-tree with a fixed radius. My focus is on the fundamental concept.

I found one book that discusses the KD-tree, but it does not cover fixed-radius NN searching.

I'd appreciate it if anyone can help me.

I have training data of 1599 samples from 5 different classes with 20 features. I trained them using KNN, BNB, RF, and SVM (with different kernels and decision functions), using RandomizedSearchCV with 5-fold CV.

I get a training accuracy of no more than 60%, and the test accuracy is almost the same. I used class weights, as 2 classes have more samples than the others. I used PCA, which reduced my feature size to 12 while covering 95% of the variance. None of this helped increase the accuracy of the SVM and RF classifiers.

Can anyone suggest other ways to improve the accuracy or F-score for my training data?

I am a bit confused about how to arrange the data.

I have 2 classes.

**Case 1**: if I arrange the data with all class-1 rows before all class-2 rows, like

Feature1 feature2 feature3 (class 1)
Feature1 feature2 feature3 (class 1)
Feature1 feature2 feature3 (class 1)
Feature1 feature2 feature3 (class 2)
Feature1 feature2 feature3 (class 2)
Feature1 feature2 feature3 (class 2)

then the k-fold accuracy is lower than the one-time train/test split accuracy.

**Case 2**: if I arrange the data with the classes interleaved, like

Feature1 feature2 feature3 (class 1)
Feature1 feature2 feature3 (class 2)
Feature1 feature2 feature3 (class 1)
Feature1 feature2 feature3 (class 2)

then the k-fold accuracy is higher than the one-time train/test split accuracy.

So which one is the right approach?

thanks

----------------------------------------------------

Edit:

Let's discuss the results. In case 1, the train/test split accuracy with test size 25% is 90%, and the accuracy with k-fold CV is 69% with std 0.07.

In case 2, the train/test split accuracy with test size 25% is 54%, and the accuracy with k-fold CV is 78% with std 0.08.

In all cases, data is shuffled and randomized
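Row order should not matter once the cross-validation itself shuffles and stratifies; large gaps between the two arrangements usually mean plain, unshuffled folds are being cut from class-sorted rows, so some folds contain a single class. A sketch contrasting the two on synthetic data (all values are illustrative):

```python
import numpy as np
from sklearn.model_selection import KFold, StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(1.5, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)   # Case 1: rows sorted by class

clf = KNeighborsClassifier(n_neighbors=5)
# Plain, unshuffled KFold cuts contiguous blocks: here the first folds
# contain only class 0 and the last folds only class 1.
plain = cross_val_score(clf, X, y, cv=KFold(n_splits=5)).mean()
# Stratified + shuffled folds keep the class ratio in every fold,
# making the score independent of the row order.
strat = cross_val_score(
    clf, X, y,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0)).mean()
print(plain, strat)
```

With `StratifiedKFold(shuffle=True)` (and a shuffled `train_test_split`), neither arrangement is "right" or "wrong"; both should give comparable scores.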

I am performing a classification task related to intrusion detection (binary classification, i.e., normal vs. attack). Accuracy and FAR are considered for comparing the results of various classifiers like KNN, SVM, etc. I have done this on a dataset created from a wireless network (IoT). As there are no other datasets available for IoT (RPL-6LoWPAN), against which dataset should I compare the performance results? Should I use KDD99, UNSW-NB15, ISCX, etc.?

In a number of research papers the sintering temperature for KNN is given as around 1050 °C. But when I heat the system in the muffle furnace at a heating rate of 5 °C/min, it evaporates or sticks to the crucible, making it almost impossible to remove. The melting points of the KNN constituents, i.e., sodium carbonate and potassium carbonate, are below 900 °C, yet while sintering we heat the system to around 1050-1100 °C. If I heat the system above 900 °C, the potassium carbonate and sodium carbonate will evaporate. Please explain this.

My new field of interest is evaluating the accuracy of fault diagnostics based on artificial intelligence classification techniques like SVM, ANN, KNN, PSO, etc., applied to dissolved gas analysis (DGA) of power transformers.

I can't find a standard DGA dataset for this purpose.

Any help with this issue will be appreciated.

I am working on brain MRI image classification using a hybrid SVM and KNN algorithm.

Training is done using SVM, and at testing time it checks the nearest distance for a particular class.

Please also explain what quantities of K2CO3, Na2CO3, and Nb2O5 we need to prepare 1 mole of KNN.
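For the batch calculation, assuming the usual composition (K0.5Na0.5)NbO3 and the balanced reaction 0.25 K2CO3 + 0.25 Na2CO3 + 0.5 Nb2O5 → (K0.5Na0.5)NbO3 + 0.5 CO2, the precursor masses per mole of KNN work out roughly as below (a sketch using standard atomic weights; verify purities and your exact stoichiometry before weighing):

```python
# Molar masses from standard atomic weights (g/mol)
M = {"K2CO3": 2 * 39.098 + 12.011 + 3 * 15.999,   # ~138.20
     "Na2CO3": 2 * 22.990 + 12.011 + 3 * 15.999,  # ~105.99
     "Nb2O5": 2 * 92.906 + 5 * 15.999}            # ~265.81

# 0.25 K2CO3 + 0.25 Na2CO3 + 0.5 Nb2O5 -> (K0.5Na0.5)NbO3 + 0.5 CO2
moles = {"K2CO3": 0.25, "Na2CO3": 0.25, "Nb2O5": 0.5}
for s, n in moles.items():
    print(f"{s}: {n * M[s]:.2f} g")
# K2CO3: 34.55 g, Na2CO3: 26.50 g, Nb2O5: 132.90 g per mole of KNN
```

In practice a small excess of the alkali carbonates is often added to compensate for volatilization during calcination; that correction is process-specific and not included here.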

Hi

I am working on a classification task; my data contains numeric and categorical (mixed) features. To classify such data, we need to use a transformation method to convert one type of data into another so that existing classification algorithms such as SVM, Naive Bayes, or KNN can be applied.

**My question: is there any classification algorithm that can handle mixed data without using any transformation method?**

We used many classifiers: naive Bayes, ANN, linear, quadratic, decision tree, KNN, and support vector machine. Which one is best for proving the diagnosis of a fault using machine learning?

I am going to use the following MATLAB code, but I can't understand what `options` means in this code. I don't know what I should use from the options set: do I use the whole set or choose just one thing?

The code is:

```matlab
function [eigvector, eigvalue] = LPP(W, options, data)
% LPP: Locality Preserving Projections
%
%   [eigvector, eigvalue] = LPP(W, options, data)
%
%   Input:
%     data    - Data matrix. Each row vector of fea is a data point.
%     W       - Affinity matrix. You can either call "constructW"
%               to construct the W, or construct it by yourself.
%     options - Struct value in Matlab. The fields in options that can be set.
%               Please see LGE.m for other options.
%
%   Output:
%     eigvector - Each column is an embedding function; for a new
%                 data point (row vector) x, y = x*eigvector
%                 will be the embedding result of x.
%     eigvalue  - The sorted eigvalue of the LPP eigen-problem.
%
%   Examples:
%
%     fea = rand(50,70);
%     options = [];
%     options.Metric = 'Euclidean';
%     options.NeighborMode = 'KNN';
%     options.k = 5;
%     options.WeightMode = 'HeatKernel';
%     options.t = 5;
%     W = constructW(fea,options);
%     options.PCARatio = 0.99;
%     [eigvector, eigvalue] = LPP(W, options, fea);
%     Y = fea*eigvector;
%
%     fea = rand(50,70);
%     gnd = [ones(10,1);ones(15,1)*2;ones(10,1)*3;ones(15,1)*4];
%     options = [];
%     options.Metric = 'Euclidean';
%     options.NeighborMode = 'Supervised';
%     options.gnd = gnd;
%     options.bLDA = 1;
%     W = constructW(fea,options);
%     options.PCARatio = 1;
%     [eigvector, eigvalue] = LPP(W, options, fea);
%     Y = fea*eigvector;
%
% Note: After applying some simple algebra, the smallest eigenvalue problem:
%           data^T*L*data = \lambda data^T*D*data
%       is equivalent to the largest eigenvalue problem:
%           data^T*W*data = \beta data^T*D*data
%       where L = D - W and \lambda = 1 - \beta.
% Thus, the smallest eigenvalue problem can be transformed into a largest
% eigenvalue problem. Such tricks are adopted in this code for the
% consideration of calculation precision of Matlab.
%
% See also constructW, LGE
%
% Reference:
%   Xiaofei He and Partha Niyogi, "Locality Preserving Projections",
%   Advances in Neural Information Processing Systems 16 (NIPS 2003),
%   Vancouver, Canada, 2003.
%
%   Xiaofei He, Shuicheng Yan, Yuxiao Hu, Partha Niyogi, and Hong-Jiang
%   Zhang, "Face Recognition Using Laplacianfaces", IEEE PAMI, Vol. 27,
%   No. 3, Mar. 2005.
%
%   Deng Cai, Xiaofei He and Jiawei Han, "Document Clustering Using
%   Locality Preserving Indexing", IEEE TKDE, Dec. 2005.
%
%   Deng Cai, Xiaofei He and Jiawei Han, "Using Graph Model for Face Analysis",
%   Technical Report, UIUCDCS-R-2005-2636, UIUC, Sept. 2005.
%
%   Xiaofei He, "Locality Preserving Projections", PhD thesis,
%   Computer Science Department, The University of Chicago, 2005.
%
% version 2.1 --June/2007
% version 2.0 --May/2007
% version 1.1 --Feb/2006
% version 1.0 --April/2004
%
% Written by Deng Cai (dengcai2 AT cs.uiuc.edu)

if (~exist('options','var'))
    options = [];
end

[nSmp,nFea] = size(data);
if size(W,1) ~= nSmp
    error('W and data mismatch!');
end

%==========================
% If data is too large, the following centering codes can be commented
% options.keepMean = 1;
%==========================
if isfield(options,'keepMean') && options.keepMean
    ;
else
    if issparse(data)
        data = full(data);
    end
    sampleMean = mean(data);
    data = (data - repmat(sampleMean,nSmp,1));
end
%==========================

D = full(sum(W,2));

if ~isfield(options,'Regu') || ~options.Regu
    DToPowerHalf = D.^.5;
    D_mhalf = DToPowerHalf.^-1;
    if nSmp < 5000
        tmpD_mhalf = repmat(D_mhalf,1,nSmp);
        W = (tmpD_mhalf.*W).*tmpD_mhalf';
        clear tmpD_mhalf;
    else
        [i_idx,j_idx,v_idx] = find(W);
        v1_idx = zeros(size(v_idx));
        for i=1:length(v_idx)
            v1_idx(i) = v_idx(i)*D_mhalf(i_idx(i))*D_mhalf(j_idx(i));
        end
        W = sparse(i_idx,j_idx,v1_idx);
        clear i_idx j_idx v_idx v1_idx
    end
    W = max(W,W');
    data = repmat(DToPowerHalf,1,nFea).*data;
    [eigvector, eigvalue] = LGE(W, [], options, data);
else
    options.ReguAlpha = options.ReguAlpha*sum(D)/length(D);
    D = sparse(1:nSmp,1:nSmp,D,nSmp,nSmp);
    [eigvector, eigvalue] = LGE(W, D, options, data);
end

eigIdx = find(eigvalue < 1e-3);
eigvalue(eigIdx) = [];
eigvector(:,eigIdx) = [];
```

Hi,

Please find attachment of my dataset with labels.

I applied pre-processing techniques to my dataset, like stop-word removal, removal of weblinks and punctuation marks, and finally lemmatization. Now I think my dataset is fully tuned, and I am applying different feature extraction techniques to extract features, after which I'll classify them using some classifier.

**Please Recommend me some feature extraction techniques.**

Previously I used lexicon-based techniques, the bag-of-words model, and KNN to generate features. Now I'm looking for something new to improve my results.

Regards
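One common step up from the bag-of-words features already tried is word-plus-bigram TF-IDF, which is a standard baseline before moving on to embeddings. A minimal sketch (the example documents are placeholders):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the service was great",
        "terrible service, never again",
        "great food and great service"]

# Unigram + bigram TF-IDF with sublinear term-frequency damping
vec = TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True)
X = vec.fit_transform(docs)
print(X.shape[0])  # one row (feature vector) per document
```

The resulting sparse matrix plugs directly into any of the classifiers mentioned (LR, SVM, KNN, etc.).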

Using the HOG transform I obtained a feature vector for each image. Now, how do I classify these images using a scikit-learn classification algorithm (KNN) with the obtained feature vectors?
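Once each image is reduced to a fixed-length HOG vector, classification is a standard fit/predict call. A sketch with random stand-ins for the HOG matrix (the shapes and label count are assumptions):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Stand-ins for HOG descriptors: one row per image, one column per HOG bin
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3780))   # e.g. 3780-d HOG vectors
y = rng.integers(0, 3, size=60)   # image labels

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
knn = KNeighborsClassifier(n_neighbors=3).fit(Xtr, ytr)
pred = knn.predict(Xte)
print(pred.shape)  # one predicted label per test image
```

Scaling is not usually needed here since all HOG dimensions share the same units, but tuning `n_neighbors` via cross-validation is worthwhile.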

What are the Caesar, ISS, SarPy, and KNN software classifications as per ICH M7?

I'm studying SVM.

I read an article about SVM regression.

There is one sentence,

"Support Vector Machines are very specific class of algorithms, characterized by usage of kernels, absence of local minima,

**sparseness of the solution**and capacity control obtained by acting on the margin, or on number of support vectors, etc."In that, what is "sparseness of the solution" means?

Further, how do we use these variables to train the model in LIBSVM?

Kindly elaborate in a clear and simple way and provide relevant links.

Thanks!

I used LibSVM for two-class data, but I need a plot showing that my data is classified correctly, and especially showing my margin. How can I do this with LibSVM?

**In order to make the system more robust to changes in human orientation, I need a pose normalization method, but I don't know its related work, or which works are better than others.**

**I Need Help**

**Regards**

I used some feature selection methods that require discretization prior to feature selection, and I found that some of the literature uses a method based on three category values (−1, 0, and 1) or (−2, 0, and 2), using the mean (μ) and standard deviation (σ) of the feature values. Could someone explain how to do that?

Thanks in advance.
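A common reading of that scheme (the exact cut-offs vary across papers, so treat the width below as an assumption) is: values more than one standard deviation below the mean are coded −1, values more than one above are coded +1, and everything in between is 0:

```python
import numpy as np

def discretize(x, width=1.0):
    """Three-level coding around the mean: below mu - width*sigma -> -1,
    above mu + width*sigma -> +1, otherwise 0 (one common convention)."""
    mu, sigma = x.mean(), x.std()
    out = np.zeros_like(x, dtype=int)
    out[x < mu - width * sigma] = -1
    out[x > mu + width * sigma] = 1
    return out

x = np.array([1.0, 2.0, 5.0, 5.0, 5.0, 8.0, 9.0])
print(discretize(x))
```

The (−2, 0, 2) variant simply relabels the same three bins; to avoid leakage, compute μ and σ on the training split only and reuse them for the test split.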

I am looking forward to learning ML, and I had an idea to implement kNN or a decision tree and train it to understand ISL. However, I am unable to find an open-source dataset for training and testing. Is there any such dataset available?

I have a dataset consisting of different characters. I need an approach to combine two classifiers, KNN and SVM, to classify these characters instead of using one in isolation.
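One standard way to combine the two is soft voting over their predicted class probabilities, available in scikit-learn as `VotingClassifier`. A sketch using the built-in digits data as a stand-in for the character dataset (the subset size and hyperparameters are assumptions):

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X, y = X[:500], y[:500]   # small subset keeps the example fast

# Soft voting averages the two models' class probabilities
combo = VotingClassifier(
    estimators=[("knn", KNeighborsClassifier(n_neighbors=5)),
                ("svm", SVC(probability=True, random_state=0))],
    voting="soft")
score = cross_val_score(combo, X, y, cv=3).mean()
print(round(float(score), 3))
```

An alternative design is a cascade (SVM first, with kNN refining low-confidence cases), which matches the hybrid described in some of the questions above, but voting is the simplest starting point.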

Ultrasound transducers using lead-free piezoelectric materials have been studied.

BNT-BT, KNN, etc. ceramics are available, but lead-free single crystals are hard to find, and I am asking for help.

How can I obtain one? Please advise.

In a study of objects erroneously classified by the SVM classifier, it was found that most of them fall into the strip separating the classes. The classification decision for such objects can be refined using an approach based on the combined application of the SVM classifier and the nearest-neighbors (kNN) algorithm.

Hello everyone. I have a training set of 17 observations. The dimensions of the training and testing data are:

- X = 17x7660 (features; each row holds the features of one observation)
- Y = 17x1 (the label for each row)
- The test feature vector is Z = 1x7660

I'm using knnclassify, and every time I run different test samples I always end up with the same prediction result, which is the first training label (Y(1,:)).

I'm not sure if my training has an error or where the problem is. Urgently waiting for help!