Classifier - Science topic
Explore the latest questions and answers in Classifier, and find Classifier experts.
Questions related to Classifier
Land-use classification of different types of water bodies from hyperspectral data.
Can I distinguish coastal waters, inland waters, ponds, and inland aquaculture?
I want to classify rock minerals.
Hi
I now have three sets of vehicle signal sampling data (fs=100Hz):
Lateral acceleration
Longitudinal acceleration
Steering wheel angle
I would like to classify the vehicle motion behavior.
How can I extract feature values from these three signals? A sketch of one common approach is below.
Looking forward to your answers!
Thanks!
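One common approach (a sketch only; the array names and window settings here are assumptions) is to slide a window over each signal and compute time-domain statistics per window, then feed the stacked features to a classifier:

import numpy as np

def window_features(signal, fs=100, win_s=1.0, overlap=0.5):
    """Simple time-domain features over sliding windows."""
    win = int(win_s * fs)
    step = max(1, int(win * (1 - overlap)))
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append([
            w.mean(),                          # average level
            w.std(),                           # variability
            np.sqrt(np.mean(w ** 2)),          # RMS energy
            w.max() - w.min(),                 # peak-to-peak range
            np.mean(np.abs(np.diff(w))) * fs,  # mean absolute derivative
        ])
    return np.array(feats)

# Demo on a synthetic signal; with your real channels you would stack, e.g.
# X = np.hstack([window_features(s) for s in (lat_acc, lon_acc, steer)])
demo = np.sin(np.linspace(0, 20, 1000)) + 0.1 * np.random.randn(1000)
print(window_features(demo).shape)

Frequency-domain features (spectral energy in bands via np.fft) and the correlations between channels are also commonly added.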
Hello, I used an online survey collected via Qualtrics panels.
I am wondering how Qualtrics samples are classified. Commonly used non-probability sampling techniques include convenience sampling, judgmental sampling, quota sampling, and snowball sampling.
For example, if Shannon's index is 2.45, is it high or low?
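Whether 2.45 is "high" depends on the species richness S, since the maximum possible value is ln(S). A quick way to put the number in context (the richness value below is a made-up example):

import numpy as np

H = 2.45           # observed Shannon index (natural log)
S = 25             # hypothetical number of species observed
H_max = np.log(S)  # maximum H for S species (perfect evenness)
print(f"Pielou's evenness J' = {H / H_max:.2f}")
print(f"effective number of species = {np.exp(H):.1f}")  # Hill number of order 1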
Dear Clinicians/Researchers working on sleep disorders,
We are working on the automated detection of sleep disorders using EEG. While we can get good accuracy with binary classifiers distinguishing normal subjects from any one particular sleep disorder, the obtained accuracy is very low when we attempt multiclass classifiers, where the classifier is expected to identify a specific sleep disorder from the following list: insomnia, narcolepsy, REM behavior disorder, bruxism, nocturnal frontal lobe epilepsy, periodic leg movement syndrome, and sleep-disordered breathing. I would like to know from the clinicians: (i) what features do they look for in identifying the above disorders (differential diagnosis)? (ii) Is EEG a meaningful signal to use for this purpose? If not, what other signals could be used? (iii) How is a clinical diagnosis performed?
I shall be very grateful for any useful input in this direction.
Best regards.
Ram
This question seeks to reflect on the possible causes that influence and give rise to inequality worldwide. How is it possible that there are developed countries where the quality of life is high, while other countries, classified as underdeveloped or developing, do not even have access to drinking water? What is happening in a world as globalized as ours that allows this to keep happening?
Hey guys!
I'm trying to import my classifier data into MorphoJ with my current dataset, but I keep getting the report: "possible problem: different numbers of classifiers, ranging from 0 to 3". I've triple-checked to make sure everything lines up properly and is accurate, but I must be missing something. Any ideas?
I want to characterize soil types from images using deep learning.
How can logs (i.e., small data such as network packet headers) be classified for the purpose of anomaly detection?
Dear all,
I am performing analysis of 16S rRNA amplicon sequencing data. I have tested the effectiveness of two classifiers on a mock community, and the BLAST classifier shows the best results. However, I found out that BLAST uses local sequence alignment, so I do not know whether it is appropriate to use this classifier to assign a "mystery" sequence to a bacterial taxon. Is it possible that this approach will produce false-positive results? Is it better to use the Vsearch classifier, which showed worse results but uses global sequence alignment?
And a bonus question: should I use rarefied representative sequences to perform the taxonomy classification or not? I use rarefied data for alpha diversity testing (but not for beta diversity testing).
Thank you all for answers!
Martin
As we know, renal cell carcinomas are now genetically classified, and various surrogate immunohistochemical stains are now being used to do that, in addition to FISH testing and other sophisticated methods of genetic testing. What should be the minimum panel of immunostains available in order to classify renal tumors? A few examples are SDH, FH, TFE3, TFEB, and Cathepsin K, in addition to conventional markers. Secondly, what should be the panel of FISH probes available in addition to immunostains? Thanks
I have a dependent variable, childcare take-up, with answers 1 or 0.
I have several other independent variables such as education level, household income, migration background, etc. I have dichotomized all of them; for example, observations with college education have been classified as 1, otherwise 0.
I would like to know which regression model would be best to predict childcare take-up.
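With a 0/1 outcome, logistic regression is the usual starting point. A minimal sketch with statsmodels (the synthetic data below only stands in for your survey variables):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "college": rng.integers(0, 2, n),
    "high_income": rng.integers(0, 2, n),
    "migrant": rng.integers(0, 2, n),
})
# Synthetic outcome; replace df with your real data frame.
logit_p = -1 + 1.2 * df.college + 0.8 * df.high_income - 0.5 * df.migrant
df["takeup"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("takeup ~ college + high_income + migrant", data=df).fit()
print(model.summary())
print(np.exp(model.params))  # odds ratios per predictor

Probit is an alternative; with dichotomized predictors the odds-ratio interpretation of logistic regression is particularly straightforward.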
I tried to download the daily data for Bagmati river discharge stations from India-WRIS, but it says the locations are classified and asks me to log in to access the data, and I don't have login credentials. Kindly let me know if anyone has some idea about it, as I need the data urgently for my research work.
In Deep Learning Classifier.
Hi,
I want to classify Sentinel-2 images using QGIS and SVM. Please provide me with guidelines/instructions or a video tutorial link. I'm new to QGIS.
Preferably, the SVM classifier should be runnable without coding.
Thank you.
Dear Researchers
I need to know how to plot my values to create a Doneen plot to classify my water samples (into three classes). Which graphing software can I use?
Please let me know how to calculate the runtime cost/efficiency of an SVM classifier.
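The simplest empirical answer is to time fit and predict directly; kernel-SVM training typically scales roughly between O(n^2) and O(n^3) in the number of samples, and prediction scales with the number of support vectors. A sketch with scikit-learn on synthetic stand-in data:

import time
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
clf = SVC(kernel="rbf")

t0 = time.perf_counter()
clf.fit(X, y)
fit_time = time.perf_counter() - t0

t0 = time.perf_counter()
clf.predict(X)
pred_time = time.perf_counter() - t0

print(f"fit: {fit_time:.3f} s, predict: {pred_time:.3f} s, "
      f"support vectors: {clf.n_support_.sum()}")

Repeating this over increasing n and fitting a curve to the timings gives an empirical complexity estimate for your own data.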
Hello Everyone,
In the SWAT HRU definition step, we need to define slope classes, i.e., either single or multiple.
I have a mountainous watershed, so I want to use the multiple slope option, but I am not clear on how many classes it should be divided into.
Moreover, what should the thresholds (class upper limit, %) be for the slope classes?
Please see the attached figure showing slope statistics from SWAT and slopes calculated separately using the DEM.
Any help will be much appreciated.
I am a beginner in this field. I want to learn basic audio deep learning for classifying audio. If you have articles or tutorial videos, please send me the link. Thank you very much.
I'm using the 'survivalROC' package to draw a time-dependent ROC curve for some genes in my manuscript. The reviewer asked the following questions: "some of the single gene feature yields AUC < 0.5. A predictor that makes random guess gives an AUC score of 0.5. The aforementioned features seem to work even worse than random guess. It's either the classifier targets were labeled wrongly or the training algorithm was bad. The authors need to revisit the models, explain the inconsistency and make sure the validity holds". I am sure there is no error in the analysis. My interpretation is: if the curve is convex (AUC > 0.5), the marker is a poor-prognosis factor; on the other hand, a concave curve (AUC < 0.5) indicates a good-prognosis factor. Is my understanding correct? How should I reply to the reviewer?
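Your interpretation is the standard one: an AUC below 0.5 means the marker is inversely associated with the outcome, and reversing the marker's direction gives exactly 1 - AUC. A small demonstration of that identity (synthetic data):

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)        # binary outcome
score = rng.normal(size=200) - y   # marker is LOWER in the y = 1 group

auc = roc_auc_score(y, score)
print(auc, 1 - auc, roc_auc_score(y, -score))
# auc < 0.5, and flipping the marker's sign yields exactly 1 - auc

In the reply you can state the direction of each marker explicitly and report 1 - AUC for the protective markers, rather than implying the classifier is worse than random.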
I need an EEG dataset to train a model for classifying AD versus healthy controls for my research. Can anyone suggest where I can find one?
I am aware that PCs are 2-D groupings of clustered data that explain variance in a dataset, but I am unsure how I can refer to PC1 and PC2 when reporting on differences in the produced trends. Since PC1 and PC2 explain the most variability in a set, I wanted to refer to these as the "dominant" and "subdominant" trends for my graphs, but I am unsure whether classifying PC1 and PC2 with the terms above is incorrect.
Any clarification on the differences between PC1 and PC2, and whether I can technically refer to these as "dominant" and "subdominant" trends, would be appreciated.
Hi Everyone,
I am struggling to understand the implications of submitting my work to a new journal. A few questions come to mind; some relate to the journal itself and others to its future. It is publicly peer-reviewed, so it has some standards, but:
1. They classify themselves as a publishing platform. I am afraid that my work will later be classified as a report.
2. They don't have an impact factor now. I believe every journal can get one after three years, but on their website they say they will not apply for an impact factor in the future.
3. They publish articles right away after submission, and only after peer-review approval do they submit them to repositories. I am afraid that if reviewers take their time, the work can be used by someone else, or, in case it gets rejected, I cannot submit it somewhere else.
I just learned that binary classifiers are used for multiclass classification via OvO and OvA techniques. My question is: why don't we simply use algorithms that natively handle multiclass classification, e.g., random forest or naive Bayes classifiers?
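We do; learners that are inherently multiclass (trees, random forests, naive Bayes) are simply trained on all classes at once, while OvO/OvA is the standard wrapper for inherently binary learners such as SVMs. A scikit-learn sketch contrasting the two situations:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import LinearSVC

X, y = load_iris(return_X_y=True)
models = {
    "OvO + linear SVM": OneVsOneClassifier(LinearSVC(max_iter=10000)),
    "OvR + linear SVM": OneVsRestClassifier(LinearSVC(max_iter=10000)),
    "random forest (natively multiclass)": RandomForestClassifier(random_state=0),
}
for name, m in models.items():
    print(name, cross_val_score(m, X, y, cv=5).mean())

Which family works best is dataset-dependent, which is why the binary-plus-wrapper route remains common alongside natively multiclass models.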
I want to classify yeast genes based on a particular phenotype. Please suggest genomic or phenotypic features for building the model.
Hello everyone,
My study assesses health care providers' knowledge, attitude, and interpretation of waves about a certain topic in a critical care unit.
Can I get advice on the following (using SPSS):
- How do I determine the level of knowledge and classify it as satisfactory/unsatisfactory (scoring system: 85%)?
- How do I classify the attitude as positive/negative (scoring system: 70%)?
NB: I use a 3-point Likert scale.
Any help with a detailed explanation of how to deal with the above questions would be very much appreciated. A sketch of the scoring step is below.
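One way to implement such cutoffs (a Python/pandas sketch with made-up item names; in SPSS the same logic is a COMPUTE of the percent score followed by a RECODE against the cutoff):

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Stand-in: 50 respondents x 10 knowledge items scored 1-3; replace with your data.
df = pd.DataFrame(rng.integers(1, 4, size=(50, 10)),
                  columns=[f"k_{i}" for i in range(10)])

score = df.sum(axis=1)
pct = 100 * score / (3 * df.shape[1])  # percent of the maximum possible score
level = np.where(pct >= 85, "satisfactory", "unsatisfactory")
print(pd.Series(level).value_counts())

The 70% attitude cutoff works the same way with the attitude items and the labels positive/negative.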
Hi everyone,
I did a classification study with a small number of samples (n = 73). I compared different machine learning classifiers, and two of them, SVM with a polynomial kernel and with a radial basis function kernel, produced an overall accuracy of 100%. Is this finding acceptable? If not, why?
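With n = 73, 100% accuracy most often signals data leakage (e.g., preprocessing fitted on the full dataset) or hyperparameters tuned on the test data, rather than a genuinely perfect classifier. Two quick sanity checks, sketched with scikit-learn on stand-in data: keep all preprocessing inside the cross-validation folds, and run a permutation test.

from sklearn.datasets import make_classification
from sklearn.model_selection import (StratifiedKFold, cross_val_score,
                                     permutation_test_score)
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data; replace with your own X, y.
X, y = make_classification(n_samples=73, n_features=20, random_state=0)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # scaling fitted per fold

print(cross_val_score(pipe, X, y, cv=cv).mean())
score, perm_scores, pval = permutation_test_score(pipe, X, y, cv=cv,
                                                  n_permutations=200,
                                                  random_state=0)
print(score, pval)  # a large p-value would suggest the 100% result is spurious

If the result survives these checks, it may simply mean the classes are easily separable, but with 73 samples the uncertainty around "100%" is wide and worth reporting.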
Can I get an email address for a journal specializing in strategic management that is indexed in Scopus?
I have to classify urban growth into infill, sprawl, ribbon, and scattered development. The literature gives a theoretical basis, but I couldn't find tutorials on a practical approach. Can anyone help me find materials for this?
I am looking for references that describe the acceptable ranges of properties for edible oils from oilseed crops (vegetable oils). Which qualities are important, and which are detrimental, in selecting an oil as an edible oil?
Your kind contributions are appreciated.
Does anyone have experience classifying timber harvesting sites based on logging operations and site attributes? I want to categorise timber harvesting sites based on logging operations. This categorisation should be done prior to logging, and site characteristics such as terrain, obstacles, harvest/stocking, distance to the main road, understory, etc. have to be accounted for when classifying the difficulty of logging.
Negotiation types
Classification
Shall we consider the top 100 ft (30 m) from the surface or from the bottom of the mat/spread footing?
My thoughts: The site class should reflect the soil conditions that affect the ground motion input to the structure or a significant portion of the structure.
For structures that receive substantial ground motion input from shallow soils (ex. structures with shallow spread footing, with laterally flexible piles, or with basements where substantial ground motion input may come through the side walls), it is reasonable to classify the site on the basis of the top 100 ft (30 m) of soils below the ground surface.
Conversely, for structures with basements supported on firm soils or rock below soft soils, it may be reasonable to classify the site on the basis of the soils or rock below the Mat/Spread footing.
I'm currently working on a study with a within-subjects ANOVA design. To summarize:
To examine the string length effect on the memory task and only search task conditions, the means of RTs (response times) for the correctly classified short and long strings with 0, 1 or 2 targets were calculated for each subject.
To examine this effect I conducted a 2 x 2 x 3 ANOVA (task type: only-search task vs. memory task; string length: short vs. long; number of targets: 0, 1, 2).
Dependent variable: Response Times (RT).
I have a question about how I should interpret the effect sizes of a variance analysis with multiple within-subjects factors.
Can someone help me?
Results:
- There were significant, large main effects of task type, F(1, 89) = 25.10, p < .001, η²ₚ = .22, number of targets, F(1.66, 147.92) = 206.12, p < .001, η²ₚ = .70, and string length, F(1, 89) = 851.35, p < .001, η²ₚ = .90.
- The interaction effect between number of targets and string length, F(1.66, 147.65) = 55.47, p < .001, η²ₚ = .38, was also significant. This indicates that the number of targets contained in a task had different effects on the subjects' RTs depending on the length of the numeric string.
- The interaction effects between task type and number of targets, F(2, 178) = 4.63, p = .011, η²ₚ = .05, and between task type and string length, F(1, 89) = 4.13, p = .045, η²ₚ = .04, were significant but small.
- Ultimately, there was a significant three-way interaction between task type, number of targets, and string length, F(2, 178) = 6.61, p = .002, η²ₚ = .07. This implies that the task type (only-search vs. memory task) had different medium-sized effects on the subjects' RTs depending on the number of targets (0 vs. 1 vs. 2) contained in the task and the length of the numeric strings (short: 5 digits vs. long: 10 digits).
All effects and interactions are significant, but what does that mean? How can I report and explain the significant interaction effects?
I have attached the mean response times (RTs).
Thanks in advance for your help!
Best Regards!
I am using human skin metagenome data.
I was trying to use the SILVA classifier instead of the GreenGenes classifier, but my computer hung and the command was killed.
I have a four-point scale and three constructs. One construct has many more items than the other two. For instance, construct A has 70 items, construct B has 10 items, and construct C has 10 items. Theoretically, someone can be considered as having good ability when he has all three constructs. When someone has a high score on construct A (the majority of the responses are "strongly agree") but very low scores on constructs B and C, can he be classified in the high category?
I want to downscale time-series imagery data using precipitation and evapotranspiration (temporal) and possibly even topography (static) using Google Earth Engine (GEE) Random Forest Regression. I have processed the remote sensing products to the same temporal and spatial resolution and joined them.
Typically the code would be something like:
// Create a classifier (note: the method is setOutputMode, not setOutputModel,
// and smileRandomForest requires the number of trees)
var classifier = ee.Classifier.smileRandomForest(100)
  .setOutputMode('REGRESSION')
  .train({
    features: training_data,
    classProperty: 'what_I_want_to_predict',
    inputProperties: ['predictor_variables']
  });
var classified = predictor_variables_data.classify(classifier);
My questions are:
1. How do I include temporal and static data as predictor variables (training data)? How do I sample them?
2. How does one apply the RF regression model across monthly images over two years, for example? Do you run the model 24 separate times? (See the sketch after this post.)
Regards,
Cindy
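A sketch of one way to set this up with the Earth Engine Python API (all asset names below are hypothetical; the key ideas are to stack the static bands onto each temporal image, sample the stack at labeled points, and reuse one trained model across months):

import ee
ee.Initialize()

# Hypothetical inputs; replace with your own assets.
points = ee.FeatureCollection("users/your_name/sample_points")        # has a 'target' property
static_img = ee.Image("USGS/SRTMGL1_003").select("elevation")         # static predictor
monthly_images = ee.ImageCollection("users/your_name/monthly_preds")  # precip + ET per month

def with_static(img):
    # Stack the static band(s) onto one month's temporal predictors.
    return ee.Image(img).addBands(static_img)

# Train on a representative monthly stack sampled at the points.
stack = with_static(monthly_images.first())
training = stack.sampleRegions(collection=points, scale=500)

classifier = (ee.Classifier.smileRandomForest(100)
                .setOutputMode('REGRESSION')
                .train(features=training,
                       classProperty='target',
                       inputProperties=stack.bandNames()))

# Apply the single trained model to every monthly stack (24 images for 2 years).
predicted = monthly_images.map(lambda img: with_static(img).classify(classifier))

On question 2: typically you do not retrain 24 times; you classify each monthly stack with the same model, as in the map call above. If your target varies by month, sample each monthly stack and merge the resulting FeatureCollections before training; retraining per month only makes sense if you expect the relationship itself to change.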
Hi there. I have some data from RNA-seq experiments. In PCA, we observe that the dominating patterns in our data are explained by variables other than those we used to classify our samples. Hence, our predefined groups are not visible in the PCA representation (they are mixed up). However, in the cluster heatmap, samples are clustered according to our variable, and it is clear that the expression vectors (the columns of the heatmap) for samples within the same cluster are much more similar than those for samples from different clusters.
Given that ALL the heatmaps cluster samples correctly according to our variable, is it possible that PCA is filtering out information that is meaningful for our samples?
How should I interpret these findings?
Thank you
I obtained a soil Cu of 97.56 and a Cc of 0.48. I am confused because, using Cu, the soil can be classified as well-graded, but it will be classified as poorly graded with the Cc. Could anyone with a better idea of how to classify such a soil please explain?
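Under USCS, "well-graded" requires both criteria at once (Cu >= 6 for sands, >= 4 for gravels, and 1 <= Cc <= 3), so a soil that fails the Cc test is classified as poorly graded regardless of how large Cu is; your combination (very high Cu, Cc < 1) is typical of a gap-graded soil. A trivial check:

def uscs_gradation(cu, cc, gravel=False):
    """Both USCS criteria must hold to call a soil well-graded."""
    cu_ok = cu >= (4 if gravel else 6)
    cc_ok = 1 <= cc <= 3
    return "well-graded" if (cu_ok and cc_ok) else "poorly graded"

print(uscs_gradation(97.56, 0.48))  # poorly graded: the Cc criterion fails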
Thank you.
While training an optimizable classifier using the MATLAB Classification Learner app, it provides a minimum classification error plot. From that we get the best-point hyperparameters and the minimum-error hyperparameters. I would like to know exactly what the difference between them is.
I carried out a few experiments combining 7 CNN-based feature extractors and 7 machine learning classifiers, 49 pairs in total.
For the CNNs I used VGG16, MobileNetV2, DenseNet121, InceptionV3, ResNet101, ResNet152, and Xception as feature extractors and passed the generated feature vectors to ML classifiers to perform binary classification.
The ML classifiers I used are support vector machine, k-nearest neighbours, Gaussian naive Bayes, decision tree, random forest, extra trees, and multilayer perceptron.
For all the evaluation metrics (accuracy, precision, recall, and F1 score), I achieved the best results with the combination of ResNet101 and the multilayer perceptron.
I am not able to understand why this pair performs best: ResNet152 is a deeper network, and support vector machines generally perform well, yet in my case ResNet101 with the multilayer perceptron gives the best results.
Please help me understand the reason behind this.
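For readers wanting to reproduce this kind of pipeline, a compact sketch of the ResNet101-features-plus-MLP combination (random arrays stand in for real images; ImageNet weights are downloaded on first use):

import numpy as np
import tensorflow as tf
from sklearn.neural_network import MLPClassifier

X_img = np.random.rand(64, 224, 224, 3).astype("float32")  # replace with your images
y = np.random.randint(0, 2, 64)                            # binary labels

base = tf.keras.applications.ResNet101(weights="imagenet",
                                       include_top=False, pooling="avg")
feats = base.predict(tf.keras.applications.resnet.preprocess_input(X_img * 255))

clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500, random_state=0)
clf.fit(feats, y)
print(clf.score(feats, y))

As to the "why": deeper is not automatically better for fixed-feature transfer. ResNet152's extra depth helps on ImageNet, but its features are not guaranteed to separate your particular classes better than ResNet101's, and an MLP can exploit non-linear combinations of the 2048-D features that simpler classifiers cannot. Cross-validated comparisons with confidence intervals would show whether the gap is even statistically meaningful.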
Dear colleagues,
I am trying to find the best method with neural networks to classify and count cells from microscope feedback images. I need to create a simple network as a start; I do not want a complicated pre-trained model.
TIA :)
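A minimal starting point, assuming small grayscale patches and two classes (all sizes are assumptions to adapt):

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),  # e.g. cell vs. background
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

Counting cells is usually a separate step: detect or segment candidates first (even simple thresholding plus connected components), then classify each candidate patch with a network like this and count the detections.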
Can we retrain AlexNet or GoogLeNet on images they have not been trained on before, or can they only be used for classifying the images they were trained on?
Can I retrain them using different parameters (learning rates) and different weights?
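Yes, this is exactly transfer learning: you keep the pretrained weights, replace the final classification layer, and retrain with your own learning rates (often a smaller one for the backbone). A sketch with a recent torchvision (MATLAB's Deep Learning Toolbox supports the same workflow for its alexnet/googlenet models):

import torch
import torch.nn as nn
from torchvision import models

net = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
net.fc = nn.Linear(net.fc.in_features, 5)  # hypothetical 5-class problem

optimizer = torch.optim.SGD([
    {"params": net.fc.parameters(), "lr": 1e-2},          # new head: larger LR
    {"params": [p for n, p in net.named_parameters()
                if not n.startswith("fc")], "lr": 1e-4},  # backbone: small LR
], momentum=0.9)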
People migrate for many reasons. These reasons can be classified as economic, social, political, or environmental; economic migration means moving to find a job or follow a certain career path.
Migration can expose hosts to a greater number of infectious diseases, as migrants cover a larger area and visit more habitats than residents. However, because long-distance movement is energy-consuming, migration can have a devastating effect on infected hosts, thereby reducing the risk of infection.
I'm trying to use some machine learning (an autoencoder?) to help classify various data as pass/fail. The data will be RF data such as frequency responses, etc. Does anyone have any recommendations on methods to best accomplish this?
Regards,
Adam
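One standard recipe is to train an autoencoder on passing units only and flag anything that reconstructs poorly. A sketch (the sizes and stand-in data are assumptions; one frequency response would be one row):

import numpy as np
from tensorflow.keras import layers, models

n_features = 128                                            # e.g. points of a frequency response
X_pass = np.random.rand(500, n_features).astype("float32")  # replace with real PASS data

ae = models.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(8, activation="relu"),  # bottleneck
    layers.Dense(32, activation="relu"),
    layers.Dense(n_features, activation="linear"),
])
ae.compile(optimizer="adam", loss="mse")
ae.fit(X_pass, X_pass, epochs=50, batch_size=32, verbose=0)

err = np.mean((X_pass - ae.predict(X_pass)) ** 2, axis=1)
threshold = np.percentile(err, 99)  # tune on held-out data
# A new unit fails if its reconstruction error exceeds the threshold.

If you also have labeled FAIL examples, a plain supervised classifier on spectral features is worth comparing against before committing to the autoencoder route.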
I want to apply DEA to measure the efficiency of different DMUs with respect to an undesirable output that should be reduced rather than increased; that is, DMUs with minimal output should be classified as efficient instead of the ones with maximum output.
I have two input variables and one output variable, and I've already decided to use an output-oriented model. How do I transform the output variable correctly?
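One common workaround in the DEA literature (e.g., Seiford and Zhu's linear translation) is to replace the undesirable output with a decreasing transformation, so that "less bad" becomes "more good" and the standard output-oriented model applies unchanged. A sketch:

import numpy as np

y_bad = np.array([12.0, 5.0, 30.0, 18.0])  # hypothetical undesirable output

y_good = y_bad.max() + 1 - y_bad  # linear translation: keeps the model linear
y_recip = 1.0 / y_bad             # reciprocal: an alternative, non-linear rescaling

print(y_good)  # the DMU with the smallest bad output now has the largest good output

Some formulations instead treat the undesirable output as an input; which choice is defensible depends on the production interpretation, so it is worth citing the transformation you adopt.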
Hello everyone,
I'm working on forecasting photovoltaic power, and I want to classify the day ahead as a clear day or a cloudy day. For that, I need to know which variables are used for this type of classification, or perhaps which models, or any other information.
It would be helpful if anyone knows an article, book, etc.
Thanks,
Saad
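A widely used predictor for this is the clear-sky index: measured GHI divided by modeled clear-sky GHI, where values near 1 with low variability indicate a clear day. A sketch with pvlib (site coordinates and times below are placeholders):

import pandas as pd
from pvlib.location import Location

site = Location(latitude=31.6, longitude=-8.0, tz="Africa/Casablanca")  # hypothetical site
times = pd.date_range("2023-06-01", "2023-06-02", freq="10min", tz=site.tz)
clearsky = site.get_clearsky(times)  # modeled clear-sky ghi/dni/dhi

# kc = ghi_measured / clearsky["ghi"]   # with your pyranometer series aligned to `times`
# A day with kc.mean() near 1 and low kc.std() can be labeled "clear"; the mean,
# the variability, and cloud-cover forecasts are typical inputs to a day-type classifier.
print(clearsky.head())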
Cross-Validation for Deep Learning Models
Hello dear colleagues,
My goal is to classify dense-forest tree species using drone images. Is it possible to classify dense point clouds? If yes, in what way and with what software?
Thank you.
How can I combine three deep learning classifiers in Python?
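The simplest combination is soft voting: average the softmax outputs of the three networks (optionally weighted) and take the argmax. A self-contained sketch with stand-in probabilities:

import numpy as np

def soft_vote(prob_list, weights=None):
    """Average (optionally weighted) softmax outputs of several classifiers."""
    probs = np.average(np.stack(prob_list), axis=0, weights=weights)
    return probs.argmax(axis=1)

# Stand-in: softmax outputs of three models on 4 samples with 3 classes.
p1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4], [0.5, 0.4, 0.1]])
p2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.6, 0.2], [0.2, 0.5, 0.3], [0.4, 0.4, 0.2]])
p3 = np.array([[0.8, 0.1, 0.1], [0.1, 0.7, 0.2], [0.1, 0.2, 0.7], [0.3, 0.5, 0.2]])

print(soft_vote([p1, p2, p3]))
print(soft_vote([p1, p2, p3], weights=[2, 1, 1]))  # weight a stronger model more
# With Keras models m1, m2, m3: prob_list = [m.predict(X) for m in (m1, m2, m3)]

Stacking (training a meta-classifier on the three models' outputs) is the usual next step if soft voting is not enough.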
I have a set of data that includes:
Gender: categorical (classified as IV in JASP)
Ethnicity: categorical (classified as IV in JASP)
Congruent: continuous data (classified as DV in JASP)
Incongruent: continuous data (classified as DV in JASP)
I have been asked the following questions:
Is there a significant interaction between ethnicity and implicit association?
I am struggling to choose the correct test; I am trying an ANOVA, but I actually don't know what I should measure to answer the question!
Is it the interaction between ethnicity and gender? What about the congruency data?
Some journals state their acceptance rate on their websites (e.g., 12%).
I have a collection of sentences that is in an incorrect order. The system should output the correct order of the sentences. What would be the appropriate approach to this problem? Is it a good approach to embed each sentence into a vector and classify the sentence using multiclass classification (assuming the length of the collection is fixed)?
Please let me know if there can be other approaches.
I am proposing a study to see whether dress style can impact juror perceptions of a defendant during a trial.
So my IV is dress style (3 levels: unkempt, neutral, well dressed); this will be presented to participants using photos.
In regard to my DVs, I have 7 questions asking participants to rate the defendant's perceived guilt, trustworthiness, etc.
These will be presented on a 7-point Likert scale (1 = not trustworthy at all, 7 = completely trustworthy).
I want to see whether participant perceptions differ between the photos shown.
I'm struggling to determine what type of data a Likert scale is classified as and therefore cannot determine what type of test I should run.
Any advice would be greatly appreciated. Thank you
I would like to analyze the number (population) of MDSC in mouse spleen by FACS.
In general, since MDSCs co-express CD11b and Ly-6C and Ly-6G, would it be acceptable to determine the double positive population of Ly6C and Ly6G as MDSCs after gating the collected cells with CD11b+?
I am not sure what the difference is between gating CD11b+ cells and classifying them as Gr-1 (Ly6C/Ly6G)+ and Ly6G/Ly6C co-expressing cells.
Thank you in advance for your help.
My aim is to use six classifiers to test various ML tools and generate a model for each of them from the raw data (big data analytic tools on the data set).
Power Quality Disturbance (PQD) classification and detections algorithms constitute a decent part of the literature in Electrical Engineering. But, where are these classifiers and detectors useful?
Grid integration is done in accordance with the grid codes and that should restrict the PQDs in the grid. So, there may not be any significant PQDs present in the voltage signals obtained from the grid. Thus the use of PQD detection algorithms seems limited in this context.
Are there any other areas where these detection methods are widely used?
Detection by itself may not be useful unless it can lead to some control action correcting the disturbance. Are there any specific areas where such control scenarios exist?
Hello, I am very new to CNNs. I am currently working on classifying images with a CNN and transfer learning, and I am trying to learn in more detail. I'll be very grateful if you suggest some papers or resources. Thank you.
I am using two different methods, a neural network (NN) and discriminant analysis of principal components (DAPC), to predict a binary variable from a big set of values. I obtain slightly different accuracies (number of correct guesses / total) from the two methods, and I would like to test whether this difference is statistically significant. I stumbled upon a paper by Benavoli et al. (2017) (https://jmlr.org/papers/v18/16-305.html), in which they describe the typical approach:
1. choose a comparison metric;
2. select a group of datasets to evaluate the algorithms;
3. perform m runs of k-fold cross-validation for each classifier on each dataset;
afterwards, it is possible to obtain m*k differences in accuracies between the two classifiers, and evaluate if the average of those differences is statistically different from 0 (using a paired t-test).
The problem is that I have only one dataset available (n = 74). I could run the k-fold cross-validation m = 100 times on the same dataset, changing only the seed of the pseudorandom number generator (otherwise I would get exactly the same accuracy values m times!), but would that be a correct approach? It sounds a lot like pseudo-replication. Thanks in advance for your answer!
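Repeated cross-validation on a single dataset is exactly the situation the corrected resampled t-test of Nadeau and Bengio (2003) targets: it inflates the variance estimate to account for the overlap between training sets, which is what makes the naive paired t-test over the m*k fold differences too liberal (the pseudo-replication you are worried about). A sketch:

import numpy as np
from scipy import stats

def corrected_resampled_ttest(diffs, n_train, n_test):
    """Nadeau & Bengio (2003) corrected paired t-test for repeated k-fold CV.

    diffs: per-fold accuracy differences between the two classifiers (length m*k).
    """
    diffs = np.asarray(diffs)
    J = len(diffs)
    var = diffs.var(ddof=1)
    se = np.sqrt((1 / J + n_test / n_train) * var)  # variance correction
    t = diffs.mean() / se
    p = 2 * stats.t.sf(abs(t), df=J - 1)
    return t, p

# Example: m = 10 runs of 5-fold CV on n = 74 -> n_test ~ 15, n_train ~ 59.
rng = np.random.default_rng(0)
diffs = rng.normal(0.02, 0.05, 50)  # stand-in; use your real NN-vs-DAPC differences
print(corrected_resampled_ttest(diffs, n_train=59, n_test=15))

Benavoli et al. also describe a Bayesian correlated t-test built on the same correlation assumption, which they recommend over the frequentist version.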
Hi, I am working on a biometric system based on EEG signals. I have a database of 100 users with 70 trials each. I have extracted the features for each user using wavelets and an AR model, and tried different classifiers such as SVM, k-NN, and MLP to classify them, but I got an accuracy of only up to 50%. Can someone please tell me which classifier or feature extraction algorithm to use to get better results?
What are the criteria followed to classify a propellant as 'green'?
How effective are teaching and learning through examples? Is there a specific term, phenomenon, or theoretical framework for this?
Could you please explain these two classification algorithms specifically, including their similarities and differences when applied in remote sensing image classification? Thanks a lot.
I would like to ask about salami slicing in research.
1. When do we categorize papers as salami slicing?
2. Some published papers are part of larger research that employs the same population, the same methodology, and the same timeframe, but different tools (measuring different concepts) to fulfill different research objectives. This saves time, cost, and resources while achieving multiple research objectives. Is this considered salami slicing, and is it ethically acceptable for publication?
3. How do we handle the situation in (2) so that the journal editors will not classify it as salami slicing (or as research partly published elsewhere with a different objective) and reject the submission?
Thank you.
If I use an autoencoder for anomaly detection based on reconstruction error, and I have two or more different classes or types of anomaly, my autoencoder can only detect an anomaly, not classify it. How can I classify, or give a probabilistic classification, after detection? Please give me some ideas on this. Thank you.
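One practical scheme is two-stage: the autoencoder flags anomalies via reconstruction error, then a small supervised model assigns a type (with probabilities) to the flagged samples, trained on whatever labeled anomalies you have. Inputs can be the encoder's latent codes or the per-feature reconstruction-error profile. A sketch with stand-in latent codes:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-in: 8-D latent codes of flagged anomalies of two types;
# in practice z_anom = encoder.predict(X_flagged).
z_anom = np.vstack([rng.normal(0, 1, (40, 8)), rng.normal(3, 1, (40, 8))])
y_type = np.array([0] * 40 + [1] * 40)

type_clf = LogisticRegression(max_iter=1000).fit(z_anom, y_type)
print(type_clf.predict_proba(z_anom[:3]))  # probabilistic type assignment

If you have no labels at all, clustering the flagged samples' error profiles (e.g., k-means) can separate anomaly types, with cluster distances converted to soft assignments.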
Can anyone help me set the right parameters in the search area on the ADNI website?
I intend to use PET and MRI as fused modalities to classify AD.
MRI subjects must have PET.
The following is the PR curve obtained using different kernels of an SVM classifier.
How can it be interpreted? Does it indicate good classification performance?
Hi, I'm looking to learn how to classify urban elements, mainly green areas, at a detailed level, using only high-resolution images, but I still can't find a clear way to do it (I know it's been done with LiDAR, but I don't have the equipment).
Does anyone have any manual or tutorial?
I will be eternally grateful
In the literature, meta-heuristic algorithms are commonly classified into four groups: (1) swarm-based, (2) physics-based, (3) human-behavior-based, and (4) evolutionary-based. I was wondering what the latest evolutionary-based algorithms (which employ evolutionary operators) developed in the last couple of years are, since as far as I can tell, both swarm- and physics-based algorithms have received most of the attention lately. Thanks in advance.
I am using a decision tree as a regression model (not as a classifier) on my continuous dataset in Python. However, I get a mean squared error equal to ZERO. Does this mean my model is overfitting, or is such a result possible?
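A training MSE of exactly zero is the expected behavior of an unconstrained decision tree: it keeps splitting until every leaf is pure, i.e., it memorizes the training data. The number that matters is the error on held-out data:

from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=200, n_features=5, noise=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)
print(mean_squared_error(y_tr, tree.predict(X_tr)))  # 0.0: memorized training set
print(mean_squared_error(y_te, tree.predict(X_te)))  # the real generalization error

Limiting max_depth or min_samples_leaf (or switching to a random forest) is the usual remedy.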
Hello everyone,
I would like to know whether there is an R or Python package (or something similar) to classify plant species according to their use (food, construction, medicinal, ...), habitat, native climate, life cycle, whether they are crops or wild, and vegetation type.
I have been doing this manually using the "Plants of the World Online" and "Encyclopedia of Life" websites, but I am afraid of making mistakes, particularly for native climate.
Could someone please suggest something?
Thank you very much for your attention.