Science topic

Classifier - Science topic

Explore the latest questions and answers in Classifier, and find Classifier experts.
Questions related to Classifier
  • asked a question related to Classifier
Question
4 answers
Land use Classifications of different types of water bodies through hyperspectral data.
Can I distinguish coastal waters, inland waters, ponds, and inland aquaculture?
Relevant answer
Answer
It is possible to distinguish different types of water bodies using hyperspectral data, including coastal waters, inland waters, ponds, and inland aquaculture. Hyperspectral imaging measures the reflectance of water bodies across a wide range of wavelengths, which can be used to identify different water properties and characteristics.
Different water bodies have unique spectral signatures, which can be analyzed to identify their specific land use classifications. For example, coastal waters typically have high reflectance in the blue and green parts of the spectrum due to the presence of phytoplankton and other water constituents. Inland waters may have different spectral characteristics depending on their location, depth, and surrounding vegetation.
Ponds and inland aquaculture can also be distinguished using hyperspectral data by analyzing the reflectance patterns of water and surrounding vegetation. For example, ponds may have different spectral signatures depending on their size, depth, and the type of vegetation present around them. Inland aquaculture, on the other hand, may have unique spectral signatures based on the type of fish species and feed used, as well as water quality and management practices.
However, accurate classification of water bodies using hyperspectral data requires careful data processing and analysis, as well as accurate ground truthing data for calibration and validation. Additionally, it is important to consider the specific environmental and management factors that may affect the spectral characteristics of different water bodies, in order to accurately classify them.
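As a toy illustration of the supervised-classification step described above, here is a minimal nearest-centroid spectral classifier in NumPy. All spectra here are synthetic and purely hypothetical; in real work the class signatures would come from ground-truthed training data, as the answer notes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands = 50  # hypothetical number of hyperspectral bands

# Made-up mean reflectance spectra for four illustrative water classes.
# Real signatures must be derived from calibrated, ground-truthed imagery.
centroids = {
    "coastal":     np.linspace(0.30, 0.05, n_bands),
    "inland":      np.linspace(0.20, 0.10, n_bands),
    "pond":        np.linspace(0.15, 0.12, n_bands),
    "aquaculture": np.linspace(0.25, 0.02, n_bands),
}

def classify_pixel(spectrum, centroids):
    """Assign a pixel spectrum to the class with the nearest mean spectrum."""
    return min(centroids, key=lambda c: np.linalg.norm(spectrum - centroids[c]))

# A noisy pixel drawn around the 'pond' signature maps back to 'pond'.
pixel = centroids["pond"] + rng.normal(0, 0.005, n_bands)
print(classify_pixel(pixel, centroids))
```

Real classifiers (SVM, random forest, spectral angle mapper) replace the nearest-centroid rule, but the workflow of comparing a pixel spectrum to reference signatures is the same.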
  • asked a question related to Classifier
Question
1 answer
I want to classify rock minerals
Relevant answer
Answer
Please, could you provide more details? What type of images do you have?
  • asked a question related to Classifier
Question
1 answer
Hi
I now have three sets of vehicle signal sampling data (fs=100Hz):
Lateral acceleration
Longitudinal acceleration
Steering wheel angle
I would like to classify the vehicle motion behavior.
How to extract feature values from these three signals?
Looking forward to the ANSWER!
Thanks!
Relevant answer
Answer
Here are some common feature extraction techniques:
  1. Statistical features can be extracted from acceleration and steering signals:
  • Mean
  • Standard deviation
  • Maximum and minimum values
  • Skewness and Kurtosis
  • Variance
  • Root mean square (RMS)
  • Median and quartiles
  2. Time-domain features can be extracted from acceleration and steering signals:
  • Zero-crossing rate (ZCR)
  • Autocorrelation
  • Entropy
  • Energy
  • Signal magnitude area (SMA)
  • Wavelet Transform
  3. Frequency-domain features can be extracted from acceleration and steering signals:
  • Spectral centroid
  • Spectral flatness
  • Spectral roll-off
  • Spectral entropy
  • Power spectral density (PSD)
  4. Dynamic features can be extracted from acceleration and steering signals:
  • Jerk
  • Snap
  • Crackle
  • Pop
  • Impulse
Hope this will help to start your productive work with collaboration.
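As a minimal illustration of a few of the statistical and time-domain items above, here is a NumPy sketch computing them for one channel. The 1 Hz test sine and fs = 100 Hz are arbitrary stand-ins for a real acceleration or steering signal:

```python
import numpy as np

def statistical_features(signal):
    """Basic statistical features for one channel (e.g. lateral acceleration)."""
    q1, med, q3 = np.percentile(signal, [25, 50, 75])
    return {
        "mean": np.mean(signal),
        "std": np.std(signal),
        "min": np.min(signal),
        "max": np.max(signal),
        "rms": np.sqrt(np.mean(signal ** 2)),   # root mean square
        "median": med,
        "iqr": q3 - q1,                          # interquartile range
    }

def zero_crossing_rate(signal):
    """Fraction of consecutive sample pairs where the sign changes."""
    return np.mean(np.abs(np.diff(np.sign(signal))) > 0)

# Example: 2 s of a synthetic 1 Hz sine sampled at fs = 100 Hz.
fs = 100
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 1.0 * t)
feats = statistical_features(x)
```

In practice these features are computed per sliding window (e.g. 1-2 s windows with overlap) and concatenated across the three channels to form the feature vector fed to the classifier.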
  • asked a question related to Classifier
Question
1 answer
Hello I used online survey collected by Qualtrics panels.
I am wondering how Qualtrics panel samples are classified. Commonly used non-probability sampling techniques include convenience sampling, judgemental sampling, quota sampling and snowball sampling.
Relevant answer
Answer
As far as I know, it depends on how you decide to have the company take the sample. In any case it will be a non-probability sample, so standard statistical methods probably don't apply. That's the real problem. Best wishes, David Booth
  • asked a question related to Classifier
Question
1 answer
For example, Shannon's Index is 2.45; is it high or low?
Relevant answer
Answer
There is no universally agreed-upon classification scheme for Shannon's Diversity Index that categorizes it as "high", "medium", "low", or "very low". The interpretation of Shannon's Diversity Index can vary depending on the context in which it is used and the specific values obtained.
Shannon's Diversity Index ranges from 0 to a maximum value, which depends on the number of species or categories being analyzed and their relative abundances. Higher values of Shannon's Diversity Index generally indicate greater diversity or evenness of the distribution of species or categories, while lower values indicate lower diversity or greater dominance of certain species or categories.
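Since the maximum of H' for S categories is ln(S), a common way to put a value such as 2.45 in context is Pielou's evenness J' = H'/ln(S), which rescales the index to [0, 1]. A minimal NumPy sketch with made-up counts:

```python
import numpy as np

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over observed categories."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log(p)))

# With four equally abundant species, H' reaches its maximum ln(4) ~ 1.386;
# a strongly dominated community gives a much lower value.
even = shannon_index([25, 25, 25, 25])
dominated = shannon_index([97, 1, 1, 1])
evenness = even / np.log(4)   # Pielou's J' = 1.0 here
```

So whether 2.45 is "high" depends on the number of categories: with 12 species the maximum is ln(12) ~ 2.48, making 2.45 nearly maximal, while with 100 species the same value is far from the ceiling.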
  • asked a question related to Classifier
Question
3 answers
Dear Clinicians/Researchers working on sleep disorders,
We are working on the automated detection of sleep disorders using EEG. While we can get good accuracy with binary classifiers distinguishing normal subjects from any one particular sleep disorder, the obtained accuracy is very low when we attempt multiclass classifiers, where the classifier is expected to identify a specific sleep disorder from the following list: insomnia, narcolepsy, REM behavior disorder, bruxism, nocturnal frontal lobe epilepsy, periodic leg movement syndrome, and sleep-disordered breathing. I would like to know from the clinicians: (i) What features do they look for in identifying the above disorders (differential diagnosis)? (ii) Is EEG a meaningful signal to use for this purpose? If not, what other signals could be used? (iii) How is a clinical diagnosis performed?
I shall be very grateful for any useful input in this direction.
Best regards.
Ram
Relevant answer
Answer
  • asked a question related to Classifier
Question
5 answers
This question seeks to reflect on the possible causes that drive and originate inequality worldwide. How is it possible that there are developed countries where the quality of life is high, while other countries, classified as underdeveloped or developing countries, cannot even provide drinking water? What is happening in a world as globalized as ours to allow this to continue happening?
Relevant answer
Answer
The perpetual coexistence of developing countries (with high income inequality and poor standard of living) and developed countries (with low income inequality and high standard of living) can be explained under the following headings.
*1. Technological Advancement and Colonization.*
As a result of the development of superior military and industrial technology, the Western capitalist powers were able to colonize less technologically advanced segments of the world, thereby giving developed countries an edge in international trade.
*2. Old International Division of Labour.*
Given colonization and technological advancement, trade between developed economies (colonial masters) and developing countries (colonies) was based on the old international division of labour, in which developing countries specialized in exporting primary products (and importing manufactured goods) while developed countries specialized in exporting manufactured goods (and importing raw materials).
Developing economies' dependence on primary exports implies little or no value addition, which in turn inhibits job creation, development of infrastructure, etc. These negative effects of dependence on primary exports further worsen global inequality.
*3. Post Colonization and New International Division of Labour.*
After attainment of political independence from the colonial masters, one would expect the indigenous leaders of the erstwhile colonies to re-write their history and progress from dependence on primary export to manufactured export (new international division of labour).
However, the few elites in developing countries (for personal interest) chose to maintain the status quo at the expense of the masses. Consequently most developing countries only got political independence and not economic independence.
*Conclusion.*
Global inequality will persist until developing countries metamorphose from primary-export dependent economies to manufactured export dependent economies.
  • asked a question related to Classifier
Question
2 answers
Hey guys!!
I’m trying to import my classifier data into MorphoJ to my current dataset, but I keep getting the report: “possible problem: different numbers of classifiers, ranging from 0 to 3“. I’ve triple-checked to make sure everything lines up properly and that it’s accurate, but I must be missing something... any ideas?
Relevant answer
Answer
To resolve this issue, first check whether your classifier data contains the same number of classifiers as the current dataset in MorphoJ. If the number of classifiers is not consistent, you may need to modify your classifier data to align with the current dataset.
It may also be helpful to double-check the format of your classifier data to ensure that it is in a format compatible with importing into MorphoJ. You can check the MorphoJ manual or website for information on the supported file formats for importing data.
If you've already tried these steps and are still encountering the same issue, you may want to consider reaching out to the MorphoJ support team for further assistance. They may be able to provide more specific guidance on how to resolve the issue you're encountering.
  • asked a question related to Classifier
Question
4 answers
I want to characterize soil types from images based on deep learning
Relevant answer
Answer
Follow these steps:
1. First, you'll need to gather a collection of soil photos to categorize. This dataset should be divided into two parts: training and testing.
2. The photos must then be pre-processed, e.g. by resizing them and normalizing the pixel values.
3. After that, import the VGG19 model and remove the final fully connected layer.
4. Then add new layers to the model and train it using the training dataset of soil images.
5. Once the model is trained, you can test it on the testing dataset of soil images and evaluate the performance using metrics such as accuracy, precision, recall, and F1-score.
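The steps above can be sketched with Keras/TensorFlow. This is a minimal illustration, not a complete pipeline: the number of classes (5), the input size (224x224), and the classification head are all hypothetical, and `weights=None` is used here only to avoid a download; in practice you would load `weights="imagenet"` so that transfer learning actually reuses pre-trained features.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

num_classes = 5  # hypothetical number of soil classes

# Step 3: load VGG19 without its fully connected top layers.
# Use weights="imagenet" in practice; None here avoids downloading weights.
base = VGG19(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional feature extractor

# Step 4: add a new classification head on top of the frozen base.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Step 5 would then be: model.fit(train_images, train_labels, ...) followed by
# model.evaluate(test_images, test_labels) plus precision/recall/F1 reporting.
```

Precision, recall, and F1 per class can be computed afterwards from the predicted labels, e.g. with `sklearn.metrics.classification_report`.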
  • asked a question related to Classifier
Question
2 answers
How can logs (i.e., small data such as network packet headers) be classified for the purpose of anomaly detection?
Relevant answer
Answer
Thank you :)
  • asked a question related to Classifier
Question
2 answers
Dear all,
I am performing analysis of 16S rRNA amplicon sequencing data. I have tested the effectiveness of two classifiers on the mock community, and the BLAST classifier shows the best result. However, I found out BLAST uses a local sequence alignment. So I do not know if it is appropriate to use this classifier to assign a "mystery" sequence to a bacterial taxon. Is it possible that this approach will result in false-positive results? Is it better to use the Vsearch classifier, which showed worse results but uses a global sequence alignment?
And a bonus question. Should I use rarefied representative sequences to perform a taxonomy classification or not? I use rarefied data for alpha diversity testing (and for beta diversity testing I do not).
Thank you all for answers!
Martin
Relevant answer
Answer
  1. It is not rRNA amplicons but rRNA gene amplicons
  2. You have amplicons which are probably 300-400 bp long; why do you think global alignment is better in this case?
  3. For rarefaction, read the following and decide yourself.
  • asked a question related to Classifier
Question
1 answer
As we know, renal cell carcinomas are now genetically classified, and various surrogate immunohistochemical stains are now being used to do that, in addition to FISH testing and other sophisticated methods of genetic testing. What should be the minimum panel of immunostains available in order to be able to classify renal tumors? A few examples are SDH, FH, TFE3, TFEB, and Cathepsin K, in addition to conventional markers. Secondly, what should be the panel of FISH probes available in addition to immunostains? Thanks
Relevant answer
Answer
Renal cell carcinomas (RCCs) are a type of cancer that affects the cells of the kidney. These tumors are now commonly classified based on their genetic characteristics, and various immunohistochemical stains and other genetic testing methods are used to identify the specific subtype of RCC.
In terms of the minimum panel of immunostains that should be available for classifying RCCs, it is important to have a range of markers that can help to identify the specific subtype of RCC and guide treatment decisions. Some commonly used immunostains for RCC include:
  1. SDH (succinate dehydrogenase): This marker is used to identify RCCs that have a mutation in the SDHx gene, which is associated with the succinate dehydrogenase subtype of RCC.
  2. FH (fumarate hydratase): This marker is used to identify RCCs that have a mutation in the FH gene, which is associated with the fumarate hydratase subtype of RCC.
  3. TFE3 (transcription factor E3): This marker is used to identify RCCs that have a translocation involving the TFE3 gene, which is associated with the TFE3 subtype of RCC.
  4. TFEB (transcription factor EB): This marker is used to identify RCCs that have a translocation involving the TFEB gene, which is associated with the TFEB subtype of RCC.
  5. Cathepsin K: This marker is used to identify RCCs that have a mutation in the Cathepsin K gene, which is associated with the Cathepsin K subtype of RCC.
In addition to these markers, it may also be helpful to have access to other conventional markers, such as CD10, vimentin, and PAX2, which are commonly used to differentiate RCCs from other types of renal tumors.
In terms of FISH probes, some commonly used probes for RCC include those that target specific genetic abnormalities,
  • asked a question related to Classifier
Question
3 answers
I have a dependent variable: childcare take up, with answers 1 or 0.
I have several other independent variables such as education level, household income, migration background, etc. I have dichotomized all of them. For example, observations with college education have been classified as 1, otherwise 0.
I would like to know which regression model would be the best to predict the childcare take up.
Relevant answer
Answer
For dichotomous dependent variables, you could use logistic regression.
However, I would discourage you from dichotomizing your independent variables (leave them in their original metric). Dichotomizing leads to a loss of information and can reduce statistical power. There is almost never a need to dichotomize variables from a statistical perspective because there are many statistical procedures available that can handle ordinal-polytomous and/or continuous (metrical, interval-scale) variables. When you type "dichotomizing continuous variables" into Google Scholar, you will find a large number of relevant references that explain why this is a bad idea.
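A minimal sketch of the logistic-regression recommendation, in plain NumPy with entirely made-up synthetic data, keeping both predictors in their continuous metric rather than dichotomizing them. The coefficients, sample size, and variable scales below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical synthetic data: childcare take-up (1/0) driven by years of
# education and household income, both kept as continuous predictors.
n = 500
education = rng.normal(14, 3, n)              # years of schooling
income = rng.normal(40, 15, n)                # household income (e.g. k/year)
true_logit = -9.0 + 0.4 * education + 0.08 * income
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Minimal logistic regression fitted by gradient descent on standardized inputs.
predictors = np.column_stack([education, income])
predictors = (predictors - predictors.mean(axis=0)) / predictors.std(axis=0)
X = np.column_stack([np.ones(n), predictors])  # intercept + predictors
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * (X.T @ (p - y)) / n             # negative log-likelihood gradient

accuracy = float(np.mean(((1 / (1 + np.exp(-X @ w))) > 0.5) == y))
```

In applied work you would use a standard routine (e.g. `glm` in R or `statsmodels`/`scikit-learn` in Python) rather than hand-rolled gradient descent; the point here is only that both predictors enter the model untransformed.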
  • asked a question related to Classifier
Question
4 answers
I tried to download the daily data from India-WRIS for the Bagmati river discharge stations, but it says the locations are classified: "Please login to access data." And I don't have login credentials. Kindly let me know if anyone has some idea on it, as I need it urgently for my research work.
Relevant answer
Answer
The Central Water Commission of India provides daily discharge data of Indian rivers on its website. The data can be accessed at http://www.cwc.gov.in/main/hydrometric_observations.html.
  • asked a question related to Classifier
Question
5 answers
In Deep Learning Classifier.
Relevant answer
Answer
Both are commonly used; however, for data mining people use tabular data, whilst for computer vision applications image data is efficient and usual.
  • asked a question related to Classifier
Question
1 answer
I need a reasonable answer from the scholars.
Relevant answer
Answer
I found the Semi-Automatic Classification Plugin in QGIS very useful and easy to use. It is open-source, easily accessible, and can be learned through YouTube tutorials.
Here's one from the creator:
  • asked a question related to Classifier
Question
2 answers
Hi,
I want to classify Sentinel-2 images using QGIS and SVM. Please provide me with guidelines/instructions or a video tutorial link. I'm new to QGIS.
It is favored that the SVM classifier algorithm is executed without coding.
Thank you.
Relevant answer
Answer
You can find the SVM classifier as a ready-made algorithm in ENVI, or you can use QGIS with the OpenCV library, which has ready-made code for some machine learning algorithms.
  • asked a question related to Classifier
Question
3 answers
Please let me know how to calculate runtime cost/ efficiency of SVM classifier
Relevant answer
Answer
The same argument applies to machine learning algorithms. Selecting the best ML algorithm in terms of computational complexity for a given problem is an important task for an ML practitioner. This article is very specific to the context of SVM.
Regards,
Shafagat
  • asked a question related to Classifier
Question
3 answers
Hello Everyone,
In the SWAT HRU definition part, we need to define slope classes, i.e. either single or multiple.
I have a mountainous watershed, so I want to use the multiple slope option, but I am not clear on how many classes it should be classified into.
Moreover, what should be the thresholds (class upper limit, %) for the slope classes?
Please see the attached figure showing slope stats by SWAT and slopes calculated separately using the DEM.
any help will be much appreciated
Relevant answer
Answer
Misspelled: "reading comments" should be "roading". My computer thinks it knows what I mean; not always so.
  • asked a question related to Classifier
Question
2 answers
I am a beginner in this field. I want to learn basic audio deep learning for classifying audio. If you have articles or tutorial videos, please send me the link. Thank you very much.
Relevant answer
Answer
Hi Tim Albiges
Wow. This is very helpful. Thank you so much.
  • asked a question related to Classifier
Question
14 answers
I'm using the 'survivalROC' package to draw a time-dependent ROC curve for some genes in my manuscript. The reviewer asked the following questions: "some of the single gene feature yields AUC < 0.5. A predictor that makes random guess gives an AUC score of 0.5. The aforementioned features seem to work even worse than random guess. It's either the classifier targets were labeled wrongly or the training algorithm was bad. The authors need to revisit the models, explain the inconsistency and make sure the validity holds". I am sure there is no error in the analysis. My interpretation is: if the curve is convex (AUC > 0.5), the marker is a poor prognosis factor; on the other hand, a concave (AUC < 0.5) marker is a good prognosis factor. Is my understanding correct? How should I reply to the reviewer?
Relevant answer
Answer
Time-dependent ROC curve analysis in medical research: current methods and applications
Adina Najwa Kamarudin, Trevor Cox and Ruwanthi Kolamunnage-Dona
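On the reviewer's point itself: an AUC below 0.5 means the score ranks the classes in the reverse direction, so the same marker with its sign flipped has AUC equal to 1 - AUC. A small self-contained sketch of the rank-based (Mann-Whitney) AUC illustrates this symmetry; the scores and labels below are made up, and ties are not handled:

```python
import numpy as np

def auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) formulation: the probability that a
    randomly chosen positive case scores higher than a random negative case."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = np.array([0, 0, 0, 1, 1, 1])
good = np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])  # higher score -> positive
a = auc(good, labels)    # perfect ranking
b = auc(-good, labels)   # same information, reversed sign
```

So a marker with AUC = 0.3 carries as much ranking information as one with AUC = 0.7; it is simply inversely associated with the outcome, which is consistent with interpreting it as a good-prognosis factor.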
  • asked a question related to Classifier
Question
7 answers
I need an EEG dataset to train a model for classifying AD and healthy controls for my research. Can anyone suggest where I can find these?
Relevant answer
Answer
You can request here to get the dataset:
  • asked a question related to Classifier
Question
3 answers
I am aware that PCs are 2-D groups of clustered data which explain variance in a dataset, but I am unsure how I can refer to PC1 and PC2 when reporting on differences in the produced trends. Since PC1 and PC2 explain the most variability in a set, I wanted to refer to these as the "dominant" and "subdominant" trends for my produced graphs, but am unsure if classifying PC1 and PC2 by the terms above is incorrect.
Any clarification on the differences between PC1 and PC2 and whether I can technically refer to these as "dominant" and "subdominant" trends would be appreciated.
Relevant answer
Answer
Hello Liam,
The extraction of components is such that the first PC will always be that linear combination of variables which accounts for the largest possible portion of total variance in the data set. As subsequent components must be linearly independent of previously extracted components, PC2 will always account for a lower proportion of total variance than the first extracted component, and so on. The k-th PC extracted (for a set of k variables) will account for the least amount of total variation.
Sometimes, you will encounter studies in which the researcher has chosen to rotate the chosen solution in multi-dimensional space (when the chosen solution has more than one component). This can and will redistribute the amount of variance associated with a given component, when compared to the amount associated with that component upon extraction. The total variance accounted for by a set (or subset) of components does not change due to rotation, just the allocation by individual components.
Rotated or not, whether the resultant components make conceptual sense is in no way guaranteed for your specific set of variables (and their specific method of quantification), given your specific sample. So, labels such as "dominant" may or may not be meaningful; as well, they can change if one rotates a given structure.
Good luck with your work.
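The extraction-order property described above (PC1 accounts for at least as much variance as PC2, and so on) can be verified directly with a small NumPy sketch on synthetic data built around one dominant direction of variation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: three variables sharing one strong latent trend plus noise.
n = 300
t = rng.normal(0, 3, n)                      # dominant latent trend
X = np.column_stack([t + rng.normal(0, 1, n),
                     0.5 * t + rng.normal(0, 1, n),
                     rng.normal(0, 1, n)])   # pure-noise variable

# PCA via eigen-decomposition of the covariance matrix.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)       # ascending order
eigvals = eigvals[::-1]                      # descending: PC1 first
explained = eigvals / eigvals.sum()          # proportion of total variance
```

By construction `explained` is non-increasing, which is the formal sense in which PC1 is the "dominant" trend; whether that trend means anything substantively is, as noted above, a separate question.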
  • asked a question related to Classifier
Question
5 answers
Hi Everyone,
I am struggling to understand the impact of submitting my work to the new journal. There are a few questions coming to my mind; some are related to the journal, and others are related to its future. It is publicly peer-reviewed, so it has some standards, but:
1. They classify themselves as a publishing platform. I am afraid that my work will later be classified as a report.
2. They don't have an impact factor now. I believe every journal gets one after three years, but on their website they say they will not apply for an impact factor in the future.
3. They publish articles right away after submission, and only after peer-review approval do they submit them to repositories. I am afraid that if reviewers take their time, the work can be used by someone else, or, in case it gets rejected, I cannot submit it somewhere else.
Relevant answer
Answer
  • asked a question related to Classifier
Question
7 answers
I just studied that binary classifiers are used for multiclass classification using OvO and OvA techniques. My question is why don't we use those algorithms that are specially designed for multiclass classification i.e. random forest classifiers or Naive Bayes classifiers etc.?
Relevant answer
Answer
This is an old question indeed. The one-vs-one approach was first proposed in 1990, as an application of the divide-and-conquer strategy, which has been very successful in many areas of science: turn a complex problem into a set of simple ones.
This article, which received 1200+ citations (Google Scholar count), describes the principle of one-vs-one classification, with a small academic example.
The first real-life application of one-vs-one classification was published in 1992:
The main reasons for the success of one-vs-one as opposed to one-vs-all are the following.
1) Assume that you want to solve an N class classification problem. In a one-vs-all approach, you have to discriminate 1 class from the N-1 others. If all classes have the same number of examples n, this is a classification problem with imbalanced classes, as class "1" has n examples, while class "N-1 others" has n(N-1) examples. Classification with imbalanced classes is a notoriously difficult problem; thus, the one-vs-all approach requires the design of N complex classifiers, with imbalanced classes. By contrast, the one-vs-one approach requires the design of N(N-1)/2 simple classifiers, with balanced classes.
2) A very important, seldom mentioned, advantage of the one-vs-one approach in real-life problems is the following: very often, the variables that are relevant for discriminating class A from class B are not the same as the variables that are relevant for discriminating class A from class C. Hence, for each one-vs-one classifier, one can perform variable selection in order to design a classifier that has only the relevant variables, whose number is very frequently much smaller than the total number of variables necessary for one-vs-all classification.
3) Once the one-vs-one classifiers are designed and tested, the global classification result can be derived, in closed form, by an exact relation:
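To make the one-vs-one scheme concrete, here is a minimal NumPy sketch: N(N-1)/2 pairwise classifiers, each a trivial nearest-class-mean rule standing in for whatever binary classifier one would really use, combined by majority vote. The three Gaussian classes are synthetic and purely illustrative:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

# Three well-separated synthetic classes in 2-D (illustrative only).
means = {0: (0, 0), 1: (6, 0), 2: (0, 6)}
X = np.vstack([rng.normal(m, 1.0, (50, 2)) for m in means.values()])
y = np.repeat([0, 1, 2], 50)

# One-vs-one: train N(N-1)/2 pairwise classifiers. Each stores the class
# means of its two classes; a real system would fit any binary classifier
# on just the examples of that pair (balanced classes, pair-specific features).
pair_models = {}
for a, b in combinations(means, 2):
    pair_models[(a, b)] = {c: X[y == c].mean(axis=0) for c in (a, b)}

def predict_ovo(x):
    """Each pairwise classifier casts one vote; the most-voted class wins."""
    votes = np.zeros(len(means), dtype=int)
    for (a, b), cents in pair_models.items():
        winner = min((a, b), key=lambda c: np.linalg.norm(x - cents[c]))
        votes[winner] += 1
    return int(np.argmax(votes))

acc = np.mean([predict_ovo(x) == c for x, c in zip(X, y)])
```

Majority voting is the simplest way to combine the pairwise decisions; more refined combinations (e.g. pairwise-probability coupling) exist, but the voting version already shows each binary problem being small and balanced, as described in point 1 above.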
  • asked a question related to Classifier
Question
3 answers
I want to classify yeast genes based on a particular phenotype. Please specify any genomic or phenotypic features for building the model.
  • asked a question related to Classifier
Question
5 answers
hello everyone,
My study assesses health care providers' knowledge, attitude, and interpretation of waves about a certain topic in a critical care unit.
Can I get advice about the following:
- How to determine the level of knowledge and classify it into satisfactory/unsatisfactory (scoring system 85%)?
- How to classify the attitude into positive/negative (scoring system 70%)?
NB: I use a 3-point Likert scale.
Any help with a detailed explanation for how to deal with the previously mentioned questions would be very appreciated.
Relevant answer
Answer
It sounds like your study is essentially descriptive. So, I would report the four quartiles for your measure and let the readers draw their own conclusions, because there is no hard and fast rule for what a "satisfactory" level of knowledge is.
  • asked a question related to Classifier
Question
6 answers
hello everyone,
My study assesses health care providers' knowledge, attitude, and interpretation of waves about a certain topic in a critical care unit.
Can I get advice about the following questions:
- How to determine the level of knowledge and classify it into satisfactory/unsatisfactory (scoring system 85%) using SPSS?
- How to classify the attitude into positive/negative (scoring system 70%) using SPSS?
NB: I use a 3-point Likert scale.
Any help with a detailed explanation for how to deal with the previously mentioned questions would be very appreciated.
Relevant answer
Answer
Heba Mohammed, if your response to David Morse's answer was asking how to do IRT within SPSS, this is addressed in https://www.ibm.com/support/pages/item-response-theoryrasch-models-spss-statistics though it will likely be easier in other software. The rest of his recommendations should be straightforward for anyone who has taken courses with SPSS.
  • asked a question related to Classifier
Question
17 answers
Hi everyone,
I did a classification study with a small number of samples (n=73). I compared different machine learning classifiers, but two of them, SVM with a polynomial kernel and with a Radial Basis Function kernel, produced an overall accuracy of 100%. Hence, is this finding acceptable? If unacceptable, why?
Relevant answer
Answer
There seems to be a widely shared misconception here: 73 samples is a "small" number. The number of samples alone is irrelevant. If you are performing classification in two dimensions (i.e. your classifiers have two variables), 73 samples may be quite enough; if your classifiers have 100 variables, 73 samples is a hopelessly small number. That crucial piece of information is not provided in your question, so that no scientifically valid answer can be provided. The only meaningful figure is the ratio of the number of samples to the complexity of the model, which must be as large as possible. Roughly, the complexity is the number of parameters of the model. In most models the number of parameters can be computed easily; in SVMs it can be estimated after training by the number of support vectors, which must be much smaller than the number of samples. Obviously the complexity of the model increases with the number of variables; hence the first step to take is not to try to increase the number of samples, which may be a lengthy and costly process, if at all possible. The first step is to make sure that all your variables are really relevant and discard irrelevant ones, i.e. perform variable selection with a serious, statistically meaningful variable selection method.
Another crucial piece of missing information: do you get 100% accuracy on the training set or on the test set? 100% accuracy can be very easily obtained on the training set, by using a model with a large number of parameters ("large" meaning as large as, or larger than, the number of samples of the training set). Getting 100% accuracy on a test set, i.e. a set of data that were never used at any step of the model design, is very unusual when dealing with real experimental data. If that happens, all the steps of the design method must be checked very carefully with a critical eye.
Another critical piece of missing information: are your classes balanced, i.e. do they have roughly the same number of samples? If they are seriously imbalanced, you can very easily obtain near-100% accuracy on the training set and a disaster on the test set.
My advice is the following: turn off your computer, go to the library of your university and read a couple of good textbooks on machine learning and classification; my first choice would be: C. Bishop - Pattern Recognition And Machine Learning - Springer 2006. It is rightfully said that one week in the library may save you months in front of your computer, trying this and that, following advice from blogs written by self-proclaimed experts, without really understanding what's going on.
  • asked a question related to Classifier
Question
4 answers
Can I get an email address for a journal specialized in strategic management classified under Scopus?
Relevant answer
Answer
There are many journals specialized in strategic management, such as the Journal of Strategic Studies and Strategy & Leadership. These journals are listed in Scopus.
  • asked a question related to Classifier
Question
5 answers
I have to classify urban growth into infill, sprawl, ribbon, and scattered development. All the literature gives a theoretical base, but I couldn't find tutorials for a practical approach. Can anyone aid me in finding materials for the same?
Relevant answer
Answer
Hello,
Landscape metrics calculated through Fragstats may be of help. There are some indices regarding the spatial pattern analysis.
  • asked a question related to Classifier
Question
2 answers
I am looking for references that describe the acceptable ranges of the properties of edible oils from oilseed crops (vegetable oil). What qualities are important and detrimental in selecting an oil as an edible oil?
Your kind contributions are appreciated.
Relevant answer
Answer
It should not contain toxic materials or a high percentage of saturated fatty acids. Regards.
  • asked a question related to Classifier
Question
4 answers
Does anyone have experience classifying timber harvesting sites based on logging operations and site attributes? I want to categorise timber harvesting sites based on logging operations. This categorisation should be done prior to the logging, and site characteristics such as terrain, obstacles, harvest/stocking, distance to the main road, understory, etc. have to be accounted for in classifying the difficulty of logging.
Relevant answer
Answer
Some models could be used, such as artificial neural networks and their derivatives.
  • asked a question related to Classifier
Question
5 answers
Negotiation types
Classification
Relevant answer
Answer
Yes, there is. You can now assess all types of negotiations in all scenarios. I addressed this issue in THE FOUR-TYPE NEGOTIATION MATRIX: A MODEL FOR ASSESSING NEGOTIATION PROCESSES. Check it out:
  • asked a question related to Classifier
Question
3 answers
Should we consider the top 100 ft (30 m) from the surface, or from the bottom of the mat/spread footing? My thoughts: the site class should reflect the soil conditions that affect the ground motion input to the structure, or to a significant portion of the structure. For structures that receive substantial ground motion input from shallow soils (e.g., structures with shallow spread footings, with laterally flexible piles, or with basements where substantial ground motion input may come through the side walls), it is reasonable to classify the site on the basis of the top 100 ft (30 m) of soil below the ground surface. Conversely, for structures with basements supported on firm soils or rock below soft soils, it may be reasonable to classify the site on the basis of the soils or rock below the mat/spread footing.
Relevant answer
Answer
It's correct. Ref: FEMA 450 commentary and tall building (taranath)
  • asked a question related to Classifier
Question
7 answers
I'm currently working on a study with a within-subjects ANOVA design. To summarize:
To examine the string length effect on the memory task and only search task conditions, the means of RTs (response times) for the correctly classified short and long strings with 0, 1 or 2 targets were calculated for each subject.
To examine this effect I conducted an ANOVA design which goes as follows: 2 x 3 x 2 (task type: only search task vs. memory task; string length: short vs. long; number of targets: 0, 1, 2).
Dependent variable: Response Times (RT).
I have a question of how should I interpret the effect sizes of an varianz analysis with multiple factors within subjects.
Can someone help me?
Results:
- There were significant, large main effects of task type, F(1, 89) = 25.10, p < .001, η²ₚ = .22, number of targets, F(1.66, 147.92) = 206.12, p < .001, η²ₚ = .70, and string length, F(1, 89) = 851.35, p < .001, η²ₚ = .90.
- The interaction between number of targets and string length, F(1.66, 147.65) = 55.47, p < .001, η²ₚ = .38, was also significant with a large effect size. This indicates that the number of targets contained in a task had different effects on the subjects' RTs depending on the length of the numeric string.
- The interactions between task type and number of targets, F(2, 178) = 4.63, p = .011, η²ₚ = .05, and between task type and string length, F(1, 89) = 4.13, p = .045, η²ₚ = .04, were significant but small.
- Ultimately, there was a significant three-way interaction between task type, number of targets and string length, F(2, 178) = 6.61, p = .002, η²ₚ = .07. This implies that the task type (only-search vs. memory task) had different medium-sized effects on the subjects' RTs depending on the number of targets (0 vs. 1 vs. 2) contained in the task and the length of the numeric strings (short: 5 digits vs. long: 10 digits).
All effects and interactions are significant, but what does that mean? How should I report and explain the significant interaction effects?
I attached the response time (RT) means.
Thanks for your help in advance
Best Regards!
Relevant answer
Answer
If you have only looked at correct RTs, then there can be some issues with just using the means, as the cell sizes will be unbalanced. One option to consider is modeling the individual trials in a multilevel model, or in models such as the diffusion model (which can cope with modeling accuracy and RT simultaneously).
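As a sanity check on the reported effect sizes: partial eta-squared can be recovered directly from each F statistic and its degrees of freedom via η²ₚ = (F · df_effect) / (F · df_effect + df_error). A minimal sketch reproducing two of the values from the question:

```python
def partial_eta_squared(f_stat, df_effect, df_error):
    """Partial eta-squared recovered from an F statistic and its dfs."""
    return (f_stat * df_effect) / (f_stat * df_effect + df_error)

# Task type main effect, F(1, 89) = 25.10:
print(round(partial_eta_squared(25.10, 1, 89), 2))          # 0.22
# Number of targets, F(1.66, 147.92) = 206.12:
print(round(partial_eta_squared(206.12, 1.66, 147.92), 2))  # 0.70
```

Since the reported values match this formula, they are partial eta-squared, which is the appropriate symbol to report for a multi-factor within-subjects design.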
  • asked a question related to Classifier
Question
1 answer
I am using human skin metagenome data.
I was trying to use the SILVA classifier instead of the GreenGenes classifier, but my computer hung and the command got killed.
Relevant answer
Answer
Hey!
What do you mean by "empty phylum/order/genus/species"? Unassigned taxa? I'll assume so.
If so, it's expected that some OTUs can't be identified precisely, because most microbes are not yet known/described/sequenced (so there are no rRNA sequences for them available in databases).
Also, some taxa can't be resolved to finer taxonomic levels due to lack of resolution (e.g., some genera have species with very similar SSU sequences, which prevents species differentiation with this approach).
Lastly, as far as I know, GreenGenes is outdated.
You can refer to:
You could use the RDP database as an alternative (assuming you're using 16S sequences, since you've used GreenGenes until now).
  • asked a question related to Classifier
Question
4 answers
I have a four-point scale and three constructs. One construct has many more items than the other two. For instance, construct A has 70 items, construct B has 10 items, and construct C has 10 items. Theoretically, someone can be considered as having good ability when he has all three constructs. When someone has a high score on construct A (the majority of responses are "strongly agree") but very low scores on constructs B and C, can he be classified in the high category?
Relevant answer
Answer
But then two people who are only a point apart on your full scale could end up in different categories, while two people who are far apart could be lumped into the same category. That is what I mean by "lose information."
  • asked a question related to Classifier
Question
2 answers
I want to downscale time-series imagery data using precipitation and evapotranspiration (temporal) and possibly even topography (static) using Google Earth Engine (GEE) Random Forest Regression. I have processed the remote sensing products to the same temporal and spatial resolution and joined them.
Typically the code would be something like:
// Create a classifier
var classifier = ee.Classifier.smileRandomForest(100)  // number of trees
  .setOutputMode('REGRESSION')
  .train({
    features: training_data,
    classProperty: 'what_I_want_to_predict',
    inputProperties: ['predictor_variables']
  });
var classified = predictor_variables_data.classify(classifier);
My question is:
1. How do I include temporal and static data as predictor variables (training data)? How do I sample it?
2. How does one apply the RF model across monthly images over 2 years, for example? Do you run the model 24 respective times?
Regards,
Cindy
Relevant answer
Answer
For those following this question, the solution was:
1. joining the time-series data;
2. concatenating the static data.
// JOIN - Combine/join image collections for training
var joinA = GW_anomaly2.select('GW_anomaly');
var joinB = CHIRPS_resampleD.select('precipitation');
var joinC = MODIS_resampleD.select('ET', 'PET');

var filter2 = ee.Filter.equals({
  leftField: 'system:time_start',
  rightField: 'system:time_start'
});

var simpleJoinB = ee.Join.inner();
var innerJoinB = ee.ImageCollection(simpleJoinB.apply(joinA, joinB, filter2));
var joinedB = innerJoinB.map(function(feature) {
  return ee.Image.cat(feature.get('primary'), feature.get('secondary'));
});

var simpleJoinC = ee.Join.inner();
var innerJoinC = ee.ImageCollection(simpleJoinC.apply(joinedB, joinC, filter2));
var joinedC = innerJoinC.map(function(feature) {
  return ee.Image.cat(feature.get('primary'), feature.get('secondary'));
});

// Concatenate (add as extra bands) the given long-term mean (single band)
function add_topo(image) {
  var concatenate2 = ee.Image.cat([DEM_resampleD]);
  return image.addBands(concatenate2);
}

// Map the function over the entire time-series collection
var combined_input = joinedC.map(add_topo);
  • asked a question related to Classifier
Question
1 answer
Hi there. I have some data from RNA-seq experiments. In PCA, we observe that the dominating patterns in our data are explained by variables other than those we used to classify our samples. Hence, our predefined groups are not visible in the PCA representation (they are mixed up). However, in the cluster heatmap, samples are clustered according to our variable, and it is clear that the expression vectors (the columns of the heatmap) for samples within the same cluster are much more similar than expression vectors for samples from different clusters.
Given that all the heatmaps cluster samples correctly according to our variable, is it possible that PCA is filtering out information that is meaningful for our samples?
How should I interpret these findings?
Thank you
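This situation is reproducible in a toy example: when a high-variance nuisance axis (say, a batch effect; all data here are synthetic and illustrative) dominates, PC1 follows that axis and barely separates the groups, even though a low-variance feature separates them cleanly:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
# High-variance nuisance axis vs. low-variance axis carrying the group signal.
nuisance = rng.normal(0, 10, size=(2 * n, 1))
group = np.r_[np.zeros(n), np.ones(n)]
signal = (2 * group).reshape(-1, 1) + rng.normal(0, 0.3, size=(2 * n, 1))
X = np.hstack([nuisance, signal])

# PC1 follows the direction of maximum variance, i.e. the nuisance axis.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]

def separation(v):
    """Standardized distance between the two group means along v."""
    return abs(v[group == 0].mean() - v[group == 1].mean()) / v.std()

print(separation(pc1), separation(X[:, 1]))  # PC1 separates far worse
```

PCA maximizes variance, not group separation, so the first components can be blind to a real but low-variance group difference that distance-based clustering on the full expression matrix still picks up.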
Relevant answer
  • asked a question related to Classifier
Question
1 answer
I obtained a Cu of 97.56 and a Cc of 0.48 for a soil. I am confused because, using Cu, the soil would be classified as well graded, but using Cc it would be classified as poorly graded. Could anyone with a better idea of how to classify such a soil please explain?
Thank you.
Relevant answer
Answer
Interesting question. Kindly check the formula for both Cu and Cc.
If your values are truly correct, I would urge you to stick to one of them, Cu, and you will be okay.
  • asked a question related to Classifier
Question
3 answers
While training an optimizable classifier using the MATLAB Classification Learner app, it provides a minimum classification error plot. From that plot we get the best-point hyperparameters and the minimum-error hyperparameters. I would like to know the exact difference between them.
Relevant answer
Answer
After you choose a particular type of model to train, for example a decision tree or a support vector machine (SVM), you can tune your model by selecting different advanced options. For example, you can change the maximum number of splits for a decision tree or the box constraint of an SVM. Some of these options are internal parameters of the model, or hyperparameters, that can strongly affect its performance. Instead of manually selecting these options, you can use hyperparameter optimization within the Classification Learner app to automate the selection of hyperparameter values. For a given model type, the app tries different combinations of hyperparameter values by using an optimization scheme that seeks to minimize the model classification error, and returns a model with the optimized hyperparameters. You can use the resulting model as you would any other trained model.
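The idea the app automates, trying hyperparameter combinations and keeping the one with the lowest cross-validated classification error, can be imitated outside MATLAB. A minimal scikit-learn sketch (the dataset and grid values are arbitrary placeholders, and grid search is exhaustive rather than Bayesian as in the app):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Synthetic data standing in for a real classification problem.
X, y = make_classification(n_samples=200, random_state=0)

# Try hyperparameter combinations; keep the one with the highest
# cross-validated accuracy (lowest classification error).
search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    {"max_depth": [2, 4, 8], "min_samples_leaf": [1, 5]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_)
```

In a Bayesian optimizer like MATLAB's, the two reported points can differ because one is the best observed evaluation while the other comes from the optimizer's fitted model of the error surface, hence the two sets of hyperparameters in the plot.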
  • asked a question related to Classifier
Question
4 answers
I carried out a few experiments by combining 7 CNN-based feature extractors and 7 machine learning classifiers, giving 49 pairs in total.
For the CNNs, I used VGG16, MobileNetV2, DenseNet121, Inception V3, ResNet101, ResNet152 and XceptionNet as feature extractors and passed the generated feature vectors to ML classifiers to perform binary classification.
The ML classifiers I used are support vector machine, k-nearest neighbours, Gaussian naive Bayes, decision tree, random forest, extra trees and multilayer perceptron.
For all the evaluation metrics (accuracy, precision, recall and F1 score), I achieved the best results with the combination of ResNet101 and the multilayer perceptron.
I'm not able to understand why this pair performs best: ResNet152 is a deeper network, and support vector machines generally perform well, yet in my case ResNet101 with a multilayer perceptron gives the best results.
Please help me to understand the reason behind it.
Relevant answer
Answer
In my opinion, if you use a CNN pre-trained on ImageNet, then Inception V3 or Xception may give better results.
I agree with Akhil Kumar that it will depend on a dataset.
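The pipeline from the question, a fixed feature extractor followed by interchangeable classifiers, can be sketched compactly. Here synthetic random vectors stand in for the CNN embeddings (e.g. ResNet101 features), so the numbers are illustrative only:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins for CNN feature vectors (e.g. ResNet101 embeddings).
X = rng.normal(size=(300, 64))
y = (X[:, :8].sum(axis=1) + 0.5 * rng.normal(size=300) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Same features, different classifier heads - as in the 49-pair experiment.
scores = {}
for name, clf in [("MLP", MLPClassifier(max_iter=2000, random_state=0)),
                  ("SVM", SVC())]:
    scores[name] = clf.fit(X_tr, y_tr).score(X_te, y_te)
print(scores)
```

Which head wins depends on the geometry of the feature space produced by the extractor, which is one reason rankings change when the backbone changes: a deeper network does not guarantee more separable features for a given dataset.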
  • asked a question related to Classifier
Question
3 answers
Dear colleagues,
I am trying to find the best neural network method to classify and count cells in microscope images. I need to create a simple network as a start; I do not want a complicated pre-trained model.
TIA :)
Relevant answer
Answer
Automated cell classification is an important yet challenging computer vision task with significant benefits to biomedicine. In recent years, several studies have attempted to build artificial intelligence-based cell classifiers using label-free cellular images obtained from an optical microscope.
Regards,
Shafagat
  • asked a question related to Classifier
Question
1 answer
Can we retrain AlexNet or GoogLeNet on different images that they have not been trained on before, or can they only be used to classify the images they were trained on?
Can I retrain them using different parameters (learning rates) and different weights?
Relevant answer
Answer
Yes it is possible, but I think it is very limited
  • asked a question related to Classifier
Question
3 answers
People migrate for many reasons. These reasons can be classified as economic, social, political or environmental; economic migration, for example, means moving to find a job or follow a certain career path.
Migration can expose hosts to a greater number of infectious diseases, since migrants cover a larger area and visit more habitats than residents do. However, because long-distance movement is energy-consuming, migration can have a devastating effect on infected hosts, which can reduce the risk of infection.
Relevant answer
Answer
Greater human mobility, largely driven by air travel, is leading to an increase in the frequency and reach of infectious disease epidemics. Air travel can rapidly connect any two points on the planet, and this has the potential to cause swift and broad dissemination of emerging and re-emerging infectious diseases that may pose a threat to global health security.
Population migration, spread of COVID-19, and epidemic prevention and control: empirical evidence from China | BMC Public Health | Full Text (biomedcentral.com)
Human Mobility and the Global Spread of Infectious Diseases: A Focus on Air Travel (nih.gov)
  • asked a question related to Classifier
Question
8 answers
I'm trying to use some machine learning (autoencoder?) to help classify various data for pass/fail. This data will be RF data such as frequency response etc. Does anyone have any recommendations on methods to best accomplish this?
Regards,
Adam
Relevant answer
Answer
Anomaly detection is one of the most common use cases of machine learning. Finding and identifying outliers helps to prevent fraud, adversary attacks, and network intrusions that can compromise your company’s future.
Regards,
Shafagat
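The reconstruction-error idea behind autoencoder-based pass/fail screening can be sketched with a linear "autoencoder" (a PCA projection) in plain NumPy. The synthetic "frequency response" vectors and the deviation magnitude below are illustrative, not real RF data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "frequency response" curves: passing units cluster around a
# common baseline; a failing unit deviates grossly from it.
baseline = np.linspace(0, 1, 16)
normal = baseline + rng.normal(0, 0.1, size=(200, 16))
unit_ok = baseline + rng.normal(0, 0.1, size=16)
unit_bad = unit_ok + 5.0  # gross deviation

# Linear "autoencoder": project onto top-4 principal components and back.
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
V = Vt[:4].T

def reconstruction_error(x):
    z = (x - mean) @ V       # encode
    x_hat = z @ V.T + mean   # decode
    return float(np.sum((x - x_hat) ** 2))

print(reconstruction_error(unit_ok), reconstruction_error(unit_bad))
```

A real autoencoder replaces the projection with trained encoder/decoder networks; either way, the pass/fail rule becomes a threshold on this reconstruction error (e.g. a high percentile of the errors seen on known-good units).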
  • asked a question related to Classifier
Question
3 answers
I want to apply DEA to measure the efficiency of different DMUs with respect to an undesirable output that should be reduced rather than increased, meaning that DMUs with minimal output should be classified as efficient instead of those with maximum output.
I have two input variables and this single output variable, and I've already decided to use an output-oriented model. How do I transform the output variable correctly?
Relevant answer
Answer
Dear Tiziana Ziltener:
You can benefit from these valuable Qs/As about your topic:
I hope it will be helpful..
Best wishes...
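One common workaround from the DEA literature is a monotone decreasing translation of the undesirable output, U' = max(U) + c − U, so that smaller raw values become larger transformed values. A stdlib sketch (the shift margin c = 1 is an arbitrary choice):

```python
def invert_undesirable(values, shift_margin=1.0):
    """Translate an undesirable output so smaller raw values score higher."""
    shift = max(values) + shift_margin
    return [shift - v for v in values]

# Three DMUs producing 3, 7 and 5 units of the undesirable output:
print(invert_undesirable([3, 7, 5]))  # [5.0, 1.0, 3.0]
```

Note that some DEA models are not translation-invariant, so results can depend on the chosen shift; alternatives include treating the undesirable output as an input, or using directional distance functions.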
  • asked a question related to Classifier
Question
3 answers
Hello everyone,
I'm working on forecasting photovoltaic power, and I want to classify the day ahead as a clear day or a cloudy day. For that, I need to know which variables are used for this type of classification, or which models, or any other relevant information.
It would be helpful if anyone knows an article, book, etc.
Thanks,
Saad
Relevant answer
Answer
Hello dear Saad Benslimane,
I hope this helps you!
  • asked a question related to Classifier
Question
2 answers
Cross-Validation for Deep Learning Models
Relevant answer
Answer
Interesting question. Yes, I suspect that the network will quickly overfit because it will essentially start training near a local minimum. However, if you start with a large learning rate, it is possible that a jump away from the local minimum happens, so it might not overfit. My advice would be to give it a try, record your results, and let us know.
  • asked a question related to Classifier
Question
4 answers
Hello Dears
My goal is to classify dense forest tree species using drone images. Is it possible to classify dense point clouds? If so, how, and with what software?
Thankful
Relevant answer
Answer
  • asked a question related to Classifier
Question
5 answers
How can I combine three classifiers of deep learning in python language ?
Relevant answer
Answer
Can anyone tell me how to ensemble deep learning models that have different input shapes?
The DNN uses a 2-D input array, while the CNN and RNN use 3-D input arrays.
All of the models are already fitted.
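Since the models are already fitted, one option is soft voting: reshape each sample to every model's expected input shape, collect the predicted class probabilities, and average them. A sketch with stand-in predict functions (the fixed probabilities below are placeholders for real `model.predict` outputs):

```python
import numpy as np

# Stand-ins for three already-fitted models, each with its own input shape.
def predict_dnn(x2d):  return np.array([[0.8, 0.2]])  # expects (batch, features)
def predict_cnn(x3d):  return np.array([[0.6, 0.4]])  # expects (batch, steps, 1)
def predict_rnn(x3d):  return np.array([[0.7, 0.3]])  # expects (batch, steps, 1)

sample = np.arange(8.0)
probs = [
    predict_dnn(sample.reshape(1, 8)),     # 2-D input for the DNN
    predict_cnn(sample.reshape(1, 8, 1)),  # 3-D input for the CNN
    predict_rnn(sample.reshape(1, 8, 1)),  # 3-D input for the RNN
]
ensemble = np.mean(probs, axis=0)   # soft-voting average of probabilities
print(ensemble.argmax(axis=1))      # predicted class per sample
```

The differing input shapes only matter before each model's own `predict` call; the ensembling happens on the probability outputs, which all share the same shape. Weighted averages or majority voting on argmax labels are straightforward variants.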
  • asked a question related to Classifier
Question
3 answers
I have a set of data that includes:
Gender: categorical (classified as IV in jasp)
Ethnicity: categorical (classified as IV in jasp)
Congruent: continuous data (classified as DV in jasp)
Incongruent: continuous data (classified as DV in jasp)
I have been asked the following questions:
Is there a significant interaction between ethnicity and implicit association?
I am struggling to choose the correct test; I am trying ANOVA, but actually I don't know what I should measure to answer the question!
Is it the interaction between ethnicity and gender? What about the congruency data?
Relevant answer
Answer
A chi-square test for association between categorical variables.
  • asked a question related to Classifier
Question
3 answers
Some journals publish their acceptance rate on their website (e.g., 12%).
Relevant answer
Answer
I think these are usually generated by the journal or publisher: they calculate the number of published manuscripts divided by the number of submissions. This may be an indicator of the quality of the journal, or of how hard it is to get "in"; they seem to say "look how few we accept of the ones we receive."
To me, that also means that if the percentage is low, please do not be disappointed if your paper is not accepted.
  • asked a question related to Classifier
Question
3 answers
I have a collection of sentences that is in an incorrect order. The system should output the correct order of the sentences. What would be the appropriate approach to this problem? Is it a good approach to embed each sentence into a vector and classify the sentence using multiclass classification (assuming the length of the collection is fixed)?
Please let me know if there can be other approaches.
Relevant answer
Answer
Something you could do is identify linguistic rules that suggest a certain order. For example, before a personal pronoun can be used, a distinct name must be introduced, and the gender must agree with it.
He gave her an envelope.
She went to Peter.
Mary entered the room.
A knowledge base must provide information about actions performed by objects of a certain type (e.g., giving is an action performed by humans, not by rooms) and about the gender of names.
Regards,
Joachim
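Beyond hand-written rules, a common approach for small, fixed-length collections is pairwise ordering: train a model to score whether sentence i should directly precede sentence j, then pick the permutation with the highest total score. A stdlib sketch where the score matrix `P` is a hand-set stand-in for a trained pairwise classifier, using the three sentences above:

```python
from itertools import permutations

# Hypothetical pairwise scores P[i][j]: belief that sentence i directly
# precedes sentence j (stand-in numbers, not a trained model).
P = [
    [0.0, 0.1, 0.2],   # 0: "He gave her an envelope."
    [0.3, 0.0, 0.1],   # 1: "She went to Peter."
    [0.2, 0.9, 0.0],   # 2: "Mary entered the room."
]

def order_score(order):
    return sum(P[a][b] for a, b in zip(order, order[1:]))

best = max(permutations(range(3)), key=order_score)
print(best)  # (2, 1, 0): Mary entered -> She went -> He gave
```

Exhaustive search over permutations only works for small collections; for longer ones, beam search over the same pairwise scores is the usual substitute. This also avoids fixing the collection length the way a single multiclass classifier over whole orderings would.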
  • asked a question related to Classifier
Question
1 answer
I am proposing a study to see if dress style can impact juror perceptions of a defendant during a trial.
So my IV is dress style (3 levels: unkempt, neutral, well dressed), which will be presented to participants using photos.
In regard to my DVs, I have 7 questions asking participants to rate the defendant's perceived guilt, trustworthiness, etc.
These will be presented as 7-point Likert scales (1 = not trustworthy at all, 7 = completely trustworthy).
I want to see if participant perceptions differ between the photos shown.
I'm struggling to determine what type of data a Likert scale is classified as, and therefore cannot determine what type of test I should run.
Any advice would be greatly appreciated. Thank you
Relevant answer
Answer
I think I would try with the Kendall-tau test.
  • asked a question related to Classifier
Question
1 answer
I would like to analyze the number (population) of MDSC in mouse spleen by FACS.
In general, since MDSCs co-express CD11b, Ly-6C and Ly-6G, would it be acceptable to define the Ly6C/Ly6G double-positive population as MDSCs after gating the collected cells on CD11b+?
I am not sure what the difference is between gating CD11b+ cells and classifying them as Gr-1 (Ly6C/Ly6G)+ versus as Ly6G/Ly6C co-expressing cells.
Thank you in advance for your help.
Relevant answer
Answer
Total MDSCs in mouse spleens are CD11b+ and Gr1+. Further, they can be classified as:
1. CD11b+Ly6CloLy6G+ : polymorphonuclear-MDSC
2. CD11b+Ly6ChiLy6G− : monocytic-MDSC.
For more detailed information, please see:
  • asked a question related to Classifier
Question
4 answers
My aim is to use six classifiers to test various ML tools and generate a model for each of them from the raw data (big-data analytic tools on the dataset).
Relevant answer
Answer
Good Luck.
  • asked a question related to Classifier
Question
5 answers
Power Quality Disturbance (PQD) classification and detections algorithms constitute a decent part of the literature in Electrical Engineering. But, where are these classifiers and detectors useful?
Grid integration is done in accordance with the grid codes and that should restrict the PQDs in the grid. So, there may not be any significant PQDs present in the voltage signals obtained from the grid. Thus the use of PQD detection algorithms seems limited in this context.
Are there any other areas where these detection methods are widely used?
Detection by itself may not be useful unless it can lead to some control action correcting the disturbance. Are there any specific areas where such control scenarios exist?
Relevant answer
Answer
Dear Aditya Susarla:
You can benefit from this valuable article about your topic:
"Role of soft computing techniques for classification of power quality parameters and state-of-the-art mitigation techniques in Smart-Micro-Grid: A extensive review by ELKRC".
ABSTRACT:
The power system at a worldwide level is growing with the help of revolutionary transformation because of the integration of several distributed components such as advanced metering infrastructure, communication infrastructure, and electric vehicle distributed energy sources. The distributed components help to improve the reliability and increase the efficiency of the energy management system and ensure the security of the future power system. The study also discusses how artificial intelligence techniques are applied to provide effective support in the integration of renewable energy sources, integration of energy storage systems, management of the grid and home energy, and security and demand response. It was found that the smart grid uses different battery technologies to increase the efficacy of the grid, including lead-acid (L/A) batteries, lithium-ion (Li-ion) batteries, sodium-sulfur (NAS) batteries, and vanadium redox flow batteries (VRB). It was also found that smart-grid power quality faces challenges that create issues in fault detection and in conducting safety analysis.
The current research analyzes how artificial intelligence and market liberalization can potentially help to increase the overall social welfare of the grid. The facts related to the comprehensive review of the state-of-the-art artificial intelligence techniques to support various applications in a distributed smart grid and are also discussed in the research.
I hope it will be helpful...
Best wishes....
  • asked a question related to Classifier
Question
9 answers
Hello, I am very new to CNNs. Currently, I am working on classifying images with CNNs and transfer learning, and I am trying to learn in more detail. I'll be very grateful if you suggest some papers or resources to me. Thank you.
Relevant answer
Answer
First, please study the basic concepts, such as:
Then, focus on CNN applications.
A convolutional neural network (CNN) is a neural network that has one or more convolutional layers and is used mainly for image processing, classification, segmentation, and other autocorrelated data.
Finally, select a proper reference related to it, for example:
GOOD LUCK
  • asked a question related to Classifier
Question
7 answers
I am using two different methods, neural network (NN) and discriminant analysis of principal components (DAPC), to predict a binary variable from a big set of values. I obtain slightly different accuracy (number of correct guess/total) from the two methods, and I would like to test whether this difference is statistically significant. I stumbled upon a paper from Benavoli et al. (2017)(https://jmlr.org/papers/v18/16-305.html), in which they describe the typical approach:
1. choose a comparison metric;
2. select a group of datasets to evaluate the algorithms;
3. perform m runs of k-fold cross-validation for each classifier on each dataset;
afterwards, it is possible to obtain m*k differences in accuracies between the two classifiers, and evaluate if the average of those differences is statistically different from 0 (using a paired t-test).
The problem is that I have only one dataset available (n=74). I could run the k-fold cross-validation m=100 times on the same dataset, changing only the seed of the pseudorandom number generator (otherwise I get exactly the same accuracy values m times!), but would that be a correct approach? It sounds a lot like pseudo-replication. Thanks in advance for your answer!
Relevant answer
Answer
Thanks all for your answers. After some research, in my specific case, I think the better approach is a two-proportions z-test (at this link there is a nice explanation with working examples in R: http://www.sthda.com/english/wiki/two-proportions-z-test-in-r). This is because, when running a leave-one-out test as proposed by Ankit Agrawal, each prediction outcome is 1 (success) or 0 (failure), and the t-test is not suited to binary values.
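The pooled two-proportion z-test mentioned above can be written in a few lines of stdlib Python. The correct-prediction counts below (60 for one classifier, 52 for the other, out of n = 74) are hypothetical numbers for illustration:

```python
from math import erf, sqrt

def two_proportion_z(success_a, success_b, n):
    """Two-sided pooled two-proportion z-test, both samples of size n."""
    p1, p2 = success_a / n, success_b / n
    pooled = (success_a + success_b) / (2 * n)
    z = (p1 - p2) / sqrt(pooled * (1 - pooled) * (2 / n))
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts for n = 74: NN correct on 60 cases, DAPC on 52.
z, p = two_proportion_z(60, 52, 74)
print(round(z, 2), round(p, 3))
```

One caveat worth noting: the standard two-proportion test assumes independent samples, whereas here both classifiers are evaluated on the same 74 cases; a paired alternative such as McNemar's test on the disagreement counts is often recommended for that situation.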
  • asked a question related to Classifier
Question
1 answer
Hi, I am working on a biometric system based on EEG signals. I have a database of 100 users with 70 trials each. I have extracted the features of each user using wavelets and an AR model, and tried different classifiers such as SVM, k-NN and MLP to classify them, but I only got accuracy of up to 50%. Can someone please tell me which classifier or feature extraction algorithm to use to get better results?
Relevant answer
Each classifier has its strengths and limitations. You can read about SVM and k-NN in this article for an overview:
  • asked a question related to Classifier
Question
3 answers
What are the criteria for classifying a propellant as 'green'?
Relevant answer
  • asked a question related to Classifier
Question
27 answers
How effective is teaching and learning through examples? Is there a specific term, phenomenon, or theoretical framework for it?
Relevant answer
Answer
Kindly see also the following useful RG:
  • asked a question related to Classifier
Question
3 answers
Could you please explain specifically these two classification algorithms and the similarities and differences when they are applied in remote sensing image classification? Thanks a lot.
Relevant answer
Answer
Bayesian analysis is based on the maximum likelihood equation but adds a multiplier in the numerator, the prior probability, which makes the ML probability conditional on the prior. This is the subject of an entire course on statistical inference, so it is difficult to describe unless you have a background in probability and Bayes' theorem.
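The difference can be shown in one line of arithmetic: maximum likelihood picks argmax p(x | class), while the Bayesian (maximum a posteriori) rule picks argmax p(x | class) · p(class). A toy two-class sketch in which all the numbers are made up for illustration:

```python
# p(x | class): how well the pixel matches each class signature.
likelihood = {"water": 0.6, "forest": 0.4}
# p(class): prior probability of each class in the scene.
prior = {"water": 0.2, "forest": 0.8}

ml_class = max(likelihood, key=likelihood.get)
posterior = {c: likelihood[c] * prior[c] for c in likelihood}
map_class = max(posterior, key=posterior.get)

print(ml_class)   # water  (likelihood alone)
print(map_class)  # forest (the prior flips the decision)
```

In remote sensing terms: with a uniform prior the two classifiers give identical maps, and they diverge exactly where the prior (e.g. known class abundances) outweighs a weak spectral match.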
  • asked a question related to Classifier
Question
12 answers
I would like to ask regarding salami slicing in research.
1. When will we categorize papers as salami slicing?
2. Some published papers are part of larger research projects that employ the same population, the same methodology, and the same timeframe, but different tools (measuring different concepts) to fulfil different research objectives. This saves time, cost, and resources while achieving multiple research objectives. Is this considered salami slicing, and is it ethically acceptable for publication?
3. How do we handle the situation in (2) so that the journal's editors will not classify it as salami slicing (or as research partly published elsewhere, albeit with a different research objective) and reject the submission?
Thank you.
Relevant answer
Answer
I believe the attached article can shed more light on salami publication.
  • asked a question related to Classifier
Question
8 answers
If I use an autoencoder for anomaly detection based on reconstruction error, and I have two or more different classes or types of anomaly, the autoencoder can only detect an anomaly, not classify it. How can I classify, or give a probabilistic classification, after detection? Please give me some ideas on this. Thank you.
Relevant answer
Answer
Georgi Tancev
Thank you. Can you please clarify what "all k classes" means in your first sentence? Do I also have to train my autoencoder on the anomalous data? Please let me know if there is any resource I can follow to figure out this process. Thank you.
  • asked a question related to Classifier
Question
3 answers
Can anyone help me set the right parameters in the search area on the ADNI website?
I intend to use PET and MRI as fused modalities to classify AD.
MRI subjects must also have PET.
Relevant answer
Answer
Pleasure Guelib Bouchra
  • asked a question related to Classifier
Question
6 answers
The following is the PR curve obtained using different kernels of SVM classifier.
How can it be interpreted? Does it indicate good classification performance?
Relevant answer
Answer
The typical ROC curve has the following axes: vertical, true positive rate (sensitivity); horizontal, false positive rate (1 − specificity). That is why the presented curves (red and yellow) go down at their ends: here the axes are different from those of a typical ROC.
Let's define the meaning of axes:
Precision = True positive/(true positive + false positive)
Recall = True positive/(true positive + false negative)
Now it is possible to read the meaning of the presented curves.
Only for the Gaussian model is Precision = 1 achievable (meaning zero false positives).
At Precision = 1, the highest Recall is 0.48 (while keeping false positives at zero). I've marked this with a violet dot on the attached figure.
The remaining models (polynomial and linear) are worse in this sense.
If the highest Recall is most desirable, Recall = 1 is achieved by all three models (polynomial, linear and Gaussian), but at Recall = 1 (zero false negatives) the Gaussian model gives the highest Precision (0.85). I've marked it with a gray dot.
In fact, each presented curve is a set of models, and those based on the Gaussian kernel are the best.
There are two of them (violet and gray) that give the best results:
the best Precision (1) with the highest possible Recall then (0.48),
or
the best Recall (1) with the highest possible Precision then (0.85).
Best regards
Hubert
  • asked a question related to Classifier
Question
7 answers
Hi, I'm looking to learn how to classify urban elements, mainly green areas, at a detailed level using only high-resolution images, but I still can't find a clear explanation of how to do it (I know it's been done with LiDAR, but I don't have the equipment).
Does anyone have any manual or tutorial?
I will be eternally grateful
Relevant answer
Answer
Hi Dr Zuñiga
I hope that everything is alright at your end.
The answer to your question depends on several factors: the level (city, hinterlands, rural areas, district, etc.), the required accuracy (very accurate, accurate, sensible, etc.), the available/used data resolution (km, 100 m, 30 m, 15 m, 43 cm, etc.), the classification approach (supervised/unsupervised, pixel-based, object-based, etc.), the classifier (parametric, non-parametric, etc.), and others.
Keep in mind: it is an approach issue.
From your description, I may suggest object-based supervised classification using support vector machines (SVM) to analyse medium-to-high resolution remote sensing data.
Meanwhile, once you give us more hints about your investigation, the community will provide other suggestions.
Enjoy your studies.
Emad Hawash
Natural Resources Dept.,
Faculty of African Postgraduate Studies-FAPS,
Cairo University.
  • asked a question related to Classifier
Question
6 answers
In the literature, meta-heuristic algorithms are commonly classified into four groups: (1) swarm-based, (2) physics-based, (3) human-behavior-based, and (4) evolutionary-based. I was wondering what the latest evolutionary-based algorithms (which employ evolutionary operators) developed in the last couple of years are, since as far as I can tell, both swarm- and physics-based algorithms have received most of the attention lately. Thanks in advance.
Relevant answer
Optimizing the DNA fragment assembly using metaheuristics is one of the intriguing things in this field.
  • asked a question related to Classifier
Question
3 answers
I am using a decision tree as a regression model (not as a classifier) on my continuous dataset in Python. However, I get a mean squared error of exactly zero. Does this mean my model is overfitting, or is that genuinely possible?
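Zero training MSE is the expected behavior of an unpruned decision tree: with all-distinct feature values it can memorize every sample, so the meaningful check is the error on held-out data. A short scikit-learn sketch on synthetic data:

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(100, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=100)

# Unpruned tree: grows until every training sample sits in its own pure leaf.
tree = DecisionTreeRegressor().fit(X[:80], y[:80])
train_mse = mean_squared_error(y[:80], tree.predict(X[:80]))
test_mse = mean_squared_error(y[80:], tree.predict(X[80:]))
print(train_mse)        # 0.0: the tree memorized the training set
print(test_mse > 0.0)   # True: held-out error reveals the overfit
```

So if the zero MSE was computed on the training data, it reflects memorization rather than a good model; if it was computed on a proper test split, it is worth checking for target leakage or duplicated rows.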
  • asked a question related to Classifier
Question
4 answers
Hello everyone,
I would like to know if there is a package in R, Python, or something similar to classify plant species according to their use (food, construction, medicinal, ...), habitat, native climate, life cycle, whether they are crop or wild, and vegetation type.
I have been doing this manually using the "Plants of the World Online" and "Encyclopedia of Life" websites, but I am afraid of making mistakes, particularly for native climate.
Could someone please suggest something?
Thank you very much for your attention.
Relevant answer