Questions related to Computer Science
I am currently undertaking an MSc in Computer Science with Cyber Security and have been trying to find topics for the independent research project that would be interesting to me. I am struggling and am looking for a pointer in the right direction.
I would like to undertake something that is in some way practical in nature to keep it interesting, and the topic has to include some elements of security.
My interests through work are mainly developing serverless applications on AWS, event-driven applications, and cloud computing, but I am really struggling to find something that is both interesting and not purely theoretical or research-based.
Any pointers would be gratefully received. I still have another four months until I need to write the project proposal, but I have already been trying to find something that sparks my interest for the last month or two, with little success.
I want to dive deeper into data analytics. I have done quite a few basic projects, e.g. get a random dataset, clean/analyse it, and then make visualizations. However, I want to conduct a more challenging project, e.g. some web scraping, maybe using SQL and Python to analyze data and visualize it, to help solve some potential real-world scenarios. But as a typical computer science student, I am not very creative and am struggling to think of valid ideas. One idea I had is to gather Uber driving data at my university, store this data in a SQL database, do some cleaning and analysis, and try to visualize the busier spots, etc.
Some ideas from those established in this field would be much appreciated! Thank you.
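For the Uber idea, the SQL side could be sketched like this; the pickup records below are hypothetical stand-ins for the data you would actually collect:

```python
import sqlite3

# Hypothetical ride records: (pickup_spot, hour). In a real project these
# would come from scraped or collected data, not hard-coded values.
rides = [
    ("Library", 9), ("Library", 10), ("Dorms", 22),
    ("Library", 9), ("Gym", 18), ("Dorms", 23),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rides (pickup_spot TEXT, hour INTEGER)")
conn.executemany("INSERT INTO rides VALUES (?, ?)", rides)

# Count pickups per spot to find the busiest locations.
busiest = conn.execute(
    "SELECT pickup_spot, COUNT(*) AS n FROM rides "
    "GROUP BY pickup_spot ORDER BY n DESC"
).fetchall()
print(busiest)  # → [('Library', 3), ('Dorms', 2), ('Gym', 1)]
```

The same GROUP BY query, binned by hour or by a coarse location grid, would feed directly into a heatmap-style visualization.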
I am studying Computer Science and I am currently working on my Bachelor thesis. For that, I am looking for suitable datasets. My goal is to apply Process Mining to these datasets to identify and analyze interesting processes. However, the problem is that these datasets need to be in a certain format to be suitable for Process Mining. The data needs to have a Case Id, Activity, and Timestamp column. In other words, the data needs to be activity-based so that processes with different activity sequences can be found.
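For readers unfamiliar with the format, a minimal event log of the kind described above can be sketched with hypothetical data:

```python
import csv
import io

# A tiny hypothetical event log: one row per event, with the
# Case Id, Activity, and Timestamp columns required for Process Mining.
log = """case_id,activity,timestamp
1,Register request,2021-01-04 09:00
1,Check ticket,2021-01-04 10:30
1,Decide,2021-01-05 11:00
2,Register request,2021-01-04 09:15
2,Decide,2021-01-06 14:00
"""

rows = list(csv.DictReader(io.StringIO(log)))
# Group events into traces (activity sequences) per case.
traces = {}
for r in rows:
    traces.setdefault(r["case_id"], []).append(r["activity"])
print(traces)
```

Any dataset that can be reshaped into these three columns is, in principle, usable for Process Mining; the different activity sequences per case are exactly what the mining algorithms compare.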
I wanted to ask if someone has any idea where I could find such datasets. I'd be most interested in datasets from sectors such as energy, waste management, and public works (but other input would be helpful as well). So far, I have mainly found the datasets from previous years' BPI challenges.
Here is a short page with more information about Process Mining and the desired format (including a brief example):
Any feedback would be highly appreciated.
Thanks in advance,
If I create a website for customers to sell services, what ranking algorithms can I use to rank the gigs on the first page? In other words, just as Google uses HITS and PageRank to rank webpages, what ranking algorithms can I employ for a services-based website when I create it?
Any assistance, or references to scientific papers that could help me, would be appreciated.
How feasible is it for a researcher in computer science, software engineering, or an Information Technology-related field to suggest a research topic for a research student in a business school, especially in marketing?
Recently, as a Ph.D. candidate, I have been approached by some friends in business school studying Management, Marketing, etc., asking me to suggest some research topics for them.
I would like to know whether it is feasible for an I.T. student to give a topic to such students.
I'm conducting research on digital literacy and its linkage to the digital economy in a developing country like Pakistan.
I'm looking for experts in the following areas: economics, literacy, primary academia, digital economy, entrepreneurship, digital literacy, computer science, computer engineering, IT, as well as other associated fields.
I would be really grateful if you could take some time out to fill in my questionnaire survey.
This questionnaire corresponds to my first area of focus: the impact of digital literacy on the digital economy.
For context: the digital economy embodies all economic transactions that either require the use of digital technologies or are related to the selling & purchasing of digital goods & services.
For the scope of this study, digital literacy has been defined through some key competences as outlined by the UN in their Digital Literacy Global Framework. The purpose of this study is to determine whether there is a relationship between digital literacy and growth in the digital economy. Furthermore, this study aims to map the relationships of the competencies of digital literacy against the factors leading to growth in the digital economy.
For any queries and concerns, you may reach out to us via email at email@example.com
There are high-status conferences such as NeurIPS, ICSE, and ACL. Sometimes they accept more than 1000 papers each year.
On the other hand, there are several Q1 journals (with high impact factors) in each category.
Based on your experience, what would be the pros and cons of each one for you as a researcher? How well are they received when you are applying for a position?
I started my Master's degree in software engineering a few months ago, and I am currently looking for trends and hot topics in the software engineering area.
I would really appreciate any suggestions for my thesis topic
Suppose I want to teach beginner first- and second-year university students, possibly as their first programming language, or students who may have some programming background in another language. What Python programming textbooks do you suggest?
Also, what Python programming textbooks do you suggest for teaching advanced topics?
The template of this journal keeps throwing errors of the kind "There is no line to end here" and therefore cannot compile. Can anyone help me sort out this problem, or alternatively provide a working template? Thanks.
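Without seeing the template I can only guess, but that error usually comes from a `\\` (or `\newline`) appearing where LaTeX has not started a paragraph line, for example:

```latex
% Triggers the error: \\ before any text on the line
\begin{center}
\\
Centered text
\end{center}

% Fixed: remove the stray \\ (use \vspace{...} if vertical space was intended)
\begin{center}
Centered text
\end{center}
```

Searching the template for `\\` at the start of environments, right after `\item`, or following a blank line usually locates the culprit.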
The question of how computers can contribute to controlling the COVID-19 pandemic is being posed to experts in artificial intelligence (AI) all over the world.
AI tools can help in many different ways. They are being used to predict the spread of the coronavirus, map its genetic evolution as it transmits from human to human, speed up diagnosis, and in the development of potential treatments, while also helping policymakers cope with related issues, such as the impact on transport, food supplies and travel.
But in all these cases, AI is only effective if it has sufficient examples to learn from. As COVID-19 has taken the world into uncharted territory, the "deep learning" systems, which computers use to acquire new capabilities, don’t necessarily have the data they need to produce useful outputs.
Last week, one of my manuscripts was rejected by the International Journal of Human-Computer Interaction. Now I want to resubmit it to another journal. Could anybody suggest Q2 or Q3 journals? The title and abstract of the manuscript are given below:
Title: Dynamic User Experience for efficiency enhancement based on facial expressions
Abstract: The main motive of Human-Computer Interaction is to make humans comfortable while working with interactive computing devices, so as to increase human efficiency, reduce human trouble, and save time. In this paper, we first recognize the face and then change the UI automatically based on the user's facial expression. Some of our personas also proposed a similar idea of building a system that would play music based on their facial expression. These scenarios gave us the idea of making an integrated system of dynamic user experience based on facial expression. We collected the data for our paper through questionnaires and interviews, and made some low-fidelity prototypes during the requirement-gathering phase. We also made some high-fidelity prototypes using Axure RP to show the stakeholders the likely output of this work. In the next phase, we used a software engineering model and implemented our code in Visual Studio with the Live Server extension. We then followed the cognitive walkthrough model as our evaluation method. During the evaluation, our stakeholders did not need to provide any input manually, and the system was easy to learn to use. We found that a high-speed internet connection is required, and we had to use a VPN to handle some issues. Users did not feel fatigue or discomfort at all, because the system is very easy to learn: anyone who wants to use it just needs to be in front of the camera. So the users were very comfortable and happy to use our system.
Thanks in advance.
Dear Researchers, Modellers, and Mathematicians,
As we know, in mathematics, computer science, and physics, a deterministic system is a system in which no randomness is involved in the development of future states of the system. A deterministic model will thus always produce the same output from a given starting condition or initial state. In this regard, I am looking for examples of daily-life events that are deterministic. Thank you!
I have an idea for a scientific article; I have prepared the practical part, and I am now calculating the results.
Whoever is able to write the theoretical part of the article well and in record time, please write to me so we can publish it at a conference (the last date for receiving research will be 8-20 2021).
Research trends: data security
My friend and I wrote a research paper in computer science (cryptography). The article is a simple contribution. We need someone to join us and do a thorough grammar correction of the theory part. In addition, improvements to the proposed contribution may be suggested.
With the increasing importance and implementation of computer applications in modern agriculture,
should agricultural universities make their students proficient in advanced computer knowledge, or should they depend only on pure computer scientists?
Who will fill the gap between agriculture and computing?
I am writing a review article on facial analysis in biomedical data science. As part of this there is a section on automatic and manual facial landmarking.
There is a large literature on this topic in computer science, but it is usually more focussed on computer vision than medical applications.
I am wondering if there exists established and respected software for automatic landmarking of facial images in a biomedical context? Any help is much appreciated.
- What are the hot topics and future topics for Recommendation Systems?
- Can we publish high-impact-factor papers on Recommendation Systems?
- Any other topic suggestions for PhD Computer Science that benefit the student in long term?
Hello researchers, I would like to know some Q1 paid journals with fast publication in the field of computer science.
I have a question. We want to use ANNs for regression analysis, which is a fairly straightforward application of ANNs, but the question is: how many samples do we need for training? Would 12 samples be enough? I produced these 12 samples using the Fractional Factorial Design (FFD) method and need to be sure about this. I would therefore be grateful for any information about this subject.
Many thanks in advance for your time and kind consideration.
Reference for FFD method: https://en.wikipedia.org/wiki/Fractional_factorial_design
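Regarding whether 12 samples are enough: one common sanity check is leave-one-out cross-validation. Below is a minimal numpy sketch on 12 hypothetical FFD-style samples; a plain linear model stands in for the ANN here, purely to illustrate the procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
# 12 hypothetical samples with 3 two-level factors, as from a
# fractional factorial design; y follows a made-up response.
X = rng.choice([-1.0, 1.0], size=(12, 3))
y = 2 * X[:, 0] - X[:, 1] + 0.1 * rng.standard_normal(12)

# Leave-one-out CV: train on 11 samples, test on the held-out one.
errors = []
for i in range(12):
    mask = np.arange(12) != i
    A = np.column_stack([X[mask], np.ones(11)])       # design matrix + intercept
    coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
    pred = np.append(X[i], 1.0) @ coef
    errors.append((pred - y[i]) ** 2)

print(f"LOO RMSE: {np.sqrt(np.mean(errors)):.3f}")
```

If the leave-one-out error is much larger than the measurement noise you expect, 12 samples are probably too few for the chosen model; a flexible ANN generally needs considerably more data than a linear fit, so the same check with your actual network is worth running.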
I'm interested to know whether you have had educational experiences with m-learning (mobile learning) in engineering or computer science. In the literature, there seem to be few proposals with m-learning platforms or applications to strengthen students' skills or competences in the fundamentals of engineering and even computer science. Typical applications are found, e.g., in EFL and math.
In this context, what have been your experiences with m-learning in engineering or computer science education? And what advantages or difficulties do you see regarding m-learning?
Thank you for your answers.
Is the International Journal of Advanced Computer Science and Applications (IJACSA) indexed in Scopus?
Does the journal have an SJR?
I have three queries mentioned below:
- I submitted my manuscript around two months ago to "COMPUTERS AND ELECTRICAL ENGINEERING, AN INTERNATIONAL JOURNAL". The average time to an initial decision is 4-7 weeks (as stated on the journal webpage), but it has been around 11 weeks and I still have not received any updates from them. Please let me know what I can do next.
- Also, is it possible to submit the same manuscript to more than one journal at once to save time? If any of the journals replies, I will withdraw the manuscript from the others.
- Please let me know some free Scopus-indexed journals related to Computer Engineering/Computer Security that take little time to make decisions, because I am in a hurry. This is my first time publishing a paper, and I want it published at the earliest (because I need to go abroad and the scholarship deadline is approaching).
During the research process in computer sciences, we need a set of tools such as:
- IDE for algorithms coding
- LaTeX editor for writing papers
- Statistical toolkit for experiments
- Some CAD tools for design
From your point of view, which tools should I use for research in computer science? Could you please provide examples of such tools?
Can you guess which one is the most mysterious and enigmatic physical thing among the following: biological cells, light, elementary particles (e.g. electrons, neutrons, or protons), viruses, fungi, bacteria, atoms, chemical compounds, blood cells, or finally plain old components, in the context of engineering paradigms (e.g. mechanical, electronics, or aerospace) for designing and building large products (e.g. cars, airplanes, computers, factory machinery, or spacecraft)?
The greatest tools for acquiring and using knowledge for technological progress and great inventions are (i) the scientific method and (ii) mathematics, where these two tools provide complementary perspectives for gaining deeper insights. Each acts like a light that illuminates mutually complementary sides, perspectives, or dimensions. Since software researchers refuse to use the scientific method (i.e. the light of science), the software community wasted 50 years, failed to solve the software crisis, and ended up with a useless fake CBE-paradigm.
If fake scientists still don’t realize that it is a mistake to blatantly violate scientific principles, they are going to repeat the same kinds of mistakes in Artificial Intelligence research and development. Many things would stay enigmatic and end up in a crisis, like the software crisis. Many things that are inexplicable, puzzling, or enigmatic from the perspective of mathematics can become crystal clear from the scientific perspective, since the light of the scientific method illuminates the dark spots left by the light of mathematics.
Today, the greatest enigmas for researchers of software and computer science include the answers to simple questions such as: what is meant by a component in the context of all the other engineering disciplines, and what is meant by CBE (Component-Based Engineering), which successfully eliminated the engineering crisis from designing and building large and complex products (unlike the software crisis).
Even if we know just 30% about bacteria or viruses, each and every piece of knowledge can only be included in the textbooks if it is supported by falsifiable proof. It is impossible to find a piece of such knowledge that is not supported by a falsifiable proof. There is a possibility that 20% of the knowledge in the textbooks might be falsified in the future by counterevidence, such as new discoveries or empirical evidence.
Since mankind has enough valid knowledge about things such as bacteria, light, or electrons, researchers have been able to invent great things such as treatments for many kinds of infections, fibre-optic networks, or semiconductor chips, respectively.
On the other hand, none of the knowledge about the components in the textbooks for computer science or software is either tested (e.g. no one has challenged it) or supported by any falsifiable proof. There is a possibility that up to 20% of the knowledge might be proven valid in the future. However, I am sure that 80% of the knowledge in the textbooks is invalid and not open to challenge.
Even simple things such as what a component is and what is meant by CBE have stayed an enigma and a mystery for many decades, since the knowledge in the textbooks about components is untested and invalid. Fake scientists at the NSF (which I prefer to call the National Fake Science Foundation) feel offended if anyone challenges their myths about so-called components.
Anything about which we have even only 30% valid knowledge would be less enigmatic or mysterious than another thing that has a huge body of knowledge of which a significant portion is invalid. Hence, plain old components are far more mysterious and enigmatic than invisible things such as viruses, electrons, and biological cells. We have made many useful inventions even by relying on limited valid knowledge.
Can you name any physical thing on the Earth that is more mysterious and enigmatic for the scientific community than the plain old components used for designing and building large Component-Based Products (or CBP), taking into consideration all the knowledge in the published scientific literature and textbooks across all scientific disciplines?
A thing must be the most mysterious and enigmatic if there is a large BoK (Body of Knowledge) for the thing and a large percentage of the BoK is invalid (e.g. untested and unproven). What makes anything enigmatic is not just the lack of a sufficient valid BoK but also having large chunks of invalid knowledge.
Isn’t it fascinating? Even such simple-to-acquire knowledge would stay mysterious and enigmatic (and create a paradox and crisis) if researchers refuse to use the light of scientific principles to illuminate dark spots that are in the realm of science, since such dark spots can’t be illuminated by the light of mathematics.
I invented solutions for the software crisis by gaining the scientific knowledge essential for understanding the mysterious components, which is essential for achieving the elusive and enigmatic CBE-paradigm in the context of all the other engineering disciplines. The fake scientists of computer science foolishly refuse to use the light of the scientific method.
The NSF is supposed to uphold scientific principles and the scientific method, but it is breaking the scientific principles, protocols, and code of conduct for scientific discourse, which are essential for the progress of science and technology. Any accepted theory (i.e. a theory, or concepts derived from the theory, that is being used by practitioners of any craft or trade) must be treated as an assumption if the theory is not supported by a falsifiable proof (one backed by repeatable evidence and/or verifiable facts).
The practitioners of astronomy or astrology practiced their trade or craft until the 16th century by relying on the 2300-year-old theory "the Earth is static at the centre" (and concepts or observations derived from that theory). Mankind falsely concluded that "the Earth is static at the centre" is a self-evident fact, so no one bothered to support this unproven theory by finding a falsifiable proof.
Since there was no falsifiable proof for such core first principles in the foundation, it was impossible to challenge the huge BoK (Body of Knowledge) acquired and accumulated over 1800 years for creating the dominant paradigm that lasted until the 16th century by relying on those core first principles. The scientific community in the dark ages used illegal circular logic to defend the core first principles.
For example, they used observable facts such as epicycles, non-uniform speeds of planets, lack of stellar parallax, and retrograde motions to defend the presumption "the Earth is static at the centre". Countless concepts, observations, and other derived theories in the whole BoK accumulated over 1800 years could be used to defend the belief "the Earth is at the centre".
The scientific method, along with the protocols and processes for discourse, has been created and perfected to prevent this. The biggest problem in subverting a flawed dominant paradigm is overcoming the illegal circular logic, which relies on the huge BoK acquired and accumulated for the paradigm. This kind of thing can be prevented by having falsifiable proof for the core first principles at the foundation of any dominant paradigm.
When there is a falsifiable proof and the theory is flawed, it is straightforward to falsify the proof by finding one or more verifiable and/or repeatable pieces of counterevidence. This is the reason the scientific method was created, which requires that each theory be supported by a falsifiable proof.
Unfortunately, today software researchers and experts are using the huge BoK in the textbooks and published literature that has been acquired and accumulated over the past 50 years by relying on untested and unproven core first principles in the pre-paradigmatic foundation, such as the notions of so-called components for software and the claim that computer science is a branch of mathematics.
About 80% of the accumulated knowledge we have in textbooks and other published literature about the components for software is untested, unchallenged, and invalid. Having invalid knowledge makes anything enigmatic, mysterious, or paradoxical. Anything becomes more and more enigmatic, mysterious, or paradoxical as it acquires and accumulates more and more knowledge while a larger and larger percentage of that knowledge is invalid.
Every piece of scientific knowledge about any physical thing in the textbooks must be well tested, challenged, and supported by falsifiable proof backed by empirical evidence that must be open to challenge. Scientists of computer science must be ashamed of themselves if they feel offended by counterevidence or facts that expose untested or unproven knowledge about the enigmatic components.
Isn’t it pathetic if the NSF (National Fake Science Foundation) doesn't know or can’t understand basic scientific principles, processes, and the basic code of conduct? I oppose passing "The Endless Frontiers Act (S. 3832)" to fund the Fake Science Foundation until the fake scientists at the NSF understand basic scientific principles and processes and strictly implement the code of conduct for upholding the truth.
I wish to file a court case to block the act (i.e. The Endless Frontiers Act) to prevent tens of billions of dollars from being flushed down the drain by the fake scientists at CISE, since nearly 50% of the US$100 billion goes to the CISE of the Fake Science Foundation.
We have a Computer Science and Communication journal at our college (the Journal of Computing and Communication). We aim to publish research articles in all disciplines of Computer Science and Communication, and we plan to publish two issues per year. Can anyone tell me the ways and means to index the journal in Google Scholar?
I am doing a computer science dissertation on the topic 'An automated text tool to analyse reflective writing'.
The hypothesis set is ‘To what extent is the model valid for assessing reflective writing?’ I just want the questionnaire (closed-ended questions and one open question) to validate the proposed model.
I have used the 5-point Likert scale for analysing the data, with the options strongly agree, agree, neutral, disagree, and strongly disagree. The sample size is 10 participants. I chose my participants based on their experience, career, and knowledge of reflective writing.
1) Which statistical analysis tool should I use to analyse a sample of 10 participants to validate the model? Please show me step by step how to analyse the data.
2) What would be the associated hypothesis?
3) Can I use the Content Validity Index with 10 participants on questionnaires using a 5-point Likert scale?
4) Is this step in my research a qualitative method or a quantitative method? Why?
If you have any suggestions on my hypothesis, the sample size, or the analysis tool, please share them.
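On question 3), the item-level Content Validity Index (I-CVI) is, as far as I understand it, simply the proportion of raters scoring an item 4 or 5 on the 5-point scale. A minimal sketch with hypothetical ratings from 10 participants:

```python
def item_cvi(ratings, relevant=(4, 5)):
    """I-CVI: share of raters scoring the item as relevant (4 or 5)."""
    return sum(r in relevant for r in ratings) / len(ratings)

# Hypothetical ratings (1 = strongly disagree ... 5 = strongly agree)
item1 = [5, 4, 4, 5, 3, 4, 5, 4, 4, 5]   # 9 of 10 rate it 4-5
item2 = [3, 2, 4, 5, 3, 4, 3, 2, 4, 3]   # 4 of 10 rate it 4-5

print(item_cvi(item1))  # → 0.9
print(item_cvi(item2))  # → 0.4
```

Thresholds around 0.78-0.80 are often cited as acceptable for roughly ten raters, but check the CVI literature you are following before relying on a specific cutoff.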
Thank you in advance !
I would like to start a discussion on which index is more reliable, the H-Index or the i10-Index. Both are usable; however, their methods of calculation are different. There is also the G-Index. I am not asking about the differences but about their reliability. Any comments are welcome.
I have heard conflicting answers on this ranging from "do it to make your research accessible" to "only do it if you're invited" to "don't do it at all." The most moderate advice I saw was "one or two is fine as long as you have several other journal publications."
If the answer depends on my field, I'm in computer science and software engineering.
Rodgers’ evolutionary concept analysis is being used in the Nursing field. I could not find any paper showing that Rodgers’ evolutionary concept analysis has been used in any field other than Nursing.
Is it possible to use it in the computer science field?
I can bet that no one in the world today (particularly in the software industry) knows or has the right answers to two simple questions: (i) What a component is, and (ii) What is meant by CBE (Component-Based Engineering), in the reality and context of all other engineering disciplines such as mechanical, electronics, and aerospace engineering.
Learning the right answers to these two simple questions would have two huge benefits: (i) Inventing effective solutions for the notorious software crisis by eliminating infamous spaghetti code, and (ii) Proving Computer Science is a fake science (i.e. paradox), that opens the door to transforming Computer Science into a real science that can address not only problems that have stood unsolved for decades (e.g. human-like computer intelligence that can be achieved by gaining valid scientific knowledge about the functioning and anatomy of bio-neurons in bio-neural networks) but also problems of the future such as bio-cellular computing, which cannot be solved by fake scientists or practitioners of fake science.
If a problem requires acquiring valid scientific knowledge, it is impossible for fake scientists practicing fake science to acquire such valid scientific knowledge essential to solving the problem. To provide tangible proof, I invented effective solutions for the infamous software crisis by gaining and using valid scientific knowledge that can provide the right answers to these simple questions about components, where scientific knowledge implies knowledge that clearly falls under the realm of science and is acquired without violating the core principles and proven rules of the scientific method.
I have been requesting software researchers to find the right answers to these simple questions for over a decade, and my request has been seen as heresy. Please see the attached PDF.
Why does the software research community find it repugnant or heretical when asked to recognize reality and truth objectively? I feel any scientist must be ashamed of himself if he finds such a request repugnant or heretical and resorts to snubbing and personal attacks.
I am looking for a journal related to Psychology and Computer Science, with IF < 2.0. If anyone has published related work or knows of any such journal, that would be nice.
I need suggestions for a good book that covers the basics of Blockchain Technology that will be prescribed for BS Computer Science students.
I need an open-access journal in the field of computer science (cloud computing) that
- does not limit the number of pages of the submitted article
- does not have a processing or publishing fee
- is indexed in JCR or Scopus
In this regard, I reviewed the journals indexed in DOAJ, but no suitable candidate was found.
Is it possible, now or in the future, to create an artificial intelligence that will draw knowledge directly from the analysis of Internet resources and learn from it?
Can anyone suggest payment-based, fast-track, assured-publication journals in computer science or cybersecurity, indexed in SJR or JCR?
I have been through some discussions regarding survey paper writing tips and tricks. However, these are very generic. I want to know how to write a survey paper on computer science topics (e.g., blockchain, Internet of Things, and so on). I have the following queries regarding the aforementioned concerns.
- How should I design the flow of the survey paper?
- What should the minimum length of the survey paper be?
- How do I pick a reference paper, and which criteria should be the first concern when selecting it? What is the minimum number of references I should include?
- Is it necessary to propose an idea in the paper? If yes, is it then necessary to show a performance evaluation of the proposed scheme?
- While writing a survey paper, which things should I focus on or take care of?
Please share your experience regarding this.
Thanks for your time and input.
Thanks in advance.
I am developing a small computer program "kendo" (
For mzML files, I haven't found a complete list of the "accession" codes defining specific parameters (e.g. <cvParam cvRef="MS" accession="MS:1000511" name="ms level" value=""/>; <cvParam cvRef="MS" accession="MS:1000127" name="centroid spectrum" value=""/>; <cvParam cvRef="MS" accession="MS:1000285" name="total ion current" value=""/> ...)
Does anybody have such a list so I can generate clean mzML files?
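As far as I know, these accession codes are defined in the PSI-MS controlled vocabulary maintained by HUPO-PSI (the psi-ms.obo file). For pulling the accession/name pairs out of a file, a namespace-agnostic standard-library sketch, using the cvParam examples above, might look like:

```python
import xml.etree.ElementTree as ET

# A fragment with the cvParam elements mentioned above. Real files wrap
# these in the mzML namespace; matching on the local tag name sidesteps that.
snippet = """<spectrum xmlns="http://psi.hupo.org/ms/mzml">
  <cvParam cvRef="MS" accession="MS:1000511" name="ms level" value="1"/>
  <cvParam cvRef="MS" accession="MS:1000127" name="centroid spectrum" value=""/>
  <cvParam cvRef="MS" accession="MS:1000285" name="total ion current" value="1.2e6"/>
</spectrum>"""

root = ET.fromstring(snippet)
params = {
    el.get("accession"): el.get("name")
    for el in root.iter()
    if el.tag.rsplit("}", 1)[-1] == "cvParam"   # strip any namespace prefix
}
print(params)
```

Running the same extraction over a known-good mzML file would give you a working subset of codes to validate your generated files against.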
Thank you !
One interpretation includes the following explanation:
Application of computer science and technology, and special-purpose scanners, for recognition of signals obtained by excitation of magnetic fields in electrotechnical devices, apparatus, and measurement.
I understand vaguely that the first author is supposed to be the one who "did the most work", but what counts as "work" in this comparison? Does "most" mean "more than all the other coauthors together" or just "more than any other coauthor"? What happens when the comparison is unclear? How often is "did the most work" the actual truth, versus a cover story for a more complex political decision?
I realize that the precise answer is different for every paper. I'm looking for general guidelines for how an outsider (like me) should interpret first authorship in your field. Pointers to guidelines from journals or professional societies would be especially helpful.
Kindly suggest some SCIE-, ESCI-, or Scopus-indexed computer science journals that are paid but give a fast response.
There can be any number and type of truth values, not just two or three; it depends on the finesse desired. Information processing and communication seem to be described by a tri-state system or more in classical systems such as FPGAs, ICs, CPUs, and others, in multiple applications programmed in SystemVerilog, an IEEE standard. This has replaced the Boolean algebra of the two-state system indicated by Shannon, also in gate construction with physical systems. The primary reason, in my opinion, is dealing more effectively with noise.
Although, constructionally, a three-state system can always be embedded in a two-state system, efficiency and scalability suffer. This should be more evident in quantum computing, offering new vistas, as explained in the preprint
As new evidence accumulates, including from modern robots interacting with humans in complex cyber-physical systems, this question asks first whether only the mathematical nature is evident as a description of reality, while a physical description is denied. Thus, ternary logic should replace the physical description of choices, with a possible third truth value, which one already faces in physics, biology, psychology, and life, as something more than a coin toss to represent choices.
The physical description of "heads or tails" is denied in favor of opening up to a third possibility, and so on, to as many possibilities as needed. Are we no longer black or white, but accept a blended reality as well?
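As a concrete toy model of the three-valued case discussed above, strong Kleene logic treats the third value as "unknown" (here `None`) and orders the values false < unknown < true:

```python
# Strong Kleene three-valued logic: True, False, and None ("unknown").
ORDER = {False: 0, None: 1, True: 2}
VALUE = {0: False, 1: None, 2: True}

def and3(a, b):
    return VALUE[min(ORDER[a], ORDER[b])]   # conjunction = minimum

def or3(a, b):
    return VALUE[max(ORDER[a], ORDER[b])]   # disjunction = maximum

def not3(a):
    return VALUE[2 - ORDER[a]]              # negation flips the ordering

print(and3(True, None))   # → None: the unknown input can still decide the outcome
print(or3(True, None))    # → True: one true input settles a disjunction
print(not3(None))         # → None
```

This mirrors the tri-state intuition from hardware: a known value dominates only where the unknown could not change the result.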
Hi everyone! I would like to write my bachelor's thesis on a topic that is currently relevant in finance, marketing, or computer science (or, if possible, a topic spanning all three fields of interest). These are the same fields my bachelor's degree is based on (Bachelor of Science in Economics, Management and Computer Science).
I have some broad ideas about topics, for example: the link between brand equity and financial performance; the effects of aggressive marketing on financial markets; the new generation of traders (COVID has increased the number of retail investors with no previous experience); and machine learning applied to behavioral finance (I really enjoy the last two topics but have no idea how to connect them).
Obviously, any kind of suggestion, whether a new topic (broad or specific) or a way to develop the ones cited above, would be greatly appreciated.
Thank you in advance!
I would appreciate it if you could suggest studies based on natural language processing (NLP) that help assist in medical emergency cases.
My research focuses on the use of NLP and voice recognition for medical emergency assistance.
So I would appreciate any suggestions for contributions that could be made in this area.
I'm currently working on my undergraduate thesis, in which I develop a genetic algorithm that finds suboptimal 2D positions for a set of buildings. The solution representation is a vector of real numbers in which every group of three elements represents the position and angle of one building: the first element is the x position, the second the y position, and the third the angle. A typical solution representation looks like:
[ building 0 x position, building 0 y position, building 0 angle, building 1 x position, ... ]
I have already managed to create a genetic algorithm that produces suboptimal solutions; it uses uniform crossover and discards infeasible solutions. However, it is only fast for small problems (e.g. 4 buildings), and adding more buildings makes it so slow that I think it devolves into brute force, which is definitely not what we want. I also tried keeping infeasible solutions in the population with a fitness penalty, but that only produced best solutions worse than when I threw the infeasible ones away.
Now I am looking for a crossover operator that can speed up the genetic algorithm and allow it to scale to more buildings. I have already experimented with arithmetic crossover and box crossover, but to no avail. So I am hoping the community can suggest crossovers I could try. I would also appreciate any suggestions for improving the genetic algorithm beyond the crossover operator.
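One commonly used family of real-coded operators worth trying on this encoding is blend crossover (BLX-α), which samples each child gene uniformly from an interval extended slightly beyond the two parent genes. A minimal sketch follows; the function name and the α = 0.5 default are illustrative choices, not taken from the question:

```python
import random

def blx_alpha_crossover(parent1, parent2, alpha=0.5):
    """BLX-alpha (blend) crossover for real-valued chromosomes.

    For each gene, the child value is drawn uniformly from an interval
    that extends alpha * |g1 - g2| beyond the parents' values on both
    sides, allowing exploration around (not just between) the parents.
    """
    child = []
    for g1, g2 in zip(parent1, parent2):
        lo, hi = min(g1, g2), max(g1, g2)
        spread = alpha * (hi - lo)
        child.append(random.uniform(lo - spread, hi + spread))
    return child

# Example: two parents, each encoding two buildings as (x, y, angle)
p1 = [1.0, 2.0, 0.0, 5.0, 5.0, 90.0]
p2 = [2.0, 3.0, 45.0, 6.0, 4.0, 180.0]
print(blx_alpha_crossover(p1, p2))
```

Since positions and angles live on different scales, it may help to clip each child gene back into its variable's legal range after crossover rather than discarding the whole solution.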
I am working on an intervention study with 3 different groups of students. One group, the intervention group, attends a seminar with a practical phase and computer science content. Another group has only a theory seminar with computer science content, and the last group has a seminar with unrelated content; this last group is used to check how stable the constructs are. The 3 measurement points are equally spaced as pre-, inter-, and post-tests in a quasi-experimental setting. Latent growth curve modelling doesn't fit! Is there a method that uses the strength of the 3 measurement points with a small sample size?
I am seeking recommendations from the ResearchGate community regarding studies on IT and artificial intelligence solutions for assisting in medical emergency cases.
What perspectives and future work could be pursued in this area?
Can anyone suggest the merits and demerits of GAN-based versus classical data augmentation in plant leaf disease detection and classification systems?
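For context on the comparison: classical augmentation usually means cheap, label-preserving geometric transforms applied on the fly, whereas a GAN must first be trained before it can synthesize new leaf images. A minimal sketch of the classical side, representing an image as a plain 2-D list purely for illustration:

```python
def classical_augment(image):
    """Return simple label-preserving geometric augmentations of an image.

    image: 2-D list of pixel values (rows). These transforms need no
    training and cannot hallucinate unrealistic disease patterns,
    unlike GAN-generated samples.
    """
    h_flip = [row[::-1] for row in image]                # horizontal flip
    v_flip = image[::-1]                                 # vertical flip
    rot90 = [list(row) for row in zip(*image[::-1])]     # rotate 90 deg clockwise
    return [h_flip, v_flip, rot90]

img = [[1, 2],
       [3, 4]]
augmented = classical_augment(img)
print(len(augmented))  # → 3
```

The trade-off in one line: classical transforms are fast and safe but only re-arrange existing pixels, while a GAN can add genuine diversity at the cost of training effort and the risk of artifacts.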
Good morning everyone. As part of my research work, I designed a network that extracts the leaf region from real-field images. When I searched for performance evaluation metrics for segmentation, I found a lot of metrics. Here is the list:
1. Similarity Index = 2*TP/(2*TP+FP+FN)
2. Correct detection Ratio = TP/(TP+FN)
3. Segmentation errors (OSE, USE, TSE)
4. Hausdorff Distance
5. Average Surface distance
6. Accuracy = (TP+TN)/(FN+FP+TP+TN);
7. Recall = TP/(TP+FN);
8. Precision = TP/(TP+FP);
9. Fmeasure = 2*TP/(2*TP+FP+FN);
10. MCC = (TP*TN-FP*FN)/sqrt((TP+FP)*(TP+FN)*(TN+FP)*(TN+FN));
11. Dice = 2*TP/(2*TP+FP+FN);
12. Jaccard = Dice/(2-Dice);
13. Specificity = TN/(TN+FP);
14. Sensitivity = TP/(TP+FN);
Please suggest which performance evaluation metrics are best suited for my work. Thank you.
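One observation on the list: several entries coincide. The Similarity Index, F-measure, and Dice coefficient are the same formula, and Jaccard is a monotonic function of Dice, so reporting one overlap metric (plus a boundary metric such as Hausdorff distance) usually suffices. A minimal sketch computing the overlap-based metrics from pixel counts; the function name and example counts are illustrative:

```python
import math

def segmentation_metrics(tp, fp, tn, fn):
    """Compute overlap-based metrics from pixel-level confusion counts
    between a predicted mask and a ground-truth mask."""
    dice = 2 * tp / (2 * tp + fp + fn)
    return {
        "dice": dice,                  # same as F-measure / Similarity Index
        "jaccard": dice / (2 - dice),  # equals TP / (TP + FP + FN)
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),      # same as sensitivity
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "mcc": (tp * tn - fp * fn) / math.sqrt(
            (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
        ),
    }

m = segmentation_metrics(tp=80, fp=10, tn=100, fn=10)
print(round(m["dice"], 3))  # → 0.889
```

Note that accuracy can be misleading when the leaf occupies a small fraction of the image (many easy true negatives), which is why Dice/Jaccard and MCC are generally preferred for segmentation.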
Apparently, the largest technology companies are already working on a new type of electronic gadget which, in the next stage of the current technological revolution known as Industry 4.0, will replace smartphones.
Therefore, I ask you: what type of electronic gadget will replace smartphones in the future?