Questions related to Computer Science
I am currently writing my undergraduate thesis, for which I need to design a machine learning model to estimate the soil data between two boreholes based on the bore log data. Apologies in advance for any terminology mistakes, as I am a computer science student.
Some journals listed as good in the SCImago Journal & Country Rank (http://www.scimagojr.com), with relatively good h-indexes (for example, the Journal of Computer Science from Science Publications), are identified as possibly predatory journals in Beall's list. Which one should I follow?
The emergence of a new, highly pathogenic betacoronavirus, SARS-CoV-2 (2019-nCoV), is responsible for the COVID-19 pandemic health emergency. Accordingly, SARS-CoV-2 represents a serious global health warning characterized by high mortality, a high contagion rate, and a lack of clinically approved drugs and vaccines. In order to find safe and effective therapeutic options to treat this infectious disease, computer science could play an extremely relevant role in better understanding the virus's pathogenic mechanism as well as in proposing novel therapeutic strategies. Accordingly, due to progress in computer science, in silico methodologies in medicinal chemistry, pharmacology, biology, genetics, and virology cover relevant tasks in modern research in these fields. Furthermore, given the current global health warning, such computational techniques could speed up research in order to provide innovative and targeted approaches to fight the coronavirus emergency. In light of this, this Special Issue will highlight progress in terms of drug discovery, virus biology, and epidemiology to provide researchers with the most innovative computer-driven methodologies for fighting SARS-CoV-2.
For this Special Issue of Computation, we invite researchers in the fields of computational drug discovery (including drug repurposing approaches), computational biology/genetics, virology, bioinformatics, and epidemiology to submit original research, short communications, and review articles related to the use of computation to fight SARS-CoV-2.
Dr. Simone Brogi and Prof. Vincenzo Calderone, Guest Editors
I know that most of you have good experience writing a strong research paper with novel and original ideas and results. I want some strategies and steps to start writing again because, since my master's thesis, I have not written anything particularly strong.
Therefore, please share your strategies rather than links, because I can find plenty of links myself. I am asking the experts in my field (Computer Science), and especially in Deep Learning!
Thanks all in advance!
Please, I need suggestions for a research-based topic that would help me achieve a great result for my dissertation.
I hope you are healthy and safe during this quarantine.
For any machine learning model, we evaluate the performance based on several criteria, and the loss is among them. We all know that an ML model:
1- Underfits, when the training loss is much larger than the testing loss.
2- Overfits, when the training loss is much smaller than the testing loss.
3- Performs very well when the training loss and the testing loss are very close.
My question is directed at the third point. I am running a DL model (a 1D CNN), and I have the following results (note that my initial loss was 2.5):
- Training loss = 0.55
- Testing Loss = 0.65
Nevertheless, I am not quite sure whether the results are acceptable, since the training loss is a bit high (0.55). I tried to lower the training loss by giving more complexity to the model (increasing the number of CNN layers and MLP layers); however, this is a very tricky process, as whenever I increase the complexity of the architecture, the testing loss increases and the model easily overfits.
Finally, to say that our model performed very well, should we get a low training loss (say, less than 0.1), or is my case still considered good too?
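Whether 0.55 is "low enough" depends on the loss function and the task, so there is no universal cutoff; what can be codified is the relative gap between the two losses. A minimal sketch of that comparison, where the 25% tolerance is an arbitrary assumption rather than an established rule:

```python
# Rough heuristic for judging the train/test loss gap.
# The gap_tol threshold below is an arbitrary assumption, not a rule.

def relative_gap(train_loss, test_loss):
    """Relative generalization gap: how much worse the test loss is."""
    return (test_loss - train_loss) / train_loss

def diagnose(train_loss, test_loss, gap_tol=0.25):
    """Classify the fit using the relative gap (gap_tol is a made-up cutoff)."""
    gap = relative_gap(train_loss, test_loss)
    if gap > gap_tol:
        return "possible overfitting"   # test loss much higher than train loss
    return "losses are close"           # generalization gap looks acceptable

print(diagnose(0.55, 0.65))  # the numbers above: gap is roughly 0.18
```

With 0.55 vs 0.65 the relative gap is about 18%, which this sketch treats as acceptable; whether the absolute value 0.55 is good still depends on the loss scale of the task.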
I look forward to hearing from you,
Thanks and regards,
If I am going to submit my work to a CS conference and also make my Python code publicly available, is it absolutely necessary to set a seed in my Python code?
I ran an experiment but realized that I forgot to set a seed. Will my publication be rejected by the conference because the seed is not set?
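For what it's worth, setting a seed is about reproducibility rather than a formal submission requirement; what matters is that others can rerun the code and obtain the same numbers. A minimal illustration with Python's built-in RNG (framework-specific seeding is only noted in comments):

```python
import random

def seeded_run(seed):
    random.seed(seed)          # fix Python's built-in RNG
    return [random.random() for _ in range(3)]

# Same seed -> identical results across runs, which is what reviewers
# usually care about (reproducibility), not the specific seed value.
assert seeded_run(42) == seeded_run(42)

# For NumPy / TensorFlow / PyTorch you would additionally seed those
# libraries' own RNGs (e.g. numpy.random.seed, tf.random.set_seed,
# torch.manual_seed), since each keeps separate random state.
```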
If we have a problem and we build four methods to solve it, should we say we built a model that contains four methods? Or should we use another word instead of "model"?
I have the following situation: I have a paper X about topic Y. For paper X, I did a forward search with Web of Science (checking all new papers which cite paper X). I then downloaded all articles identified via the forward search (approx. 1'000 papers). Now I would like to sort these papers according to the frequency of specific keywords used.
For example: I found paper Z via the forward search (so paper Z cites paper X, which is about topic Y). Now I want to check whether paper Z is also concerned with topic Y or just refers to it in passing. For that, I search for specific keywords which correspond to topic Y. According to the frequency of the specific keywords mentioned in paper Z, I want to classify it into the category "relevant" or "not relevant". Now, how can I determine the threshold for the keywords? That is, if paper Z only uses the specific keyword once, it is most probably not relevant to topic Y. But if it mentions the specific keyword 20 times, it is probably relevant to topic Y.
Is there a recognized methodology to determine or approximate a threshold for keyword frequency that allows one to distinguish whether a paper is relevant to topic Y or not?
With this approach, I hope to reduce the 1'000 papers to those which are about topic Y.
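There is no single recognized threshold rule; a common pragmatic approach is to count keyword occurrences (optionally normalized by document length) and calibrate the cutoff on a small hand-labelled sample of the downloaded papers. A minimal stdlib sketch, where the threshold of 5 is purely a placeholder to be calibrated:

```python
import re
from collections import Counter

def keyword_count(text, keywords):
    """Count total occurrences of the given lowercase keywords in text."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(words)
    return sum(counts[k] for k in keywords)

def is_relevant(text, keywords, threshold=5):
    # threshold is a hypothetical cutoff; in practice it could be
    # calibrated on a small hand-labelled subset of the 1'000 papers
    # (pick the value that best reproduces the manual labels).
    return keyword_count(text, keywords) >= threshold
```

A refinement worth considering is dividing the count by the paper's word count, so that long papers are not classified as relevant merely because they are long (this is the idea behind term-frequency weighting).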
Can someone suggest a fully funded distance-learning postgraduate diploma in computer science or a related area? It would be a great help 😊
Hi! I am a student at the University of Maribor, Faculty of Electrical Engineering and Computer Science (specifically Electronics). I would like to know more about how I can apply BDDs or ZBDDs to electronics problems. Any concrete ideas?
Thanks, bGood 😉
Currently there is a trend to apply virtual, ICT-based methods to teaching in higher education. Professors frequently face the situation where they have been teaching face-to-face and want or need to do the same in distance education, in a virtual modality. Does anybody have practical experience with, or know of, a specific methodology for Computer Science courses?
My research focuses on assessing an organization's security culture, and some people have advised me to use fuzzy methods. My question is: are fuzzy AHP and TOPSIS related to computer science? My paper must be related to computer science.
I am an MSc Computer Science and Technology student, and my research area is software testing. My professor is really interested in fault localization, but I am having huge challenges performing experiments. Could you please recommend a company where I could intern, so that I can gain industrial experience in software testing?
I am currently in China, and I can travel to any country just to acquire the knowledge.
Thanks in advance
To keep it precise and simple: what topics must be included, and what learning path must be followed, when designing a computer science syllabus for the future?
I am looking to enter the field of Complex Systems or Complexity Science as a postgraduate in my Computer Science faculty, specifically focusing on Agent-Based Modeling. However, I am unclear on the current problems being addressed, what the research trends are, and how it may be applied in industry from a CS perspective, because there is currently no one in my faculty involved in Complex Systems. Any clarification on this issue would be greatly appreciated, thank you.
Hi friends, I am looking for help deciding on topics for a PhD proposal in Data Science. I am from the BPFI domain, working in retail banking. I have had the opportunity to work on some compliance and regulatory projects, e.g., digital transformation, replatforming, data migration, BASEL, SEPA, PSD2, CRM, and GDPR, and I recently completed my master's in computer science (Data Analytics).
Please also advise me on the challenges and on post-PhD prospects in academia as well as in industry jobs.
Thanks in advance !
In general, a first principle is a basic assumption that cannot be deduced any further. Different fields of human activity have different definitions of first principles; for engineering, for example, they are the laws of physics. Great innovation in science and engineering often happens when a new idea is not built on top of the current state of the art or commonly accepted technology. Instead, the problem is approached from those first principles, in other words from "what we know for sure", and rebuilt from there.
So, what are the first principles known so far in computer vision, particularly in object detection? Are there fundamental "can do"s and "can't do"s that have their roots and proofs in computer science, physics, or mathematics?
If you state one in an answer, please also provide its basis.
What are the pros and cons of each?
If you have applied, what was your experience?
What is the funding limit?
Is the call issued every year or twice a year?
What is the range of project funding durations?
How competitive is the funding?
Is it single-university funding or a collaborative kind of funding?
“AIs will colonize and transform the entire cosmos,” says Juergen Schmidhuber, a pioneering computer scientist based at the Dalle Molle Institute for Artificial Intelligence in Switzerland, “and they will make it intelligent.”
What do you think? Do you think AI will change everything in life? Do you believe that AI will become a threat to humans or not?
And if yes, how near is that day?
I am very happy to tell you that I am interested in networking and would like to continue my research in the field of wireless networks. I need some suggestions regarding where to start my journey in the field of computer science and networking. I would also be thankful if you could suggest current research trends and their evolution.
In layman's terms, Artificial Intelligence is an area of computer science in which computers are developed to behave much the way humans do. The research topic will aim to illustrate the various levels of AI implemented in various companies to date, the future projects that may have an impact, and the argument over whether AI will 'support' the HR industry or 'replace' the current workforce, and to what extent the HR industry will be impacted globally. Which skills can be automated and which cannot are a few of the things I want to establish by the end!
I need help, please; can anybody help me find a good master's thesis topic in the field of computer science/IT?
The topic should relate to improving wireless network performance/security using one of the artificial intelligence techniques.
Any help would be appreciated.
Thank you in advance.
There are a lot of problems in medicine that need the newest technologies from computer science fields, such as machine learning and deep learning, to solve them.
Can anyone mention some of these problems that remain unsolved?
I'm looking forward to getting some suggestions for writing a research proposal about car plate detection using DL. I am looking for the following:
1- Writing style.
2- Topics for a Ph.D. research proposal in deep learning.
3- How to engage the reader's attention in my research proposal?
4- Examples of research proposals using deep learning techniques!
AI has been an interesting topic in other fields, especially in instrumentation engineering and computer science. Can we, as geotechnical engineers, use AI in our field? If yes, how can we use it?
Journals, magazines, and letters all publish scientific articles. What is the technical difference between these article types, and how is their recognition assessed?
Writing style, technical soundness, number of words, etc.
Surfing the web, I was not able to find a simple and effective answer to my question. I need to speed up code I wrote on GIS data using several packages, such as raster, rgdal, biomod2, and ncdf4. Do I need to wait for somebody else (or myself) to develop a NEW raster (etc.) package that forces R to use the GPU, or is it already possible simply by loading additional packages, such as gpuR or parallel, before my code runs?
All the best
I want to ask what the scope of IoT in agriculture, or smart agriculture, could be if I want to start research in this field.
I also need some suggestions on what the areas of research in this field could be for a person with a computer science background.
Does someone have information on the Summer Student Programme at CERN?
For example: your experience, the conditions there, whether it is desirable or not, whether you think I should sign up, and any other information.
My background is materials engineering.
I am doing an MS in computer science; my subfield is networking. I am interested in doing research in IoT, which is why I need guidance and help selecting a topic for my research. Could anyone please list the best topics according to my interests? Thank you in anticipation.
Can anyone answer this question that has perplexed me for years? What kind of scientific discipline blatantly violates the basic principles or proven rules of the scientific method? What kind of scientist fiercely defends such a blatant violation of the basic principles or proven rules of the scientific method?
The last time a scientific discipline blatantly violated the scientific method was before the 17th century, when researchers fiercely defended the geocentric paradigm in violation of the scientific method. In their defense, most of the basic rules and principles of the scientific method were not yet known or properly established. Most of them were formed, and have since been perfected, by many great philosophers of science and brilliant scientists from the 17th century onward, particularly based on the valuable lessons and insights learned from the painful experience of overturning the geocentric paradigm, which transformed basic science from fake science into real science.
What is a scientific discipline? A discipline can be a scientific discipline if and only if the BoK (Body of Knowledge) in all the published textbooks and accepted research publications for the discipline has been acquired and accumulated without violating the basic principles and rules of the scientific method. The purpose of the modern scientific method is to perfect the quality of knowledge by finding and eliminating imperfections and/or anomalies. The scientific method does not offer a recipe, hints, or guidelines, nor impose restrictions, for doing research to acquire new knowledge; rather, it provides tools to keep scientific research on the right path by detecting mistakes that can divert research efforts onto a wrong path.
Each piece of knowledge in the BoK must be supported by falsifiable proof (backed by evidence and facts), where each piece of knowledge and its proof is open to challenge and is perfected by rigorous testing and empirical validation. The research community of the 17th century blatantly violated this basic scientific rule when it tried to suppress and tacitly sabotage efforts to expose the 2300-year-old unproven, flawed presumption (i.e., that the Earth is at the center) at its very foundation.
Except for computer science, I could not find any evidence that any other scientific discipline has violated the scientific method so blatantly. It is beyond my comprehension why researchers in computer science fiercely defend such a blatant violation of the basic principles or proven rules of the scientific method.
Unfortunately, software researchers have acquired a great deal of invalid BoK by blatantly violating the scientific method. Since it is impossible to solve any problem by relying on invalid knowledge, software researchers concluded that it is impossible to solve certain problems (e.g., real CBD/CBE or real computer intelligence). But it is not hard to solve those problems by acquiring the relevant valid knowledge. Please refer to ValidKnowledge.pdf for more information.
P.S.: I have also failed to find a real scientist who can understand the code of conduct for real scientists: CodeOfConduct.pdf
Hi, I'm working on sequence alignment algorithms. My background is computer science. Given two sequences, what could the maximum length of a gap be, and how many insertions/deletions in one stretch should I consider? I think more than one insertion/deletion in one stretch is useless...? My algorithm will accept a large text file and report the locations in the file where the best regions are found.
When I aligned these two strings on ebi.ac.uk, I got the following result, with 7 matching pairs:
EMBOSS_001   A T C G A C T A A C C A - - - -
EMBOSS_001   - T C A G C T - T C C A G C T A
However, my algorithm reports 8 matching pairs. Which one is better? Please suggest. Many thanks in advance...
Query   A T C G A C T A A C C A
File:   - T C A G C T T C C A G C T A
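For context, tools like the EMBOSS one above use dynamic-programming alignment (Needleman-Wunsch for global alignment), where gap length is limited only by the gap penalty, not by any hard rule; an alignment with more matched pairs is not automatically better, because the score also charges for the gaps and mismatches needed to create those matches. A minimal stdlib sketch with linear gap penalties (the scoring values are arbitrary assumptions, not EMBOSS's defaults):

```python
def global_align(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment.
    Returns (score, aligned_a, aligned_b, matched_pairs)."""
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,
                           dp[i - 1][j] + gap,
                           dp[i][j - 1] + gap)
    # Traceback to recover one optimal alignment.
    out_a, out_b = [], []
    i, j = n, m
    while i > 0 or j > 0:
        sub = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + sub:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j - 1]); j -= 1
    out_a.reverse(); out_b.reverse()
    matches = sum(x == y for x, y in zip(out_a, out_b))
    return dp[n][m], ''.join(out_a), ''.join(out_b), matches

score, a1, b1, matched = global_align("ATCGACTAACCA", "TCAGCTTCCAGCTA")
print(score, matched)
```

The practical answer to "which is better" is therefore: whichever alignment has the higher score under an agreed scoring scheme; changing the gap penalty (e.g., to an affine gap model, which penalizes opening a gap more than extending one) changes which alignment wins, so raw match counts alone cannot decide it.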
As a computer science student, I don't know much about statistical testing. However, recently a lot of work has reported statistical validation of its results. In machine-learning-based prediction of effector proteins, how do you apply statistical tests to validate the results?
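One common option, assuming you have per-fold scores from cross-validation for two methods, is a paired permutation (sign-flip) test on the score differences; Wilcoxon signed-rank and McNemar's test are frequently used alternatives. A stdlib sketch (the fold scores in the comment are made up for illustration):

```python
from itertools import product

def paired_permutation_test(scores_a, scores_b):
    """Exact paired (sign-flip) permutation test on per-fold score
    differences. Returns a two-sided p-value for the null hypothesis
    that the two methods perform equally well.
    Feasible for small fold counts (enumerates all 2**n sign patterns)."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    observed = abs(sum(diffs))
    at_least_as_extreme = 0
    total = 0
    for signs in product([1, -1], repeat=len(diffs)):
        total += 1
        if abs(sum(s * d for s, d in zip(signs, diffs))) >= observed:
            at_least_as_extreme += 1
    return at_least_as_extreme / total

# Hypothetical accuracies over 8 CV folds for two classifiers:
p = paired_permutation_test([0.91, 0.88, 0.93, 0.90, 0.89, 0.92, 0.90, 0.91],
                            [0.85, 0.84, 0.88, 0.83, 0.86, 0.85, 0.84, 0.86])
print(p)
```

If the p-value is small, the observed difference between the two methods is unlikely under the assumption that they perform equally; for many folds, sampling random sign patterns instead of enumerating all of them keeps this tractable.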
Hello everyone! My team and I are working on a mobile application dedicated to promoting mental health, and we need some insight. Please take our survey, and if you would like to help further, you can spread it in your communities. If you have any suggestions, we would love to hear them.
I completed a study in which participants experienced two developed interaction conditions and completed the same questionnaire after each condition. I am trying to analyse the data to look for correlations between question answers and to determine whether the condition had any significant effect on answers.
I have performed a Wilcoxon signed-rank test, which showed no statistically significant difference in answers between the two conditions. However, I have also performed a Spearman correlation test, which showed some statistically significant correlations. This apparent contradiction has me a bit confused.
I have been doing a lot of reading online to work out which tests I could use. If anyone could help shed some light on this, it would be most appreciated.
There seems to be a lot of debate between subject areas, so to clarify: my research is in the field of computer science.
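This is not necessarily a contradiction, because the two tests answer different questions: Wilcoxon signed-rank asks whether one condition produces systematically higher answers, while Spearman asks whether participants who score high in one condition also score high in the other. Both can hold, or fail, independently. A stdlib sketch with made-up paired data that is strongly correlated yet shows no systematic shift:

```python
def ranks(xs):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical paired answers under two conditions: participants are
# consistent with themselves (high association) but neither condition
# is systematically higher (so Wilcoxon would find no difference).
cond1 = [1, 3, 2, 5, 4, 7, 6]
cond2 = [2, 3, 1, 5, 4, 6, 7]
print(spearman_rho(cond1, cond2))  # roughly 0.93
```

So a significant Spearman correlation alongside a non-significant Wilcoxon result simply means participants answered consistently across conditions while the conditions themselves did not shift the answers.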
It is my understanding that machine learning approaches perform best for predicting secondary structures in proteins (with prediction accuracies of up to 80%). However, protein structure prediction with ML relies on finding homologous regions established from previously determined structures, so it won't work for proteins for which no known homologues exist. My background is computer science, and, not being from the field of biochemistry, I wonder whether non-ML methods like improved Chou-Fasman and GOR are still being worked on.
How can machine learning be helpful in developing the production of new materials?
In my opinion, it is very useful in materials engineering.
My field of study: Computer Science
Topics of interest: cyber security, machine learning, and virtualization. But I am fine with other areas too.
Thank you in advance
I would like to announce that we have started a Special Issue on Artificial Intelligence and Blockchain in the IJIMAI journal. Publication in IJIMAI is peer reviewed, open access and free of charge.
Additionally, it was recently announced that IJIMAI is indexed in the Science Citation Index Expanded (Clarivate Analytics), beginning with vol. 4(3), March 2017. The journal will be listed in the 2019 Journal Citation Reports with a 2019 Journal Impact Factor when they are released in June 2020.
If you are working on interesting Blockchain and AI synergies, I would like to invite you to contribute to this SI.
Please, find all the info in the SI dossier:
Research Proposal Special Issue Artificial Intelligence and Blockchain (Intern...
Should you wish to contribute a paper, please submit it by email to either editor.
Is PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) applicable to research in the computer science field? I have noticed that this method of literature review is often used only in medical studies.
If not, what do you suggest as a method for a systematic and structured literature review?
I appreciate your contribution.
- The majority of university professors come from the specific field in which they were trained.
- It is likely that a professor who is an expert in his discipline or professional specialty will be able to transmit that specific knowledge to his students; but nothing ensures that this actually happens.
- At present, university professors are hired not only for having a bachelor's or higher degree, but also for their competence in telecommunications and computer science, these being indispensable tools in all human knowledge and activity.
- University professors who are experts in their discipline and competent in communication and computer science are more likely to have a successful teaching practice.
- Training in pedagogy or education sciences is already indispensable for university professors who want to be better teachers, providing mastery of the methodologies and instruments needed to improve their students' meaningful learning.
These knowledge representation forms can be considered traditional, because they are related to the historically earliest, mainly symbolic, period of Artificial Intelligence. Are they applied widely enough to justify their inclusion in a general undergraduate course for Computer Science students?
I'm trying to run a TensorFlow program that requires at least a CUDA-capable GPU (I think).
Google Cloud has a 300-dollar voucher for their parallel processing service. From what I understand, the 300 dollars can run out really quickly.
I may go and buy a CUDA-capable GPU.
Does anybody have a computer that can run OpenAI's GPT-2 (see below)? Perhaps we can collaborate on a paper.
I was thinking of contacting somebody in my school's computer science department but don't want to ruffle any feathers. Besides, I am not sure they even have sophisticated computer equipment. No harm in asking.
I built a framework that describes how culture influences requirements engineering activities. I want to represent the framework using set theory and mathematical modelling. However, I do not have the basics to build it. Are there any articles, books, or published papers on software engineering or computer science that might help me understand the basics of representing a framework using set theory and mathematical modelling?
One of my potential research areas, when I was working on my M.S. in Computer Science, was data sonification. There’s so much information on data visualization, and it is a tool that is used in so many industries. But there seems to be a lot less interest in data sonification. While I admit that the use cases are more limited, I think there’s still plenty of research that can be done.
For one thing, time series data may be more usefully encoded, at least in some cases, as sound, rather than a visualization.
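As a toy illustration of the idea, a time series can be mapped to pitch and written out as audio using nothing but the standard library (the frequency range, note duration, and linear value-to-pitch mapping here are arbitrary choices, not an established sonification standard):

```python
import math
import struct
import wave

def sonify(values, path, rate=44100, note_dur=0.2, fmin=220.0, fmax=880.0):
    """Map each data point to a sine-tone pitch and write a mono 16-bit WAV.
    Higher values become higher pitches (linear mapping)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0               # avoid division by zero
    frames = bytearray()
    for v in values:
        freq = fmin + (v - lo) / span * (fmax - fmin)
        for n in range(int(rate * note_dur)):
            sample = int(32767 * 0.5 * math.sin(2 * math.pi * freq * n / rate))
            frames += struct.pack('<h', sample)  # 16-bit little-endian PCM
    with wave.open(path, 'wb') as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))

# e.g. sonify([3, 7, 5, 9, 4], "series.wav") produces a rising/falling melody.
```

Even this crude mapping makes trends and outliers audible; real sonification work would also consider amplitude, timbre, and perceptual (logarithmic) pitch scales.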
I was thinking about what would be truly impossible problems in machine learning, even with unlimited data. I quickly thought of the well-known halting problem, which is known to be undecidable.
However, humans are in many cases able to decide the problem by analyzing code. While my initial instinct is that machine learning could never fully solve the problem, would it be reasonable to believe that it could improve on human performance?
After all, any program can be converted to a Turing machine, which corresponds to a grammar. A sort of compiler could be used to build a meaning representation, which could then be used to train a machine learning application.
And could working on such problems allow us to study better models that incorporate some sort of reasoning capability?
In research (e.g., in computer science) that involves enhancing an existing algorithm or introducing a new algorithm: 1. What will the research design involve? 2. How does one classify the research method (since a research method is either quantitative, qualitative, or mixed methods)?
Do you think scientists are more susceptible to concluding that a false positive is a real positive or that a false negative is a real negative?
As I understand,
A) A false positive is when a scientist concludes that an experiment/assay succeeded when actually it failed.
B) A false negative is when a scientist concludes that an experiment/assay failed when actually it succeeded.
A scientist can waste a lot of resources on a false positive if (s)he doesn't notice that the material (s)he is testing is not what (s)he thinks it is. All observations then become irrelevant to what you think you are studying, as does anything you publish about it.
A scientist could dismiss a powerful medical drug by falling for a false negative. The drug works; you just didn't test its properties well, and you never return to it again. Are theorists more susceptible to falling for false positives and experimentalists more susceptible to falling for false negatives, for instance? For theorists, maybe we are talking about models more than assays.
Also do you think scientists are conditioned to look out for false positives or false negatives more so? Which one is more dangerous to science?
Nearly all experiments and theories require optimization, so false positives and false negatives are certainly possible occurrences.
How does one write an effective research proposal in computer science and engineering so that it has the maximum probability of being selected? What kinds of proposals are given more preference? What are the parameters on which a research proposal is evaluated?
Could anyone please suggest which one is easier to get accepted for conference proceedings inclusion?
Lecture Notes in Computer Science(LNCS), Lecture Notes in Artificial Intelligence (LNAI), Lecture Notes in Bioinformatics (LNBI), LNCS Transactions, Lecture Notes in Business Information Processing (LNBIP), Communications in Computer and Information Science (CCIS), Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering (LNICST), and IFIP Advances in Information and Communication Technology (IFIP AICT), formerly known as the IFIP Series.
The genetic algorithm can be modified in different ways. Even Goldberg's book, Genetic Algorithms in Search, Optimization and Machine Learning, specifies various enhancements that can be made to genetic search.
Which exact method is implemented in Weka for genetic search for attribute selection? The software does refer to Goldberg's book, but I'm still not sure about the deeper specifics.
Some of these specifics are:
- the precise fitness evaluation function;
- whether a fitness modification that adds weight to diversity is implemented;
- the structure encoding of the chromosome;
- the type/variant of crossover implemented: single-point or multi-point, and which point exactly.
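Weka's exact GeneticSearch internals would need to be confirmed against its source code; what can be sketched here is the Goldberg-style "simple GA" that such implementations typically start from. In the sketch below, the population size, crossover/mutation rates, roulette-wheel selection, and the onemax fitness are illustrative assumptions (in Weka the fitness would come from the chosen attribute-subset evaluator, not from a toy function):

```python
import random

def genetic_search(fitness, length, pop_size=20, gens=60,
                   pc=0.6, pm=0.01, seed=0):
    """Minimal generational GA: bitstring chromosomes, roulette-wheel
    selection, single-point crossover, per-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        fits = [fitness(ind) for ind in pop]
        total = sum(fits) or 1
        def pick():
            # Roulette-wheel (fitness-proportionate) selection.
            r = rng.uniform(0, total)
            acc = 0
            for ind, f in zip(pop, fits):
                acc += f
                if acc >= r:
                    return ind
            return pop[-1]
        nxt = []
        while len(nxt) < pop_size:
            a, b = pick()[:], pick()[:]
            if rng.random() < pc:                 # single-point crossover
                pt = rng.randrange(1, length)
                a, b = a[:pt] + b[pt:], b[:pt] + a[pt:]
            for child in (a, b):
                # Per-bit mutation with probability pm.
                child[:] = [g ^ 1 if rng.random() < pm else g for g in child]
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

# Toy attribute-selection analogue: maximize the number of selected bits.
best = genetic_search(sum, length=16)
```

For attribute selection, each bit would mark one attribute as included or excluded, and the fitness would be the subset evaluator's merit score for that attribute subset; whether Weka adds diversity weighting or a particular crossover variant is exactly the kind of detail only its source code settles.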
I am looking for independent recommendations.
It's based on my research paper; the link is below.
Preferably somebody from a computer science/cyber-security background.
If you agree, I will provide the letter to sign; you just have to review it, hand-sign it, and scan it back to me.
Thank you so much for your time and consideration.
I am currently studying for a master's in software engineering and management and am looking for thesis topics related to Android.
It would be great if you could provide some ideas.
I need to write a proposal and then apply for a PhD. I am interested in machine learning within the computer science field, so I need to know what the latest topics are and what has already been done, and then find a problem for my proposal. Any help in starting to read about the latest work done in machine learning and the newest topics would be appreciated.
I would like to investigate the effect of borehole drilling on the environment, relating to vibration of the land and possible side effects. My area of concentration is Intelligent Systems Engineering in the computer science field.
I am working on the different use and perception of different terms for describing a procedure/activity to solve a problem from the point of view of different research areas; these terms are 'method', 'tool', 'strategy', 'approach', 'guideline', 'framework', 'methodology', and similar. For this purpose, I plan to compare the above terms, their definitions, and their usage from the point of view of product development research with those of business management research as well as human-computer interaction/computer science. My hypotheses are that they 1) name similar activities differently and 2) in some cases use different definitions for the same terms. The definitions I am looking for are usually published in standard works and textbooks. However, I find it difficult to find these for the two disciplines that are unfamiliar to me, business administration and computer science. Can someone help me?
I would like to know of ISI-indexed journals that publish robotic motion planning problems, including robotics journals and computer science journals. Moreover, it is very important that their review process is not too long.
Should we consider data science a subject under computer science or under statistics, if we have to choose one of these?