Science topics: Computer Science
Explore the latest questions and answers in Computer Science, and find Computer Science experts.
Questions related to Computer Science
If we assume 20 is the upper bound of an objective function (OF) based on permutations of a square matrix (n×n), but the minimum value of the OF is 22,
and we run a random operator that computes the OF over permutations (number of possible solutions = n!):
with regard to constrained optimization, how can we make the computation report an error instead of running in an infinite loop?
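One common safeguard is to cap the number of random trials and raise an error once the budget is exhausted. A minimal sketch in Python (the objective, bound, and sizes below are hypothetical stand-ins for the question's setup):

```python
import random

def random_permutation_search(objective, n, upper_bound, max_trials=10000):
    """Sample random permutations; stop when the objective meets the bound,
    or raise an error once the trial budget is exhausted (no infinite loop)."""
    best = None
    for _ in range(max_trials):
        perm = list(range(n))
        random.shuffle(perm)
        value = objective(perm)
        if best is None or value < best:
            best = value
        if value <= upper_bound:  # constraint satisfied
            return perm, value
    raise RuntimeError(
        f"no permutation met the bound {upper_bound} in {max_trials} trials "
        f"(best value found: {best})")

# Hypothetical objective whose minimum (22) exceeds the bound (20), so the
# search must terminate with an error instead of looping forever.
try:
    random_permutation_search(lambda p: 22 + p[0], n=5, upper_bound=20,
                              max_trials=1000)
except RuntimeError as err:
    print(err)
```

Since n! grows so quickly, a trial cap (or a wall-clock timeout) is the standard way to guarantee termination when the constraint may be unsatisfiable.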
Kindly suggest some SCIE-, ESCI-, or Scopus-indexed computer science journals that charge a fee but have a fast response time.
Information processing and communication can be described by a tri-state system in classical devices such as FPGAs, ICs, and CPUs, across multiple applications programmed in Verilog, an IEEE standard. This has replaced the Boolean algebra of the two-state system introduced by Shannon, including in gate construction with physical systems. The primary reason, in my opinion, is that it deals more effectively with noise.
Although a three-state system can always be embedded constructionally in a two-state system, efficiency and scalability suffer. This should be even more evident in quantum computing, as explained in the preprint
As new evidence accumulates, including from modern robots interacting with humans in complex cyber-physical systems, this question first asks whether only the mathematical nature is evident as a description of reality, while a physical description is denied. Thus, ternary logic should replace the physical description of choices, with a possible third truth value, which one already faces in physics, biology, psychology, and everyday life, as something more than a coin toss to represent choices.
The physical description of "heads or tails" is denied in favor of opening up to a third possibility, and so on, to as many possibilities as needed. Are we no longer black or white, but accept a blended reality as well?
Hi everyone! I would like to write my bachelor's thesis on a topic that is currently relevant in finance, marketing, or computer science (or, if possible, a topic spanning all three fields). These are the fields my bachelor's degree is based on (Bachelor of Science in Economics, Management and Computer Science).
I have some broad ideas for topics, for example: the link between brand equity and financial performance; the effects of aggressive marketing on financial markets; the new generation of traders (COVID has increased the number of retail investors with no previous experience); and machine learning applied to behavioral finance (I really enjoy the last two topics but have no idea how to connect them).
Obviously, any kind of suggestion, regarding a new topic (broad or specific) or the development of the ones cited, would be greatly appreciated.
Thank you in advance!
Dear Friends,
Can you guess which is the most mysterious and enigmatic physical thing among the following: biological cells, light, elementary particles (e.g. electrons, neutrons, or protons), viruses, fungi, bacteria, atoms, chemical compounds, blood cells, or, finally, plain old components, in the context of engineering paradigms (e.g. mechanical, electronics, or aerospace) for designing and building large products (e.g. cars, airplanes, computers, factory machinery, or spacecraft)?
The greatest tools for acquiring and using knowledge for technological progress and great inventions are (i) the scientific method and (ii) mathematics; these two tools provide complementary perspectives for gaining deeper insights. Each acts like a light illuminating mutually complementary sides, perspectives, or dimensions. Since software researchers refuse to use the scientific method (i.e. the light of science), the software community has wasted 50 years, failed to solve the software crisis, and ended up with a useless fake CBE paradigm.
If fake scientists still don’t realize that it is a mistake to blatantly violate scientific principles, they are going to repeat the same kind of mistakes in Artificial Intelligence research and development. Many things will stay enigmatic and end up in a crisis like the software crisis. Many things that are inexplicable, puzzling, or enigmatic from the perspective of mathematics can become crystal clear from the scientific perspective, since the light of the scientific method illuminates the dark spots left by the light of mathematics.
Today, the greatest enigmas for researchers of software and computer science include the answers to simple questions such as: what is meant by a component in the context of all the other engineering disciplines, and what is meant by CBE (Component-Based Engineering), which successfully eliminated the engineering crisis from designing and building large and complex products (unlike the software crisis)?
Even if we know just 30% about bacteria or viruses, each and every piece of that knowledge can be included in the textbooks if and only if it is supported by falsifiable proof. It is impossible to find a piece of such knowledge that is not supported by a falsifiable proof. There is a possibility that 20% of the knowledge in the textbooks might be falsified in the future by counterevidence such as new discoveries or empirical findings.
Since mankind has enough valid knowledge about things such as bacteria, light, and electrons, researchers have been able to invent great things such as treatments for many kinds of infections, fibre-optic networks, and semiconductor chips, respectively.
On the other hand, none of the knowledge about components in the textbooks for computer science or software is either tested (e.g. no one has challenged it) or supported by any falsifiable proof. There is a possibility that up to 20% of that knowledge might be proven valid in the future. However, I am sure that 80% of the knowledge in the textbooks is invalid and not open to challenge.
Even simple things such as what a component is and what is meant by CBE have stayed an enigma and a mystery for many decades, since the knowledge in the textbooks about components is untested and invalid. Fake scientists at the NSF (which I prefer to call the National Fake Science Foundation) feel offended if anyone challenges their myths about so-called components.
Anything with only 30% valid knowledge would be less enigmatic or mysterious than another thing with a huge body of knowledge of which a significant portion is invalid. Hence, plain old components are far more mysterious and enigmatic than invisible things such as viruses, electrons, and biological cells. We made many useful inventions even by relying on that limited valid knowledge.
Can you name any physical thing on Earth that is more mysterious and enigmatic for the scientific community than the plain old components used for designing and building large products, taking into consideration all the knowledge in the published scientific literature and textbooks of all scientific disciplines?
A thing must be the most mysterious and enigmatic if there is a large BoK (Body of Knowledge) for it and a large percentage of that BoK is invalid (e.g. untested and unproven). What makes anything enigmatic is not just the lack of a sufficient valid BoK but also having large chunks of invalid knowledge.
Isn’t it fascinating? Even knowledge that is simple to acquire stays mysterious and enigmatic (and creates a paradox and a crisis) if researchers refuse to use the light of scientific principles to illuminate dark spots that lie in the realm of science, since such dark spots can’t be illuminated by the light of mathematics.
I invented solutions for the software crisis by gaining the scientific knowledge essential for understanding the mysterious components that are essential for achieving the elusive and enigmatic CBE paradigm, in the context of all the other engineering disciplines. The fake scientists of computer science foolishly refuse to use the light of the scientific method.
The NSF is supposed to uphold scientific principles and the scientific method, but it is breaking the scientific principles, protocols, and code of conduct for scientific discourse that are essential for the progress of science and technology. Any accepted theory (i.e. a theory, or concepts derived from a theory, that is being used by practitioners of any craft or trade) must be treated as an assumption if it is not supported by a falsifiable proof (backed by repeatable evidence and/or verifiable facts).
The practitioners of astronomy and astrology practiced their trade or craft until the 16th century by relying on the 2300-year-old theory "the Earth is static at the centre" (and on concepts and observations derived from it). Mankind falsely concluded that "the Earth is static at the centre" was a self-evident fact, so no one bothered to support this unproven theory with a falsifiable proof.
Since there was no falsifiable proof for such core first principles in the foundation, it was impossible to challenge the huge BoK (Body of Knowledge) acquired and accumulated over 1800 years for creating the dominant paradigm that relied on those core first principles until the 16th century. The scientific community in the dark ages used illegal circular logic to defend the core first principles.
For example, they used observable facts such as epicycles, the non-uniform speeds of planets, the lack of stellar parallax, and retrograde motions to defend the presumption "the Earth is static at the centre". Countless concepts, observations, and other derived theories in the whole BoK accumulated over 1800 years could be used to defend the belief "the Earth is at the centre".
The scientific method and the protocols and processes for discourse were created and perfected to prevent this. The biggest obstacle to subverting a flawed dominant paradigm is overcoming the illegal circular logic that relies on the huge BoK acquired and accumulated for the paradigm. This can be prevented by having a falsifiable proof for the core first principles at the foundation of any dominant paradigm.
When there is a falsifiable proof and the theory is flawed, it is straightforward to falsify the proof by finding one or more pieces of verifiable and/or repeatable counterevidence. This is why the scientific method was created: it requires that each theory be supported by a falsifiable proof.
Unfortunately, today's software researchers and experts use the huge BoK in the textbooks and published literature that has been acquired and accumulated over the past 50 years by relying on untested and unproven core first principles in a pre-paradigmatic foundation, such as the beliefs about so-called components for software and the notion that computer science is a branch of mathematics.
About 80% of the accumulated knowledge we have in textbooks and other published literature about components for software is untested, unchallenged, and invalid. Having invalid knowledge makes anything enigmatic, mysterious, or paradoxical. Anything becomes more and more enigmatic, mysterious, or paradoxical as it accumulates more and more knowledge of which a larger and larger percentage is invalid.
Every piece of scientific knowledge about any physical thing in a textbook must be well tested, challenged, and supported by a falsifiable proof backed by empirical evidence that is open to challenge. Scientists of computer science should be ashamed of themselves if they feel offended by counterevidence or facts that expose untested or unproven knowledge about the enigmatic components.
Isn’t it pathetic if the NSF (National Fake Science Foundation) doesn’t know or can’t understand basic scientific principles, processes, and the basic code of conduct? I oppose passing “The Endless Frontiers Act (S. 3832)” to fund the Fake Science Foundation until the fake scientists at the NSF understand basic scientific principles and processes and strictly implement the code of conduct for upholding the truth.
I wish to file a court case to block the act (i.e. The Endless Frontiers Act) to prevent tens of billions of dollars from being flushed down the drain by the fake scientists at CISE, since nearly 50% of the US$100 billion goes to the CISE of the Fake Science Foundation.
Best Regards,
Raju Chiluvuri
One of the important aspects of a researcher's career is searching for funding for his or her scientific projects. What kinds of projects in computer science are likely to be supported? What are the most important aspects of a project proposal? What are the best opportunities for a young scientist?
Hi,
I have been through some discussions regarding survey paper writing tips and tricks. However, these are very generic. I want to know how to write a survey paper related to computer science topics (e.g., blockchain, the Internet of Things, and so on). I have the following queries regarding these concerns.
- How to design the flow of the survey paper?
- What will be the minimum length of the survey paper?
- How should I pick a reference paper, and which criteria should be my first concern when selecting one? What is the minimum number of references I should include?
- Is it necessary to propose an idea in the paper? If yes, is it necessary to show a performance evaluation of the proposed scheme?
- While writing a survey paper which things should I focus on or care about?
Please share your experience regarding this.
Thanks for your time and input.
Thanks in advance.
Hello,
I would appreciate it if you could suggest studies based on natural language processing (NLP) that help with assisting medical emergency cases.
Hello,
I am seeking recommendations from the ResearchGate community regarding studies about
IT and Artificial Intelligence solutions for assisting medical emergency cases.
What are the perspectives and future work that could be pursued in this area?
Best regards,
Hello,
My research work focuses on the use of NLP and voice recognition for medical emergency assistance.
I would appreciate it if you could suggest some contributions that could be made in this area.
Best regards,
Dear colleagues,
I am working on an intervention study with 3 different groups of students. One group is the intervention group and receives a seminar with a practical phase and computer science content. Another group has only a theory seminar with computer science content, and the last group has a seminar with other content. The last group is used to check how stable the constructs are. The 3 measurement points are equally spaced as pre-, inter-, and post-tests in a quasi-experimental setting. Latent growth curve modelling doesn't fit! Is there a method that uses the strength of the 3 measurement time points with a small sample size?
Thank you
Martin
Recently I prepared a research work for publication in IJCSSE. The site has been down for weeks now. I suspect the journal could be predatory.
Can anyone suggest the merits and demerits of GANs versus classical data augmentation in plant leaf disease detection and classification systems?
I'm currently working on my undergraduate thesis, in which I develop a genetic algorithm that finds suboptimal 2D positions for a set of buildings. The solution representation is a vector of real numbers where every three elements represent the position and angle of one building: the first element is the x position, the second the y position, and the third the angle. A typical solution representation looks like:
[ building 0 x position, building 0 y position, building 0 angle, building 1 x position, ... ]
I have already managed to create a genetic algorithm that produces suboptimal solutions; it uses uniform crossover and discards infeasible solutions. However, it is only fast for small problems (e.g. 4 buildings), and adding more buildings makes it so slow that I think it devolves into brute force, which is definitely not what we want. I previously tried keeping infeasible solutions in the population with a poorer fitness, but that only produced best solutions worse than when I discarded the infeasible ones.
Now, I am looking for a crossover operator that can help me speed up the genetic algorithm and allow it to scale to more buildings. I have already experimented with arithmetic crossover and box crossover, but to no avail. So, I am hoping that the community can suggest crossovers I could try. I would also appreciate any suggestions for improving my genetic algorithm (not just the crossover operator).
Thanks!
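One operator worth trying for real-coded genomes like this is BLX-α (blend) crossover, which draws each child gene uniformly from the parents' interval widened by a factor α on each side, letting offspring explore slightly outside the parents' hull. A minimal sketch (the genomes below are hypothetical, not the thesis implementation):

```python
import random

def blx_alpha_crossover(parent_a, parent_b, alpha=0.5):
    """BLX-alpha (blend) crossover for real-coded genomes: each child gene
    is drawn uniformly from [min - alpha*span, max + alpha*span], where
    span is the distance between the two parent genes."""
    child = []
    for x, y in zip(parent_a, parent_b):
        lo, hi = min(x, y), max(x, y)
        span = hi - lo
        child.append(random.uniform(lo - alpha * span, hi + alpha * span))
    return child

# Two hypothetical 2-building genomes: [x0, y0, angle0, x1, y1, angle1]
a = [1.0, 2.0, 0.0, 10.0, 12.0, 90.0]
b = [3.0, 4.0, 45.0, 11.0, 14.0, 180.0]
print(blx_alpha_crossover(a, b))
```

It may also help to treat each (x, y, angle) triple as an atomic gene so crossover cannot split one building's coordinates, and to repair infeasible offspring (e.g. nudging overlapping buildings apart) instead of discarding them.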
Good morning everyone. As part of my research work, I designed a network that extracts the leaf region from real field images. When I searched for performance evaluation metrics for segmentation, I found a lot of them. Here is the list:
1. Similarity Index = 2*TP/(2*TP+FP+FN)
2. Correct Detection Ratio = TP/(TP+FN)
3. Segmentation errors (OSE, USE, TSE)
4. Hausdorff Distance
5. Average Surface Distance
6. Accuracy = (TP+TN)/(FN+FP+TP+TN)
7. Recall = TP/(TP+FN)
8. Precision = TP/(TP+FP)
9. F-measure = 2*TP/(2*TP+FP+FN)
10. MCC = (TP*TN-FP*FN)/sqrt((TP+FP)*(TP+FN)*(TN+FP)*(TN+FN))
11. Dice = 2*TP/(2*TP+FP+FN)
12. Jaccard = Dice/(2-Dice)
13. Specificity = TN/(TN+FP)
14. Sensitivity = TP/(TP+FN)
Please suggest which performance evaluation metrics are best suited for my work. Thank you.
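For binary leaf-vs-background masks, Dice (identical to the Similarity Index and F-measure in the list above) and Jaccard/IoU are the usual first choices, often reported alongside a boundary metric such as Hausdorff distance. A small sketch computing them from pixel counts (the masks below are hypothetical):

```python
def confusion_counts(pred, truth):
    """Pixel-wise TP/FP/FN/TN for two equal-length binary masks."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    return tp, fp, fn, tn

def dice(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn)

def jaccard(tp, fp, fn):  # IoU; algebraically equals dice / (2 - dice)
    return tp / (tp + fp + fn)

# Hypothetical flattened 1-D masks (1 = leaf pixel, 0 = background)
pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0]
tp, fp, fn, tn = confusion_counts(pred, truth)
print(dice(tp, fp, fn), jaccard(tp, fp, fn))
```

Since Dice, the Similarity Index, and the F-measure are the same quantity, reporting one of them plus Jaccard and a boundary distance already covers most of the list without redundancy.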
I am currently writing a proposal for my computational social science thesis, and I need help focusing on the sentiment analysis of how each sentiment changes over time in a social science field. The current research question I have in mind is: how did the sentiments of English tweets regarding COVID-19 as a threat evolve over the year 2020? However, it does not seem to link with social-scientific topics; it rather reads as a computer science project. I got my research idea from this paper. Any advice is much appreciated!
Dear Colleague,
The University of Dayton’s Department of Computer Science invites applications for multiple tenure-track Assistant Professor positions beginning on August 16, 2021.
The Department seeks experts committed to excellence in undergraduate and graduate education who also have a focus on research. The individuals holding these positions are expected to teach undergraduate and graduate courses, pursue an externally funded research program, advise and mentor students, and engage in service to the university and the community. Applicants must have completed all coursework needed for a Ph.D. in Computer Science or an equivalent related field, demonstrate the potential for quality teaching and scholarly research, and articulate a commitment to excellence in undergraduate and graduate education with a focus on research.
The Department of Computer Science offers a stimulating academic environment with active research programs in growing areas of computer science. Recently, the Department has relocated to a brand new state-of-the-art facility with numerous cutting-edge research labs and classrooms, and a commitment to further expand and grow the faculty and student opportunities. The department offers Bachelor’s, Master’s, and Ph.D. degrees in Computer Science, and also a certificate in Autonomous Systems and Data Science.
View the complete job description, information about the University, and application instructions here: https://sciences.academickeys.com/job/zlja1rkx/
Warm regards,
Valerie Woodruff
Academic Keys
--
- Valerie Woodruff
Ph. +1.203.693.1101
Hi Everybody
I am a PhD student currently working on detecting miRNA targets based on the many-to-many relation between miRNAs and targets. I have created miRNA-target modules.
I have used a guided clustering technique to achieve my goals.
My background is Computer Science, and I need help interpreting my results. Any volunteers?
At the Computer Science Department, at the beginning of the first semester there are p freshman (study) groups: group i contains n_i students, for all i = 1, ..., p. For the second semester the Department wants to reorganize these groups in such a way that:
(1) the new organizing schema has r groups;
(2) the new group j contains m_j students, for any j = 1, ..., r;
(3) no new group may contain more than c students who were classmates in the same old group from the former organizing schema (c ∈ ℕ* \ {1}).
We want to know if such a new organizing schema exists.
(a) Devise a network flow model for building (if possible) the new organizing schema.
(b) Prove a characterization of the existence of a solution to this problem in terms of maximum flow in the above network. (In the affirmative case, provide a way of building the new schema: which old group has to cede how many students to which new group.)
(c) What is the time complexity for deciding if a solution exists? (Discuss the time complexity for at least three algorithms.)
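For part (a), a standard construction is a bipartite flow network: source → old group i with capacity n_i, an arc old group i → new group j with capacity c, and new group j → sink with capacity m_j. A schema exists iff the maximum flow equals Σ n_i (= Σ m_j), and the flow on arc (i, j) says how many students old group i cedes to new group j. A pure-Python Edmonds-Karp sketch (the group sizes below are hypothetical):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:
            return total
        # Find the bottleneck, then push flow along the path
        v, bottleneck = sink, float("inf")
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

def regrouping_feasible(old_sizes, new_sizes, c):
    """Nodes: 0 = source, 1..p = old groups, p+1..p+r = new groups, last = sink."""
    p, r = len(old_sizes), len(new_sizes)
    n = p + r + 2
    cap = [[0] * n for _ in range(n)]
    for i, ni in enumerate(old_sizes):
        cap[0][1 + i] = ni                # source -> old group i
        for j in range(r):
            cap[1 + i][1 + p + j] = c     # at most c former classmates reunited
    for j, mj in enumerate(new_sizes):
        cap[1 + p + j][n - 1] = mj        # new group j -> sink
    return max_flow(cap, 0, n - 1) == sum(old_sizes)

# Hypothetical instance: 2 old groups of 3, 3 new groups of 2, c = 2
print(regrouping_feasible([3, 3], [2, 2, 2], c=2))
```

For part (c), Edmonds-Karp runs in O(VE²); Dinic's algorithm (O(V²E)) and push-relabel (O(V²√E) with FIFO/highest-label variants) are the usual faster alternatives to discuss.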
Dear Friends and Colleagues from RG,
I wish You all the best in the New Year.
I wish you a successful continuation and successes in scientific work, achieving interesting results of scientific research in the New Year 2019 and I also wish you good luck in your personal life, all the best.
In the New Year, I wish You success in personal and professional life, fulfillment of plans and dreams, including successes in scientific work, All Good.
In the ending year, we often ask ourselves:
Have we successfully implemented our research plans in the ending year? We usually answer this question that a lot has been achieved, that some of the plans a year ago have been realized, but not all goals have been achieved.
I wish You that the Next Year would be much better than the previous ones, that each of us would also achieve at least some of the planned most important goals to be achieved in personal, professional and scientific life.
I wish You dreams come true regarding the implementation of interesting research, I wish You fantastic results of research and effective development of scientific cooperation.
I wish You effective development of scientific cooperation, including international scientific cooperation, implementation of interesting research projects within international research teams and that the results of scientific research are appreciated, I wish You awards and prizes for achievements in scientific work.
I wish You many successes in scientific work, in didactic work and in other areas of your activity in the New Year, and I also wish you health, peace, problem solving, prosperity in your personal life, all the best.
Thank you very much.
Best wishes.
I wish you the best in New Year 2019.
Happy New Year 2020.
Dariusz Prokopowicz
If we have three different domains of data (e.g. security, AI, and sport), we ran 3 different case studies or experiments (one per domain), and we estimated the Precision, Recall, and F-measure for each experiment, how can we estimate the overall Precision, Recall, and F-measure for the model? Is a plain mean average suitable, or F1, or a p-value? Which one is better?
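The usual choices here are macro-averaging (the plain mean of the per-domain scores, treating each domain equally) and micro-averaging (pooling TP/FP/FN across domains, which weights domains by instance count); a p-value measures significance, not an aggregate score. A sketch with hypothetical per-domain counts:

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical (TP, FP, FN) confusion counts for the three domains
domains = {"security": (80, 20, 10), "AI": (50, 10, 30), "sport": (90, 5, 5)}

# Macro: average the per-domain scores (each domain counts equally)
per_domain = [prf(*counts) for counts in domains.values()]
macro = tuple(sum(col) / len(col) for col in zip(*per_domain))

# Micro: pool the raw counts first (each instance counts equally)
tp, fp, fn = (sum(col) for col in zip(*domains.values()))
micro = prf(tp, fp, fn)

print("macro P/R/F1:", macro)
print("micro P/R/F1:", micro)
```

If the three domains matter equally, report the macro average; if the experiments differ a lot in size and every instance should count equally, report the micro average (and ideally state which one you used).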
Q3 or Q4 will be okay. I need a Scopus-indexed publication without an APC and with fast publication. The paper is from the Computer Science discipline.
I am looking for a journal related to Psychology and Computer Science, with IF < 2.0. If anyone has published related work or knows of any such journal, that would be nice.
Dear Friends,
I can bet that no one in the world today (particularly in the software industry) knows or has the right answers to two simple questions: (i) What a component is, and (ii) What is meant by CBE (Component-Based Engineering), in the reality and context of all other engineering disciplines such as mechanical, electronics, and aerospace engineering.
Learning the right answers to these two simple questions would have two huge benefits: (i) Inventing effective solutions for the notorious software crisis by eliminating infamous spaghetti code, and (ii) Proving Computer Science is a fake science (i.e. paradox), that opens the door to transforming Computer Science into a real science that can address not only problems that have stood unsolved for decades (e.g. human-like computer intelligence that can be achieved by gaining valid scientific knowledge about the functioning and anatomy of bio-neurons in bio-neural networks) but also problems of the future such as bio-cellular computing, which cannot be solved by fake scientists or practitioners of fake science.
If a problem requires acquiring valid scientific knowledge, it is impossible for fake scientists practicing fake science to acquire such valid scientific knowledge essential to solving the problem. To provide tangible proof, I invented effective solutions for the infamous software crisis by gaining and using valid scientific knowledge that can provide the right answers to these simple questions about components, where scientific knowledge implies knowledge that clearly falls under the realm of science and is acquired without violating the core principles and proven rules of the scientific method.
I have been requesting software researchers to find the right answers to these simple questions for over a decade, and my request has been seen as heresy. Please see the attached PDF.
Why does the software research community find it repugnant or heretical to be asked to recognize reality and truth objectively? I feel any scientist should be ashamed of himself if he finds such a request repugnant or heretical and resorts to snubbing and personal attacks.
Best Regards,
Raju Chiluvuri
Mac OS vs Linux vs Windows??
I personally use macOS but would like to know what other people use for their research work, preferably researchers doing computational work. If possible, let me know your reason. This is just a survey.
I am using data augmentation and a learning rate that decays after each epoch. If I don't use data augmentation but keep the callback, my training accuracy reaches 99.65%, but not the validation accuracy. In another scenario, if I remove the learning rate decay callback but keep data augmentation, training accuracy also improves and reaches 99%, but again not validation. Why does it get stuck with the current configuration (LR decay + data augmentation)?
What could be the reason for this problem with data augmentation?
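One frequent culprit is that per-epoch exponential decay shrinks the learning rate toward zero before the network has had time to fit the harder, augmented distribution. The hypothetical schedules below show how much faster a per-epoch decay collapses than a stepped one, which is why decaying only every k epochs (or decaying more gently) is often worth trying:

```python
def exponential_decay(lr0, rate, epoch):
    """Learning rate multiplied by `rate` after every epoch."""
    return lr0 * rate ** epoch

def step_decay(lr0, rate, epoch, step=10):
    """Learning rate multiplied by `rate` only every `step` epochs."""
    return lr0 * rate ** (epoch // step)

lr0, rate = 1e-3, 0.9  # hypothetical initial LR and decay factor
for epoch in (0, 10, 30, 50):
    print(epoch, exponential_decay(lr0, rate, epoch), step_decay(lr0, rate, epoch))
```

With rate 0.9, per-epoch decay leaves well under 1% of the initial learning rate by epoch 50, while the stepped schedule still retains roughly 60%; augmented data usually needs those extra high-LR epochs to converge.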
Dear All,
I have data on metabolic pathways in case and control samples; some pathways are represented several times due to the reactions involved in the case. I would now like to see whether the number of reactions in metabolic pathways in the case is significant when compared to the number of reactions in the control sample. I have attached an Excel sheet to this message that includes pathway names and the number of reactions in case and control. My next step is to perform a Fisher exact test in R and then identify the false discovery rate. Kindly let me know how this can be done using R.
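In R this is typically fisher.test() on a 2×2 table per pathway, followed by p.adjust(p, method = "BH") for the false discovery rate. The same computation, sketched in pure Python with hypothetical counts (reactions in the pathway vs. reactions elsewhere, for case and control), looks like:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables with the same margins that are
    no more likely than the observed one."""
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    def p_table(k):  # hypergeometric probability of the cell a = k
        return comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)
    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p_table(k) for k in range(lo, hi + 1)
               if p_table(k) <= p_obs * (1 + 1e-9))

def benjamini_hochberg(pvals):
    """BH false-discovery-rate adjustment (same idea as R's p.adjust 'BH')."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    for rank, i in reversed(list(enumerate(order, start=1))):
        prev = min(prev, pvals[i] * m / rank)
        adjusted[i] = prev
    return adjusted

# One hypothetical pathway: 12 of 40 case reactions vs 3 of 40 control reactions
p = fisher_exact_two_sided(12, 28, 3, 37)
print(p, benjamini_hochberg([p, 0.2, 0.03]))
```

In practice you would loop over the pathways in the attached sheet, build one 2×2 table per pathway, collect all the p-values, and adjust them together in a single BH pass.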
Hello,
My situation is a little strange because I am a Statistics PhD student but I ended up doing research in Natural Language Processing (NLP)/ Deep Learning due to my supervisor's recommendation.
I feel strange because although I can read NLP papers, understand them, and come up with my own research topics (so far I have identified 3 different ones), I am not part of a computer science department's NLP lab. My supervisor is into applying and developing machine learning algorithms for analyzing open-ended questions, so I guess his work is somewhat related to NLP, but he is not really an expert in NLP/Deep Learning. I do like my current supervisor, he is trying to help me in the best way he can, and I want to continue working with him. But I am wondering:
- Is it advisable for me to seek a professor from computer science NLP lab to be my co-supervisor (upon my supervisor's consent)?
- What are the advantages of doing a NLP research in a big computer science NLP lab?
- Is collaboration important for a PhD student doing research in NLP? I see that many PhD students who publish at top NLP conferences often co-author with multiple collaborators. I am doing NLP research only with my supervisor, and neither of us really has any connection with NLP researchers. Should I make an effort to find NLP researchers I can work with on my research (of course, I will be doing most of the work since I want to be the first author)?
- If I need to seek collaborator / co-supervisor who are familiar with the field of NLP, how should I approach them? can I try sending them emails and see if they are interested in my research? I guess the best way is to talk to people at a conference but due to COVID-19, everything is taking place online and I doubt whether I will be able to make any connections.
- Do PhD students from a big NLP lab have better computational resources than PhD students who do NLP research outside those labs? My supervisor recently set up an account for the national supercomputer so that I may take advantage of it, but since almost every researcher in my country has access to this resource, the queue wait time can be long when I submit a job via SLURM. Are PhD students from computer science NLP labs often free of this problem (i.e. do they have access to better computational resources)?
...Lots of questions! Could someone advise me on these issues?
Thank you so much for your time,
Hi reader,
I'm conducting a research on digital literacy and its linkage to the digital economy in a developing country like Pakistan.
I'm looking for experts in the following areas: economics, literacy, primary academia, digital economy, entrepreneurship, digital literacy, computer science, computer engineering, IT, as well as other associated fields.
I would be really grateful if you could take some time out to fill my questionnaire survey.
This questionnaire corresponds to my first area of focus: the impact of digital literacy on the digital economy.
For context: the digital economy embodies all economic transactions that either require the use of digital technologies or are related to the selling & purchasing of digital goods & services.
For the scope of this study, digital literacy has been defined through some key competences as outlined by the UN in their Digital Literacy Global Framework. The purpose of this study is to determine whether there is a relationship between digital literacy and growth in the digital economy. Furthermore, this study aims to map the relationships of the competencies of digital literacy against the factors leading to growth in the digital economy.
For any queries and concerns, you may reach out to us via email at gem1974@giki.edu.pk
Any decision-making problem when precisely formulated within the framework of mathematics is posed as an optimization problem. There are so many ways, in fact, I think infinitely many ways one can partition the set of all possible optimization problems into classes of problems.
1. I often hear people label meta-heuristic and heuristic algorithms as general algorithms (I understand what they mean), but I have been wondering: can we apply these algorithms to arbitrary optimization problems from any class, or, more precisely, can we adjust/re-model any optimization problem in a way that permits us to attack it with the algorithms in question?
2. Then I thought: if we assume the answer to 1 is yes, then by extending the argument we could re-formulate any given problem to be attacked by any algorithm we desire (of course, at a cost), in which case the label is just a useless tautology.
I'm looking for different insights :)
Thanks.
Journal of Management in Engineering
American Society of Civil Engineers (ASCE)
Special Collection-Call for Papers
Management of Resilience in Civil Infrastructure Systems: An Interdisciplinary Approach
Objective: The objective of this special issue is to document scholarly interdisciplinary contributions in the field of resilience, with a specific focus on the engineering managerial issues at the interface of the built environment with social and ecological systems. In particular, we look for interdisciplinary contributions to management of resilience in civil infrastructure systems from other engineering disciplines as well as disciplines such as management, sociology, ecology, political science, psychology, urban sciences, geography, and economics.
Submission information: Authors wishing to submit papers should contact the Guest Editors [Nader Naderpajouh (nnp@rmit.edu.au), Juyeong Choi (jchoi@eng.famu.fsu.edu), and David Yu (davidyu@purdue.edu)] to submit an extended abstract (maximum 800 words) summarizing a proposed submission. The guest editorial team are from the fields of civil and environmental engineering and social sciences, and are encouraging submissions from a range of disciplinary perspectives and using a wide range of quantitative and qualitative methods. The authors are specifically required to highlight the interdisciplinary links and contributions from other fields to the scholarly field of civil engineering. In order to be considered for the special issue, authors should submit extended abstracts by January 31, 2019. The Guest Editors will review submitted proposals upon their receipt and contact authors with invitations to submit full-length articles soon thereafter. Note that in case of earlier submission the review process will start immediately.
Guest Editors
Nader Naderpajouh, RMIT University (nnp@rmit.edu.au)
Juyeong Choi, FAMU-FSU College of Engineering (jchoi@eng.famu.fsu.edu)
David Yu, Purdue University (davidyu@purdue.edu)
Young Hoon Kwak, The George Washington University (kwak@gwu.edu)
Deadline for extended abstract: January 31, 2019
Links:
Could you please urgently assist me in obtaining the article titled “Influence of time pressure on aircraft maintenance errors” by Kimura Akisato, T. V. Thaden, and William D. Geibel, published 2008 (Computer Science)?
I am not sure if anyone has studied this topic. It has been observed in certain cases that successful researchers have poor academic records. These are exceptions; I am interested to know of any study on this topic. Please also cite examples that show negative or positive correlations. Ideally there should be a high positive correlation; if there is not, then why?
I mean: if a person got good grades, marks, or rank at the school/university level, will they become a great scientist? For example, if a topper of the JEE (the top exam in India for admission to engineering colleges) joins research, will he/she be the best scientist in the world?
I am starting my Masters Dissertation and my area of interest is on Fraud Detection in Banking Transactions using Machine Learning.
I am interested to see what others have done around this context so if you have an idea or a reference or just a comment that will give me a lead, will be much appreciated.
Frank Tilugulilwa
Masters in Computer Science - candidate (University of Dar es Salaam)
Dear all,
I hope this question finds you in good health & spirit!
This is a repeated question but in a different way.
How can I add a research paper to Google Scholar? I have the following three papers, all of which have been added manually to the "Research Scholar":
Nidhal El-Omari, “Sea Lion Optimization Algorithm for Solving the Maximum Flow Problem”, International Journal of Computer Science and Network Security (IJCSNS), e-ISSN: 1738-7906, DOI: 10.22937/IJCSNS.2020.20.08.5, 20(8):30-68, 2020.
Nidhal El-Omari, “An Efficient Two-level Dictionary-based Technique for Segmentation and Compression Compound Images”, Modern Applied Science, The Canadian Center of Science and Education, published by Canadian Center of Science and Education, Canada, p-ISSN:1913-1844, e-ISSN:1913-1852, DOI:10.5539/mas.v14n4p52, 14(4):52-89, 2020.
Nidhal El-Omari, M. H. Alzaghal, and Sameh Ghwanmeh, “ICT and Emergency Volunteering in Jordan: Current and Future Trends”, Computer Science and Information Technology, published by Horizon Research Publishing Corporation (HRPUB), p-ISSN:2331-6063, e-ISSN:2331-6071, DOI: 10.13189/csit.2015.030402, 3(4):105-112, 2015.
But I couldn't find any of them when I search in the usual way. To be sure, you can see the three attached snapshots.
Thank you!
Regards,
Isn’t it great that everyone knows how to make a phone and knows what the capacitors and transistors used in computers are for? Maybe China is ahead of all because they can make one at home, given the availability of all materials at a cheaper cost.
Good Morning All,
I am pursuing a PhD in computer science. I am thankful to all the people who have guided me whenever I have been stuck. I have a list of parameters or features related to my topic, but now I am confused about how to select features from that list. How should I represent these features during our RDC presentation? Do we need to only list the attributes, or do we need to give detailed data for those attributes?
Hello Friends,
I am a master's student at the University of Baghdad, Department of Computer Science, in the second stage, and the topic of my research in mobile and network computing is tracking, movement, and mobility via mobile phone. So, I need spatiotemporal data about participants that gives temporal and spatial indicators along with the type of activity.
Can anyone tell me where I can get it?
I need help understanding how to use the Java NetBeans IDE for writing and compiling programmes.
Dear Friends,
Kindly allow me to ask you a very basic but important question. What is the basic difference between (i) scientific disciplines (e.g. physics, chemistry, botany, or zoology) and (ii) branches of mathematics (e.g. calculus, trigonometry, algebra, and geometry)?
I feel that objective knowledge of the basic or primary difference between science and math is useful for imparting perfect and objective knowledge of science and math (and their role in technological inventions and expansion).
Let me give my answer to start this debate:
Each branch of mathematics invents and uses a complementary, harmonious, and/or interdependent set of valid axioms as core first principles in the foundation for evolving and/or expanding an internally consistent paradigm for that branch (e.g. calculus, algebra, or geometry). If the foundation comprises a few inharmonious or invalid axioms in any branch, such invalid axioms create internal inconsistencies in the discipline (i.e. the branch of math). Internal consistency can be restored by fine-tuning inharmonious axioms or by inventing new valid axioms to replace invalid ones.
Each scientific discipline must discover new falsifiable basic facts, prove them, and use such proven scientific facts as first principles in its foundation, where a scientific fact implies a falsifiable discovery that cannot be falsified by vigorous efforts to disprove it. We know what happened when one of the first principles (i.e. that the Earth is static at the centre) was flawed.
Examples of basic proven scientific facts include: the Sun is at the centre; Newton's three laws of motion; there exists a force of attraction between any two bodies having mass; the force of attraction decreases as the distance between the bodies increases; and increasing the mass of the bodies increases the force of attraction. Notice that I intentionally didn't mention directly and/or inversely proportional.
These kinds of first principles provide the foundation for expanding the BoK (Body of Knowledge) of each discipline. The purpose of research in any discipline is to keep adding new first principles, and also to add more theoretical knowledge (by relying on the first principles) such as new theories, concepts, methods, and other facts, for expanding the BoK of the prevailing paradigm of the discipline.
I want to find an answer to this question because software researchers insist that computer science is a branch of mathematics, so they have been insisting that it is okay to blatantly violate scientific principles for acquiring scientific knowledge (i.e. knowledge that falls under the realm of science) that is essential for addressing technological problems of software, such as the software crisis and human-like computer intelligence.
If researchers in computer science insist that it is a branch of mathematics, I want to propose a compromise: the nature and properties of components for software and the anatomy of CBE (component-based engineering) for software were defined as axioms. Since the axioms are invalid, they resulted in an internally inconsistent paradigm for software engineering. I invented a new set of valid axioms by gaining valid scientific knowledge about components and CBE without violating scientific principles.
Even maths requires finding, testing, and replacing invalid axioms. I hope this compromise satisfies computer scientists who insist that software is a branch of maths. It appears that software or computer science is a strange new kind of hybrid between science and maths, which I want to understand better (e.g. it may be useful for solving other problems such as human-like artificial intelligence).
Best Regards,
Raju Chiluvuri
I know this question seems clear, but I want to know the difference between these facts, especially in computer science.
Is there a freely available test dataset of research papers in computer science field? I need to evaluate my text retrieval system which uses a computer science domain ontology, so the dataset should come with a ground truth/relevance judgments for text retrieval task and/or classification task.
I appreciate any suggestion. Thanks
I need a very good text book which is self explanatory and easy to understand for self study by a beginner in JAVA PROGRAMMING
Since technology has seemed to extensively pervade virtually every facet of medicine, do you feel that students of medicine (MD or MBBS) should be better equipped with knowledge and skills in mathematics, physics, biomedical image processing (to better process medical images for diagnostics and surgical planning), biomedical signal processing (for better analysis of bioelectrical signals, e.g. EEG, EKG, EMG), and basic computer science?
Care to discuss?
I want to check whether the Journal of Advances in Mathematics and Computer Science (ISSN: 2456-9968) is a fake one or not. I could not find it in the Clarivate Analytics list.
Thanks for your help!
Rodgers’ evolutionary concept analysis is being used in Nursing field. I could not find any paper that prove that Rodgers’ evolutionary concept analysis has been used in any other fields other than Nursing.
Is there really a significant difference between the performance of the different meta-heuristics other than "ϵ"? I mean, at the moment we have many different meta-heuristics, and the set keeps expanding. Every so often you hear about a new meta-heuristic that outperforms the other methods, on a specific problem instance, by ϵ. Most of these algorithms share the same idea: randomness with memory, or selection, or whatever you want to call it, to learn from previous steps. You see in MIC, CEC, and SIGEVO many repetitions of new meta-heuristics. Does it make sense to be stuck here? Now the same thing repeats with hyper-heuristics and so on.
What kinds of techniques are used for testing IoT systems?
What are the challenges of testing IoT?
Do we need to modify traditional testing techniques for IoT?
I would really appreciate it if anyone could discuss this or refer me to some papers.
Thank you
Why are most researchers shifting from TensorFlow to PyTorch?
I wanted to get a scatter plot, but the constraint is to plot these points over an image with the same dimensions as the range of the points. Is it possible? If yes, which library is good to start with? I tried using gnuplot but it is causing problems. For now I have a code which was stated here but didn't work. I tried using a bitmap image with the points to be plotted in a data1.dat file and used the gnuplot script which is stated in the link.
# the pngcairo terminal produces PNG data, so the output file should end in .png, not .bmp
set terminal pngcairo transparent
set output 'Figure1.png'
# to place the points over the image, plot the image first as an underlay;
# 'background.png' is a placeholder name for your (PNG-converted) bitmap
plot 'background.png' binary filetype=png with rgbimage, \
     "Co_ordinates0.dat" with points
set output
To apply graph theory concepts in computer science and engineering.
When Einstein published four great papers in one year, it was something very special. Now this is not the case:
there are many professors who publish papers every week even if they did no work on those papers. From your perspective, what is the reason to include a professor's name on your paper?
How can a young professor reach that level at the beginning of an academic career path?
I found many papers applying the Deep Deterministic Policy Gradient (DDPG) algorithm with a critic neural network (NN) architecture where the action vector skips the first layer. That is, the state vector is connected to the first layer, but the actions are connected directly to the second layer of the critic NN.
Actually, in the original DDPG paper ("Continuous Control with Deep Reinforcement Learning", Lillicrap et al., 2016) they do this, but they do not explain why.
So... why is this? Which are the advantages of this architecture?
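To make the architecture in question concrete, here is a plain-Python sketch of a critic forward pass in which the action vector bypasses the first layer. This is my own minimal illustration, not the paper's implementation (a real critic would be built in a deep learning framework, and all names here — `dense`, `critic_forward`, the `params` layout — are my choices):

```python
def dense(x, W, b):
    # simple fully connected layer: y = W x + b
    return [sum(wi * xi for wi, xi in zip(row, x)) + bj
            for row, bj in zip(W, b)]

def relu(v):
    return [max(0.0, a) for a in v]

def critic_forward(state, action, params):
    # layer 1 processes the state alone
    h1 = relu(dense(state, params["W1"], params["b1"]))
    # the action vector is concatenated only at the second layer,
    # so layer 1 learns state features independently of the action
    h2 = relu(dense(h1 + action, params["W2"], params["b2"]))
    # final layer outputs a scalar Q(s, a) estimate
    return dense(h2, params["W3"], params["b3"])[0]
```

One commonly offered intuition (not stated in the paper itself) is that the first layer can then learn state features independently of the action, and the action only modulates the later value computation.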
Thanks in advance.
Can we affirm that whenever one has a good prediction algorithm, one can also derive a correspondingly good compression algorithm for data one already has, and vice versa?
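One direction of this correspondence can be sketched concretely: under arithmetic coding, a predictor that assigns probability p to the next symbol costs -log2 p bits for that symbol, so better prediction means shorter codes. Here is a toy illustration (the function names and the Laplace-smoothed predictor are my own choices, not a reference implementation):

```python
import math

def code_length_bits(seq, predict):
    """Total ideal code length (in bits) when compressing `seq` with
    arithmetic coding driven by `predict`, where predict(prefix)
    returns a dict mapping each symbol to its predicted probability."""
    total = 0.0
    for i, sym in enumerate(seq):
        p = predict(seq[:i]).get(sym, 1e-12)
        total += -math.log2(p)  # Shannon code length of this symbol
    return total

def laplace_predict(prefix, alphabet="ab"):
    """A trivial adaptive predictor: add-one-smoothed frequencies
    of the symbols seen so far."""
    counts = {s: 1 for s in alphabet}
    for c in prefix:
        counts[c] += 1
    n = sum(counts.values())
    return {s: c / n for s, c in counts.items()}
```

A highly predictable string (e.g. "aaaa") gets a shorter ideal code length than a less predictable one (e.g. "abab") under the same predictor, which is the essence of the prediction/compression equivalence.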
This Special Issue will focus on control, modeling, various machine learning techniques, fault diagnosis, and fault-tolerant control for systems. Papers specifically addressing the theoretical, experimental, practical, and technological aspects of modeling, control, fault diagnosis, and fault-tolerant control of various systems and extending concepts and methodologies from classical techniques to hybrid methods will be highly suitable for this Special Issue.
Potential themes include, but are not limited to:
Modeling and identification
Adaptive and hybrid control
Adaptive and hybrid observers
Reinforcement learning for control
Data-driven control
Fault diagnosis
Fault-tolerant control of systems based on various control and learning techniques
Prof. Dr. Jong-Myon Kim
Prof. Dr. Hyeung-Sik Choi
Dr. Farzin Piltan
There are many subjects covered in Computer Science studies. Shall we discuss which subject is most important in Computer Science, and why?
I am currently enrolled in a bachelor's computer science program, and I have chosen as my final-year project "securing the supply chain using blockchain". I am currently searching for different research works in this field; any help would be appreciated.
Dear Friends,
How could any proof of a disruptive discovery or theory see the light of day (or an error in our knowledge be exposed), if no one is willing to investigate evidence that can prove the disruptive discovery or theory?
For example, how is it possible to expose flawed basic beliefs (such as "the Earth is static at the centre"), if no one is willing to look at evidence that can expose such basic errors? If anyone tries to expose such an error, many scientists or researchers often resort to personal attacks or humiliating insults to suppress open, honest debate.
The basic moral and ethical obligation, or sacred duty, of every scientist is the pursuit of absolute Truth, directly or indirectly, where "indirectly" also includes the moral and ethical obligation to validate the sacred tenets for upholding Truth. Refusal to investigate conclusive evidence that can prove a new discovery of fact is tantamount to promoting an error by suppressing the Truth. Suppressing truth (by any scientist) is a violation of the scientific method and of the moral code of conduct (for anyone who considers himself a scientist).
How could any new discovery of fact, basic error in mankind's knowledge, or new theory see the light of day, if each member of the community of researchers or scientists evades the mandatory moral obligation of investigating evidence and facts that can prove the theory or expose a flawed belief? Any real discovery only shines under rigorous validation and scrutiny by brilliant critics or opponents.
No researcher or scientist should ever ask anyone to blindly believe his or her discovery or theory. Every discovery or theory must be backed by falsifiable proof, evidence, and reasoning. Falsifiable doesn't imply that the discovery or theory is flawed, but that it can be falsified, if it is flawed, for example by finding counter-evidence or sound counter-reasoning.
Scientific research is nothing but the pursuit of absolute Truth (and upholding the Truth), which also includes getting closer and closer to the Truth by eliminating imperfections in our BoK (Body of Knowledge). The community of researchers and scientists is morally and ethically obligated to uphold the Truth by investigating the evidence to determine the validity of a discovery or theory.
What would have happened if everyone had ignored or snubbed the seminal theories and discoveries of a young 25-year-old low-level clerk (named Einstein) at a patent office in Bern? The research community successfully suppressed the disruptive discovery of Copernicus for a hundred years; it eventually prevailed due to the great sacrifices of researchers like Giordano Bruno and Galileo, which resulted in a scientific revolution.
Mankind would still be in the dark ages without their sacrifices to uphold the Truth. Disruptive or outside-the-box discoveries expose inconvenient Truths/facts, so they face fierce resistance and hostility.
Almost every disruptive or revolutionary discovery faces fierce resistance and opposition. If any scientist disagrees with a theory and proof backed by evidence, the scientific process requires channeling that fierce resistance and opposition into falsifying the evidence and facts, to invalidate the proof. Only incompetent or ignorant people resort to personal insults. Any determined effort to falsify the proof of a discovery ends up proving the discovery, if the discovery is Truth/fact. But it is unethical to suppress or snub the discovery, to evade the mandatory moral obligation of investigating evidence, by resorting to personal attacks or insults.
What would you do if you stumbled onto a revolutionary discovery, and no one in the scientific or research community were willing to investigate the evidence and facts that could provide conclusive proof of the discovery, instead employing unethical evasive tactics such as personal attacks or humiliating snubs to suppress the facts?
What can you do if the research community ostracizes you (e.g. by resorting to personal attacks, humiliation, and snubbing) when you politely request an opportunity to present evidence that can provide conclusive proof of your theory or discovery?
Assume you spent more than 12 years making sure that you are absolutely right, accumulating many proofs, each backed by more than enough evidence. If you are not very wealthy and powerful, you would be helpless (i.e. able to do nothing) if the research community refuses to look at the evidence that can prove your discovery.
What can you do if you don't have large financial resources to force the research community to investigate your evidence, for example by dragging them to court for abdicating their moral and ethical obligations (e.g. upholding the Truth) or for gross negligence, in case the researchers are funded by taxpayer money and have a mandatory obligation to find and promote such discoveries?
It is a laudable example that a few great researchers took the time to investigate a disruptive discovery by a low-level young patent clerk. But in the case of Galileo and others, the research community blatantly abdicated its moral obligation and failed the mandate of the scientific method and process for upholding the Truth. Most people who claim to be scientists don't even know what it means to be a scientist, or what the moral and ethical obligations and the mandate of the scientific method are.
How can we advance mankind's scientific knowledge into new unexplored frontiers if the research or scientific community abdicates its sacred duty: the pursuit of absolute Truth, flawless knowledge, and wisdom? One must stop pretending to be a scientist if one is not willing to fulfill the moral and ethical obligations and the mandate of scientific methods for the pursuit of flawless knowledge and the upholding of Truth.
Best Regards,
Raju Chiluvuri
I have found a beautiful technique to solve math problems such as:
- Goldbach’s conjecture
- Riemann hypothesis
The technique uses the notions of regular languages. The complexity class that contains all the regular languages is REG. Moreover, these mathematical proofs are based on the fact that if some unary language belongs to NSPACE(S(log n)), then the binary version of that language belongs to NSPACE(S(n)), and vice versa. The complexity class NSPACE(f(n)) is the set of decision problems that can be solved by a nondeterministic Turing machine M using space f(n), where n is the length of the input.
We prove there are non-regular languages that define mathematical problems. Indeed, if those math problems are not true, then they have a finite or infinite number of counterexamples (the complement languages contain the counterexample elements). However, we know every finite language is regular. Therefore, those problems are true or they have an infinite number of counterexamples, because if they had a finite number of counterexamples, then the complement language would be in REG, that is, the complement would be a regular language. Indeed, using the complexity result we show that some mathematical problems cannot have a finite number of counterexamples, that is, we demonstrate their complement languages cannot be regular. In this way, we prove these problems must be true, or else have an infinite number of counterexamples as the only remaining option.
See more in my notions:
I have to find out what software and operating systems are used in education, more in lower education than in higher education, and in different countries (USA, Australia, Great Britain, Germany, France, Scandinavia, Romania, Hungary, the Czech Republic, and other mostly European countries). Do you have any idea where I can find such statistical data?
Hi everyone. Please suggest a list of SCI journals that do not have article processing charges.
Could anyone please suggest which one is easier to get accepted for conference proceedings inclusion?
Lecture Notes in Computer Science(LNCS), Lecture Notes in Artificial Intelligence (LNAI), Lecture Notes in Bioinformatics (LNBI), LNCS Transactions, Lecture Notes in Business Information Processing (LNBIP), Communications in Computer and Information Science (CCIS), Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering (LNICST), and IFIP Advances in Information and Communication Technology (IFIP AICT), formerly known as the IFIP Series.
We want to have 5,000 to 10,000 words from several technical languages judged according to emotional criteria (valence, arousal, imageability). Computer science often uses a few raters (2-5). In other disciplines, ten or more raters are used (e.g. psychology, linguistics).
Hello,
I have seen interesting studies on energy which use machine learning algorithms. As I have a mechanical engineering background, I am not sure if I can learn and use machine learning. Is a computer science background required? And are the available machine learning tools easy to use for people from other disciplines?
Thank you
We want to have 10,000 sentences judged according to emotional criteria (valence, arousal, etc.). Computer science often uses a few raters (2-5). In other disciplines ten or more raters are used (e.g. psychology, linguistics).
The sentences originate from several technical languages. In addition to teaching materials, they include general discussions during the study of these disciplines. The sentences usually range from medium difficulty to purely technical language.
On some websites, users may create multiple fake accounts to promote (like/comment on) their own comments/posts, for example on Instagram, to make their comment appear at the top of the list of comments.
This action is called Sockpuppetry. https://en.wikipedia.org/wiki/Sockpuppet_(Internet)
What are some general unsupervised learning algorithms to detect these users/behaviors?
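One simple unsupervised starting point is to look for pairs of accounts whose activity sets (e.g. the posts they liked) overlap almost completely, since sockpuppets are often driven by one person acting in lockstep. Here is a minimal sketch (my own toy example; real systems combine many signals such as timing, IP, and text style, and would use proper clustering rather than all-pairs comparison):

```python
def jaccard(a, b):
    # overlap between two sets of action IDs, in [0, 1]
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def suspicious_pairs(user_actions, threshold=0.8):
    """Flag pairs of users whose action sets (liked post IDs, say)
    overlap almost completely -- a crude sockpuppet signal.
    user_actions maps a user name to a list of action IDs."""
    users = sorted(user_actions)
    flagged = []
    for i, u in enumerate(users):
        for v in users[i + 1:]:
            if jaccard(user_actions[u], user_actions[v]) >= threshold:
                flagged.append((u, v))
    return flagged
```

From here, density-based clustering (e.g. DBSCAN) over such similarity features is a common next step for grouping coordinated accounts.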
I graduated in another major and am now working in computer science and technology; my roommates mostly do research on image recognition. I want to switch to it and learn how to cooperate with them, but I am still at a loss.
Please help me gain access to image recognition. I am currently skilled in nature-inspired algorithms. I have read some papers on image segmentation, but I don't know what to do. If you are skilled in this area, could you show me a way?
I understand vaguely that the first author is supposed to be the one who "did the most work", but what counts as "work" in this comparison? Does "most" mean "more than all the other coauthors together" or just "more than any other coauthor"? What happens when the comparison is unclear? How often is "did the most work" the actual truth, versus a cover story for a more complex political decision?
I realize that the precise answer is different for every paper. I'm looking for general guidelines for how an outsider (like me) should interpret first authorship in your field. Pointers to guidelines from journals or professional societies would be especially helpful.
I'm looking for Ph.D. programs (Scholarships) in Europe/USA/Canada/Australia/Great Britain.
Professors who are looking for Ph.D. candidate, I'm ready to work with any new subjects in Computer Science Field and especially in Deep Learning/Machine Learning.
I really appreciate your help!
In which scientific studies that you run or plan to run would artificial intelligence be helpful?
Please reply
Best wishes
Suppose somebody wants to mathematically model data, information, and knowledge. Data represents the raw material that processing service delivery solutions operate on to produce information. Knowledge is acquired by handling such information by experts in a special field such as computer science, psychology, mathematics, or statistics. How can mathematical models be developed to describe the knowledge acquired by an individual, a population, or a community?
Hi, as part of my bachelor thesis on the design of programming languages for teaching mathematics in the 21st century, I have planned to discuss the evolution of the major programming languages which focus on the idea that computer programming could play an integral role in STEM education.
In order to analyze different programming languages as a framework for teaching (primarily) mathematical concepts, I am currently searching for (citable) research projects providing insights into the historical development of educational programming languages. – Are you familiar with any research on the evolution of educational programming languages?
Many thanks in advance for your contributions,
Tobias
Hello everyone, could someone suggest a good syllabus for graph theory and discrete mathematics for a Computer Science - Network department, please?
Thank you in advance.
Which Q1 and Q2 research journals in the computer science and cybersecurity areas are most suitable for a speedy review and publication process, preferably not paid journals?
The major spices and ingredients to add flavour to a paper for a journal in the domain of Computer Vision and Machine Learning.
I am trying to make an NN for meteorological prediction, for which I have input data from only one meteorological station. I have target data from more than one location.
I have to train the NN in such a way that I give it two input values (e.g. current temperature and pressure) and obtain temperature outputs at more than 50 locations and for more than one time step.
e.g. the input :
(25, 101.32)
should give output:
temperature after 2 hrs = 25, 26, 28, 29, 27.5
temperature after 4 hrs = 24, 23, 26, 26.5, 27
In which pattern should I arrange the input and target data, and how do I change the number of NN output nodes to obtain these results?
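One common arrangement (a sketch under my own naming assumptions, not a prescription) is to flatten the (time horizon × location) grid into a single target vector, so the network simply has one output node per (horizon, location) pair, i.e. 2 × 50 = 100 output nodes for two horizons and 50 locations:

```python
def flatten_targets(targets_by_horizon):
    """targets_by_horizon maps a forecast horizon (in hours) to the
    list of per-location temperatures at that horizon. Returns one
    flat target vector: a feed-forward NN then needs exactly
    len(flat) output nodes, one per (horizon, location) pair."""
    flat = []
    for horizon in sorted(targets_by_horizon):  # fixed, sorted horizon order
        flat.extend(targets_by_horizon[horizon])
    return flat
```

Each training sample is then (input pair, flat target vector), and the same fixed ordering of horizons and locations must be used for every sample so that each output node always predicts the same (horizon, location).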
Hello researchers, I would like to know some Q1 paid journals with fast publication in the field of computer science.