
# Computer Science - Science topic

Explore the latest questions and answers in Computer Science, and find Computer Science experts.
Questions related to Computer Science
• asked a question related to Computer Science
Question
I need help with queueing theory; can someone give an easy explanation of M/M/C?
What are the parameters of the M/M/C queuing model?
This is a personal rewording of ideas expressed by A. S. Tanenbaum in his book Distributed Operating Systems, applied here to communications:
It can be proven ( http://en.wikipedia.org/wiki/M/M/1_model ; Kleinrock, 1974) that the mean time T between issuing a request to send a message and getting it completely transmitted (queueing time + service time) is related to lambda (arrival rate in packets/s) and mu (service rate in packets/s) by the formula T = 1/(mu - lambda).
Consider a communication link at 64 kbit/s processing packets with exponentially distributed lengths and an average packet size of 50 bytes. The mean service time (1/mu) is then 6.25 ms, and the link can handle up to 160 packets/s (maximum lambda). If it receives only 120 packets/s, the mean transmission time is T = 1/(160 - 120) = 25 ms.
Now suppose we have n communication lines at 64 kbit/s, each processing the same type of packets (average length 50 bytes, exponentially distributed) at an arrival rate of 120 packets/s. The mean transmission time is the same, 25 ms. Next consider replacing them with a single link that sends packets at n.64 kbit/s. Instead of n communication lines we have a single line n times faster, with input rate n.lambda and service rate n.mu, so the mean response time T = 1/(n.mu - n.lambda) is also divided by n.
This surprising result says that by replacing n small communication links with one big link that is n times faster, we can reduce the average response time n-fold ( http://en.wikipedia.org/wiki/Queueing_model#Multiple-servers_queue ).
This result is extremely general and applies to a large variety of systems. It is one of the main reasons that airlines prefer to fly a 300-seat 747 once every 5 hours rather than a 10-seat business jet every 10 minutes.
Dividing the communications capacity into small channels, each with a few users statically assigned, is a poor match to a workload of randomly arriving requests. Much of the time a few lines are busy, even overloaded, while most sit idle. It is this wasted capacity that the single high-speed link eliminates, and the reason it gives better overall performance.
In fact, this queueing theory result is also one of the main arguments against having distributed systems at all and argues in favour of concentrating the computing power as much as possible.
However, mean response time is not everything. There are also arguments in favour of small channels and distributed systems, such as cost. In general, the cost of N single resources of cost C is N.C, but the cost of a single resource N times more capable grows much faster than linearly, or such a resource may be impossible to build at any price. Reliability and fault tolerance are also factors to consider.
Moreover, for some users a low variance in service time may be perceived as more important than the mean response time itself, especially for interactive applications. Consider web browsing through your own ADSL line, on which asking for a page to be displayed always takes 500 ms (at least if served from the central office cache). Now consider web browsing on a shared high-speed link on which asking for the next page takes 5 ms 95% of the time and 5 s one time in 20. Even though the mean here (about 255 ms) is roughly twice as good as on the private ADSL line, the users may consider the performance intolerable. On the other hand, for the user running P2P file transfers, the high-speed link may win hands down.
A possible compromise is to provide both options: give each user a small reserved amount of capacity for interactive tasks such as web browsing, and run all non-interactive transfers (e.g. P2P, mail, SFTP...) over the remaining shared bandwidth of a high-speed link.
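The numbers above can be checked with a minimal sketch of the M/M/1 formula T = 1/(mu - lambda), including the n-fold speed-up from merging n slow links into one fast one:

```python
# M/M/1 mean response time: T = 1 / (mu - lambda).
# Numbers taken from the example above: a 64 kbit/s link, 50-byte packets.

LINK_RATE_BPS = 64_000          # link speed in bits per second
PACKET_BITS = 50 * 8            # mean packet length in bits

def mm1_response_time(lam, mu):
    """Mean time in system (queueing + service) for an M/M/1 queue."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (mu - lam)

mu = LINK_RATE_BPS / PACKET_BITS      # service rate: 160 packets/s
lam = 120.0                           # arrival rate: 120 packets/s

T = mm1_response_time(lam, mu)        # 1 / (160 - 120) s
print(f"mean service time:  {1/mu*1000:.2f} ms")   # 6.25 ms
print(f"mean response time: {T*1000:.1f} ms")      # 25.0 ms

# Replace n slow links with one link n times faster: T is divided by n.
n = 4
T_fast = mm1_response_time(n * lam, n * mu)
print(f"with one link {n}x faster: {T_fast*1000:.2f} ms")   # 6.25 ms
```

The `n = 4` value is just an illustrative choice; any n gives the same n-fold reduction.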
A related joke, read in The Embedded Muse 223: why does *my* queue at the supermarket usually move the slowest? Because you compare yours to the ones on either side. The odds of yours being the fastest are 1 in 3 if you compare against the immediately adjacent queues, 1 in 5 if you look at two lines on either side, and so on. If you want to feel better about it, join the queue all the way at the end and you'll have fewer others against which to measure.
• asked a question related to Computer Science
Question
Hello,
I would appreciate it if you could suggest some public Twitter sentiment-analysis datasets from the COVID-19 pandemic.
Best regards,
Sentimental Analysis of COVID-19 Tweets Using Deep ... - MDPI
• asked a question related to Computer Science
Question
Dear Friends,
Can you guess which is the most mysterious and enigmatic physical thing among the following: biological cells, light, elementary particles (e.g. electrons, neutrons or protons), viruses, fungi, bacteria, atoms, chemical compounds, blood cells, or finally plain old components, in the context of engineering paradigms (e.g. mechanical, electronics, or aerospace) for designing and building large products (e.g. cars, airplanes, computers, factory machinery or spacecraft)?
The greatest tools for acquiring and using knowledge for technological progress and great inventions are (i) the scientific method and (ii) mathematics; these two tools provide complementary perspectives for gaining deeper insights. Each acts like a light that illuminates mutually complementary sides, perspectives or dimensions. Since software researchers refuse to use the scientific method (i.e. the light of science), the software community has wasted 50 years, failed to solve the software crisis, and ended up with a useless fake CBE paradigm.
If fake scientists still don't realize that it is a mistake to blatantly violate scientific principles, they are going to repeat the same kind of mistakes in Artificial Intelligence research and development. Many things will stay enigmatic and end up in a crisis like the software crisis. Many things that are inexplicable, puzzling or enigmatic from the perspective of mathematics can become crystal clear from the scientific perspective, since the light of the scientific method illuminates the dark spots left by the light of mathematics.
Today, the greatest enigmas for researchers of software and computer science include the answers to simple questions such as: what is meant by a component in the context of all the other engineering disciplines, and what is meant by CBE (Component-Based Engineering), which successfully eliminated engineering crises from designing and building large and complex products (unlike the software crisis)?
Even if we know just 30% of what can be known about bacteria or viruses, each piece of knowledge documented in the textbooks is included if and only if it is supported by a falsifiable proof. It is impossible to find a piece of such knowledge that is not supported by a falsifiable proof. There is a possibility that 20% of the knowledge in the textbooks might be falsified in the future by counter-evidence such as new discoveries or empirical findings.
Since mankind has enough valid knowledge about things such as bacteria, light and electrons, researchers have been able to invent great things such as treatments for many kinds of infections, fibre-optic networks and semiconductor chips, respectively.
On the other hand, none of the knowledge about components in the textbooks for computer science or software has been tested (e.g. no one has challenged it) or is supported by any falsifiable proof. There is a possibility that up to 20% of that knowledge might be proven valid in the future. However, I am sure that 80% of the knowledge in the textbooks is invalid and not open to challenge.
Even simple things such as what a component is and what is meant by CBE have stayed an enigma and a mystery for many decades, since the knowledge in the textbooks about components is untested and invalid. Fake scientists at the NSF (which I prefer to call the National Fake Science Foundation) feel offended if anyone challenges their myths about so-called components.
Anything about which we have only 30% valid knowledge would be less enigmatic or mysterious than another thing with a huge body of knowledge, a significant portion of which is invalid. Hence plain old components are far more mysterious and enigmatic than invisible things such as viruses, electrons and biological cells. We have made many useful inventions even by relying on that limited valid knowledge.
Can you name any physical thing on Earth that is more mysterious and enigmatic for the scientific community than the plain old components used for designing and building large Component-Based Products (CBPs), taking into consideration all the knowledge in the published scientific literature and textbooks for all scientific disciplines?
A thing must be the most mysterious and enigmatic if there is a large BoK (Body of Knowledge) for it and a large percentage of that BoK is invalid (i.e. untested and unproven). What makes anything enigmatic is not just the lack of a sufficient valid BoK but also having large chunks of invalid knowledge.
Isn't it fascinating? Even such easy-to-acquire knowledge stays mysterious and enigmatic (and creates a paradox and crisis) if researchers refuse to use the light of scientific principles to illuminate dark spots that are in the realm of science, since such dark spots can't be illuminated by the light of mathematics.
I invented solutions for the software crisis by gaining the scientific knowledge essential for understanding the mysterious components essential for achieving the elusive and enigmatic CBE paradigm, in the context of all the other engineering disciplines. The fake scientists of computer science foolishly refuse to use the light of the scientific method.
The NSF is supposed to uphold scientific principles and the scientific method, but it is breaking the scientific principles, protocols and code of conduct for scientific discourse that are essential for the progress of science and technology. Any accepted theory (i.e. a theory, or concepts derived from it, used by practitioners of any craft or trade) must be treated as an assumption if it is not supported by a falsifiable proof (backed by repeatable evidence and/or verifiable facts).
The practitioners of astronomy and astrology practiced their trade or craft until the 16th century by relying on the 2300-year-old theory that "the Earth is static at the centre" (and on concepts and observations derived from that theory). Mankind falsely concluded that "the Earth is static at the centre" was a self-evident fact, so no one bothered to support this unproven theory with a falsifiable proof.
Since there was no falsifiable proof for such core first principles in the foundation, it was impossible to challenge the huge BoK (Body of Knowledge) acquired and accumulated over 1800 years for creating the paradigm that was dominant until the 16th century. The scientific community in the dark ages used illegal circular logic to defend the core first principles.
For example, they used observable facts such as epicycles, the non-uniform speeds of planets, the lack of stellar parallax and retrograde motions to defend the presumption that "the Earth is static at the centre". Countless concepts, observations and other derived theories in the whole BoK accumulated over 1800 years could be used to defend the belief that "the Earth is at the centre".
The scientific method and the protocols and processes for discourse were created and perfected to prevent exactly this. The biggest problem in subverting a flawed dominant paradigm is overcoming the illegal circular logic, which relies on the huge BoK acquired and accumulated for the paradigm. This can be prevented by having a falsifiable proof for the core first principles at the foundation of any dominant paradigm.
When there is a falsifiable proof and the theory is flawed, it is straightforward to falsify the proof by finding one or more pieces of verifiable and/or repeatable counter-evidence. This is why the scientific method was created: it requires that each theory be supported by a falsifiable proof.
Unfortunately, today's software researchers and experts use the huge BoK in the textbooks and published literature, acquired and accumulated over the past 50 years, that relies on untested and unproven core first principles in the pre-paradigmatic foundation, such as the prevailing notions about so-called components for software and the claim that computer science is a branch of mathematics.
About 80% of the accumulated knowledge in textbooks and other published literature about components for software is untested, unchallenged and invalid. Having invalid knowledge makes anything enigmatic, mysterious or paradoxical, and anything becomes more and more so as it accumulates more and more knowledge of which a larger and larger percentage is invalid.
Every piece of scientific knowledge about any physical thing in a textbook must be well tested, challenged, and supported by a falsifiable proof backed by empirical evidence that is open to challenge. Scientists of computer science should be ashamed of themselves if they feel offended by counter-evidence or facts that expose untested or unproven knowledge about the enigmatic components.
Isn't it pathetic if the NSF (National Fake Science Foundation) doesn't know or can't understand basic scientific principles, processes and the basic code of conduct? I oppose passing "The Endless Frontiers Act (S. 3832)" to fund the Fake Science Foundation until the fake scientists at the NSF understand basic scientific principles and processes and strictly implement the code of conduct for upholding the truth.
I wish to file a court case to block the act (i.e. The Endless Frontiers Act) to prevent tens of billions of dollars being flushed down the drain by the fake scientists at CISE, since nearly 50% of the US$100 billion goes to the CISE of the Fake Science Foundation.
Best Regards,
Raju Chiluvuri
Hi. The origin and nature of the universe.
• asked a question related to Computer Science
Question
Hi Researchers,
I am doing a computer science dissertation on the topic "An automated text tool to analyse reflective writing".
The research question is: "To what extent is the model valid for assessing reflective writing?" I want to use the questionnaire (closed-ended questions and one open question) to validate the proposed model.
I have used a 5-point Likert scale for analysing the data, with the options strongly agree, agree, neutral, disagree and strongly disagree. The sample size is 10 participants, chosen on the basis of their experience, career and knowledge of reflective writing.
1) Which statistical analysis tool should I use to analyse a sample size of 10 to validate the model? Please show me step by step how to analyse the data.
2) What would be the associated hypothesis?
3) Can I use the Content Validity Index with 10 participants on questionnaires using a 5-point Likert scale?
4) Is this step in my research a qualitative or a quantitative method? Why?
Do you have any suggestions on my hypothesis, the sample size, or the analysis tool?
But I have a worry: how will you increase your population? Are you going to change your area of study? I have a similar case with a population of just 13 teachers; how do I test the reliability of the questionnaires?
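On question 3 above, a minimal sketch of how the Content Validity Index is typically computed. Note the assumption: conventions vary, and here a rating of 4 ("agree") or 5 ("strongly agree") on the 5-point scale is counted as "relevant"; the ratings below are hypothetical:

```python
# Item-level Content Validity Index (I-CVI): the proportion of experts who
# rate an item as relevant. Assumption: ratings >= 4 on the 5-point Likert
# scale count as relevant agreement.

def i_cvi(ratings, cutoff=4):
    """I-CVI for one item given one rating per expert."""
    relevant = sum(1 for r in ratings if r >= cutoff)
    return relevant / len(ratings)

def s_cvi_ave(items):
    """Scale-level CVI: the average of the item-level CVIs."""
    return sum(i_cvi(r) for r in items) / len(items)

# Hypothetical ratings from 10 experts on three questionnaire items.
items = [
    [5, 4, 4, 5, 3, 4, 5, 4, 4, 5],
    [4, 4, 5, 2, 4, 3, 4, 5, 4, 4],
    [5, 5, 4, 4, 4, 4, 5, 4, 3, 4],
]
for i, ratings in enumerate(items, 1):
    print(f"item {i}: I-CVI = {i_cvi(ratings):.2f}")
print(f"S-CVI/Ave = {s_cvi_ave(items):.2f}")
```

With only 10 raters each I-CVI can take only values k/10, which is one reason small panels need a high agreement threshold.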
• asked a question related to Computer Science
Question
I would like to start a discussion on which index is more reliable, the h-index or the i10-index. Both are usable, but their methods of calculation differ. There is also the g-index. I am not asking about their differences but about their reliability. Any comments are welcome.
Good question, but both are so low for me that it does not matter.
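Since the thread compares the indices, here is a sketch of how each is computed from a list of per-paper citation counts (the counts below are hypothetical):

```python
# The three indices mentioned in the thread, computed from citation counts.

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cites, start=1) if c >= i)

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

def g_index(citations):
    """Largest g such that the top g papers together have >= g^2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cites, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

papers = [25, 18, 12, 9, 7, 6, 3, 1]     # hypothetical citation counts
print("h-index:  ", h_index(papers))     # 6
print("i10-index:", i10_index(papers))   # 3
print("g-index:  ", g_index(papers))     # 8
```

The example shows why reliability differs: the i10-index ignores everything below its fixed threshold, while the g-index rewards a few highly cited papers more than the h-index does.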
• asked a question related to Computer Science
Question
I have heard conflicting answers on this ranging from "do it to make your research accessible" to "only do it if you're invited" to "don't do it at all." The most moderate advice I saw was "one or two is fine as long as you have several other journal publications."
If the answer depends on my field, I'm in computer science and software engineering.
Yes, chapters in books are regarded as journal articles in my country. Remember, they go through a rigorous review process as well and are always at least 20 pages in length.
• asked a question related to Computer Science
Question
Rodgers' evolutionary concept analysis is used in the nursing field. I could not find any paper showing that it has been used in any field other than nursing.
Is it possible to use it for the computer science field?
Rodgers' evolutionary concept analysis is a method for developing knowledge in nursing science. ... A brief description of the evolutionary process, from data collection to data analysis, with the concepts' context, surrogate and related terms, antecedents, attributes, examples and consequences, is presented.
• asked a question related to Computer Science
Question
Dear Friends,
I can bet that no one in the world today (particularly in the software industry) knows or has the right answers to two simple questions: (i) What a component is, and (ii) What is meant by CBE (Component-Based Engineering), in the reality and context of all other engineering disciplines such as mechanical, electronics, and aerospace engineering.
Learning the right answers to these two simple questions would have two huge benefits: (i) Inventing effective solutions for the notorious software crisis by eliminating infamous spaghetti code, and (ii) Proving Computer Science is a fake science (i.e. paradox), that opens the door to transforming Computer Science into a real science that can address not only problems that have stood unsolved for decades (e.g. human-like computer intelligence that can be achieved by gaining valid scientific knowledge about the functioning and anatomy of bio-neurons in bio-neural networks) but also problems of the future such as bio-cellular computing, which cannot be solved by fake scientists or practitioners of fake science.
If a problem requires acquiring valid scientific knowledge, it is impossible for fake scientists practicing fake science to acquire such valid scientific knowledge essential to solving the problem. To provide tangible proof, I invented effective solutions for the infamous software crisis by gaining and using valid scientific knowledge that can provide the right answers to these simple questions about components, where scientific knowledge implies knowledge that clearly falls under the realm of science and is acquired without violating the core principles and proven rules of the scientific method.
I have been requesting software researchers to find the right answers to these simple questions for over a decade, and my request has been treated as heresy. Please see the attached PDF.
Why does the software research community find it repugnant or heretical to be asked to recognize reality and truth objectively? I feel any scientist should be ashamed of himself if he finds such a request repugnant or heretical and resorts to snubbing and personal attacks.
Best Regards,
Raju Chiluvuri
Dear Raju Chiluvuri,
Interesting considerations on past research, including a comparison of the research methodologies of Galileo and Kepler. That's right: for the development of science to achieve fully defined goals, the research work, the interpretation of the research results, and the inference must all be carried out objectively.
Best wishes,
Dariusz Prokopowicz
• asked a question related to Computer Science
Question
Dear colleagues,
The idea that a scientific or technical field can be identified by its language was a relevant topic in the work of Jürgen Habermas. In computing fields, however, this difference may be mild or fuzzy.
I have designed a small instrument to measure the domain difference between computer science and software engineering. I would appreciate it if you answered it ( http://shorturl.at/akFS7 ) or left me your opinion about it. Depending on the number of answers, I will post the results here.
Thank you very much
I filled in the survey. I come from an information technology undergraduate background.
• asked a question related to Computer Science
Question
I am looking for a journal related to psychology and computer science, with IF < 2.0. If anyone has published related work or knows of such a journal, that would be nice.
Malaysian journals are very good; they publish in both Arabic and English and have high rankings.
• asked a question related to Computer Science
Question
I am looking for computer science journal
You can search DOAJ, the Directory of Open Access Journals.
• asked a question related to Computer Science
Question
I need suggestions for a good book that covers the basics of Blockchain Technology that will be prescribed for BS Computer Science students.
I would definitely recommend Bitcoin and Cryptocurrency Technologies from Princeton University. Many university courses about blockchain technologies around the world follow exactly this book, which goes deep into understanding all the concepts and is fully self-contained, which in your use case is crucial. It is freely available online: http://bitcoinbook.cs.princeton.edu/
• asked a question related to Computer Science
Question
In this regard, I reviewed the articles indexed in DOAJ, but no suitable case was found.
I need an open-access journal in the field of computer science (cloud computing) that
1. does not limit the number of pages of the submitted article
2. does not have a processing or publishing fee
3. is indexed in JCR or Scopus
Unfortunately, almost all open-access journals charge processing fees, especially Scopus-indexed journals.
• asked a question related to Computer Science
Question
Is it possible now or in the future to create an artificial intelligence that will draw knowledge directly from the analysis of Internet resources and learn this knowledge?
Best wishes
The scenario of an artificial intelligence system that gains autonomy, gets out of human control, and becomes an increasingly powerful threat to humans by downloading data from the Internet has appeared in science fiction literature and film since the 1990s. With each passing year, artificial intelligence technology is developed and improved. So, can the scenario presented above come true in the future? What do you think about this? Please reply.
Thank you, Regards,
Dariusz Prokopowicz
• asked a question related to Computer Science
Question
In which scientific studies that you run or plan to run would artificial intelligence be helpful?
Best wishes
No, the discussion was not over. The overwhelming majority of the topics discussed on my ResearchGate profile are still relevant and the discussions are open. Yes, you are right: the issue of varieties of "analytical" artificial intelligence is also important. You have added an important point to our discussion.
Thank you, Have a nice day, Stay healthy! Best wishes,
Dariusz Prokopowicz
• asked a question related to Computer Science
Question
Can anyone suggest payment-based, fast-track, assured publication venues in computer science or cybersecurity among SJR- or JCR-indexed journals?
Many thanks
• asked a question related to Computer Science
Question
1. Functions
2. Matrix algebra & eigenvectors
3. Vector algebra
4. Complex numbers
Dear Prof. Halim,
It is difficult to categorize papers, but in general you can go for papers dealing with:
Fuzzy sets (equivalent to defining membership functions)
Crisp sets (defined through characteristic functions)
Rough sets (rough membership functions)
Soft sets and their variants (through the characteristic-function and membership-function approaches)
Matrices, which are also functions (in fact, transformations)
In fact, if you go by the latest definition of mathematics (the study of sets, functions and their properties), a substantial portion of mathematics deals with functions alone.
Keeping my above observations in view, can you please be more specific?
• asked a question related to Computer Science
Question
Dear,
I am looking for a low-cost but impact-factor journal of computer science, in domains such as networks and communications.
Regards.
Acta Informatica, AI and Ethics, AI & SOCIETY, Algorithmica, Annals of Mathematics and Artificial Intelligence, Applicable Algebra in Engineering, Communication and Computing, etc.
• asked a question related to Computer Science
Question
Computer Science and Engineering
- Formal methods for analysis and verification of robotic software
- Analysis and verification of smart-contracts in blockchains
• asked a question related to Computer Science
Question
Dear All,
I am developing a small computer program, "kendo" ( ), for the processing and visualization of mass spectral data (MS1 and/or MS2, SWATH, IM-MS...), mainly using the netCDF and mzML file formats.
For mzML files, I haven't found any complete list of the "accession" codes defining specific parameters (e.g. <cvParam cvRef="MS" accession="MS:1000511" name="ms level" value=""/>; <cvParam cvRef="MS" accession="MS:1000127" name="centroid spectrum" value=""/>; <cvParam cvRef="MS" accession="MS:1000285" name="total ion current" value=""/> ...)
Does anybody have such a list so I can generate clean mzML files?
Thank you !
Thierry
Hey Thierry, I am not an expert on this, but this may help:
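As far as I know, the authoritative list of these accession codes is the PSI-MS controlled vocabulary (the psi-ms.obo file maintained by HUPO-PSI), which defines MS:1000511 and friends. In the meantime, a practical sketch is to extract the codes a file actually uses so they can be checked against that vocabulary; the snippet below reuses only the cvParams quoted in the question:

```python
# Sketch: collect the cvParam accession codes used in an mzML document,
# so they can be compared against the PSI-MS controlled vocabulary.
import xml.etree.ElementTree as ET

# Inline snippet built from the cvParams quoted in the question.
MZML_SNIPPET = """<spectrum id="scan=1">
  <cvParam cvRef="MS" accession="MS:1000511" name="ms level" value="1"/>
  <cvParam cvRef="MS" accession="MS:1000127" name="centroid spectrum" value=""/>
  <cvParam cvRef="MS" accession="MS:1000285" name="total ion current" value="1.2e6"/>
</spectrum>"""

def used_accessions(xml_text):
    """Return {accession: name} for every cvParam element in the document."""
    root = ET.fromstring(xml_text)
    # Real mzML files carry a namespace, so match on the local tag name.
    return {el.get("accession"): el.get("name")
            for el in root.iter()
            if el.tag.split("}")[-1] == "cvParam"}

for acc, name in sorted(used_accessions(MZML_SNIPPET).items()):
    print(acc, name)
```

For a real file, replace `MZML_SNIPPET` with the file contents (or use `ET.parse`); the namespace-tolerant tag match keeps the same function working in both cases.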
• asked a question related to Computer Science
Question
One interpretation includes the following explanation:
An application of computer science and technology, and of special-purpose scanners, for the recognition of signals obtained by the excitation of magnetic fields in electrotechnical devices, apparatus and measurement.
What does UDC 004.6:004.86 mean?
• asked a question related to Computer Science
Question
I understand vaguely that the first author is supposed to be the one who "did the most work", but what counts as "work" in this comparison? Does "most" mean "more than all the other coauthors together" or just "more than any other coauthor"? What happens when the comparison is unclear? How often is "did the most work" the actual truth, versus a cover story for a more complex political decision?
I realize that the precise answer is different for every paper. I'm looking for general guidelines for how an outsider (like me) should interpret first authorship in your field. Pointers to guidelines from journals or professional societies would be especially helpful.
Good journals accept submissions with clear author-contribution statements for the submitted work. The first author is considered the main figure for the research item.
• asked a question related to Computer Science
Question
This is my only question on logic on RG; there are other questions on applications of logic, which I recommend.
There can be any type and number of truth values, not just binary, or two or three; it depends on the finesse desired. Information processing and communication seem to be described by a tri-state system or more in classical systems such as FPGAs, ICs, CPUs and others, in multiple applications programmed in SystemVerilog, an IEEE standard. This has replaced the Boolean algebra of the two-state system indicated by Shannon, also in gate construction with physical systems. The primary reason, in my opinion, is dealing more effectively with noise.
Although, constructionally, a three-state system can always be embedded in a two-state system, efficiency and scalability suffer. This should be more evident in quantum computing, offering new vistas, as explained in the preprint
As new evidence accumulates, including from modern robots interacting with humans in complex cyber-physical systems, this question asks first whether only a mathematical description of reality is evident while a physical description is denied. Ternary logic would then replace the two-way physical description of choices with a possible third truth value, which one already faces in physics, biology, psychology and life, needing more than a coin toss to represent choices.
The physical description of "heads or tails" is denied in favour of opening up to a third possibility, and so on, to as many possibilities as needed. Are we no longer black or white, but accepting of a blended reality as well?
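One concrete reading of the three-valued idea above is Kleene's strong logic over the truth values {0, ½, 1}, where AND is min, OR is max and NOT is 1 − x; SystemVerilog's 0/1/X resolution behaves roughly analogously for the unknown value X. A minimal sketch:

```python
# Three-valued (Kleene) logic with truth values 0 (false), 0.5 (unknown)
# and 1 (true): AND = min, OR = max, NOT = 1 - x.

F, U, T = 0.0, 0.5, 1.0

def t_and(a, b): return min(a, b)
def t_or(a, b):  return max(a, b)
def t_not(a):    return 1.0 - a

# Unknown propagates only when it could change the outcome:
print(t_and(F, U))   # false AND anything is false
print(t_or(T, U))    # true OR anything is true
print(t_and(T, U))   # outcome genuinely unknown
```

This is why a three-state system embeds in a two-state one only at a cost: the single "unknown" value above would need extra Boolean wires and resolution rules to simulate.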
Great idea
• asked a question related to Computer Science
Question
Hi everyone! I would like to write my bachelor's thesis on a topic that's currently relevant in the sphere of finance, marketing or computer science (or if it's possible a topic concerning all the three fields of interest). Those fields are the same upon which my bachelor is based (Bachelor of Science in Economics, Management and Computer Science).
I have some broad ideas about topics, for example: the link between brand equity and financial performance; the effects of aggressive marketing on financial markets; the new generation of traders (COVID has increased the number of retail investors with no previous experience); and machine learning applied to behavioral finance (I really enjoy those last two topics but have no idea how to connect them).
Obviously any kind of suggestions, regarding new topic (broad or specific) or the development of cited ones would be greatly appreciated.
A good answer would be: Digital currencies and digital payment - an IT challenge for safe transactions.
• asked a question related to Computer Science
Question
Hi,
I have been through some discussions regarding survey-paper writing tips and tricks. However, these are very generic. I want to know how to write a survey paper on computer science topics (e.g., blockchain, the Internet of Things, and so on). I have the following queries regarding the aforementioned concerns.
• How to design the flow of the survey paper?
• What should the minimum length of a survey paper be?
• How do I pick a reference paper, and which criteria should come first when selecting one? What is the minimum number of references I should include?
• Is it necessary to propose an idea in the paper? If yes then is it necessary to show a performance evaluation of the proposed scheme?
• While writing a survey paper which things should I focus on or care about?
Thanks for your time and input.
Importance and significance of the topic.
Discuss the background and target audience.
Summarize the surveyed research area and explain why the surveyed area has been studied.
Summarize the classification scheme you used to do the survey.
Summarize the surveyed techniques with the above classification scheme.
• asked a question related to Computer Science
Question
Hello,
I would appreciate it if you could suggest studies based on natural language processing (NLP) that help assist with medical emergency cases.
I have been working in the field of domain specific artificial intelligence and domain specific synthetic languages for several years. This is a very important question. I discussed this subject with a colleague just yesterday. A few notes from our discussion follows.
There are some jobs where a person’s eyes and hands must be free to perform their work, yet they must be able to communicate with other people and computer systems that help them perform that work. In some of these jobs, safety is most important.
Although systems like Alexa and Siri are already in wide use, they do not today provide the degree of reliability and safety that would be required for use in an emergency room or neonatal ICU.
In such cases, it would be prudent to create a task-specific synthetic language designed for communication between humans and computer systems while humans perform their work and computers answer questions, provide information, control task-specific equipment, and perform other functions, thereby enabling a person to work with their hands and eyes free to focus on the task at hand.
In short summary this would require implementation of computer systems that are able to:
· reliably hear and process spoken words
· translate spoken words into human and machine-readable form (text)
· verify that the spoken words, in human- and machine-readable form, comply with the specification (grammar) of the synthetic language
· determine what question or command has been voiced by the human,
· invoke the procedure that will answer the question, provide requested information, or perform the command using task specific equipment.
· reply in some audible or visible form if the spoken words were not recognized or if the spoken words in human and machine-readable form do not comply with the specification of the synthetic language, or if they are used in the wrong context.
For added safety, before such capability is put into use, it would be necessary to generate many requests and commands, both correctly and incorrectly formed, and to verify that each is successfully categorized as correct or incorrect.
For added safety, before such capability is put into use, it would be necessary to generate many requests and commands and to verify that the correct procedures are invoked, and through simulation, verify that the procedures safely perform the intended action.
Simply stated, a synthetic language is a set of words and set of rules for how to create requests or commands using the words of that language that can be spoken by a human, recognized by a computer, properly interpreted by a computer, and safely translated into the intended actions.
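To make the "verify against the grammar" step above concrete, here is a minimal Python sketch; the verbs, devices, and command pattern are entirely hypothetical, invented only to illustrate checking transcribed speech against a synthetic-language specification before any procedure is invoked:

```python
import re

# Hypothetical synthetic language: "<verb> <device> [to <value>]".
# The verbs and devices below are invented for illustration only.
VERBS = {"set", "read", "stop"}
DEVICES = {"ventilator", "monitor", "pump"}
COMMAND = re.compile(r"^(\w+) (\w+)(?: to (\d+))?$")

def parse(utterance):
    """Return (verb, device, value) if the transcribed utterance complies
    with the grammar, otherwise None (the system should then reply
    audibly or visibly that the command was not recognized)."""
    m = COMMAND.match(utterance.lower().strip())
    if not m:
        return None
    verb, device, value = m.groups()
    if verb not in VERBS or device not in DEVICES:
        return None
    return verb, device, int(value) if value else None

print(parse("set pump to 40"))  # complies with the grammar
print(parse("launch pump"))     # unknown verb: rejected, not acted on
```

The same parser can be driven by generated correct and incorrect commands, as suggested above, to verify that each is categorized as valid or invalid before the system goes into use.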
• asked a question related to Computer Science
Question
Hello,
My research work focuses on the use of NLP and voice recognition for medical emergency assistance.
I would appreciate suggestions for contributions that could be made in this area.
Best regards,
Dear Artem Kramov , thank you so much for your valuable reply, I'll take your suggestions into consideration.
• asked a question related to Computer Science
Question
I'm currently working on my undergraduate thesis where I develop a genetic algorithm that finds suboptimal 2D positions for a set of buildings. The solution representation is a vector of real numbers where every three elements represents the position and angle of one building. In that every three elements, the first element represents the x position, the second represents the y position, and the third represents the angle. A typical solution representation would look like:
[ building 0 x position, building 0 y position, building 0 angle, building 1 x position, ... ]
I have already managed to create a genetic algorithm that produces suboptimal solutions; it uses uniform crossover and discards infeasible solutions. However, it is only fast for small problems (e.g. 4 buildings), and adding more buildings makes it so slow that I think it devolves into brute force, which is definitely not what we want. I previously tried keeping infeasible solutions in the population with a poorer fitness, but that only produced best solutions worse than when I threw the infeasible ones away.
Now, I am looking for a crossover operator that can help me speed up the genetic algorithm and allow it to scale to more buildings. I have already experimented with arithmetic crossover and box crossover, but to no avail. So, I am hoping the community can suggest crossovers that I could try. I would also appreciate any suggestions to improve my genetic algorithm (and not just the crossover operator).
Thanks!
Hi,
Since your representation is in R^d, I strongly advise you to look into Evolution Strategies (ES) instead of GAs.
ES are evolutionary algorithms that are naturally suited to evolve real-valued solutions and are state-of-the-art. They operate by updating a distribution and sampling new real-valued solutions from it.
The most famous approach is Hansen's CMA-ES:
A modern approach that can operate very efficiently if linkage structure is known (e.g., what variables should be sampled at the same time) is Bouter's RV-GOMEA:
If your problem has many local minima and you want to explore them, you can look into Maree's Hill valley clustering-based ES:
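For a first experiment along these lines, a self-contained (mu + lambda) ES with fixed-sigma Gaussian mutation can be sketched in a few lines of Python; this is a didactic toy, not CMA-ES, and the quadratic objective below is made up, standing in for the building-layout fitness and feasibility handling:

```python
import random

def evolve(fitness, dim, mu=5, lam=20, sigma=0.5, generations=100):
    """Minimal (mu + lambda) evolution strategy, minimising `fitness`.
    For the building problem, dim = 3 * n_buildings (x, y, angle)."""
    pop = [[random.uniform(0, 10) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            parent = random.choice(pop)
            offspring.append([g + random.gauss(0, sigma) for g in parent])
        # (mu + lambda) selection: parents compete with their offspring
        pop = sorted(pop + offspring, key=fitness)[:mu]
    return pop[0]

# Made-up quadratic objective: drive every coordinate toward 5.0
best = evolve(lambda v: sum((g - 5.0) ** 2 for g in v), dim=6)
print(best)
```

In a real run the lambda would be replaced by the layout fitness (with its feasibility penalty), and sigma would typically be adapted over time, which is exactly what CMA-ES automates.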
• asked a question related to Computer Science
Question
Dear colleagues,
I am working on an intervention study with 3 different groups of students. One group represents the intervention group and consists of a seminar with a practical phase and computer science content. Another group has only a theory seminar with computer science content, and the last group has a seminar with other content. The last group is used to check how stable the constructs are. The 3 measurement points are equally spaced as pre-inter-post-tests in a quasi-experimental setting. Latent growth curve modelling doesn't fit! Is there a method that uses the strength of the 3 measurement time points with a small sample size?
Thank you
Martin
I would use a repeated measures ANOVA for Between x Within subjects designs.
Attached is a script using R for such a design, with post hoc tests. In the example there are 5 observations in each condition.
• asked a question related to Computer Science
Question
Hello,
I am seeking recommendations from the Research Gate community regarding studies about
the IT and Artificial intelligence solutions to assisting medical emergency cases.
What are the perspectives and future works that could be done in this area?
Best regards,
• asked a question related to Computer Science
Question
Can anyone suggest the merits and demerits of GAN-based versus classical data augmentation in plant leaf disease detection and classification systems?
interested
• asked a question related to Computer Science
Question
Good morning everyone. As part of my research work, I designed a network that extracts the leaf region from real-field images. When I searched for performance evaluation metrics for segmentation, I found a lot of metrics. Here is the list:
1. Similarity Index = 2*TP/(2*TP+FP+FN)
2. Correct Detection Ratio = TP/(TP+FN)
3. Segmentation errors (OSE, USE, TSE)
4. Hausdorff Distance
5. Average Surface distance
6. Accuracy = (TP+TN)/(FN+FP+TP+TN);
7. Recall = TP/(TP+FN);
8. Precision = TP/(TP+FP);
9. Fmeasure = 2*TP/(2*TP+FP+FN);
10. MCC = (TP*TN-FP*FN)/sqrt((TP+FP)*(TP+FN)*(TN+FP)*(TN+FN));
11. Dice = 2*TP/(2*TP+FP+FN);
12. Jaccard = Dice/(2-Dice);
13. Specificity = TN/(TN+FP);
14. Sensitivity = TP/(TP+FN);
Please suggest which performance evaluation metrics are best suited for my work. Thank you.
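Most of the overlap metrics in that list reduce to the four pixel-level confusion counts, so whichever subset is reported, it may help to compute them in one place; a small Python sketch (the counts in the example call are made up):

```python
import math

def segmentation_metrics(tp, fp, fn, tn):
    """Common segmentation metrics from pixel-level confusion counts."""
    dice = 2 * tp / (2 * tp + fp + fn)      # = F-measure = Similarity Index
    return {
        "dice": dice,
        "jaccard": dice / (2 - dice),       # equivalently tp / (tp + fp + fn)
        "recall": tp / (tp + fn),           # = Sensitivity = Correct Detection Ratio
        "precision": tp / (tp + fp),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "mcc": (tp * tn - fp * fn) / math.sqrt(
            (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
    }

# Made-up pixel counts for one predicted leaf mask vs. ground truth
m = segmentation_metrics(tp=900, fp=100, fn=50, tn=8950)
print(m["dice"], m["jaccard"])
```

Note that items 1, 9 and 11 in the list are the same quantity, as are items 2, 7 and 14, so reporting Dice (or Jaccard) plus precision/recall already covers most of them.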
• asked a question related to Computer Science
Question
Apparently, the largest technology companies are already working on a new type of electronic gadgets, which in the next stage of the current technological revolution, known as Industry 4.0, will replace smartphones.
Therefore, I am asking you: What type of electronic gadget will replace smartphones in the future?
Voicebots, I guess.
• asked a question related to Computer Science
Question
I am currently writing a proposal for my computational social science thesis; I need help focusing the sentiment analysis on how each sentiment changes over time in a social science context. The current research question I have in mind is: how did the sentiments of English tweets regarding COVID-19 as a threat evolve over the year 2020? However, it does not seem to link with social scientific topics, but rather reads as a computer science project. I got my research idea from this paper. Any advice is much appreciated!
The idea is good. Check if there is already a data set available for this problem, and then work on that data. Visualise the data and see if you get any ideas from it. It is basically an NLP problem.
• asked a question related to Computer Science
Question
Dear Colleague,
The University of Dayton’s Department of Computer Science invites applications for multiple tenure-track Assistant Professor positions beginning on August 16, 2021.
The Department seeks experts committed to excellence in undergraduate and graduate education who also have a focus on research. The individuals holding these positions are expected to teach undergraduate and graduate courses, pursue an externally funded research program, advise and mentor students, and engage in service to the university and the community. Applicants must have completed all course work needed for a Ph.D. in Computer Science or equivalent related field, the potential for quality teaching and scholarly research, and articulate a commitment to excellence in undergraduate and graduate education with a focus on research.
The Department of Computer Science offers a stimulating academic environment with active research programs in growing areas of computer science. Recently, the Department has relocated to a brand new state-of-the-art facility with numerous cutting-edge research labs and classrooms, and a commitment to further expand and grow the faculty and student opportunities. The department offers Bachelor’s, Master’s, and Ph.D.  degrees in Computer Science, and also a certificate in Autonomous Systems and Data Science.
View the complete job description, information about the University, and application instructions here: https://sciences.academickeys.com/job/zlja1rkx/
Warm regards,
--  - Valerie Woodruff    Ph. +1.203.693.1101
Thank you for spreading the news.
• asked a question related to Computer Science
Question
Hi Everybody
I am a PhD student, currently working on detecting miRNA targets based on the many-to-many relation between miRNAs and targets. I have created miRNA-target modules.
I have used the Guided Clustering technique to achieve my goals.
My background is in Computer Science, and I need help interpreting my results. Any volunteers?
Thank you very much, Dr Muhammad. That was really useful :)
• asked a question related to Computer Science
Question
At the Computer Science Department, at the beginning of the first semester, there are p freshman (study) groups: group i contains ni students, for all i = 1, ..., p. For the second semester the Department wants to reorganize these groups in such a way that:
(1) the new organizing schema has r groups;
(2) the new group j contains mj students, for any j = 1, ..., r;
(3) no new group contains more than c students who were classmates in the same old group of the former organizing schema (c ∈ ℕ* \ {1}).
We want to know if such a new organizing schema exists.
(a) Devise a network flow model for building (if possible) the new organizing schema.
(b) Prove a characterization of the existence of a solution to this problem in terms of maximum flow in the above network. (In the affirmative case provide a way of building the new schema: which old group will have to cede and how many students to which new group).
(c) What is the time complexity for deciding if a solution exists? (Discuss the time complexity for at least three algorithms.)
Peter Breuer: as far as I understand, there is a constraint on the composition of the newly organized groups, i.e. it is not possible to keep them unrearranged. Does your hydrologic analogy take this into account?
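For part (a), the standard construction is: source → old group i with capacity ni, old group i → new group j with capacity c, and new group j → sink with capacity mj. A new schema exists iff the maximum flow equals m1 + ... + mr, and the flow on edge (i, j) tells old group i how many students to cede to new group j. A plain-Python sketch using Edmonds-Karp (the two tiny instances at the bottom are made up):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a dict-of-dicts capacity graph."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:  # push flow, update residual edges
            cap[u][v] -= aug
            cap.setdefault(v, {})[u] = cap.get(v, {}).get(u, 0) + aug
        flow += aug

def schema_exists(n, m, c):
    """n: old group sizes, m: new group sizes, c: classmates cap."""
    S, T = "S", "T"
    cap = {S: {}}
    for i, ni in enumerate(n):
        cap[S][("old", i)] = ni
        cap[("old", i)] = {("new", j): c for j in range(len(m))}
    for j, mj in enumerate(m):
        cap[("new", j)] = {T: mj}
    return max_flow(cap, S, T) == sum(m)

print(schema_exists([4, 4], [3, 3, 2], 2))  # feasible
print(schema_exists([6, 2], [4, 4], 2))     # group 0 can cede at most 2*2 = 4 of its 6
```

For part (c), Edmonds-Karp runs in O(VE^2); Dinic's algorithm gives O(V^2 E), and push-relabel variants O(V^2 sqrt(E)) or O(V^3), all polynomial in p + r.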
• asked a question related to Computer Science
Question
If we have three different domains of data (e.g. security, AI and sport), and we ran 3 different case studies or experiments (one for each domain) and estimated the Precision, Recall and F-measure for each experiment, how can we estimate the overall Precision, Recall and F-measure for the model? Is a normal mean (average) suitable, or F1, or a p-value? Which one is better?
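The two usual conventions are macro-averaging (take the mean of the three per-domain scores, weighting each domain equally) and micro-averaging (pool the TP/FP/FN counts across the domains, then compute once, which weights each instance equally); a p-value is a significance measure, not an averaging method. A small Python sketch with made-up per-domain counts:

```python
def prf(tp, fp, fn):
    """Precision, recall, F1 from pooled or per-domain counts."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r, 2 * p * r / (p + r)

# Hypothetical per-domain (TP, FP, FN) counts: security, AI, sport
domains = [(80, 20, 10), (45, 5, 15), (60, 30, 20)]

# Macro-average: mean of the three per-domain F1 scores
macro_f1 = sum(prf(*d)[2] for d in domains) / len(domains)

# Micro-average: pool the counts across domains, then compute once
tp, fp, fn = (sum(col) for col in zip(*domains))
micro_f1 = prf(tp, fp, fn)[2]

print(round(macro_f1, 3), round(micro_f1, 3))
```

If the three domains matter equally, report the macro average; if domains differ in size and every instance should count equally, report the micro average (many papers report both).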
• asked a question related to Computer Science
Question
Dear Friends and Colleagues from RG,
I wish You all the best in the New Year. I wish you a successful continuation and successes in scientific work, achieving interesting results of scientific research in the New Year 2019 and I also wish you good luck in your personal life, all the best.
In the New Year, I wish You success in personal and professional life, fulfillment of plans and dreams, including successes in scientific work, All Good.
In the ending year, we often ask ourselves:
Have we successfully implemented our research plans in the ending year? We usually answer this question that a lot has been achieved, that some of the plans a year ago have been realized, but not all goals have been achieved.
I wish You that the Next Year would be much better than the previous ones, that each of us would also achieve at least some of the planned most important goals to be achieved in personal, professional and scientific life.
I wish You dreams come true regarding the implementation of interesting research, I wish You fantastic results of research and effective development of scientific cooperation.
I wish You effective development of scientific cooperation, including international scientific cooperation, implementation of interesting research projects within international research teams and that the results of scientific research are appreciated, I wish You awards and prizes for achievements in scientific work.
I wish You many successes in scientific work, in didactic work and in other areas of your activity in the New Year, and I also wish you health, peace, problem solving, prosperity in your personal life, all the best.
Thank you very much.
Best wishes.
I wish you the best in New Year 2019.
Happy New Year 2020.
Dariusz Prokopowicz
Dear Colleagues and Friends from RG,
Hello Dear Everyone,
In the New Year of 2021, I wish all researchers, scientists and users of the Research Gate portal fulfill their plans and dreams, success in their professional work and personal life, success in the field of research and publication of their results, all the best. I wish all my colleagues and friends from the Research Gate portal that the New Year 2021 will be better than the previous years, that the SARS-CoV-2 (Covid-19) coronavirus pandemic ends as soon as possible, that there is a quick return to "normal", to the pre-pandemic state . The year 2021 begins a new decade of many new challenges related to solving key problems of the development of civilization, solving problems resulting from the pandemic crisis, economic, social and climate crises, etc. Let us hope that the development of science, further scientific research, new technologies, the current fourth technological revolution will enable the solution of key problems of civilization development.
Happy New Year 2021
Best regards, Stay healthy!
Dariusz Prokopowicz
• asked a question related to Computer Science
Question
Q3 or Q4 will be okay. I need a Scopus-indexed journal without an APC and with fast publication. The paper is from the Computer Science discipline.
• asked a question related to Computer Science
Question
Mac OS vs Linux vs Windows??
I personally use MacOS but would like to know what other people use for their research work, preferably researchers associated with Computation work. If possible do let me know the reason. This is just a survey.
MS Windows OS
• asked a question related to Computer Science
Question
I am using data augmentation and decaying the learning rate after each epoch. If I don't use data augmentation but keep the callback, my training accuracy reaches 99.65%, but not validation. In another scenario, if I remove the learning-rate-decay callback but keep data augmentation, training accuracy also improves and reaches 99%, but not validation. Why does it get stuck with the current configuration (LR decay + data augmentation)?
What could be the reason for this problem with data augmentation?
Still, you are not stating which algorithm (i.e. which software) you are using.
But anyway, what's the problem if you can successfully train the model and the accuracy is that high?
Why are we interested in the values in the training step?
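One thing worth checking in such a setup is what the decayed rate actually is by the later epochs: augmentation makes the training distribution harder, and if the per-epoch decay is aggressive, the learning rate may shrink before the model has fit it. A framework-agnostic sketch of the usual exponential per-epoch schedule (the lr0 and decay values are made up):

```python
def exponential_decay(lr0=0.01, decay=0.9):
    """Per-epoch schedule: lr(epoch) = lr0 * decay ** epoch."""
    def schedule(epoch):
        return lr0 * decay ** epoch
    return schedule

sched = exponential_decay()
# The rate shrinks geometrically; by epoch 50 it is a small
# fraction of lr0, which may be too little for augmented data.
print(sched(0), sched(50))
```

A function of this shape is what per-epoch LR callbacks (e.g. Keras's LearningRateScheduler) typically wrap; printing the schedule for the planned number of epochs makes it easy to see whether the decay is too fast for the augmented setting.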
• asked a question related to Computer Science
Question
Dear All,
I have data on metabolic pathways in case and control samples; some pathways are represented several times due to the reactions involved in the case sample. Now I would like to see whether the number of reactions per metabolic pathway in the case sample is significant when compared to the number of reactions in the control sample. I have attached an Excel sheet to this message which includes pathway names and the number of reactions in case and control. My next step is to perform Fisher's exact test in R and then to control the false discovery rate. Kindly let me know how this could be performed using R.
Try this one:
FDR(pvals, qlevel = 0.05)
Arguments:
pvals: a vector of p-values on which to conduct the multiple testing
qlevel: the proportion of false positives desired
Good luck-
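If that FDR function is unavailable, note that base R's p.adjust(pvals, method = "BH") performs the standard Benjamini-Hochberg adjustment; the step-up procedure itself is simple enough to sketch in plain Python (the p-values below are made up):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Indices of hypotheses rejected at FDR level q (BH step-up)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank k with p_(k) <= (k / m) * q
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k = rank
    return sorted(order[:k])

# Made-up p-values, e.g. one Fisher test per pathway
pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
print(benjamini_hochberg(pvals))  # only the two smallest survive
```

In the pathway setting, each p-value would come from one Fisher's exact test (case vs. control reaction counts for that pathway), and the returned indices are the pathways still significant after FDR control.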
• asked a question related to Computer Science
Question
Hello,
My situation is a little strange because I am a Statistics PhD student but I ended up doing research in Natural Language Processing (NLP)/ Deep Learning due to my supervisor's recommendation.
I feel strange because, although I can read NLP papers, understand them, and come up with my own research topics (so far I have identified 3 different ones), I am not part of a computer science NLP lab. My supervisor is into applying/developing machine learning algorithms for analyzing open-ended questions, so I guess his work is somewhat related to NLP, but he is not really an expert in NLP/deep learning. I do like my current supervisor, he is trying to help me in the best way he can, and I want to continue working with him. But I am wondering,
- Is it advisable for me to seek a professor from computer science NLP lab to be my co-supervisor (upon my supervisor's consent)?
- What are the advantages of doing a NLP research in a big computer science NLP lab?
- Is collaboration important for a PhD student doing research in NLP? I see that many PhD students who publish at top NLP conferences often co-author with multiple collaborators. I am just doing NLP research with my supervisor, and we don't really have any connection with NLP researchers. Should I make an effort to find NLP researchers I can work with (of course, I will be doing most of the work since I want to be the first author)?
- If I need to seek a collaborator / co-supervisor who is familiar with the field of NLP, how should I approach them? Can I try sending them emails to see if they are interested in my research? I guess the best way is to talk to people at a conference, but due to COVID-19 everything is taking place online, and I doubt whether I will be able to make any connections.
- Do PhD students from a big NLP lab have better computational resources than PhD students who do NLP research outside those labs? My supervisor recently set up an account on the national supercomputer so that I may take advantage of it, but since almost every researcher in my country has access to this resource, the queue wait time can be long when I submit a job on SLURM. Are PhD students from computer science NLP labs often free of this problem? (Do they have access to better computational resources?)
...Lots of questions! Could someone advise me on these issues?
Thank you so much for your time,
Python and R. Write code and try things yourself; do not limit yourself to theoretical study from papers.
• asked a question related to Computer Science
Question
Any decision-making problem when precisely formulated within the framework of mathematics is posed as an optimization problem. There are so many ways, in fact, I think infinitely many ways one can partition the set of all possible optimization problems into classes of problems.
1. I often hear people label meta-heuristic and heuristic algorithms as general algorithms (I understand what they mean), but I have been wondering: can we apply these algorithms to arbitrary optimization problems from any class, or more precisely, can we adjust/re-model any optimization problem in a way that permits us to attack it with the algorithms in question?
2. Then I thought, well, if we assume the answer to 1 is yes, then, extending the argument, we could reformulate any given problem to be attacked by any algorithm we desire (of course at a cost), which would make the label a useless tautology.
I'm looking for different insights :)
Thanks.
The change propagation models may give a great idea
• asked a question related to Computer Science
Question
Please could you urgently assist in getting this article titled "Influence of time pressure on aircraft maintenance errors" by Kimura Akisato, T. V. Thaden, and William D. Geibel, published 2008?
• asked a question related to Computer Science
Question
I am not sure if anyone has studied this topic. It has been observed in certain cases that successful researchers had poor academic records. These are exceptions; I am interested in any study on this topic. Please also cite negative or positive examples. Ideally there should be a high positive correlation; if there is not, then why?
I mean: will a person who got good grades/marks or a high rank at school/university level be a great scientist? For example, if a topper of the JEE (the top exam in India for admission to engineering colleges) joins research, will he or she be the best scientist in the world?
From my long experience of teaching, researchers with rich research CVs usually have poor teaching records, and vice versa. That is especially true at universities that do not have a sufficient research budget. The reason is that research takes more time at the expense of teaching.
• asked a question related to Computer Science
Question
Dear all,
I hope this question finds you in good health & spirit!
This is a repeated question but in a different way.
How can I add a research paper to Google Scholar? I have the following three papers, all of them have been added manually to the "Research Scholar":
Nidhal El-Omari, “Sea Lion Optimization Algorithm for Solving the Maximum Flow Problem”, International Journal of Computer Science and Network Security (IJCSNS), e-ISSN: 1738-7906, DOI: 10.22937/IJCSNS.2020.20.08.5, 20(8):30-68, 2020.
Nidhal El-Omari, “An Efficient Two-level Dictionary-based Technique for Segmentation and Compression Compound Images”, Modern Applied Science, published by the Canadian Center of Science and Education, Canada, p-ISSN:1913-1844, e-ISSN:1913-1852, DOI:10.5539/mas.v14n4p52, 14(4):52-89, 2020.
Nidhal El-Omari, M. H. Alzaghal, and Sameh Ghwanmeh, “ICT and Emergency Volunteering in Jordan: Current and Future Trends”, Computer Science and Information Technology, published by Horizon Research Publishing Corporation (HRPUB), p-ISSN:2331-6063, e-ISSN:2331-6071, DOI: 10.13189/csit.2015.030402, 3(4):105-112, 2015.
But, I couldn't find any one of them when I search for the usual way. To be sure, you can see the three attached snapshots.
Thank you!
Regards,
I appreciate your valuable assistance. But I had already added my research articles manually before posting this question on this portal. My question relates to a different situation: I want my papers to appear when I search as in the snapshots attached to the original question; please take another glance at them. Anyway, the problems with the first and second papers were solved after I uploaded them to www.academia.edu using the following link:
Until this moment, the problem with the third paper, namely "ICT and Emergency Volunteering in Jordan: Current and Future Trends", has not been solved, despite it having been uploaded to Academia and other sites. Thanks again for your great assistance.
Finally, I want to express my gratitude and thanks to Dr. Roman Anufriev, Dr. Mohammed O. Al-Amr, Prof. Vadym Slyusar, and all the other respected researchers for their contributions, attention, and participation.
• asked a question related to Computer Science
Question
Wouldn't it be great if everyone knew how to make a phone and understood the capacitors and transistors used in computers? Maybe China is ahead of everyone because people there can make one at home, given the availability of all the materials at a cheaper cost.
Dear Bhavesh,
It looks interesting, but practically it may not be feasible. Technology has not yet become that user-friendly, and there are many other constraints.
• asked a question related to Computer Science
Question
Good Morning All,
I am pursuing a PhD in computer science. I am thankful to all the people who guide me whenever I get stuck. I have a list of parameters or features related to my topic, but now I am confused about how to select features from that list. How should I represent these features during our RDC presentation? Do we only need to list the attributes, or do we need to give detailed data for them?
Hi Shah,
In terms of how you can select an appropriate subset of features, there are a number of algorithms that can be applied, such as Sequential Feature Selection. The Sequential Feature Selection algorithm has been implemented in Python and Matlab, and it is one of the commonly used methods for obtaining a subset of highly informative features from the entire feature space/vector.
I hope you would find this helpful.
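As a concrete illustration of the greedy forward variant of sequential feature selection (scikit-learn also ships a SequentialFeatureSelector), here is a plain-Python sketch; the feature names, utilities, and redundancy penalty are entirely made up, and in practice score() would be a cross-validated model score:

```python
def forward_select(features, score, k):
    """Greedy forward selection: repeatedly add the feature that most
    improves score(subset) until k features are chosen."""
    selected = []
    while len(selected) < k:
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
    return selected

# Made-up toy score: each feature has a fixed utility, and the
# redundant pair ('area', 'perimeter') adds nothing when combined.
utility = {"area": 3.0, "perimeter": 2.9, "hue": 2.0, "texture": 1.0}

def score(subset):
    s = sum(utility[f] for f in subset)
    if "area" in subset and "perimeter" in subset:
        s -= min(utility["area"], utility["perimeter"])  # redundancy penalty
    return s

print(forward_select(list(utility), score, k=2))  # picks 'area', then 'hue'
```

Note how the greedy criterion skips 'perimeter' even though it is the second-best single feature, because it is redundant given 'area'; this is the behaviour that distinguishes sequential selection from simply ranking features individually.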
• asked a question related to Computer Science
Question
Hello Friends,
I am a master's student (second stage) at the University of Baghdad, Department of Computer Science, and my research topic in mobile and network computing is about tracking, movement and mobility via mobile phone. So, I need spatiotemporal data about participants that gives temporal and spatial indicators along with the type of activity.
Can anyone tell me where I can get it?
Dear Khalid,
For your understanding, I have attached two YouTube links on how to track and plot real-time spatiotemporal data. After tracking, you have to transfer the data to your PC or laptop to get a tracking map with elevation and location (lat & long). Please see the links; hopefully these videos fulfill your requirements.
• asked a question related to Computer Science
Question
I need help understanding how to use the Java Netbeans IDE for writing and compiling programmes
Dear A: If you still need any help with Java and/or NetBeans, I have long experience using them.
This is my email: nidhal.omari@wise.edu.jo
• asked a question related to Computer Science
Question
Dear Friends,
Kindly allow me to ask you a very basic but important question. What is the basic difference between (i) scientific disciplines (e.g. physics, chemistry, botany or zoology) and (ii) disciplines or branches of mathematics (e.g. calculus, trigonometry, algebra and geometry)?
I feel that objective knowledge of the basic or primary difference between science and math is useful for imparting accurate and objective knowledge of science and math (and their role in technological invention and expansion).
Let me give my answer to start this debate:
Each branch of mathematics invents and uses a complementary, harmonious and/or interdependent set of valid axioms as core first principles in the foundation for evolving and/or expanding an internally consistent paradigm for that branch (e.g. calculus, algebra, or geometry). If the foundation comprises a few inharmonious or invalid axioms, those invalid axioms create internal inconsistencies in the discipline (i.e. the branch of math). Internal consistency can be restored by fine-tuning the inharmonious axioms or by inventing new valid axioms to replace the invalid ones.
Each of the Scientific disciplines must discover new falsifiable basic facts and prove the new falsifiable scientific facts and use such proven scientific facts as first-principles in its foundation, where a scientific fact implies a falsifiable discovery that cannot be falsified by vigorous efforts to disprove the fact. We know what happened when one of the first principles (i.e. the Earth is static at the centre) was flawed.
Examples of basic proven scientific facts include: the Sun is at the centre; Newton's 3 laws of motion; there exists a force of attraction between any two bodies having mass; the force of attraction decreases if the distance between the bodies increases; and increasing the mass of the bodies increases the force of attraction. Notice that I intentionally didn't mention directly and/or indirectly proportional.
This kind of first principles provide foundation for expanding the BoK (Body of Knowledge) for each of the disciplines. The purpose of research in any discipline is adding more and more new first-principles and also adding more and more theoretical knowledge (by relying on the first-principles) such as new theories, concepts, methods and other facts for expanding the BoK for the prevailing paradigm of the discipline.
I want to find an answer to this question because software researchers insist that computer science is a branch of mathematics, so they have been insisting that it is okay to blatantly violate scientific principles when acquiring scientific knowledge (i.e. knowledge that falls under the realm of science) that is essential for addressing technological problems of software, such as the software crisis and human-like computer intelligence.
If researchers of computer science insist that it is a branch of mathematics, I wanted to propose a compromise: The nature and properties of components for software and anatomy of CBE (Component-based engineering) for software were defined as Axioms. Since the axioms are invalid, it resulted in internally inconsistent paradigm for software engineering. I invented new set of valid axioms by gaining valid scientific knowledge about components and CBE without violating scientific principles.
Even maths requires finding, testing, and replacing invalid axioms. I hope this compromise satisfies the computer scientists who insist that software is a branch of maths. It appears that software or computer science is a strange new kind of hybrid between science and maths, which I want to understand better (it may be useful for solving other problems such as human-like artificial intelligence).
Best Regards,
Raju Chiluvuri
Dear @Raju Chiluvuri
In my opinion, mathematics is the precursor to all the disciplines of science. And, in fact, mathematics is also a science.
Thanks!
• asked a question related to Computer Science
Question
I know this question may seem simple, but I want to know the difference between these concepts, especially in computer science.
Dear Dr. Albahri:
My four-cent is as follows:
Model in computer science is the result of a process of representing a real-world object or phenomenon as a set of logical, mathematical and computational concepts and equations. Contemporary scientific practice employs at least three major categories of models: concrete models, mathematical models, and computational models. Simulation of a system is the operation of a model in terms of time or space, which helps analyze the performance of an existing or a proposed system.
Architecture in computer science is logical/functional/structural arrangement as well as a set of rules and methods that describe the functionality, organization, and implementation of an architected system. Some definitions of architecture define it as describing the capabilities and programming model of a computer but not a particular implementation. In a pragmatic view, it is the conceptual structure around which a given system is designed.
Framework in computer science is an abstraction in which a system providing generic functionality can be selectively changed by additional, purposely developed constituents, thus providing application-specific features and services. In a pragmatic view, a framework (a software, a system, or an environment) is a platform for developing specific applications.
Protocol in computer science is a standardized set of rules (types, commands, constraints, acknowledgements) for preparing, processing, and communicating something (e.g. data, instructions). Protocols enable computers to be networked with one another and to transfer data (as in the OSI model). Protocols are typically constructed by following reductionist principles.
Best regards, I.H.
• asked a question related to Computer Science
Question
Is there a freely available test dataset of research papers in computer science field? I need to evaluate my text retrieval system which uses a computer science domain ontology, so the dataset should come with a ground truth/relevance judgments for text retrieval task and/or classification task.
I appreciate any suggestion. Thanks
Yes, there are many; see e.g. these free, publicly available databases: https://www.visualdata.io/discovery and https://www.robots.ox.ac.uk/~vgg/data/
• asked a question related to Computer Science
Question
I need a very good textbook which is self-explanatory and easy to understand, for self-study by a beginner in Java programming.
Just I want to add to
• asked a question related to Computer Science
Question
Since technology has seemed to extensively pervade virtually every facet of medicine, do you feel that students of medicine (MD or MBBS) should be better equipped with knowledge and skills in mathematics, physics, biomedical image processing (to better process medical images for diagnostics and surgical planning), biomedical signal processing (for better analysis of bioelectrical signals, e.g. EEG, EKG, EMG), and basic computer science?
Care to discuss?
Dear Myles Joshua Toledo Tan, I don't think medical students need to study math or physics, as they already have to acquire good basics in them to qualify for the admission test. At the MBBS level they have to cover a vast and extensive curriculum. If you want to add something for them, you could add English language, behavioral science, and psychology.
• asked a question related to Computer Science
Question
cloud computing security
Dear respected researchers:
Kindly click the link below to find a complete, new, and full answer:
Or refer to this:
Nidhal Kamel Taha El-Omari, "Cloud IoT as a Crucial Enabler: a Survey and Taxonomy", Modern Applied Science, Canadian Center of Science and Education, Canada, p-ISSN: 1913-1844, e-ISSN: 1913-1852, DOI: 10.5539/mas.v13n8p86, 13(8):86-149, 2019.
• asked a question related to Computer Science
Question
Is there really a significant difference between the performance of the different meta-heuristics other than "ϵ"?!!! I mean, at the moment we have many different meta-heuristics and the set keeps expanding. Every so often you hear about a new meta-heuristic that outperforms the other methods, on a specific problem instance, by ϵ. Most of these algorithms share the same idea: randomness with memory or selection or whatever you call it, to learn from previous steps. You see in MIC, CEC, SIGEVO many repetitions of new meta-heuristics. Does it make sense to be stuck here? Now the same repeats with hyper-heuristics and .....
Apart from the foregoing discussion, all metaheuristic optimization approaches are alike on average in terms of their performance (the gist of the "no free lunch" theorems). The extensive research in this field shows that an algorithm may be the topmost choice for some classes of problems but, at the same time, turn out to be an inferior selection for other types of problems. On the other hand, since most real-world optimization problems have different needs and requirements that vary from industry to industry, there is no universal algorithm or approach that can be applied to every circumstance, and it therefore becomes a challenge to pick the right algorithm that sufficiently suits these requirements.
A discussion of this issue is at section two of the following reference:
• asked a question related to Computer Science
Question
What kinds of testing techniques are used for IoT systems?
What are the challenges of testing IoT?
Do we need to modify traditional testing techniques for IoT?
I would really appreciate it if anyone could discuss this or refer me to some papers.
Thank you
• asked a question related to Computer Science
Question
Why are most researchers shifting from TensorFlow to PyTorch?
TensorFlow creates static graphs; PyTorch creates dynamic graphs.
In TensorFlow, you have to define the entire computational graph of the model and then run your ML model. In PyTorch, you can define/manipulate/adapt your graph as you work. This is particularly helpful when using variable-length inputs in RNNs.
TensorFlow has a steep learning curve; building ML models in PyTorch feels more intuitive. PyTorch is a relatively new framework compared to TensorFlow, so in terms of resources you will find much more content about TensorFlow than PyTorch. This, I think, will change soon.
TensorFlow is currently better for production models and scalability, as it was built to be production-ready. PyTorch is easier to learn and work with, and is better for some projects and building rapid prototypes.
• asked a question related to Computer Science
Question
I wanted to get a scatter plot, but the constraint is to plot these points over an image with the same dimensions as the range of points. Is it possible? If yes, which library is good to start with? I tried using gnuplot, but it is causing problems. For now I have the code stated here, but it didn't work. I tried using a bitmap image with the points to be plotted in a data1.dat file and used the gnuplot script stated in the link:
set terminal pngcairo transparent
set output 'Figure1.bmp'
plot "Co_ordinates0.dat"
set output
To do this using matplotlib in Python (assuming the image is Figure1.bmp and the points are in Co_ordinates0.dat):
import numpy as np
import matplotlib.pyplot as plt
import cv2
img = cv2.cvtColor(cv2.imread('Figure1.bmp'), cv2.COLOR_BGR2RGB)  # load background image (BGR -> RGB)
xs, ys = np.loadtxt('Co_ordinates0.dat', unpack=True)  # load the points to overlay
fig, ax = plt.subplots()
ax.imshow(img)  # draw the image first
ax.scatter(xs, ys)  # then plot the points on top of it
plt.show()
This is working for me. The matplotlib.patches module also works for more extensive visualizations.
• asked a question related to Computer Science
Question
To apply graph theory concepts in computer science and engineering.
Following on from Juho Andelmin's answer, I can identify these areas of application of graph theory and its components: natural language processing (NLP), routing networks, big data and data science, hypergraph semi-supervised learning, etc.
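As a small illustration of the routing-network application, a toy graph (hypothetical edges, invented for this sketch) can be stored as an adjacency list, and breadth-first search finds a shortest-hop path:

```python
from collections import deque

# Toy directed network as an adjacency list (hypothetical topology).
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}

def shortest_path(g, start, goal):
    """Breadth-first search: returns a path with the fewest hops, or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in g[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_path(graph, 'A', 'D'))  # ['A', 'B', 'D']
```

The same representation and traversal pattern underlies packet routing, link analysis in data science, and dependency resolution.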
• asked a question related to Computer Science
Question
When Einstein published four great papers in one year, it was something very special. Now this is not the case:
there are many professors who publish papers every week, even if they did no work on those papers. From your perspective, what is the reason to put a professor's name on your paper?
How can a young professor achieve that at the beginning of an academic career path?
Dear all,
I think that the main reason is that universities give money to professors for each research article.
Thank you!
Regards,
• asked a question related to Computer Science
Question
I found many papers applying the Deep Deterministic Policy Gradient (DDPG) algorithm that implement a critic neural network (NN) architecture where the action vector skips the first layer. That is, the state vector is connected to the first layer, but the actions are connected directly to the second layer of the critic NN.
Actually, in the original DDPG paper ("CONTINUOUS CONTROL WITH DEEP REINFORCEMENT LEARNING", Lillicrap 2016) they do that. But they do not explain why.
So... why is this? What are the advantages of this architecture?
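For concreteness, here is a minimal numpy sketch of the architecture described in the question: the state passes through the first layer alone, and the action is concatenated only at the input of the second layer. Layer sizes follow the 400/300 values commonly quoted for Lillicrap et al. (2016); the random, untrained weights are just placeholders.

```python
import numpy as np

state_dim, action_dim, h1, h2 = 3, 1, 400, 300   # hypothetical dimensions
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(state_dim, h1)), np.zeros(h1)
W2, b2 = rng.normal(size=(h1 + action_dim, h2)), np.zeros(h2)
W3, b3 = rng.normal(size=(h2, 1)), np.zeros(1)

def critic(state, action):
    x = np.maximum(0, state @ W1 + b1)          # layer 1: state features only
    x = np.concatenate([x, action], axis=-1)    # action skips layer 1, joins here
    x = np.maximum(0, x @ W2 + b2)              # layer 2: state features + action
    return x @ W3 + b3                          # scalar Q(s, a)

q = critic(np.ones(state_dim), np.ones(action_dim))
print(q.shape)  # (1,)
```

The sketch only shows the wiring, not a justification; the original paper does not explain the choice.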
Regards,
Shafagat
• asked a question related to Computer Science
Question
Can we affirm that whenever one has a prediction algorithm, one can also get a correspondingly good compression algorithm for data one already has, and vice versa?
There is some correlation between compression and prediction. Prediction is a tool of compression. Assume you have data with redundancy in it: you can predict the redundancy from the context of the signal and remove it by simply subtracting the predicted signal from the real signal.
The difference will be the compressed signal.
Prediction is a powerful concept for reducing the redundancy in signals and consequently compressing them.
Prediction is used intensively in video codecs and other signal codecs.
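The subtract-the-prediction idea can be sketched with the simplest possible predictor, "the next sample equals the previous one" (delta coding). The toy signal is invented for illustration; real codecs use far richer context models, and the residuals would then be fed to an entropy coder:

```python
def encode(signal):
    """Keep the first sample; replace each later sample with its residual."""
    return [signal[0]] + [signal[i] - signal[i - 1] for i in range(1, len(signal))]

def decode(residuals):
    """Reverse the subtraction: add each residual back onto the prediction."""
    out = [residuals[0]]
    for r in residuals[1:]:
        out.append(out[-1] + r)
    return out

data = [10, 11, 12, 12, 13, 15]
res = encode(data)
print(res)  # [10, 1, 1, 0, 1, 2] -- small residuals are cheaper to entropy-code
assert decode(res) == data
```

The better the predictor matches the signal, the smaller (and more compressible) the residuals become, which is exactly the prediction-compression link described above.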
Best wishes
• asked a question related to Computer Science
Question
This Special Issue will focus on control, modeling, various machine learning techniques, fault diagnosis, and fault-tolerant control for systems. Papers specifically addressing the theoretical, experimental, practical, and technological aspects of modeling, control, fault diagnosis, and fault-tolerant control of various systems and extending concepts and methodologies from classical techniques to hybrid methods will be highly suitable for this Special Issue.
Potential themes include, but are not limited to:
Modeling and identification
Reinforcement learning for control
Data-driven control
Fault diagnosis
Fault-tolerant control of systems based on various control and learning techniques
Prof. Dr. Jong-Myon Kim
Prof. Dr. Hyeung-Sik Choi
Dr. Farzin Piltan