Science topics: Computer Science
Science topic

Computer Science - Science topic

Explore the latest questions and answers in Computer Science, and find Computer Science experts.
Questions related to Computer Science
  • asked a question related to Computer Science
Question
3 answers
What is relation between Machine Intelligence and Smart Networks?
Relevant answer
Answer
Artificial Intelligence has been around for a long time – the Greek myths contain stories of mechanical men designed to mimic our own behavior. Very early European computers were conceived as “logical machines” and by reproducing capabilities such as basic arithmetic and memory, engineers saw their job, fundamentally, as attempting to create mechanical brains.
As technology, and, importantly, our understanding of how our minds work, has progressed, our concept of what constitutes AI has changed. Rather than increasingly complex calculations, work in the field of AI concentrated on mimicking human decision making processes and carrying out tasks in ever more human ways.
Artificial Intelligences – devices designed to act intelligently – are often classified into one of two fundamental groups – applied or general. Applied AI is far more common – systems designed to intelligently trade stocks and shares, or manoeuvre an autonomous vehicle would fall into this category.
Generalized AIs – systems or devices which can in theory handle any task – are less common, but this is where some of the most exciting advancements are happening today. It is also the area that has led to the development of Machine Learning. Often referred to as a subset of AI, it’s really more accurate to think of it as the current state-of-the-art.
The Rise of Machine Learning
Two important breakthroughs led to the emergence of Machine Learning as the vehicle driving AI development forward at its current speed.
One of these was the realization – credited to Arthur Samuel in 1959 – that rather than teaching computers everything they need to know about the world and how to carry out tasks, it might be possible to teach them to learn for themselves.
The second, more recently, was the emergence of the internet, and the huge increase in the amount of digital information being generated, stored, and made available for analysis.
Once these innovations were in place, engineers realized that rather than teaching computers and machines how to do everything, it would be far more efficient to code them to think like human beings, and then plug them into the internet to give them access to all of the information in the world.
Neural Networks
The development of neural networks has been key to teaching computers to think and understand the world in the way we do, while retaining the innate advantages they hold over us such as speed, accuracy and lack of bias.
A Neural Network is a computer system designed to work by classifying information in the same way a human brain does. It can be taught to recognize, for example, images, and classify them according to elements they contain.
Essentially it works on a system of probability – based on data fed to it, it is able to make statements, decisions or predictions with a degree of certainty. The addition of a feedback loop enables “learning” – by sensing or being told whether its decisions are right or wrong, it modifies the approach it takes in the future.
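That probability-plus-feedback idea can be made concrete with a toy sketch (data invented for illustration): a single artificial neuron adjusts its weights whenever its decision turns out to be wrong, here learning the logical AND function. This is nothing like a production neural network, just the feedback mechanism in miniature.

```python
import numpy as np

# Toy feedback loop: a single artificial neuron (perceptron) adjusts its
# weights whenever its decision is wrong, here learning the logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])                # desired decisions

rng = np.random.default_rng(0)
w, bias = rng.normal(size=2), 0.0
for _ in range(20):                       # repeated feedback rounds
    for xi, target in zip(X, y):
        pred = int(w @ xi + bias > 0)     # the neuron's current decision
        w += (target - pred) * xi         # adjust only when wrong
        bias += target - pred

print([int(w @ xi + bias > 0) for xi in X])  # [0, 0, 0, 1]
```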
Machine Learning applications can read text and work out whether the person who wrote it is making a complaint or offering congratulations. They can also listen to a piece of music, decide whether it is likely to make someone happy or sad, and find other pieces of music to match the mood. In some cases, they can even compose their own music expressing the same themes, or which they know is likely to be appreciated by the admirers of the original piece.
These are all possibilities offered by systems based around ML and neural networks. Thanks in no small part to science fiction, the idea has also emerged that we should be able to communicate and interact with electronic devices and digital information, as naturally as we would with another human being. To this end, another field of AI – Natural Language Processing (NLP) – has become a source of hugely exciting innovation in recent years, and one which is heavily reliant on ML.
NLP applications attempt to understand natural human communication, either written or spoken, and communicate in return with us using similar, natural language. ML is used here to help machines understand the vast nuances in human language, and to learn to respond in a way that a particular audience is likely to comprehend.
A Case Of Branding?
Artificial Intelligence – and in particular today ML certainly has a lot to offer. With its promise of automating mundane tasks as well as offering creative insight, industries in every sector from banking to healthcare and manufacturing are reaping the benefits. So, it’s important to bear in mind that AI and ML are something else … they are products which are being sold – consistently, and lucratively.
Machine Learning has certainly been seized as an opportunity by marketers. After AI has been around for so long, it’s possible that it started to be seen as something that’s in some way “old hat” even before its potential has ever truly been achieved. There have been a few false starts along the road to the “AI revolution”, and the term Machine Learning certainly gives marketers something new, shiny and, importantly, firmly grounded in the here-and-now, to offer.
The fact that we will eventually develop human-like AI has often been treated as something of an inevitability by technologists. Certainly, today we are closer than ever and we are moving towards that goal with increasing speed. Much of the exciting progress that we have seen in recent years is thanks to the fundamental changes in how we envisage AI working, which have been brought about by ML. I hope this piece has helped a few people understand the distinction between AI and ML. In my next piece on this subject I go deeper – literally – as I explain the theories behind another trending buzzword – Deep Learning.
Bernard Marr is a best-selling author & keynote speaker on business, technology and big data. His new book is Data Strategy. To read his future posts simply join his network here.
  • asked a question related to Computer Science
Question
8 answers
I'm looking for examples or just ideas about how to even roughly quantify the complexity and/or difficulty of manufacturing and maintaining two different products.
There is a school of thought in environmental discourse that believes in a coming "simplification" or even collapse of modern civilization, following e.g. Joseph Tainter's studies of how societies have shed their more complex interactions in response to reduced energy supply. One of the central tenets of this school of thought is that some products are simply too complex to be produced if society "de-complexifies." However, to me it seems that this school of thought categorizes products quite arbitrarily into those that are deemed too complex and those that are deemed "appropriate technologies" etc.
Is anyone aware of any attempts at quantifying these complexities? I'm aware of Harvard's Atlas of Economic Complexity (http://atlas.cid.harvard.edu/), but to me it seems that these rankings do not really tell much about actual complexity of manufacturing or maintaining the products listed.
Relevant answer
Answer
Thank you all, and my apologies for not thanking you earlier! To be frank, I haven't logged in to ResearchGate for a while and forgot I had even asked this question...
But now that my thesis is done, I just might look into this a little bit further. The problem is still an interesting one and these resources have been really helpful.
  • asked a question related to Computer Science
Question
5 answers
Hi,
I am an information specialist researching conference rankings in computer science. So far I have found CORE, Arminer, Microsoft Academic Search and the GII-GRIN-SCIE (GGS) Conference Rating. Which of them are the most relevant? Thanks :-)
I should specify my focus: I am not looking to publish at specific conferences, but preparing a workshop on rankings. As I was told by computer scientists, journal rankings such as the impact factor are not relevant to them. But apparently there are specific rankings for CS-related conferences. I am trying to find out which ones I should recommend they use.
Relevant answer
Answer
Hi,
There is a site that provides conference rankings from three sources: Australian, Brazilian, and Microsoft Academic. Here is the address
I tried to rank LNCS and it was ranked mostly on top of other conferences.
  • asked a question related to Computer Science
Question
1 answer
In Europe, Informatics is often considered similar to Computer Science in America. Applied Informatics often utilizes fundamental concepts of Computer Engineering and Science. What is the correlation between the fields of Computing and Applied Informatics, what are the major differences, and will they merge in the future?
Relevant answer
Answer
Following this question for answers.
  • asked a question related to Computer Science
Question
13 answers
If the unidirectional transmission of coded information through a channel to some repository has an energy cost of 1, should the query-and-retrieval, via the same channel, of the same information be assigned a cost of 2?
I don't know whether this question relates more to neuroscience or to computer science; in any case, it arose in a model of cognitive complexity.
Relevant answer
Answer
Mr. Nathan has rightly mentioned that work (energy) can be turned into an equivalent amount of information. Information can be turned into an equivalent amount of work (energy). Since high energy efficiency is required in a sensor network, it is desirable to disseminate query messages with small traffic overhead and to collect sensing data with low energy consumption.
  • asked a question related to Computer Science
Question
3 answers
The ABET Accreditation and ACM Computer Science Curriculum are systematically evolving and adapting to make sure that current and future Computer Science graduates are well trained and qualified to enter the job market with the proper skills and competencies. Is the current or future ABET Accreditation adopting the new developments in Bio-Computing, and to what extent will both undergraduate and graduate higher education change in the next five years?
Relevant answer
Answer
Thank you Saif. With the current dynamic research, innovation and technological advancement of Computing Technologies side by side with Bio-Technologies and Bio-Computing, it is critical to adopt and update the Computer Science and Engineering Curriculum to match industry expectations of graduates as future potential employees.
  • asked a question related to Computer Science
Question
8 answers
I've been doing an investigation into detection of a specific signal, and I need to determine how closely a received signal resembles the reference signal. A primary approach to this problem is simply taking the cross-correlation of the signals, but I need to know about more accurate methods. References would also be appreciated.
Relevant answer
Answer
Hi Alireza,
actually there are many ways to determine the similarity of signals (or the distance between them). Here some cues:
Using cross correlation (or the correlation coefficient as a normalized measure) surely is a widely applied method. As an alternative correlation approach you could consider rank-based correlation (Spearman's rho); cross correlation e.g. assumes a linear dependency, which might not be true.
Similarity of signals can also be assessed in the frequency domain; you could look up "coherence" to find more information.
Similarity can even be quantified using measures from information theory; have a look at e.g. "entropy".
Apart from that, there are many distance metrics which can be applied similarly (the smaller the distance, the bigger the similarity). The most often used metric is the Euclidean distance (or a normalized version), but you'll find other metrics when searching for "distance measures".
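Two of these measures, sketched on invented signals (the thresholds are arbitrary and purely illustrative):

```python
import numpy as np

# A reference signal and a noisy copy of it (invented example data):
a = np.sin(np.linspace(0, 2 * np.pi, 100))
b = a + 0.1 * np.random.default_rng(1).normal(size=100)

# Pearson correlation coefficient (normalized cross correlation, zero lag):
r = np.corrcoef(a, b)[0, 1]

# Euclidean distance as a dissimilarity measure (smaller = more similar):
d = np.linalg.norm(a - b)

print(r > 0.9, d < 2.0)   # True True: the two signals are very similar
```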
Now, what's best suited strongly depends on your signals and what you regard to be similar (which can be problem specific). Hope this helps.
Greetings, Sebastian
  • asked a question related to Computer Science
Question
10 answers
How can we differentiate message-oriented protocols and stream-oriented protocols?
Explanation with examples would be highly appreciated.
Relevant answer
Answer
TCP is a stream oriented protocol and UDP is a message-oriented protocol.
TCP receives a stream of bytes from application-layer protocols, divides it into segments, and passes them to IP. UDP, by contrast, receives already divided or grouped bytes of data from application protocols and adds a UDP header, forming a datagram that it sends to IP; the application layer then has the burden of dividing the stream of data into messages when running on top of UDP.
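The difference in boundary handling can be demonstrated with local sockets; a minimal sketch (loopback delivery keeps the UDP part reliable here, though real networks may drop or reorder datagrams):

```python
import socket

# UDP (message-oriented): every sendto() becomes one discrete datagram,
# and each recvfrom() returns exactly one message -- boundaries preserved.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))                  # let the OS pick a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello", rx.getsockname())
tx.sendto(b"world", rx.getsockname())
m1, _ = rx.recvfrom(4096)
m2, _ = rx.recvfrom(4096)

# Stream sockets (what TCP provides): successive sends coalesce into one
# byte stream, so a single recv() may return both -- no boundaries kept.
left, right = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
left.sendall(b"hello")
left.sendall(b"world")
data = right.recv(4096)

print(m1, m2)     # two separate 5-byte messages
print(data)       # typically b'helloworld' as one contiguous blob
```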
  • asked a question related to Computer Science
Question
9 answers
I am a computer science student, and I don't have good knowledge of signal processing.
I know that the "discrete wavelet transform" is one of the popular methods for extracting the required sub-bands from a signal. My doubt is: if you give a (time vs. amplitude) signal to the DWT, in what format is the decomposed output? That is, is it (time vs. amplitude) or in the frequency domain?
For example, from the given figure, if S is a time-domain signal and we give it to the DWT, what are the outputs CA1 and CD1? Please clarify whether these sub-band output signals are in the time domain like the input signal, or converted to the frequency domain.
Please clarify my doubt. Thank you
Relevant answer
Answer
For signal processing you typically do:
Signal -> DWT -> Processing -> inverse DWT -> Output Signal
Where the processing step is doing whatever you would like with the DWT output. In the image you posted, you only show the Signal->DWT part, which gives you the DWT coefficients.
Referring to your image, the "A" (e.g. cA1) and "D" are the approximate and detailed coefficients from the transform. A simplified explanation would be, that "A" are the low-pass outputs, and the "D"s the high-pass outputs, but down-sampled by a factor of 2 at each stage (see wiki article).
What do you want to do with the DWT of the signal? Denoising?
I highly recommend you have a look at http://zone.ni.com/reference/en-XX/help/371419D-01/lvasptconcepts/wa_dwt/ where the whole signal processing with DWT is outlined. The image you show, is "only" for computing the decomposition coefficients, without processing the signal and reconstruction.
Because at every stage, a sub-sampling by a factor of 2 is performed, you need to put the outputs of the different stages together after up-sampling again for the inverse DWT and get the filtered version of your original signal in the time domain.
Let's look at a denoising example. To get rid of some noise in your signal, you may want to filter out some high-frequency components of your signal. At the end of the image you show, we have the DWT coefficients. At this point, we could reconstruct the original signal with these coefficients, no processing is done. But now, you can e.g. throw away some high frequency coefficients to reduce the noise. So now you are left with a reduced number of coefficients / subbands. If you want to get the processed signal again in the time domain, you need to apply an inverse DWT:
  1. Upsample by a factor of 2 (multiple times if the coefficient comes from a level > 1).
  2. Put the upsampled signal in the right frequency range by applying a bandpass filter according to your original filtering.
The link I provided above shows this nicely.
So long story short, in the image you posted, you only show the Signal->DWT part, giving you the DWT coefficients, and not yet a filtered version of your input. From these coefficients, you can reconstruct the original signal using the inverse DWT. If you want to process, do something with the coefficients before applying the inverse DWT to get the signal in the time-domain.
Here's some example code for C++ if you want to play around a bit:
I hope that clarifies it a bit.
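To make the time-domain point concrete, here is a minimal one-level Haar DWT sketch in plain NumPy (not the C++ code mentioned above): both outputs are ordinary half-length time-domain sequences, and the inverse transform recovers the signal exactly.

```python
import numpy as np

def haar_dwt(signal):
    """One decomposition level of the Haar DWT. Both outputs are still
    time-domain sequences, just half as long (downsampled by 2)."""
    s = np.asarray(signal, dtype=float)
    cA = (s[0::2] + s[1::2]) / np.sqrt(2)   # approximation (low-pass)
    cD = (s[0::2] - s[1::2]) / np.sqrt(2)   # detail (high-pass)
    return cA, cD

def haar_idwt(cA, cD):
    """Inverse transform: interleave to recover the original samples."""
    out = np.empty(2 * len(cA))
    out[0::2] = (cA + cD) / np.sqrt(2)
    out[1::2] = (cA - cD) / np.sqrt(2)
    return out

s = np.sin(2 * np.pi * np.linspace(0, 1, 8))    # a small time-domain signal
cA1, cD1 = haar_dwt(s)
print(len(s), len(cA1), len(cD1))               # 8 4 4
print(np.allclose(s, haar_idwt(cA1, cD1)))      # True: perfect reconstruction
```

Processing (e.g. zeroing some detail coefficients for denoising) would go between the forward and inverse steps, exactly as the answer above describes.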
  • asked a question related to Computer Science
Question
3 answers
Hi, I am Annal, doing a PhD in computer science at Bharathidasan University, Trichy, Tamil Nadu. My research work is at its final stage; the research is in cloud storage. I need to submit an Indian examiners' panel list to the university. Kindly make suggestions, or if anyone is willing to be an examiner, please inform me and send your details to my mail. The examiner should be a full-time associate professor from another state.
Thank you so much.
Regards,
Annalabel
Relevant answer
Answer
Please let me know your research work in detail.
I am a PhD Guide at The Maharaja Sayajirao University of Baroda.
  • asked a question related to Computer Science
Question
4 answers
Many Computer Science freshmen come to college with little or no background in computational thinking and almost no computer programming background. In my part of the world, only a very small number of freshmen become adept at programming by graduation.
Relevant answer
Answer
I thank you all for your answers.
  • asked a question related to Computer Science
Question
34 answers
Dear Friends,
         Example for practical Knowledge: If I drop a stone from top of a building, I know that it falls on the ground. Example for intellectual Knowledge: If I drop a stone from top of a building, I know why it falls on the ground, because I learned Newton’s discoveries and mechanics when I was in school and college.
              The epicycles and retrograde motions of planets were practical knowledge, since the epicycles can be observed by anyone living on the Earth. Mankind didn't acquire the intellectual knowledge of why the planets appear to make epicycles and retrograde motions until the 17th century. This proves mankind can be fooled by practical knowledge alone. Our perception of reality and the things we experience in practice might fool us. We need intellectual knowledge to determine the degree of falsity of knowledge and verisimilitude (or truthlikeness).
                   Our practical knowledge might be an illusion and flawed, if we can't provide valid reasons why things happen the way they do. How can we be sure practical knowledge is valid until we gain the intellectual knowledge of why? Today software engineers have been practically experiencing the infamous software crisis and spaghetti code. No one knows why.
              Except the designing and engineering of software products, designing and engineering of no other product is affected by infamous software crisis and spaghetti design/code. Gaining the intellectual knowledge leads to solution for the infamous software crisis and spaghetti code. The fact is, there is no valid reason why software engineering endures infamous software crisis and spaghetti code.
                How can I compel the software researchers to gain the intellectual knowledge which, I am sure, leads to a solution for the infamous software crisis and spaghetti code? Our experiences could fool us. Practical knowledge and intellectual knowledge complement each other. Both are essential, and our knowledge can't be complete if either kind of knowledge is flawed or invalid.
               Our knowledge about software crisis and spaghetti code is incomplete until we gain intellectual knowledge that explains why, by using facts, falsifiable evidence and sound reasoning (where the falsifiable evidence and reasoning can be and must be falsified, if and when new counterevidence can be discovered).
Best Regards,
Raju
Relevant answer
Answer
Very interesting academic debates and insightful arguments! Great work done all commenters on the subject. Both intellectual and practical knowledge are all important in establishing and defending truth. They both aid in the discovery and application of knowledge for the development of societies.
Dickson Adom
  • asked a question related to Computer Science
Question
4 answers
Your suggestions and comments are highly appreciated, thanks in advance...
Relevant answer
Answer
Dear Esmat,
Check the link below (Asian Journal of Information Technology): USD 250 in fees, fast publication, and Q4.
  • asked a question related to Computer Science
Question
7 answers
Within one month
Relevant answer
Answer
Dear Rakesh,
this question looks similar to the one at this link: https://www.researchgate.net/post/Can_someone_please_suggest_some_good_Scopus_indexed_fast_journals_where_I_can_publish_my_research_paper_in_quick_time. Please, take a look at it, since there are good hints and resources listed there.
Hope it helps.
Best,
Luca
  • asked a question related to Computer Science
Question
5 answers
Virtual reality is an emerging and essential technology in computer science. It is also useful for medical care, such as surgical simulation and understanding viruses. But how could virtual reality technology be used to study the fetal or infant brain and analyze in utero MR imaging data? Could you suggest any feasible applications of virtual reality for pediatric imaging and radiology?
Relevant answer
Answer
Virtual reality (VR) is a new technique. It is still in experimental stages in Radiology and imaging. It is mainly used to see and analyse the DICOM images on a tablet or mobile phone. VR can be as useful in Pediatric Radiology as in other sub specialties of Radiology. This technique may be utilized to read the fetal/infant brain MRI images on mobile phones and tablets.
  • asked a question related to Computer Science
Question
3 answers
If I want to make Python or MATLAB invoke subprocesses in programs like Robot, Revit, OpenSees, EnergyPlus etc., is there a general theory behind all these?
I am interested in getting specific outputs when performing commands from MATLAB or Python.
Relevant answer
Answer
In computer science, this is known as inter-process communication.  More specifically, when a process spawns a sub-process, it is referred to as a child process.   A parent process and a child process can communicate using an Unnamed Pipe.  Independent processes (on the same machine) can communicate using Named Pipes.
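In Python, for example, the standard `subprocess` module sets up such unnamed pipes for you; the parent below spawns a child process and reads its output back through the pipe (the child command is purely illustrative):

```python
import subprocess
import sys

# The parent spawns a child process; the child's stdout travels back to
# the parent over an unnamed pipe set up by the subprocess module.
result = subprocess.run(
    [sys.executable, "-c", "print(6 * 7)"],   # purely illustrative child
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())                  # 42
```

MATLAB's `system()` function works along the same lines, capturing the child's output as a string.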
  • asked a question related to Computer Science
Question
28 answers
Kindly guide me about choosing the best simulator for dynamic routing in ad hoc networks.
Can we use OPNET simulation for high-level work?
Thanks
Relevant answer
Answer
NS2 is no longer used and active development stopped 7 years ago, so it's best you use NS3, NetSim or OPNET. And as you say, you need to have lots of time and be very good at programming (in multiple languages) for NS3.
  • asked a question related to Computer Science
Question
6 answers
Hi friends! I am searching for the best thesis topic in Computer Science for my MSCS thesis. Please make suggestions based on your research experience. Is it best to choose requirements engineering for a thesis or research? My favourite topics along with requirements engineering are network security and Human-Computer Interaction. What should I choose for my thesis? Please, seniors, suggest the best one. Thanks
Relevant answer
Answer
I am not from any field of Engineering, but I feel Human-Computer Interaction is a current and upcoming field and can be taken up for research. There can be many aspects of this field: you can take the economic, social, administrative and other such aspects to make the research valuable for society and today's world.
  • asked a question related to Computer Science
Question
9 answers
Polyphonic music is mentioned here to properly define the type of signals.
Relevant answer
Answer
FFT can be used....
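As a minimal sketch of that FFT route (a synthetic two-tone signal standing in for polyphonic material; real recordings additionally need windowing, onset handling, and care with overlapping harmonics):

```python
import numpy as np

fs = 1000                                  # sampling rate in Hz (an assumption)
t = np.arange(fs) / fs                     # one second of samples
# Two simultaneous tones, a stand-in for polyphonic material:
x = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 330 * t)

spectrum = np.abs(np.fft.rfft(x))          # magnitude spectrum
freqs = np.fft.rfftfreq(len(x), d=1 / fs)  # frequency of each bin (Hz)

# The two largest spectral peaks recover the component frequencies:
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(float(f) for f in peaks))     # [220.0, 330.0]
```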
  • asked a question related to Computer Science
Question
6 answers
i am looking for MRI brain tumor data-set(databases) with ground truth means manually segmented by radiologist doctor to evaluate the accuracy, precision and efficiency of my proposed algorithm, may i get your help?With regards.
Relevant answer
Answer
Here you can find brain MRI dataset (some include white matter lesions). Hope it helps:
  • asked a question related to Computer Science
Question
3 answers
Graph theory techniques have been used in computer science, but how best can this be answered in a real-life situation?
The stable marriage algorithm has been used to model this computer science problem. How should satisfiability and fairness be considered in real-life situations?
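For reference, the Gale-Shapley procedure behind the stable marriage model can be sketched in a few lines (names and preference lists invented). It guarantees a stable matching, but fairness is one-sided: the proposing side gets its best achievable partners, the accepting side its worst.

```python
def gale_shapley(proposer_prefs, acceptor_prefs):
    """Stable marriage via Gale-Shapley: proposers propose in preference
    order; acceptors hold on to the best offer seen so far."""
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)                  # proposers not yet engaged
    nxt = {p: 0 for p in proposer_prefs}         # next acceptor to try
    engaged = {}                                 # acceptor -> proposer
    while free:
        p = free.pop(0)
        a = proposer_prefs[p][nxt[p]]
        nxt[p] += 1
        if a not in engaged:
            engaged[a] = p                       # first offer: accept
        elif rank[a][p] < rank[a][engaged[a]]:
            free.append(engaged[a])              # trade up; old partner freed
            engaged[a] = p
        else:
            free.append(p)                       # rejected; try again later
    return {p: a for a, p in engaged.items()}

men = {"a": ["x", "y"], "b": ["y", "x"]}         # invented preference lists
women = {"x": ["a", "b"], "y": ["b", "a"]}
print(gale_shapley(men, women))                  # {'a': 'x', 'b': 'y'}
```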
  • asked a question related to Computer Science
Question
4 answers
For example:
If I want to predict the color of a flower, which has four possible outputs, would normalizing the data (so that a sigmoid function could be used when building a neural network) be appropriate and effective?
Relevant answer
Answer
You need to make sure your input data is normalised as training may not occur if you don't normalise input data.
As you are doing a classification problem  you will need more than one output node. If you only have one output node and you assign colour A if the value on the output node is  0 to 0.25, colour B if the value is 0.25 to 0.5, and colour C if the value is 0.5 to 0.75 and colour D if the value is 0.75 to 1, then you will be solving a regression problem, not a classification problem.
So you need two output nodes and output (0,0)=colour A, output (0,1)=colour B, output  (1,0)=colour C and output (1,1)=colour D.
Or you can have four output nodes as output then (1,0,0,0)= colour A, (0,1,0,0)= colour B. etc
Now the actual values you choose at the nodes depends on the output function you have chosen. The function may range from 0 to 1, in which case you use the above values or your output function may range from  -1 to 1, in which case replace the 0 by -1 in the above and keep the 1 as it is. Or your output function may have a range -0.5 to 0.5, in which case the zeros are replaced by -0.5 and the ones replaced by 0.5 in the above brackets.
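The encoding and normalization described above can be sketched as follows (feature values invented for illustration; one output node per colour, inputs min-max normalized for a sigmoid network):

```python
import numpy as np

colors = ["red", "blue", "green", "yellow"]      # the four possible outputs
samples = ["blue", "red", "yellow"]              # invented training labels

# One-hot targets: one output node per class, as described above.
index = {c: i for i, c in enumerate(colors)}
targets = np.eye(len(colors))[[index[c] for c in samples]]
print(targets.shape)                             # (3, 4)

# Min-max normalization of input features into [0, 1] for a sigmoid net:
X = np.array([[150.0, 2.0], [30.0, 8.0], [90.0, 5.0]])   # invented features
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
print(float(X_norm.min()), float(X_norm.max()))  # 0.0 1.0
```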
  • asked a question related to Computer Science
Question
6 answers
Does this involve NLP, machine learning, logic programming, or otherwise?
Relevant answer
Answer
Dear all,
Thank you for the interest you expressed in our project, and for all your comments.
The project is in its seminal stage, which is why I have not posted more details just yet. I realise now it was perhaps a mistake to post just a title in the first place, given the state our research is in, and I apologise for that.
Regarding the request for updates I have received lately, allow me to say that I'd like to reserve the right to post more details when I am comfortable that the results I have are robust enough to be shown. I hope you will understand.
What I can say however is that the project will present a number of novel approaches, techniques and theories to face new problems brought about by the pervasive use of robotics and autonomous systems in different domains of human society. This project aims also, and in turn, to provide Philosophy with a laboratory for testing the validity of its theories at a larger-scale, as opposed to the current philosophical tradition of small-scale thought experiments. 
The project definitely does not:
"attempt to nudge philosophers into reflecting on the significance and underlying intent of machine-oriented systems such as decision-making robots"
On the contrary, we use philosophical theories to develop novel computational techniques, programs that work, like physics or mathematics are used in computing science with the same goal. Therefore, no Philosophy of <x> will be proposed or discussed, the level of abstraction (Floridi, 2008) is much lower than that.
The project is definitely not about:
"HOL (Higher Level Logic) in a proof-theoretic approach to machine behaviour"
We take a precise position about the use of logic as an approach to a certain subset of AI problems; in fact we depart from that tradition, showing that probabilistic methods are the most suitable solution, and do work very well in Philosophy too.
Regarding what Muna Alsallal said:
"I think machine learning ML can be assumed as a trend that can change the ordinary aspects of the world"
I am not sure what the role of this assumption is with regard to the original question (i.e. What is this project exactly? I do Philosophy and Computer Science myself, so I'm curious to see a hybrid like this).
I honestly apologise for the vagueness of this answer. I am truly delighted that you are interested (or curious) to know more about our project, and I ask you all to bear with me a bit longer. I'll post more details very soon. In the meanwhile, I'd be happy to answer any questions you might have, and to keep the discussion alive at any level you prefer.
With kind regards,
Francesco Perrone
--
Floridi, Luciano. "The Method of Levels of Abstraction." Minds and Machines 18.3 (2008): 303-329.
  • asked a question related to Computer Science
Question
9 answers
I was interested in listing all the possible integer solutions to
f(n/10)-f(n/11) = 1                 (eq1)
Where f(x)=floor(x) is the floor function, relating each real number x to the greatest integer z less or equal to x.
The floor function wasn't as easy to deal with as it seems at first sight. I substituted x = n/10 and considered 10x/11, obtaining a similar equation:
f(x)-f(10x/11) = 1
The solution set was then easily verified: X = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}.
But that didn't mean the possible solutions to (eq1) would reduce to:
Y = 10*X = {10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110}.
I computed every solution to the equation (shown in the figure) with the support of an algorithm.
I then realized the relation with another equation by looking through the floor function identities:
 –11x + 10y = 110 – n ,
where x = n mod 10, and y = n mod 11.
It seems a Diophantine approximation is involved. Of course there are theorems to help with solutions of Diophantine equations... But...
What if we have an equation:
f(x/a) + f(x/b) = c,  where x is the variable and a, b, c are positive integers, where f(x)=floor(x) is once again the floor function.
How can we compute the possible solutions for x integer?
Are there any properties we can use to operate with floor functions? I suppose not, since the function isn't continuous.
This subject just caught my attention. Maybe it's easier than it seems. Any clues/tips?
Relevant answer
Answer
Dear Kelvin,
The equation [x/a] + [y/b] = c, where [ ] is the floor function, has the
general solution: x = ma + r and y = (c-m)b + s,
where m is an arbitrary integer, 0 ≤ r < a and 0 ≤ s < b.
Now, you need x = y  > 0  to obtain positive  solutions x for  [x/a] + [x/b] = c ,
direct substitution shows m = ( bc + s - r )/(a+b).
Next, you test all possible values of s and r to give a list of all possible positive integer values of m. Otherwise the equation has no solution.
Example :  [x/2] + [x/3] = 5.
m = ( bc + s - r )/(a+b) = ( 15 + s - r )/(5)  where  0 ≤ r < 2 and  0 ≤ s <3.
All possible values of m are {14/5, 3, 16/5, 17/5}; m = 3 is the only positive integer solution.
Therefore x = ma + r shows that 6 and 7 are all the possible solutions of [x/2] + [x/3] = 5.
Best wishes 
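The construction above is easy to cross-check by brute force (a small sketch; the search bound is arbitrary):

```python
def floor_solutions(a, b, c, limit=10_000):
    """All positive integers x < limit with floor(x/a) + floor(x/b) == c."""
    return [x for x in range(1, limit) if x // a + x // b == c]

print(floor_solutions(2, 3, 5))   # [6, 7], matching the worked example
```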
  • asked a question related to Computer Science
Question
181 answers
Dear Friends,
            In his famous letter to Kepler in year 1610, Galileo complained that the philosophers (i.e. Scientists were referred to as philosophers) who opposed his discoveries for exposing flawed belief (i.e. the Earth is at center) at the root of then dominant geocentric paradigm had refused even to look through a telescope.
            "My dear Kepler, I wish that we might laugh at the remarkable stupidity of the common herd. What do you have to say about the principal philosophers of this academy who are filled with the stubbornness of an asp and do not want to look at either the planets, the moon or the telescope, even though I have freely and deliberately offered them the opportunity a thousand times? Truly, just as the asp stops its ears, so do these philosophers shut their eyes to the light of truth."
            What is the difference between religion and a scientific discipline if the beliefs at the root of the scientific discipline are fiercely defended and frighteningly impervious to evidence and objective facts? Isn't it a violation of the scientific method to have untested implicit beliefs in a scientific discipline, and to accumulate new knowledge by relying on such untested implicit beliefs when they are flawed?
           Unfortunately computer science, particularly the BoK (Body of Knowledge) related to so-called components in the context of CBD/CBE (Component Based Design, Development or Engineering), is rooted in 50-to-60-year-old untested, implicit, flawed beliefs, which are being fiercely defended and considered impervious to evidence. Unanimity of biased beliefs has been mistaken for objective fact and/or self-evident Truth (needing no supporting proof and impervious to any amount of counter-evidence).
           Unfortunately, software researchers concluded that nature and essential properties of components are ideological choices, even in the context of countless quintessential CBD/CBE products (e.g. cars, computers, airplanes, machines or machinery for factories etc.) that are built by designing, building and assembling components.
Are the nature and essential properties of physical beings (e.g. animals, trees, bacteria, viruses, fungi or components) subjective ideological beliefs impervious to any amount of counter-evidence, or objective facts? Don't researchers have a moral obligation to investigate counter-evidence to such beliefs when it is offered? Isn't it a violation of moral and ethical obligations (or gross negligence) if such evidence is deliberately ignored or suppressed?
Even if the nature and properties of physical things such as bacteria, viruses or components were ideological choices, why is the software research community killing ideological diversity and plurality by reacting as if it were heresy to propose or explore any other choice? The software researchers of 50 to 60 years ago made an ideological choice (ignoring reality and fact) that software parts that are reusable (or conducive to reuse) are components.
          The funny thing is that, I feel like advocating good aspects of capitalism to hardcore Marxists in the Soviet Union or advocating good aspects of socialism to hardcore capitalists in the USA during the height of cold war, such experiences are well articulated, by Dr. Michael Parenti, in this video: https://www.youtube.com/watch?v=Rt_iAXYBUSk (please pay extra attention to 5 minutes bit starting from 15 minutes).
              Scientific disciplines such as botany, zoology, bacteriology, mycology, or virology are not like social science. There is no room for ideological beliefs or choices in such 21st-century scientific disciplines, including computer science. Such sciences would end up not much different from the social sciences, or even religion, if their BoK (Body of Knowledge) were rooted in ideological beliefs or choices (e.g. for defining the nature and properties of physical beings) that ignore reality or facts.
             If the nature and properties of physical beings are ideological choices, why are plurality and ideological diversity not accepted, as if the properties were sacred religious dogma? What is the difference between software researchers and religious fanatics? No one has ever dared to question the validity of the dogmatic untested beliefs at the root of software engineering.
           Unfortunately, software researchers have been fiercely defending 50-to-60-year-old tacit axiomatic beliefs that are the very foundation of the existing dominant software engineering paradigm (just as the 2300-year-old belief "the Earth is static at the center" was the very foundation of the 16th-century dominant geocentric paradigm).
            Is it ethical to fiercely defend untested or unproven beliefs about the properties of components in modern 21st-century scientific disciplines such as computer science? Unfortunately, many untested implicit beliefs are frighteningly impervious to counter-evidence and obvious facts. Software researchers have been using every possible excuse and tactic to suppress counter-evidence.
Best Regards,
Raju Chiluvuri
Relevant answer
Answer
Dear Dr. Peter,
            Size is one of the useful things about the elephant, but elephants also have other uses. Like dogs, they can be bred to be ferocious, loyal and intelligent. These characteristics are also very useful in battle. Likewise, horses have many desirable characteristics. Pigs or buffaloes cannot tolerate the noise or pain: they would turn back and trample our own soldiers. Likewise, components are versatile and well proven to have many subtle uses.
          You have no imagination and no open mind. You are incapable of overcoming prejudice and preconceived notions. You don't want to try new possibilities or learn new things. Even staunch religious conservatives would explore new ways.
           You said: Your idea is closest to being true for web page designers. They do select from a palette of options and drop them into their design, stretching or cutting to fit.
            I have said many times to you: the most complex software (in my view), and the least friendly to being componentized for CBD, is a compiler. I have told you many times that even so, it is possible to implement a large percentage of a compiler's code in pluggable components.
            Almost every textbook on structures or records uses Employee as an example. Are structures or records only capable of creating records for employees? Many GUI components (e.g. pie chart, line chart or bar chart) are demoed using financial data or stock movements. Are GUI components such as the line chart only useful for presenting intraday stock movements?
            Stop insulting me and trying to put me down. If you can't accept the reality, that is your problem. The reality is that neither the complexity nor the uniqueness of a (physical or software) product can prevent its designers from partitioning the product into self-contained parts that are conducive to assembly.
              Almost every physical product is partitioned into self-contained parts that are conducive to be assembled. Nearly 100% of the features and functionality can be implemented in self-contained parts that are conducive to be assembled.
               Not even a single software product is partitioned into self-contained parts that are conducive to assembly. No attempt is made to implement even 10% of the features and functionality in self-contained parts that are conducive to assembly. No one can help you if you don't want to accept this reality.
               It is common sense: one would use any other animal if it were loyal, ferocious, large and could be domesticated. We want an animal that has more of the characteristics useful for the task. Did you see the movie Avatar? They have animals with better characteristics than horses, animals that can fly. If you can invent a new kind of part that is better than components for modularization and/or other uses, then you can name them Wudgets or whatever you like. They can be different from, but more useful than, the components.
Bye,
Raju
  • asked a question related to Computer Science
Question
1 answer
Hello,
I am Max, new to this site.
We offer UK web hosting at https://www.webhostuk.Co.UK; if anyone needs any help, feel free to contact me.
Relevant answer
Answer
It's been years and we have upgraded our services and site to offer cloud web hosting, WordPress hosting, managed servers, e-commerce hosting, domain registration and much more. If you are thinking of an online business, don't forget to check out: https://www.Webhost.UK.Net
  • asked a question related to Computer Science
Question
11 answers
Hi all,
when I worked in Roy Sambles' Thin Films & Interfaces Group in Exeter in 1993, I used an existing Fortran program for multilayer Fresnel modelling and improved its sluggish performance to the point where one could change the parameters of the layers and see an almost instant graphical response. This actually gave rise to the "Inverted SPR" discovery.
Since then I have been wondering whether it wouldn't be feasible to have the computer scan a huge parameter space on its own, essentially doing a "curve discussion" by itself: a new phenomenon or quality can be described in natural-language terms, and so can known patterns (how minima and maxima are distributed, what the known limits are, etc.). The latter would have to be told to the computer, which would then only start outputting results if it found something remarkable, something outside the known patterns. With today's computing power, such a generic software tool would have produced the ISPR instantly! Now I would like to pass on the suggestion: create or find a method to describe and detect patterns in modelled data, then let a computer scan the areas you find interesting. What do you think? Or is this standard procedure nowadays?
Cheers from Switzerland 
Marc
Relevant answer
Answer
Trade-offs:
Speed v texture ["depth" can be illusory; detecting interactions and subtle dynamics allows understanding of data flavor but takes time]
Stability v Adaptability [are criteria clear and stable? is it important to detect emergent or unstable characteristics?]
GAs are faster; standard stats are more easily applied [and better understood]; recurrent hierarchical networks are slower and require more intense design but offer granularity and flexibility and can be audited [to a limited extent]
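The scan-and-flag idea from the question can be sketched in a few lines. This is a toy illustration, not the original Fresnel code: the model function, the parameter grid, and the "known pattern" predicate (here: a response curve should have exactly one local minimum) are all stand-in assumptions.

```python
# Sketch: scan a parameter space and report only parameter sets whose
# response curve breaks a declared "known pattern".

def model(x, a, b):
    # toy response curve standing in for the real optical model;
    # it has one minimum when a == b and two when a != b
    return (x - a) ** 2 * (x - b) ** 2

def local_minima_count(ys):
    # count strict interior local minima of a sampled curve
    return sum(1 for i in range(1, len(ys) - 1)
               if ys[i - 1] > ys[i] < ys[i + 1])

def scan(param_grid, xs, expected_minima=1):
    """Yield parameter sets whose curve violates the expected pattern."""
    for a, b in param_grid:
        ys = [model(x, a, b) for x in xs]
        if local_minima_count(ys) != expected_minima:
            yield (a, b)

xs = [i * 0.1 for i in range(0, 101)]          # sample x in [0, 10]
grid = [(a, b) for a in range(1, 10) for b in range(1, 10)]
anomalies = list(scan(grid, xs))               # only the "remarkable" cases
```

Every pair with a ≠ b is flagged (two minima instead of the expected one), while the a = b cases pass silently, which is exactly the "only speak up when something unusual appears" behaviour described above.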
  • asked a question related to Computer Science
Question
3 answers
Metrics in CBSE (component-based software engineering)
Relevant answer
Answer
Analyse the existing metrics used for the different measures of component-based software, try to find the gaps between the measures, and try to develop metrics that will fill those gaps.
  • asked a question related to Computer Science
Question
1 answer
Together with Jakub Tkac, a student at the Czech Technical University, we ported the old code simulating dynamic recrystallization (implemented in Cellular/Cellang) to a new software version (C++ & Qt). Details can be found via the following links:
We are interested in what you find good and bad about this software and how it can help you understand the modelling of complex systems using cellular automata. Your answers will help us develop future examples better adjusted to your needs.
Enjoy the source code and the video provided above.
Jiri Kroc & Jakub Tkac
PS For details, other questions and related work, see the project,
where other questions such as "New drug development strategy: war of weapons" and further information are provided.
Relevant answer
Answer
Dear Researcher/Dear Colleague,
This work is part of a wider project that deals with promoting complex systems, self-organization and emergence. My coworkers and I would be glad to hear what you find good and not so good about it, to enable us to deliver more suitable projects in the future.
Good luck with your research,
Jiri Kroc
PS Other interesting questions are included in the project, such as:
  • asked a question related to Computer Science
Question
12 answers
Assume we have a class of graphs. Now what does this sentence mean?
"each of the graphs in the class, monotonically should make no difference". 
Relevant answer
Answer
@Nazanin: "A graph property is monotone if every subgraph of a graph with property P also has property P" describes what is called a "hereditary property".
As for the original (part of a) sentence: if we place it in another domain of mathematics, it makes sense as it stands, despite the imprecise wording: e.g. in a class of exponential functions with base a > 1...
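To make the distinction concrete, here is a small sketch using the usual definitions (my assumption, since the thread leaves them informal): a property is *hereditary* if every induced subgraph (vertex deletions) inherits it, and *monotone* if every subgraph (vertex and edge deletions) inherits it. "Complete" is hereditary but not monotone:

```python
from itertools import combinations

def induced(edges, keep):
    """Induced subgraph: keep a vertex subset and all edges inside it."""
    keep = set(keep)
    return keep, {e for e in edges if set(e) <= keep}

def is_complete(vertices, edges):
    """True if every pair of vertices is joined by an edge."""
    return all(frozenset(p) in edges for p in combinations(vertices, 2))

# K3: the complete graph on 3 vertices
V = {1, 2, 3}
E = {frozenset(p) for p in combinations(V, 2)}

# hereditary: every induced subgraph of K3 is still complete
hereditary_ok = all(
    is_complete(*induced(E, s))
    for r in range(len(V) + 1)
    for s in combinations(sorted(V), r)
)

# not monotone: deleting one edge (keeping all vertices) breaks completeness
monotone_counterexample = not is_complete(V, E - {frozenset((1, 2))})

print(hereditary_ok, monotone_counterexample)  # True True
```

A property such as "triangle-free" would pass both checks, since it survives edge deletions as well.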
  • asked a question related to Computer Science
Question
7 answers
Can you please help me find Thomson Reuters journals that cover Information Systems and E-learning with rapid publication? Where can I find the time needed for review, first response and publication for these journals?
Relevant answer
Answer
Informatica..
  • asked a question related to Computer Science
Question
6 answers
Some journals listed as good journals in the SCImago Journal & Country Rank (http://www.scimagojr.com), with a relatively good h-index, for example the Journal of Computer Science (from Science Publications), are identified as possible predatory journals in Beall's list. Which one should I follow?
Relevant answer
Answer
People interested in an update on predatory journals, following the cessation of Beall's List updates and its removal, might like to refer to an article in today's Times Higher Education (6 April 2017, pp. 42 & 43), reporting work by Larissa Shamseer and David Moher (link below, but access may be restricted to subscribers).
Their study resulted in a list of 13 warning signs:
1. The scope of interest includes non-biomedical subjects alongside biomedical topics, or multiple, wide-ranging and unrelated fields of study are combined
2. The website contains spelling and grammar errors
3. Images are distorted/fuzzy, intended to look like something they are not, or are unauthorised
4. The home page language targets authors rather than readers
5. The Index Copernicus Value is promoted on the website
6. Description of the manuscript handling process is lacking
7. Manuscripts are requested to be submitted via email
8. Rapid publication is promised
9. There is no retraction policy
10. Information on whether and how journal content will be digitally preserved is absent
11. The article processing/publication charge is very low (e.g., <$150)
12. Journals claiming to be open access either retain copyright of published research or fail to mention copyright
13. The contact email address is non-professional and non-journal-affiliated (e.g., @gmail.com or @yahoo.com)
  • asked a question related to Computer Science
Question
3 answers
I work with the NSL-KDD dataset, and I want to tokenize it and compute the term frequency for each token.
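One possible starting point, under the assumption that NSL-KDD is read as comma-separated records and that each field value counts as one token (the file name "KDDTrain+.txt" and this tokenization scheme are assumptions to adapt to your own preprocessing):

```python
# Sketch: tokenize comma-separated NSL-KDD records and count term frequency.
from collections import Counter

def term_frequencies(lines):
    """Count how often each field value (token) occurs across all records."""
    counts = Counter()
    for line in lines:
        tokens = [tok.strip() for tok in line.strip().split(",") if tok.strip()]
        counts.update(tokens)
    return counts

# example with two toy records instead of reading "KDDTrain+.txt" from disk
sample = [
    "0,tcp,http,SF,181,5450,normal",
    "0,udp,domain_u,SF,105,146,normal",
]
tf = term_frequencies(sample)
print(tf["normal"], tf["tcp"])  # 2 1
```

For the real dataset you would replace `sample` with the lines of the file, and possibly tokenize only the symbolic columns (protocol, service, flag, label) rather than the numeric ones.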
  • asked a question related to Computer Science
Question
53 answers
Dear Friends,
           So far I have failed to convince the experts and researchers, even though I have multiple valid proofs for my disruptive discoveries about components and CBD (Component Based Design and/or Development). There are two main reasons (summarized in the attached PDF) for my failure to provide a convincing proof, one that can overcome the flawed preconceived notions and blind beliefs or baseless prejudice due to the existing deeply entrenched paradigm.
           Kindly understand the difference between a valid proof and a convincing proof: a proof is valid if it is accurate and cannot be objectively falsified using valid knowledge and objective facts or evidence (even if the proof miserably fails to convince the experts due to their flawed prejudice). A convincing proof is not necessarily flawless, but it can convince the experts (e.g. by reinforcing their flawed preconceived notions and blind beliefs or prejudice).
            Saying the truth "the Sun is at the center" 500 years ago angered experts by offending common sense and deeply entrenched conventional wisdom. A valid proof fails to convince experts if the existing theoretical foundation or BoK (Body of Knowledge) comprises flawed pieces of knowledge, such as accepted conclusions (e.g. "the Earth is static at the center" is an inalienable Truth), observed facts (e.g. epicycles or retrograde motions) or accepted theories (e.g. if the Earth is moving, how could the Moon follow it, and why is stellar parallax not noticeable?).
          In light of such a large flawed BoK, even a valid proof cannot convince the experts, because all the valid evidence and/or reasoning is incommensurable and perceived as lies, heresy or even a scam. Each of the researchers and experts has been slowly but profoundly brainwashed all their lives by the huge BoK, resulting in a mental picture that is a flawed perception of reality, radically different from objective reality (so much so that objective reality and facts appear strange, weird and inconceivable).
           Today my valid proofs face such a flawed perception of reality, painted by a 20-to-30-times-corrupted BoK. The existing CBSD paradox has been evolving for the past 50 years and has accumulated a 20-to-30-times-flawed BoK by relying on 50-to-60-year-old flawed definitions of so-called software components. The software experts have been brainwashed by this BoK to conclude that software parts are components and that using such parts is CBD. In light of this flawed BoK and the altered perception of reality, the objective reality and facts about CBD and components are today perceived (by software experts) as heresy or even fraud.
Please see attached PDF for reality for CBD
Best Regards,
Raju S Chiluvuri
Relevant answer
Answer
Look at how the Eigen, Catch and Boost C++ libraries started as pet projects of one or a few developers and are now widely used even in big companies. There are lots of examples contradicting the "millions of dollars needed to start" model; maybe you should look at how they did it in more detail.
Of course, it will happen over years and Google won't use your stuff next month, but in the meantime your ideas will spread and your library will improve with the real feedback you get from real users. Had you gone down this path a decade ago, yours might have been one of the libraries I was giving as an example!
Good luck,
Bruno
  • asked a question related to Computer Science
Question
15 answers
MulVAL- is a logic-based attack graph generation technique invented by Xinming Ou. I have installed "MulVAL" as per the instructions given in (http://people.cis.ksu.edu/~xou/argus/software/mulval/readme.html) and tried to run it for the testcase given in input file (input.P) as follows:
shyam@ubuntu:~/Desktop/Graph/mulval/utils$ ./graph_gen.sh /home/shyam/Desktop/Graph/mulval/testcases/3host/input.P -v
But I am getting the following error:
cat: goals.txt: No such file or directory
rm: cannot remove `goals.txt': No such file or directory
The attack simulation encountered an error.
Please check xsb_log.txt.
What does it mean and how do I solve it? Any kind of help would be appreciated.
Relevant answer
Answer
Hey guys, I need help.
Every time I try to convert the Nessus vulnerability scanning file to the nessus.p file, which is the input for MulVAL, I get the error below:
++Error[XSB/Runtime/P]: [Existence (No procedure usermod : vulExists / 3 exists)] []
Forward Continuation...
... machine:xsb_backtrace/1 From C:\Program Files (x86)\XSB\syslib\machine.xwam
... loader:load_pred1/1 From C:\Program Files (x86)\XSB\syslib\loader.xwam
... loader:load_pred0/1 From C:\Program Files (x86)\XSB\syslib\loader.xwam
... loader:load_pred/1 From C:\Program Files (x86)\XSB\syslib\loader.xwam
... x_interp:_$call/1 From C:\Program Files (x86)\XSB\syslib\x_interp.xwam
... x_interp:call_query/1 From C:\Program Files (x86)\XSB\syslib\x_interp.xwam
... standard:call/1 From C:\Program Files (x86)\XSB\syslib\standard.xwam
... standard:catch/3 From C:\Program Files (x86)\XSB\syslib\standard.xwam
... x_interp:interpreter/0 From C:\Program Files (x86)\XSB\syslib\x_interp.xwam
... loader:ll_code_call/3 From C:\Program Files (x86)\XSB\syslib\loader.xwam
... standard:call/1 From C:\Program Files (x86)\XSB\syslib\standard.xwam
... standard:catch/3 From C:\Program Files (x86)\XSB\syslib\standard.xwam
Any idea how to solve this error?
  • asked a question related to Computer Science
Question
4 answers
Statistical methods and algorithms useful for researchers in other fields are continuously being developed. It would be interesting to know the speed of this knowledge transfer. I am wondering whether this issue has been subjected to systematic analysis?
Relevant answer
Answer
@Steven: I found it, thanks. I have also had a look at Armstrong's website. There is quite a number of publications, did you have some specific paper in mind?
  • asked a question related to Computer Science
Question
13 answers
Is there any research yet on the mathematical competencies needed in upper-secondary computer science classes? And which of those competencies are not taught in school?
My Bachelor's thesis will deal with the following topic:
"Strategies for bridging gaps in the mathematical competencies required in computer science classes at the upper secondary level (Gymnasium)"
So far I have unfortunately found no literature on this; nobody seems to have dealt with the topic. I am asking the question here to get an overview of which problems can arise in computer science classes due to missing mathematical knowledge or methods, i.e. things that have to be explained within computer science lessons although they should really be content of mathematics lessons, or that limit computer science teaching in general.
Relevant answer
Answer
To a great extent it depends on the class. Graph theory, linear algebra, generating functions, logic, set theory, and some basic number theory as well as basic probability theory will help a lot, especially if you want to go on to graduate school. If you are interested in crypto, for instance, some group theory and more advanced probability are needed. Machine learning is all about probability and statistics. Formal methods and programming languages require abstract algebra and various logics.
  • asked a question related to Computer Science
Question
1 answer
Hi, 
I am looking for a comprehensive list of research labs in the U.S., perhaps categorized by area of research or by university name. I understand that this would be a very long list, but I was wondering whether anybody has ever done the work of putting everything in one place.
  • asked a question related to Computer Science
Question
31 answers
Dear Friends,
           Saying the truth "the Sun is at the centre" 500 years ago offended common sense and deeply entrenched conventional wisdom. Researchers refused to see or investigate either the evidence in support of the heliocentric model or the counter-evidence that could expose the flawed geocentric paradox. How could any lie ever be exposed (e.g. the lie "the Earth is static at the centre" at the root of the geocentric paradox) if the research community refuses to look at evidence, for example by perceiving it to be arrogant, disrespectful and uncivilized to question the validity of the primordial dogmatic "consensus" of respected researchers or scientists? Must such primordial dogmatic "consensus" be revered as inalienable Truth for eternity?
            Please kindly recall the Galileo’s famous letter to Kepler in 1610: "My dear Kepler, I wish that we might laugh at the remarkable stupidity of the common herd. What do you have to say about the principal philosophers of this academy who are filled with the stubbornness of an asp and do not want to look at either the planets, the moon or the telescope, even though I have freely and deliberately offered them the opportunity a thousand times? Truly, just as the asp stops its ears, so do these philosophers shut their eyes to the light of truth."
               Galileo Galilei's attempts to demonstrate counter-evidence against the flawed primordial dogmatic 2300-year-old "consensus" (that the Earth is static) at the root of the geocentric paradox faced huge resistance, such as: "I am not going to look through your 'telescope', as you call it, because I know the Earth is static ... I am not a fool; how dare you insult my intelligence?". Likewise, most experts feel I am insulting their intelligence if I offer counter-evidence against the primordial dogmatic 50-to-60-year-old "consensus" about the nature and properties of components and the essence of CBSD (as if each untested "consensus" were an inalienable Truth for eternity). Those untested "consensuses" were reached 50 to 60 years ago, when computer science and software technologies were in their infancy, so it was inconceivable to create real software components with then-existing primitive technologies such as Fortran or assembly languages. The existing CBSD/CBSE (Component Based Software Design/Engineering) paradox is fundamentally flawed. Today no one else even knows the objective reality about "what is the true essence and power of CBD".
             I have been struggling for many years to provide counter evidence to flawed beliefs at the root of the geocentric paradox of software engineering (in general and CBSD/CBSE in particular). The flawed beliefs diverted research efforts in to a wrong path and software researchers have been investing research efforts for 50 years in the wrong path resulted in the infamous software crisis (as the flawed belief “the Earth is static” diverted research into a wrong path 2300 years ago and investing research efforts for 1800 years in the wrong path resulted in geocentric paradox).
               I have been struggling for many years to compel software researchers to investigate counter evidence for exposing the flawed beliefs at the root of the software engineering in general and CBSD/CBSE paradox in particular. I tried every method I can think of and so far no “civilized” method worked. My efforts to expose the Truth are perceived to be arrogant, disrespectful, uncivilized or even heresy. Many experts feel that is scam or uncivilized to question the primordial “consensus”, so they feel compelled to respond more uncivilized by resorting to personal attacks. The “consensus” 500 years ago was that “the Earth is static”. Unfortunately researchers even in the 21st century researchers refused to investigate the counter evidence, which can expose the dogmatic “consensus”.
              Could anyone suggest a civilized way to compel software researchers to investigate evidence in support of the heliocentric model of software engineering and counter evidence for the geocentric paradox of software engineering? Is there any legal way that doesn’t involve bribing (i.e. paying handsomely for doing their moral duty of discovering the Truth/facts by investigating evidence) or dragging tax-payer funded research organizations to court to fulfil their moral and ethical obligation of not wasting taxpayer funds on the geocentric paradox of software engineering?
              The flawed beliefs at the root of the CBSD paradox resulted in the infamous software crisis, which already cost a trillion dollars to the world economy, and would cost trillions more, if I fail in my effort to expose the root causes for the geocentric paradox of software engineering. I can’t believe the software scientists even in the 21st century reacting similar to the fanatic scientists in the dark ages. For example, the government funded research organizations (e.g. NSF.gov, NIST.gov, NITRD.gov, SEI/CMU or DoD) already wasted decades and billions of dollars for expanding the BoK (Body of Knowledge) for the geocentric paradox of software engineering. Any kind of research efforts in a wrong path is fool’s errand, because mankind’s scientific knowledge (BoK) would still be stuck in the dark ages, if the error at the root of geocentric paradox were not yet exposed. Only fools use excuses such as “that ship has sailed long time ago” or “the wise men had spoken”.
             How could any scientist or researcher foolishly insist unproven beliefs or untested opinions are self-evident facts, for example, by refusing to see counter evidence and often resorting to humiliating insults, snubbing or even personal attacks (when politely offer counter evidence that exposes flawed unproven beliefs or untested opinions at the root of the geocentric paradox of software engineering)?
            Computer Science is a religion, if it is rooted in sacred unquestionable “consensus” (i.e. dogmatic tenets) and experts feel offended or react as if it is heresy to question the validity of primordial dogmatic tenets created (by “consensus” of wise men) during primeval period of computer science (i.e. between 50 to 60 years ago when Fortran and assembly languages are leading technologies). It was inconceivable to create real-software-components (that are equivalent to the physical components) for achieving real-CBD for software, which is equivalent to the CBD (Component Based Design) for physical products 50 to 60 years ago (during primeval period of computer science).
              The purpose of science or engineering research is pursuit of absolute truth (by accumulating and analysing the evidence), but not creating sacred “consensus” and defending the “consensus” (no matter how elaborate or elegant the “consensus” may be). That kind of “consensus” might be justifiable few decades ago, but such “consensus” cannot be treated as inalienable truth/fact for eternity. Such outdated consensus at the root (i.e. that are very foundation) of any modern scientific discipline must be questioned time to time. Is it a sacrilege or uncivilized to question “consensus” that were outdated by advancements of science or technology?
             If the consensuses (i.e. assumed to be facts) are fundamentally flawed, research efforts relying on such flawed facts leads to scientific crisis (e.g. geocentric paradox of the discipline) and exposing the error results in a Kuhnian paradigm shift. Is there a civilized way for exposing a geocentric paradox of a 21st century scientific discipline? How can I keep it civilized, if respected researchers and scientists perceive facts/truth (that contradict flawed “consensus”) are heresy and react uncivilized by resorting to humiliating insults and personal attacks. How can I compel them to act civilized and fulfil their moral and ethical obligations to Truth?
           Any untested “consensus”, no matter how elaborate or elegant, is not science. Period. Anyone who feels such untested “consensus” as inalienable Truth for eternity and resort to insults must be ashamed to think he is a scientist/researcher. In science, there are no sacred “consensus” that can’t be questioned and tested. Offering counter evidence to outdated “consensus” is not sacrilege or uncivilized. If anyone feels that the counter evidence is flawed, he is free to expose the flaw. But only incompetent people react uncivilized manner by resorting to personal attacks. Is there a civilized way to present counter evidence for exposing sacred “consensuses” (that are fundamentally flawed) in the 21st century scientific discipline?
Best Regards,
Raju
Relevant answer
Answer
Dear Friends,
         Let me provide a clear and unambiguous contrast between the self-contained components of software and of physical products (no contradiction can be as black and white as this one). In the context of physical products (every large physical product can be partitioned into many large disjoint sets of self-contained features and functionality): almost each and every disjoint set of self-contained features and functionality can be implemented as a replaceable component, which can be disassembled and reassembled at any time in the future (e.g. redesigned and tested individually, free from spaghetti design). That is, each component is assembled or plugged in as illustrated in FIG-1.
          In the context of the software products (every large software product comprises many large disjoint sets of self-contained features and functionality): Almost no large disjoint set of self-contained features and functionality is implemented as replaceable component (e.g. as a module that can be assembled or plugged-in). But each and every large disjoint set of self-contained features and functionality can be implemented as RCC (Replaceable Component Class).
That is, the code base of each large self-contained component is broken into many code chunks and added to (or spread across) many non-exclusive files (i.e. mixed with code chunks for other self-contained components), where each non-exclusive file contains spaghetti code for multiple large self-contained components (as illustrated by FIG-2, FIG-3 & FIG-4 in the attached PDF).
         The root cause of this problem is that no one in the world, even today, knows what the purpose and true essence of CBD for physical products is. The flawed "consensuses" reached 50 to 60 years ago diverted research efforts onto a wrong path, just as the 2300-year-old flawed consensus "the Earth is static" diverted research efforts onto the wrong path that resulted in the geocentric paradox (an altered perception of reality filled with epicycles and retrograde motions).
           I have been struggling to compel the research community to investigate the evidence, reality and facts in order to discover the nature and essence of real CBD (Component Based Design) for physical products, which can be summarized as: CBD is implementing about 90% of the features and functionality in replaceable components, where “replaceable components” are components that can be easily disassembled (or unplugged, to be redesigned or refined individually outside of the product, free from spaghetti design/code) at any time for periodic feature or functionality maintenance or upgrade releases, and reassembled (after being tested individually, free from spaghetti code). Since over 90% of the design and code for features and functionality is implemented in such components (each of which is free from spaghetti code), over 90% of the design and code of the product is free from spaghetti code.
         Such real CBSD addresses the infamous software crisis, because the root cause of the software crisis is the infamous spaghetti code. The design and development of no other product is infected with such spaghetti code, not even the design and development of complex one-of-a-kind products such as experimental spacecraft or fully tested pre-production golden prototypes of next-generation jet fighters. It is not hard to achieve such real CBD: https://www.researchgate.net/publication/292378253_Brief_Introduction_to_COP_Component_Oriented_Programming
      Today no one even knows what the true essence of CBD is, yet every researcher insists that it is impossible to invent real software components for real CBD. I have been struggling for many years to provide counter-evidence, such as hundreds of software applications built by assembling replaceable components (e.g. see real-software-components.com and pioneer-soft.com).
       Research efforts in any discipline are diverted onto a wrong path as soon as they start relying on flawed “consensus” (by assuming it to be self-evident fact). If the research efforts persist on the wrong path long enough (without realizing the error), they accumulate a huge BoK (Body of Knowledge) filled with epicycles over time, and the discipline ends up in a paradox. Today, the existing CBSD paradox is supported by a BoK many times bigger than the BoK that existed 450 years ago in support of the geocentric paradox.
Best Regards,
Raju Chiluvuri
  • asked a question related to Computer Science
Question
10 answers
Greetings, I am using C++ for a mathematical optimization problem. In this case, how can I add an optimization library to my C++ project?
Relevant answer
Answer
If it is about cplex (see https://en.wikipedia.org/wiki/CPLEX ):
get the software bundle - including the libraries (binaries) for your compiler and OS and follow the documentation about installation/use.
Having installed the libraries correctly, you have already 'added' them to your compiler. Then - as Peter Breuer already stated - you have to include the appropriate header files to be able to call the functions contained in the libraries.
Really "Compiler 101".
  • asked a question related to Computer Science
Question
5 answers
Support vector machines are one of the top 10 classification methods available. I know this method for single-output data and want to know about it for multi-output data.
Relevant answer
Answer
If you mean multiclass classification using SVM, see 'fitcecoc' in MATLAB:
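For context, fitcecoc handles multiclass by reducing it to a set of binary learners under a coding design such as one-vs-one or one-vs-all. The reduction idea can be sketched in plain Python; the perceptron below is only a dependency-free stand-in for the binary SVM learner, and the clusters are made-up data:

```python
# One-vs-rest reduction: one binary learner per class; predict the class
# whose learner reports the largest score. A perceptron stands in for
# the binary SVM purely to keep the sketch self-contained.

def train_perceptron(X, y, epochs=300, lr=0.1):
    """Binary perceptron; labels y are in {-1, +1}. Returns (weights, bias)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            s = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * s <= 0:                          # misclassified -> update
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
                b += lr * yi
    return w, b

def fit_one_vs_rest(X, y):
    """Train one binary learner per class (the 'one-vs-all' coding)."""
    return {c: train_perceptron(X, [1 if yi == c else -1 for yi in y])
            for c in sorted(set(y))}

def predict(models, x):
    """Pick the class whose binary learner gives the largest score."""
    score = lambda wb: sum(wj * xj for wj, xj in zip(wb[0], x)) + wb[1]
    return max(models, key=lambda c: score(models[c]))

# three toy, well-separated clusters (made-up data)
X = [(0, 0), (1, 0), (0, 1), (10, 0), (11, 0), (10, 1), (0, 10), (0, 11), (1, 10)]
y = [0, 0, 0, 1, 1, 1, 2, 2, 2]
models = fit_one_vs_rest(X, y)
```

Note that for true multi-output problems (several target variables at once, rather than one multiclass label), the same reduction idea applies: fit one model per output variable.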
  • asked a question related to Computer Science
Question
7 answers
I would like to know a good starting point to carry out my research in the above mentioned topic.
Relevant answer
Answer
Hello Aditya,
What are the objects you want to recognise? Are they of the same class, and are there many of them?
Example: 1st case: you want to detect multiple objects of the same class.
You want to detect multiple oranges in a picture.
2nd case: multi-class object detection.
You want to detect oranges, apples and bananas.
If you are looking at the first case, it should not be that hard to implement: apply the object detector multiple times on the image, instead of returning when you find the first object.
If it is the second case, try building feature detectors for every class and apply every detector to the image to find the different objects.
In general there are many approaches to recognising objects: simple features like colour and shape, or more complex features using Haar wavelets.
Start with one object and move on to multiple-object recognition.
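The per-class procedure described above can be sketched as a single dispatch loop; the detector functions here are hypothetical stand-ins for whatever per-class detector (colour, shape, Haar-based) you end up building:

```python
def detect_all(image, detectors):
    """Apply every class-specific detector to the image and keep
    collecting hits instead of stopping at the first match."""
    hits = []
    for label, detector in detectors.items():
        for box in detector(image):          # each detector yields bounding boxes
            hits.append((label, box))
    return hits

# hypothetical detectors returning fixed (x1, y1, x2, y2) boxes,
# just to show the control flow
detectors = {
    "orange": lambda img: [(10, 10, 30, 30), (50, 12, 70, 32)],
    "apple":  lambda img: [(80, 40, 100, 60)],
}
hits = detect_all(None, detectors)           # image unused by the stand-ins
```

The single-class case is the same loop with one entry in the dictionary.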
  • asked a question related to Computer Science
Question
7 answers
I need to generate a decision tree for classifying a marks dataset. The example dataset consists of 8 attributes and 1 class attribute. Of the 8 attributes, 6 are subject marks, one is the total and one is the percentage. The criterion for the class is not only the total marks/percentage; all subject marks must also be greater than or equal to the pass mark.
My question is: how do I train the machine to classify correctly? We know how to write such rules manually in other programming languages, but how do I train a machine on this type of criterion? What method should I follow? Do I have to increase the training set size?
Relevant answer
Answer
Hi Veera, 
I believe that your dataset has no meaning in its current structure. My suggestion is to change each column into a row (except the last column), considering the associated "percentage" attribute with each subject (if the "percentage" attribute belongs to the subject). After that, if you're using WEKA, evaluate your learning process based on the "Percentage split" method, because cross-validation will not work.
NB: Increase the size of your dataset, as you won't get any useful model from it to be used in the future.
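Since the class here is a deterministic function of the marks (every subject above the pass mark and the total above a cutoff), you can also generate as many correctly labelled training rows as you like; the pass mark and cutoff below are hypothetical values for illustration:

```python
PASS_MARK = 40       # hypothetical per-subject pass mark
TOTAL_CUTOFF = 300   # hypothetical overall cutoff

def label(subject_marks):
    """Apply the stated rule: pass only if every subject clears the
    pass mark AND the total clears the overall cutoff."""
    total = sum(subject_marks)
    if all(m >= PASS_MARK for m in subject_marks) and total >= TOTAL_CUTOFF:
        return "pass"
    return "fail"

# one failing subject fails the student even with a high total
print(label([95, 95, 95, 95, 95, 35]))   # -> fail
print(label([60, 60, 60, 60, 60, 60]))   # -> pass
```

A tree trained on rows labelled this way can only learn the per-subject thresholds if it sees examples on both sides of each boundary, which is exactly why more training data helps here.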
HTH.
Samer
  • asked a question related to Computer Science
Question
3 answers
In the IEEE papers, Channel Aware Routing Protocol (CARP) and E-CARP are the better routing protocols so far. Is there any other protocol anybody prefers over those? Can we apply the SDN concept here?
Relevant answer
Answer
Thank you, sir. I will refer to the links.
  • asked a question related to Computer Science
Question
37 answers
Dear Friends,
            Isn’t it fraud (if not a crime) against scientific and technological progress if scientists/researchers blatantly violate the well-established and proven “scientific method” https://en.wikipedia.org/wiki/Scientific_method for acquiring and including new knowledge in the theoretical foundation of any discipline for expanding its BoK (Body of Knowledge)? Certain basic concepts in the BoK for software are nothing more than fiction rooted in wishful thinking. Relying on such flawed concepts or knowledge for technological advancement is a violation of basic logic and even common sense.
               In the case of scientific research for expanding the boundaries of scientific or theoretical knowledge, mankind has been led to believe that any Truth (backed by irrefutable proof and evidence) is ultimate and always triumphs over myths, beliefs, unproven theories or axiomatic assumptions. In other words, if science is a religion, then the only God of the religion of science is the Truth. Any Truth (backed by irrefutable proof and evidence) always triumphs over evil (i.e. flawed myths, beliefs, prejudices, theories or axiomatic assumptions, if evidence can be produced to prove that they are flawed).
                  Every one of us in the 21st century has been led to believe that anyone can freely express, say or debate (without fear of personal attacks, humiliation or persecution) (1) any scientific truth openly (as long as the Truth can be backed by irrefutable proof and evidence), or (2) falsify any accepted theory or concept to remove it from the BoK (Body of Knowledge), if evidence can be demonstrated openly. Unfortunately, this is not always true, and many of us have been fooled.
                Many respected scientists publicly say that Truth (if backed by irrefutable proof and evidence) always triumphs, but the very same scientists refuse to investigate the irrefutable proof and evidence (if the Truth contradicts their myths or prejudice). Even if we humbly request the scientists to back their myths and prejudice, many of them resort to personal attacks or humiliating snubbing.
                The situation in our beloved computer science (software) is so bad that young researchers are scared to openly support Truth or evidence that exposes existing flawed myths or prejudice promoted by respected scientists, due to the fear of being ostracized and/or ruining their careers. Shall we live in fear, hiding underground like criminals for discovering Truth, or organize resistance to overthrow the existing regime of fake scientists? Isn’t it (i.e. derailing scientific progress) a scandal, if not fraud against the technological progress of mankind?
                The theoretical foundation (or BoK) for computer science (software) is filled with many flawed myths and unproven beliefs, which are supported not by reason and evidence but by the authority and prejudice of the regime of fake scientists. They have been promoting the myths and prejudice while refusing to investigate the evidence and reasoning that could expose the flawed myths and unproven beliefs.
                 How could anyone rescue any scientific discipline from such an authoritarian regime of fake scientists? They have absolutely no clue what real science is. Let me briefly summarize what is meant by “scientific discipline”: any scientific discipline is a BoK (Body of Knowledge) comprising many pieces of knowledge, where each piece of knowledge is acquired by employing the “scientific method”. The essential requirement for any scientific discipline is following the well-established guidelines, processes and rules of the proven “scientific method” for acquiring each new piece of knowledge for the BoK. Each piece of knowledge in the BoK must be supported by irrefutable proof and demonstrable evidence, which cannot be falsified by using any known knowledge or evidence.
                    The very constitution of the scientific disciplines is the “scientific method”. The authoritarian regime refuses to accept the authority of the “scientific method” (which has a proven track record) for investigating objective reality to acquire valid knowledge (by discovering objective facts and theories backed by evidence). Many of them feel offended (or threatened) if anyone tries to expose their myths and prejudice by using such objective facts and evidence.
                     What kind of scientists feel offended or threatened by Truth? Many of them refuse to validate our proof for any new discovery of Truth (backed by irrefutable reasoning, objective facts and evidence), if the new discovery, objective facts and evidence contradict their myths and prejudice. Many of them feel that it (i.e. requesting that the scientific method be employed to gain objective knowledge) is heresy or repugnant. What kind of scientist uses every possible excuse to avoid using the scientific method, in order to protect his myths and prejudice?
                    The situation in our beloved field of computer science (software) is totally unacceptable. It must be changed, even if it requires intellectual battles (using facts, objective reasoning and evidence), and even if it hurts the egos of so-called scientists and researchers propping up the authoritarian regime to protect fiction, myths or prejudice. No discipline can be a science if it blatantly violates the scientific method. Only fake scientists refuse to know and investigate knowledge and evidence acquired by using the scientific method.
                  We have facts and evidence gained by using the scientific method, which expose the flawed myths and prejudice in the existing BoK for software. Respected scientists refuse to investigate or even look at our proof. We discovered certain new Truths, which are backed by irrefutable proof, facts and evidence. Respected scientists refuse to investigate or even look at our proof. Isn’t it a scandal, if not fraud?
                    Let me illustrate this by using an analogy: if you worked with elephants for years, would you insist that a pig shown to you is an elephant? Assume (i) you discover, by using the scientific method, the essential properties uniquely and universally shared by each and every large physical component (i.e. no part can be a component without having the essential properties), and (ii) you invented real software components (having the essential properties) and created thousands of such real software components over a decade.
                     Would you call any other kind of software part (not having the essential properties) a real software component, if the difference between the other kinds of software parts and the real software components is comparable to the difference between pigs and elephants? No other kind of so-called software component (known today) has the essential properties, so referring to any of them as software components is like showing a pig and insisting that it is an elephant. Scientific methods have a proven track record for discovering such essential properties, objective reality and facts. All I am requesting is to discover objective reality, facts and evidence using scientific methods. But software researchers have been refusing to use scientific methods to test their unproven myths. Isn’t it a scandal, if not fraud? https://www.researchgate.net/publication/309320487_Isn%27t_it_scandal_if_not_fraud_if_scientists_feel_repugnant_when_requested_to_not_violate_the_scientific_method_for_acquire_theoretical_knowledge
                  If one works with elephants for a year and with pigs for a year, he will never insist that a pig shown to him is an elephant. The difference between real components (which can be unplugged and re-plugged in) and the other kinds of parts (e.g. those used as ingredients, such as plastic, steel, metals, alloys, cement or paint) that cannot be unplugged from their container products is bigger than the difference between elephants and pigs. A comparable difference exists between real software components and the other kinds of so-called software components (known today), if one were to discover the nature and essential properties of real components by using the “scientific method”.
                   Today, experts have defined each kind of software part (e.g. having given properties) as a kind of component (without any basis in reality and fact) and have no clue about the nature and essential properties of components. This error directed research efforts onto a wrong path, and software ended up in a crisis. Unfortunately, most experts have been refusing to gain essential knowledge (by using the “scientific method”) about objective reality, which could put the research efforts on the right track by exposing the error.
Best Regards,
Raju Chiluvuri
Relevant answer
Answer
Dear Mr. Krish,
            The biggest problem in the case of a scientific crisis is exposing the error (i.e. the unproven myth “the Earth is static”) at the root of the geocentric paradox. The unproven axiom (i.e. “the Earth is static”) diverted scientific progress onto a wrong path, which altered the perception of reality so much that the Truth was perceived as heresy or repugnant.
            Although enough evidence existed by the 17th century, the research community persecuted Galileo for saying that the Earth is not static. Discovering the Truth was not hard once all the evidence was investigated with an open mind. The truth could have been discovered in the normal course of investigation, even if the truth had been “Saturn is at the center”, simply by exploring each possibility: plotting the planetary positions with each planet placed at the center.
            If you or I wanted to discover the Truth with an open mind, what would we do? If I thought Saturn were at the center, I would try to collect as much evidence as possible, plot the planetary paths and try to predict future locations to validate my prediction.
            A remarkably similar kind of problem has been repeated in computer science. Software researchers defined, 50 years ago, that CBD is building software by assembling reusable (or COTS, Commercial Off-The-Shelf) components, as hardware engineers build computers by assembling COTS parts from 3rd-party component vendors. A few researchers even called them Software-ICs. This has no basis in reality/fact, but software researchers have been pursuing this wrong path for the past 50 years.
            Since they have been pursuing a wrong path for over 50 years, it has altered the perception of reality so much that the Truth is perceived as heresy or repugnant. This error may have cost a trillion dollars during the past 25 years and would cost trillions more if it is not exposed. The reality of CBD is explained at: https://www.researchgate.net/publication/284167768_What_is_true_essence_of_Component_Based_Design. I accomplished this goal many years ago and am trying to expose the error. Even if my CBSD is not correct, the Truth can be discovered by anyone by investigating all the evidence.
            It was impossible to discover the Truth while persecuting anyone who questioned the validity of the flawed seed axiom (i.e. “the Earth is static”). I have been enduring insults, personal attacks and humiliation for requesting proof for flawed concepts such as “reusable software parts are components, and CBSD is using such fake components”. The Truth could be discovered within a couple of weeks, if the error could be exposed.
The biggest problem is admitting or acknowledging the error. The “scientific method” requires investigating all the evidence with an open mind and going wherever the facts lead us, without being influenced by prejudice and preconceived notions.
Quotes by Arthur Schopenhauer (Great 19th Century German Philosopher)
“All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident.”
“The discovery of truth is prevented more effectively, not by the false appearance things present and which mislead into error, not directly by weakness of the reasoning powers, but by preconceived opinion, by prejudice.”
            I did not think these would still be true in the 21st century, but based on firsthand experience, they are. No software expert can support his opinion, except by saying that everyone else thinks reusable software parts are components. That is not science. Science needs objective proof.
            In an open scientific debate, one must back each fact with proof and evidence. I can back each of my inventions with irrefutable proof and evidence. I can also prove that existing concepts are flawed, using irrefutable proof and evidence. I am requesting an opportunity to demonstrate the proof, but no one is willing to give me one. I have put all the evidence openly on my websites, as well as tools for building real CBD applications.
                      Compared to problems such as Einstein's constancy of the velocity of light, the problem we in software are trying to solve is elementary-school math.
Best Regards,
Raju
  • asked a question related to Computer Science
Question
10 answers
Hello all,
I am currently working on application-layer protocols in IoT. I want to know: where should congestion be handled?
Suppose, in a general architecture, it could be either between the IoT devices and the gateway, or between the gateway and the cloud.
Is there any architecture available for IoT that doesn't involve a cloud part for accessing resources?
Does IoT handle end-to-end congestion or not?
I have been stuck on these questions. Need help.
Thanks
Vishal
For example: I am accessing services from the cloud.
Relevant answer
Answer
Hi Vishal
Any system that is developed will be no faster than the weakest link. So you need to search for the bottleneck, identify where it is at, and work out how to deal with it.
So, in your case, if the bottleneck is at the IoT device, there is likely nothing to be gained from making improvements in the connection between the gateway and the cloud. And it may be the case that there is no point in improving the connection between the IoT device and the gateway.
You need to start by analysing every component in your system in order to find out the maximum capacity it can consistently deal with. You then need to assess the capability of the connection between each component of the system and then next in the chain, to establish what the maximum throughput capacity is, and finally the throughput to the cloud.
Think of it like you would a large river system. High in the mountains, rain falls, or snow melts, and the water drains into the ground. Eventually the ground becomes waterlogged and starts to form a little tributary, many of which will flow into a stream, which grows bigger and bigger the further the system flows. If the ground is very dry, the water is instantly absorbed into the ground, and there is no flow downstream. Liken this to your IoT sensors. The barrier to information flow here is the dry ground which diverts the flow. Once you overcome this problem, when the ground becomes waterlogged, tributaries start to flow. Liken this to resolving the congestion at the IoT devices. Many tributaries, covering a wide area, eventually flow into streams. Liken this to the aggregation of data collected from many IoT devices. Let us say there is a dry lake bed in the way. Liken this to congestion between IoT device collection and the gateway. Until the lake is full, the water will not flow past it. Once the lake fills, water will flow over the edge into a waterfall. Liken this to the congestion between gateway and cloud. Once the water flows beyond this point, nothing can stop it until it reaches the sea. Liken this to the outflow from the gateway-to-cloud connection being de-congested, where the cloud can be expanded to absorb all data flows.
Given the infinite scalability offered by cloud services, that end is unlikely to be at issue. Simply add more resources, whether compute, storage, or network, and you will handle whatever you need to throw at it. So identify the specific capacity of each of these elements in your cloud architecture to discover what your maximum flow rates are. Now, if we liken the waterfall to the internet, check your maximum flow rate through the internet. If this is less than the cloud capacity you have specified, either change internet supplier to a faster package, or reduce the capacity of the cloud services you are running to match.
Now repeat this exercise all the way through to the individual IoT data sensors to identify what the various maximum flow-rate capacities are and the total number of these feeding into your system, and check your sums to find the bottleneck.
If the bottleneck is at the IoT sensors, perhaps you either have sensors that don't have enough capacity, or you are sampling your data too frequently. IoT sensors are very cheap, and it may be that the software you are running on them is too bloated. You may be able to find much more compact software. Or you can replace them with better capacity sensors, or you can drop your data collection frequency.
If the bottleneck is between the sensors and the gateway, then you need to analyse traffic patterns. If the data comes in high and low spurts, you could do data collection, then re-transmit collected data in a steady stream to the gateway. Or if you have too many sensors, you need to consider whether you need additional gateway capacity, or could you use the data collector to spread the load.
As for the congestion between gateway and cloud, you can add additional resources as needed to solve the problem.
The bottom line is you need to measure all the throughputs of all the components so that you are aware of what your spare capacity will be throughout the system architecture. Some components will be far more expensive to scale than others. So you need to do your sums to discover where the cheapest point would be to expand capacity. The IoT has an infinite capacity to create data, so finding the most cost effective route to acquiring and collecting that data is essential. Equally, you do not want to be paying for more cloud services than you absolutely need, especially where the systems will run 24/7 365 days a year. The costs can really stack up.
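The "weakest link" reasoning above amounts to taking a minimum over stage capacities; the stage names and messages-per-second figures below are hypothetical:

```python
def bottleneck(capacities):
    """End-to-end throughput of a pipeline is capped by its slowest stage."""
    stage = min(capacities, key=capacities.get)
    return stage, capacities[stage]

stages = {                         # hypothetical capacities in messages/second
    "sensor_sampling": 200,
    "sensor_to_gateway_link": 150,
    "gateway_processing": 500,
    "gateway_to_cloud_link": 120,
    "cloud_ingest": 10_000,
}
stage, cap = bottleneck(stages)
print(stage, cap)                  # -> gateway_to_cloud_link 120
```

With these made-up numbers, upgrading any other stage leaves end-to-end throughput at 120 msg/s until the gateway-to-cloud link itself is improved.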
I hope this gets you started.
Regards
Bob
  • asked a question related to Computer Science
Question
1 answer
I would like to know how to find the closed-form solution of a NARX model in matrix form.
Relevant answer
Answer
netc = closeloop(net);                      % convert the trained open-loop NARX net to closed-loop form
[xc,xic,aic,tc] = preparets(netc,X,{},T);   % shift the series and build initial input/layer states
yc = netc(xc,xic,aic);                      % simulate the closed-loop network
tcs = size(tc,2);                           % number of target timesteps (optional)
closedLoopPerformance = perform(netc,tc,yc) % closed-loop error (network's performance function, e.g. MSE)
where 'X' and 'T' are the input and target cell arrays
and 'net' is the NARX network you have already created and trained
  • asked a question related to Computer Science
Question
6 answers
I am working on feature extraction for malware.
I found N-gram techniques for extracting malware features, but it is complicated to reach my goal with them.
Find the attachment for details.
Note: the focus is just on feature extraction for malware, not testing, training or classification.
Relevant answer
Answer
There is no single best feature extraction technique (PCA, KPCA, Markov models) for malware detection. You need to consider what type of malware features you want to detect or classify.
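As a minimal illustration of the n-gram technique mentioned in the question: slide a window of n bytes over the binary and count occurrences; the counts (optionally truncated to the most frequent grams) become the feature vector. The byte string below is an arbitrary example:

```python
from collections import Counter

def byte_ngrams(data, n=2):
    """Count every contiguous n-byte substring of the binary."""
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))

def top_features(counts, k=100):
    """Keep only the k most frequent n-grams to bound dimensionality."""
    return dict(counts.most_common(k))

sample = b"\x90\x90\xeb\xfe\x90\x90"          # arbitrary example bytes
grams = byte_ngrams(sample, 2)
print(grams[b"\x90\x90"])                     # -> 2
```

In practice the same counting is applied either to raw bytes or to disassembled opcode sequences, and the per-gram counts (or frequencies) are fed to whatever downstream analysis you choose.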
  • asked a question related to Computer Science
Question
4 answers
I'm developing a genetic algorithm for the traveling salesman problem on large-scale graphs, and I'm applying the nearest neighbor algorithm to seed my population with solutions produced by this technique. However, the execution time has been too high. Therefore, I would like to know: is there any alternative to the nearest neighbor algorithm that is faster than the simple method?
Relevant answer
Answer
I think you can try to use any of the methods used to generate an initial population:
- Random (Soumitra K Mallick)
- Quasi-random
- Sequential
- Parallel (Soheila Ghambari)
- Heuristic (Dênio Júnior)
  • asked a question related to Computer Science
Question
1 answer
I am very interested in this subject
Relevant answer
Answer
Please go through the web site:
  • asked a question related to Computer Science
Question
2 answers
good web site
Relevant answer
Answer
You can look at this website to learn a lot of concepts related to computer science.
  • asked a question related to Computer Science
Question
2 answers
Can you suggest a few research topics related to algorithms?
Relevant answer
Answer
Hi Neha, the following may be useful:
1) Parallelizing algorithms for XXX. You may complete the sentence by replacing XXX with a specific architecture, parallel processing platform, distributed processing platform, etc.
  • asked a question related to Computer Science
Question
8 answers
Greetings everybody
Does anyone have any idea how I can evaluate whether my genetic algorithm is working well? What are the criteria?
Best Regards
Relevant answer
Answer
Dear Amin Bakhtiari,
In the case of single objective GA, please refer to the CEC2005/2013 which contain problem definitions and evaluation criteria for the single objective real-parameter optimization problems. Otherwise, refer to the CEC2009 multiobjective optimization test instances.
  • asked a question related to Computer Science
Question
11 answers
I am working on my research thesis. I am trying to find the best computer science authors (those who have the most research collaboration, productivity, etc.). I have calculated centrality (betweenness, closeness, degree, PageRank) values. These values are decimals, and I want to represent them as integers. How can I do that? A picture of my calculated decimal values is attached.
I have read a research paper in which the author represented centrality values as integers. Attached is a screenshot of the table from the mentioned paper (integer values).
I want to do it like this. Please help me with how I can do this; I desperately need help.
Relevant answer
Answer
We have both tried in various ways to guide you to RANKING the data. You have n raw scores, from, say, large negative to large positive numbers (they can be decimal numbers). The rank of each score is its place in an ordering from, say, LARGEST (1) to SMALLEST (n) (or the other way around, if that suits your interpretation). The point is that there is almost nothing wrong you can do here; have the courage to do your own analysis rather than trying to copy another paper, especially when the steps seem pretty obvious. Note that the published example you show clearly says that they used ranks, and the resulting numbers all differ by a few integers.
I think that if you were trying to interpret the magnitude of the gaps between the scores for two observations on two different scales, there might be some work to do to normalize the values: clearly, a very large range in the underlying data could appear to give a very large difference on one scale that might not be mimicked on the other. If you are interested in the relationship or coherence between the different metrics, you could use the raw data in a correlation analysis (Excel will do this; see the Data Analysis add-in).
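Converting decimal centrality scores to integer ranks is a one-liner per metric; the author names and scores below are hypothetical:

```python
def to_ranks(scores):
    """Map each score to its rank: 1 = largest, n = smallest."""
    order = sorted(scores, key=scores.get, reverse=True)
    return {name: i + 1 for i, name in enumerate(order)}

betweenness = {                 # hypothetical decimal centrality values
    "Author A": 0.0421,
    "Author B": 0.1305,
    "Author C": 0.0087,
}
print(to_ranks(betweenness))    # -> {'Author B': 1, 'Author A': 2, 'Author C': 3}
```

Run this once per centrality metric; for the correlation analysis mentioned above, keep using the raw decimal values rather than the ranks.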
Hope this helps.
  • asked a question related to Computer Science
Question
2 answers
What are the ways to overcome the power problem in network-on-chip and processor technology, and to make the gap between them closer?
Are batteries the source of the power consumption problem in this field of computer science (battery capacity), or are there more options for solving it (a really efficient method)?
I am trying to research how to solve this problem efficiently. Can you please help me with this research?
And what other sciences does this scope touch on (like chemistry for batteries, physics, etc.)?
Thanks for your attention.
Relevant answer
Answer
Hi Yasin
There will be a number of issues that will need to be addressed here. Ideally, you would want to do it in as cost effective a manner as possible. First, the obvious option would be to see if you can find a nearby power source, either to drive the hardware, or to top up the charge in the battery. This is often not practical, such as in IoT at remote locations, or remote SCADA system elements. The next logical choice would then be to see if you can install a solar powered feed to continuously recharge the battery. A water powered generator would also fit the bill here. Assuming these options are impractical, then increasing battery capacity would be the next logical move, either by research to provide greater capacity (very expensive) or by installing a bigger battery. If these turn out not to be practical, then you need to look very carefully at your hardware to see whether it is matched to the demands being placed on it, and whether the software running on it is over bloated, or signals are being passed through it with too high a frequency. Matching CPU capability to load would be a useful start. Minimising the operating system and software code running on the hardware would also be a smart move. Most operating systems and software running thereon are heavily over-bloated. If you are running something with 150MB of code, but you are only using a very small portion of this, then by running exactly what you need, there will be a much smaller draw on your available battery capacity. Similarly, cutting down the frequency of signals being passed though the hardware would also reduce the load. You could create a temporary data store on a high speed SD card, then update once a day instead of doing live updates, subject to your needs. Usually, you can cut down on frequency without too much impact on the overall system functionality. These last three suggestions you can carry out now, with minimal cost outlay. 
All the previous suggestions will involve some element of capital spend and/or research which will take some time to gain any positive benefits.
Hope that helps.
Regards
Bob
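The last suggestion above (buffer readings on local storage and transmit once per interval instead of live) can be sketched in Python. This is a minimal illustration, not a real driver: the class name `BatchedUplink`, the in-memory list standing in for the SD-card store, and the injectable clock are all hypothetical choices made here for the sketch.

```python
import time

class BatchedUplink:
    """Buffer readings locally (stand-in for an SD card) and flush once
    per interval instead of transmitting live, cutting radio wake-ups."""

    def __init__(self, flush_interval_s=86400, now=time.time):
        self.flush_interval_s = flush_interval_s
        self.now = now               # injectable clock, eases testing
        self.buffer = []             # stand-in for the SD-card store
        self.last_flush = now()
        self.transmissions = 0       # how many times the radio was used

    def record(self, reading):
        self.buffer.append(reading)
        if self.now() - self.last_flush >= self.flush_interval_s:
            self.flush()

    def flush(self):
        # One transmission carries the whole accumulated batch.
        self.transmissions += 1
        sent, self.buffer = self.buffer, []
        self.last_flush = self.now()
        return sent
```

Simulating two hours of once-a-second readings with a one-hour flush interval shows a single transmission instead of 7200, which is where the power saving comes from.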
  • asked a question related to Computer Science
Question
3 answers
I prepared Kings B agar on Friday so I could use it on Tuesday. However, when I opened the chiller, some of the agar plates had melted. There was water in the sleeves I used! 
This has never happened to me and I'm really not sure what went wrong. 
Has anyone had any experience with this?
Relevant answer
Answer
It could also be that the plates were warm when they went into the chiller, causing condensation in the bag and on the plates. This could wet the agar and dilute it if it had not solidified well when first put in the chiller. I have had this happen before, and I now only put plates in the refrigerator after they have completely cooled, most times waiting overnight before bagging them and chilling them down.
  • asked a question related to Computer Science
Question
2 answers
Dear Friends 
I am looking to download this dataset for loop closure detection,
but the website only gives a link to the Leb6Indoor dataset, the small one; I need the big one.
I attach an image of the dataset; it was used in the paper "Incremental vision-based topological SLAM" (2008).
Relevant answer
Answer
Have you tried contacting them directly? It might work faster.
  • asked a question related to Computer Science
Question
20 answers
Hi there, I want to know how I can write a book based on research.
The book is about technology and related challenges in computer science.
What is the first step?
Anything you think is important about it would be helpful.
Feel free to ask or tell me anything you think is useful.
Thanks for your attention.
Relevant answer
Answer
Hi Yasin,
Good to know that you are willing to contribute a book. First of all, know your target audience (whom you are writing the book for) and design the table of contents or index for your reference, so that you are clear about which chapters cover what content precisely.
Read the existing literature and cite it well.
Write in simple English, understandable by all, as if you were teaching someone; explain it with some cases, examples, etc. for better understanding.
All the best. I will be looking forward to your book.
Mukta
  • asked a question related to Computer Science
Question
11 answers
Donald Knuth said:
Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
When does that hold true? How do I know that I am not wasting my time?
Relevant answer
Answer
I think what Knuth means is that you should not start optimizing because of a "gut feeling" that some algorithms in the program may run faster. You should first finish the program and then measure the performance. If the performance is sufficient, stop; else identify the problematic parts and optimize only as much as necessary.
Faster code is less readable, and a non-critical perceived problem probably solves itself in one or two processor generations (or your optimization becomes obsolete), so keeping it simple is very good advice. As Peter implies, the actual most critical parts are often in the most unusual and unexpected places.
So the message is to base the optimization decision on facts (measurements) and not on guessing (programmer intuition).
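The "measure, don't guess" advice can be sketched with Python's standard profiler. The helper below wraps a call in `cProfile` and returns a report of where the time actually went; `slow_sum` is just a made-up workload for the illustration, and `profile` is a hypothetical helper name.

```python
import cProfile
import io
import pstats

def slow_sum(n):
    """Toy workload standing in for a 'suspicious' routine."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def profile(fn, *args):
    """Run fn under cProfile and return (result, text report).
    The report identifies the hot spots worth optimizing."""
    pr = cProfile.Profile()
    pr.enable()
    result = fn(*args)
    pr.disable()
    s = io.StringIO()
    pstats.Stats(pr, stream=s).sort_stats("cumulative").print_stats(5)
    return result, s.getvalue()

result, report = profile(slow_sum, 10_000)
```

Only after the report shows a function dominating the runtime is it a candidate for that critical 3%.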
  • asked a question related to Computer Science
Question
4 answers
How can we apply pre-fetching code in our system/application?
Relevant answer
Answer
A Prefetch Algorithm
Dear, check this link:
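For illustration, one common textbook scheme (not necessarily the algorithm in the linked reference) is one-block-lookahead prefetching: on a read of block k, also pre-load block k+1, so a sequential scan misses only once. The class and attribute names below are hypothetical.

```python
class SequentialPrefetcher:
    """One-block-lookahead prefetch: on a read of block k, fetch k and
    pre-load k+1 into the cache, so a sequential scan hits every time
    after the first miss."""

    def __init__(self, backing):
        self.backing = backing        # e.g. dict: block id -> data
        self.cache = {}
        self.misses = 0

    def read(self, block_id):
        if block_id not in self.cache:
            self.misses += 1          # demand fetch on a cache miss
            self.cache[block_id] = self.backing[block_id]
        # Prefetch the next block ahead of the expected access.
        nxt = block_id + 1
        if nxt in self.backing and nxt not in self.cache:
            self.cache[nxt] = self.backing[nxt]
        return self.cache[block_id]
```

Reading blocks 0 through 7 in order incurs a single miss; a random access pattern would not benefit, which is why prefetch policy should match the application's access pattern.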
  • asked a question related to Computer Science
Question
10 answers
But I don't understand the components of IT and SI; please help me.
Relevant answer
Answer
First you need to find proxies for IT and SI. These will be representative of each IT and SI. Your choice could depend on availability of data as well as previous research among other factors.
Once you have your variables, you can conduct all kinds of association analyses including but not limited to pair-wise correlation, scatterplots, regression, etc.
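The pair-wise correlation mentioned above can be computed with a plain Pearson coefficient. A minimal pure-Python sketch follows; the series passed in would be your chosen proxies for IT and SI (the sample data in the usage below is made up for illustration).

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series.
    Returns a value in [-1, 1]: +1 perfect positive association,
    -1 perfect negative, 0 no linear association."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A significance test (or regression, as suggested) would be the next step once the direction and strength of the association are known.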
  • asked a question related to Computer Science
Question
3 answers
I have developed an interaction verification technique. I want to evaluate my proposal. It works completely fine with local Web service repository. However, I would like to test it against the standard norms if possible. For this, I am searching standard test web services repository in the verification domain.
Thanks 
Relevant answer
Answer
You might be interested in this paper:
  • asked a question related to Computer Science
Question
3 answers
I am working on data classification using metaheuristic (swarm) feature selection. I apply SVM and kNN to the original dataset without feature selection (100 features) and record the accuracy in the Weka tool.
I then run a wrapper feature selection (swarm + kNN as the fitness function) to produce a subset of features (e.g. 20), and classify again in Weka using only those 20 features,
comparing the results without FS, with swarm FS, and with two filter-based FS methods in Weka.
1. Is the accuracy here the fitness of the best solution in the swarm algorithm, or the classification accuracy after FS?
2. How many datasets, and of what sizes, should I use (I use small datasets of 20 features; is this acceptable)?
3. Why make 20 runs of the algorithm, and is it necessary?
Thank you
Relevant answer
Answer
Thank you, Mr. Giorgio Roff, for your reply.
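The wrapper pipeline described in the question can be sketched in Python. Note the hedges: the fitness function below is a leave-one-out 1-NN accuracy (as in the question), but plain random search stands in for the swarm update rule, and all names (`knn1_accuracy`, `wrapper_select`) are invented for this sketch. On question 1: the value reported is exactly the fitness of the best subset found, which here is the classification accuracy restricted to that subset.

```python
import random

def knn1_accuracy(data, labels, subset):
    """Leave-one-out 1-NN accuracy using only the selected features:
    this is the wrapper's fitness function."""
    def dist(a, b):
        return sum((a[j] - b[j]) ** 2 for j in subset)
    correct = 0
    for i, x in enumerate(data):
        nearest = min((k for k in range(len(data)) if k != i),
                      key=lambda k: dist(x, data[k]))
        correct += labels[nearest] == labels[i]
    return correct / len(data)

def wrapper_select(data, labels, n_features, subset_size, iters=50, seed=0):
    """Search feature subsets, scoring each with the fitness above.
    (Random search stands in for the swarm's position update.)"""
    rng = random.Random(seed)
    best_subset, best_fit = None, -1.0
    for _ in range(iters):
        subset = rng.sample(range(n_features), subset_size)
        fit = knn1_accuracy(data, labels, subset)
        if fit > best_fit:
            best_subset, best_fit = subset, fit
    return best_subset, best_fit
```

On question 3: because the search is stochastic, a single run may be lucky or unlucky; repeating it (commonly 20 or 30 runs) and reporting mean and standard deviation of the fitness is what makes the comparison against filter methods statistically meaningful.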
  • asked a question related to Computer Science
Question
29 answers
Many tools provide an excellent introduction to computer science for young students; some of them are Scratch, Kodu, Alice, and Hackety Hack. These are just a few of the options for introducing someone to programming.
What other languages or tools have you used, in the classroom or at home?
Do you think they are appropriate for children aged 6 and up?
Relevant answer
Answer
They are not very easy for children.
  • asked a question related to Computer Science
Question
6 answers
Computing the time complexity of a Genetic Algorithm
Relevant answer
Answer
Hojjat Allah Bazoobandi's answer is correct.
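To make the cost structure concrete, here is a minimal textbook GA sketch with the complexity annotated. It uses truncation selection, one-point crossover, and one-bit mutation; these choices, and all the names, are illustrative assumptions, since GA variants differ and the dominant term is usually the fitness evaluations.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size, generations, seed=0):
    """Textbook GA. Per generation: O(pop_size) fitness calls plus
    O(pop_size * n_bits) crossover/mutation work, so the total cost is
    O(generations * pop_size * (C_fitness + n_bits)), where C_fitness
    is the cost of one fitness evaluation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    evals = 0
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        evals += pop_size                  # one fitness call per individual
        parents = scored[: pop_size // 2]  # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_bits)       # one-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = children
    best = max(pop, key=fitness)
    evals += pop_size
    return best, evals
```

Counting `evals` confirms the dominant term: generations * pop_size fitness calls (plus one final scan), which is why expensive fitness functions, not the genetic operators, usually dictate GA running time.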
  • asked a question related to Computer Science
Question
6 answers
What features are useful for estimating a person's age from their voice?
Relevant answer
Answer
Dear Manish,
In the following related papers you can find many relevant feature sets:
(1)
Sedaaghi, M. H. (2009). A comparative study of gender and age classification in speech signals. Iranian Journal of Electrical and Electronic Engineering, 5(1), 1-12.‏
(2)
Lingenfelser, F., Wagner, J., Vogt, T., Kim, J., & André, E. (2010). Age and gender classification from speech using decision level fusion and ensemble based techniques. In INTERSPEECH (Vol. 10, pp. 2798-2801).‏
(3)