- Misha Kakkar added an answer: How can we use the NASA MDP datasets for software defect prediction?
Many researchers in software defect prediction use the NASA MDP datasets, but the datasets do not identify the developers of each project. Were all of the NASA MDP datasets developed by a single developer? We can't properly analyze the datasets if they come from different developers, because developers have different behavior.
An ownership metric is defined in the following study, which examined the effect of different developers on software quality:
C. Bird, N. Nagappan, B. Murphy, H. Gall, and P. Devanbu. "Don't touch my code! Examining the effects of ownership on software quality." In Proceedings of the 19th ACM SIGSOFT Symposium and the 13th European Conference on Foundations of Software Engineering (ESEC/FSE '11), pages 4-14, New York, NY, USA, 2011. ACM.
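For illustration, the core ownership measures from that paper (each contributor's proportion of the changes to a component, plus the split into major and minor contributors at a threshold) can be sketched as follows; the 5% threshold and the flat commit-author list are assumptions of this sketch:

```python
from collections import Counter

def ownership(commit_authors, threshold=0.05):
    """commit_authors: list of author names, one entry per change
    made to a component (e.g. extracted from a version-control log).
    Returns (ownership, majors, minors): ownership is the proportion
    of changes made by the top contributor; majors/minors are the
    contributors at or above / below the threshold."""
    counts = Counter(commit_authors)
    total = sum(counts.values())
    shares = {author: n / total for author, n in counts.items()}
    majors = sorted(a for a, s in shares.items() if s >= threshold)
    minors = sorted(a for a, s in shares.items() if s < threshold)
    return max(shares.values()), majors, minors

# Example: one dominant author and one occasional contributor
own, majors, minors = ownership(["alice"] * 30 + ["bob"])
print(own, majors, minors)
```

With per-developer commit logs for a project, such metrics could be correlated with defect counts; without developer information in the dataset (as with NASA MDP), they cannot be computed, which is exactly the limitation raised above.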
- Denis P. Oleynikov added an answer: What are the latest approaches for design flow?
I am looking for the latest approaches to design flow in software engineering in general: the challenges, and the main areas of design flow.
At the business layer, see BPMN.
In software engineering, try the standard UML approach.
- Sharmin Sultana added an answer: Is it possible to predict the number of test cases from Equivalence Classes (EC)?
People often ask me how to fix the number of test cases. We can create test cases from Equivalence Classes (EC) and Boundary Values (BV). Here, I only want to calculate the number of test cases based on EC.
Thank you all for your valuable comments.
- Harshvardhan Singh added an answer: What factors should be considered for image blur?
Is there a concept of a "blur factor" that can be used to determine whether an image is blurred?
If an image is to be de-blurred, is there any initial parameter that indicates whether the image is actually blurred?
Assume that there is no reference image, and that features from the first image must be used when determining whether the next image is blurred.
If such a factor or variable were available, it could help determine whether an image is actually blurred and requires de-blurring.
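One common, simple heuristic for such a no-reference "blur factor" is the variance of the image's Laplacian: blur suppresses high frequencies, so the edge response of a blurred image has low variance. A sketch in plain NumPy (the threshold is an assumption and must be calibrated on your own data):

```python
import numpy as np

def laplacian_variance(image):
    """image: 2-D NumPy array of grayscale intensities.
    Applies the discrete Laplacian (sum of the 4 neighbours minus
    4x the centre) and returns the variance of the response: sharp
    images have strong edges and a high variance, blurred ones a
    low variance."""
    img = image.astype(float)
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] +
           img[1:-1, :-2] + img[1:-1, 2:] -
           4 * img[1:-1, 1:-1])
    return lap.var()

def looks_blurred(image, threshold=100.0):
    """threshold is data-dependent; calibrate it on known-sharp
    images from your own camera/pipeline."""
    return laplacian_variance(image) < threshold

# high-contrast checkerboard (sharp) vs. a constant image (no detail)
sharp = (np.indices((8, 8)).sum(axis=0) % 2) * 255
flat = np.full((8, 8), 128)
print(laplacian_variance(sharp), laplacian_variance(flat))
```

For the "first image as reference" scenario above, one could record the Laplacian variance of the first image and flag subsequent frames whose variance drops well below it.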
See the detailed explanation on
- Koshy George added an answer: Can the inverse Radon transform be embedded in a deep learning autoencoder?
I am working on removing blur from images and need help understanding whether the inverse Radon transform can be embedded at the core of the processing in an autoencoder (AE) based on deep learning techniques.
The work revolves around removing motion blur from an image. The approach uses deep learning, and a useful building block in deep learning is the autoencoder (AE). It uses a Gaussian for its core operation, and I want to replace that with the inverse Radon transform.
Is this approach possible?
It would be great if some materials or links could be suggested as a pointer in this direction.
Thank you very much. The materials really helped. :)
- Hootan Ghiasi added an answer: Does anyone know which software tools are available for feasibility testing of a project/system in software engineering processes?
For example, MS Project is used by analysts.
What are their advantages and disadvantages?
I'd really appreciate any help.
Jira is the best one: https://www.atlassian.com/
- Santosh J. Dubey added an answer: Do software process models predict decay? Are there formal methods to prevent decay and ensure a steady state?
We appreciate that software engineering process models provide scope for managing processes within projects. But there are concerns about whether a model really provides the best alignment over time. It is argued that a model should pursue performance measurement based on the right project management metrics and guide project managers with clarity. But the dynamic conditions imposed by real-life projects call for mid-course corrections. Thus, there is a need for stability and a steady state in the model. Are there formal methods to predict this decay?
That's a great question. It may interest you to know that there is indeed some work on entropy; several papers I have been reading recently address precisely this topic.
The entropy of conventional models in the software lifecycle is quite interesting, as it is influenced by so many technical and non-technical factors. The sheer rapidity of software output today may well provoke such cycles, due to compatibility and complexity, particularly when working with legacy systems and the cloud. Migration in itself could be one of the factors. Or perhaps it is the end users who instigate the chaos of entropy? Hope this helps.
- Mohammed I. Younis added an answer: How can I include data in the evaluation section of a research paper?
I am having some problems writing the evaluation section of my research paper. Through experiments I have collected data such as:
Source    Existing fault discovery rate    New approach fault discovery rate
A         74%                              94%
B         68%                              90%
As you can see, for system A the existing fault discovery rate is 74%, while the new approach's fault discovery rate is 94%. That means the new approach discovers 20% more faults than the existing approach.
1. How can I present this analysis in my paper?
Another point: for system A, 20 test cases are needed to reach the 74% fault discovery rate, while 30 test cases are required to reach 94%. So gaining a 20% higher fault discovery rate costs 10 more test cases. Is that a positive result?
2. How can I discuss this issue?
It would be good to describe how you obtained the test cases.
For each set, how were the test cases generated? Ad hoc, by a tool, or at random?
Can you obtain the test cases used by the tool, or supplement your own set with the tool's?
To make a fair comparison, name the different test sets, say Set 1 and Set 2.
Apply both sets to each algorithm and report the metrics: fault detection, plus other metrics such as the time taken (both to generate the test cases and to execute them).
You can use these times to assess the cost (whether it is significant or not).
Then report how the SUT fares with Set 1 and with Set 2. You will end up with 4 tables or more (2 tables per set).
Depending on the interpretation of your results, you may have contributed to one, several, or even all aspects of test case generation, fault discovery, and execution time.
- Naoufel Boulila added an answer: Who has an experiment design for BSc students in Software Architecture?
This year I am giving a lecture on software architecture for BSc students. I'm still looking for some "action" for the students, and I want to replace 2-3 exercises (4-6 hours) with an experiment. Does anybody have a design for an experiment on UML modeling, for instance one that is waiting for replication?
I have conducted a similar experiment in which students keep improving a software architecture using UML for a distributed UML modeling groupware, and I have replicated the experiment over 3 semesters. Everything is described in this paper:
If you need more input, let me know.
- Yuanduo He asked a question: How can I evaluate ACE, the classic species richness estimator, without knowing the ground truth?
The Abundance-based Coverage Estimator (ACE) is a modification of the Chao & Lee (1992) estimator discussed by Colwell & Coddington (1994). Does anyone know how to evaluate the estimator without knowing the real number of species?
For example, if I have a sample containing 10 individuals, ACE will give one result; if I have a sample containing 20 individuals, ACE will give a better result. How can I evaluate the difference between the two results?
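For reference, here is a direct implementation of the ACE formula (Chao & Lee 1992, as presented by Colwell & Coddington 1994), using the conventional rare-species cutoff of 10. One pragmatic way to compare samples of different sizes without ground truth is to repeatedly subsample the larger sample down to the smaller size and look at the stability of the resulting estimates; that suggestion, like the example numbers below, is an assumption rather than a standard procedure:

```python
from collections import Counter

def ace(abundances, rare_cutoff=10):
    """abundances: list of per-species counts in one sample.
    Assumes there is at least one rare species and not every rare
    individual is a singleton (otherwise C_ace would be 0)."""
    freq = Counter(abundances)                       # F_i: species seen i times
    s_abund = sum(n for a, n in freq.items() if a > rare_cutoff)
    rare = {a: n for a, n in freq.items() if a <= rare_cutoff}
    s_rare = sum(rare.values())                      # number of rare species
    n_rare = sum(a * n for a, n in rare.items())     # rare individuals
    f1 = rare.get(1, 0)                              # singletons
    c_ace = 1 - f1 / n_rare                          # sample coverage estimate
    gamma_sq = max(
        s_rare / c_ace
        * sum(a * (a - 1) * n for a, n in rare.items())
        / (n_rare * (n_rare - 1))
        - 1,
        0,
    )
    return s_abund + s_rare / c_ace + f1 / c_ace * gamma_sq

# 5 observed species with abundances 1, 1, 2, 3 (rare) and 12 (abundant)
print(round(ace([1, 1, 2, 3, 12]), 2))  # 6.79
```

Since ACE estimates richness, it should never fall below the observed species count, which gives at least a sanity check even without the true value.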
- Dina Koutsikouri added an answer: Case studies in software project management
Can anyone recommend good case studies with a focus on project management, project risks, and managing the team, that I can use for my students in the Software Engineering and Management BSc? The idea is to get them thinking about challenges and other issues that impact project success. Cheers!
Thanks Sjaak! Very kind of you to respond to my request. Will say hi to Jan and Eric when I see them next.
- Bala Dabhade added an answer: How can I calculate the power usage of a single C program running on a desktop?
I have a C program running on a Linux platform and I have to measure its power consumption. Please suggest a power simulator apart from XEEMU and Wattch. I basically want it for power optimization work at the software level.
Thanks a lot, everyone, for your quick help; it really helped me get more insight into the topic.
My research area is related to code optimization for low power (reducing switching activity to reduce dynamic power dissipation). For validation purposes, I need a tool that can give me the power usage before and after optimization. There are a few tools like XEEMU and Wattch available, but they work on 64-bit Linux machines.
Can you please suggest some more open-source tools, or any better technique for validation?
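Besides simulators, on recent Intel CPUs under Linux you can read the hardware energy counters that the kernel exposes through the RAPL powercap interface, sampling before and after the code under test runs. A rough sketch follows; the sysfs path is an assumption that varies per machine, reading it may require root, and the counter covers the whole CPU package rather than just your process, so measure on an otherwise idle system:

```python
import subprocess
import time

RAPL = "/sys/class/powercap/intel-rapl:0"  # package domain; path may differ

def read_uj(base=RAPL):
    with open(base + "/energy_uj") as f:
        return int(f.read())

def max_uj(base=RAPL):
    with open(base + "/max_energy_range_uj") as f:
        return int(f.read())

def energy_delta_uj(before, after, max_range):
    """The counter wraps around at max_range, so a smaller 'after'
    value means exactly one wrap (assuming short measurements)."""
    if after >= before:
        return after - before
    return max_range - before + after

def measure(cmd):
    """Returns (joules, seconds) consumed package-wide while cmd runs,
    e.g. measure(["./myprog"]). Includes everything else on the package."""
    m = max_uj()
    e0, t0 = read_uj(), time.perf_counter()
    subprocess.run(cmd, check=True)
    e1, t1 = read_uj(), time.perf_counter()
    return energy_delta_uj(e0, e1, m) / 1e6, t1 - t0
```

Running the same workload before and after the optimization and comparing the joule figures gives a hardware-based validation, which avoids the simulator-accuracy question entirely.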
- Matteo Ragaglia added an answer: Does anyone know how to perform multiple concurrent calls to the qHull C library?
I am developing a C program that at some point has to compute n convex hulls from n disjoint sets of points.
I would like to execute all these calls in parallel and, given the poor documentation available on the qHull library website, I would like to know if anyone has some sort of template for solving this problem.
Thank you in advance for considering my request.
Yesterday I noticed that a new version of qhull was recently published.
You can find it at: https://github.com/qhull/qhull
This new version includes the "libqhull_r" library, which offers the same API as "libqhull" but requires a pointer to a specific qhull struct to be passed to each function call. In this way libqhull_r avoids global variables, allowing multiple parallel calls.
I tested the parallelization and I was able to avoid memory errors and segmentation faults.
- Roger Rommel Ferreira de Araújo added an answer: How do I include the Encog framework in my own Java project?
Does anybody know how I can include the Encog library in my own project? An example would be great.
Have a look at section 1.1.4 of the tutorial Taher mentioned. It shows how to include Encog in your project using either Gradle (Listing 1.1) or Maven (Listing 1.2). I'd go with Maven, since that's what I'm most familiar with, and it has good support in IntelliJ IDEA, NetBeans and Eclipse. Good luck and cheers,
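For reference, the Maven route usually amounts to adding a single dependency to your pom.xml. A sketch along these lines (the coordinates are Encog's published Maven Central coordinates; the version number 3.4 is an assumption, so check Maven Central for the current release):

```xml
<dependency>
    <groupId>org.encog</groupId>
    <artifactId>encog-core</artifactId>
    <version>3.4</version>
</dependency>
```

With that in place, IntelliJ IDEA, NetBeans and Eclipse will all resolve the library automatically on the next project import.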
- Fedor Dzerzhinskiy added an answer: Is there any research on motivating companies that develop software for internal use to adopt appropriate software engineering practices?
- Some companies have an "IT" department that develops software for internal use.
- In my experience, such companies see software development as a secondary function and do not invest in the appropriate setup, processes, etc.
- Is there any research regarding this? Is there a term that refers to such companies?
A radical and quite popular way of freeing a company's top managers from worries about in-house software engineering practices is to outsource the software development. So some of the research on software development outsourcing may also be relevant to this question. I'd like to mention a nice, short article by Phillip G. Armour, "Owning and Using" (CACM, June 2014, vol. 57, no. 6, pp. 29-30), where he points out the kind of internal software development whose outsourcing can be damaging for the company's business: the software constituting the company's operational "vital knowledge".
- Asad Masood Qazi added an answer: What could be a good topic for research in software quality management?
I am looking for problems and issues in project management, especially in the IT sector in Pakistan.
This may also help you:
- Carl J. Mueller added an answer: Have you come across any papers or material on identifying path complexity in an ER diagram?
I am trying to understand how a (quantified) complexity can be attributed to any virtual path from an attribute to an attribute of a different entity.
Formal models are good for estimating development effort, but the model is completed late in the process (waterfall, Scrum). Generally, management wants an effort estimate that includes the modeling or design process. In essence, the question is: what will it cost to provide the required facilities? This is why point techniques are favored by many developers.
- Chris Basta added an answer: Would you please advise me on the latest research topics in adaptive workflow systems?
I am starting to research a topic for my master's thesis. I have read about workflow systems and I want to achieve something in this area. I am new to it, but I am interested in it. Would you please advise me about the latest research topics in adaptive workflow systems?
Thank you @Gilbert
- Matthieu Vergne added an answer: Which formalisms explicitly represent lack of information?
I am facing a case where I want to order elements based on the available evidence, saying which one is more important than another. The point is that when I do not have enough evidence to tell which one is higher than the other, I can order them with an "undecided" relation ("a ? b" instead of "a > b"). In particular, if I need to rank them, I can put them at the same rank to say that I am unable to tell which one is better, rather than that they are equally important. I am not dealing with quantitative data, so it is not a matter of intervals that overlap. I really deal with ordinal data: some evidence shows that A is better than B, other evidence shows that A is better than C, but there is no information between B and C, so I build a ranking that puts A first and both B and C second.
What I am looking for are concrete formalisms that make the lack of information explicit, as I do (i.e., a specific value that says we do not have the information). An example is subjective logic (https://en.wikipedia.org/wiki/Subjective_logic), which tracks amounts of belief, disbelief, and also uncertainty: three values in [0;1] that must sum to 1. Formalisms related to orders would be of particular interest.
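As a concrete (informal) illustration of the ranking described above, one can treat the evidence as a strict partial order and rank each element by the longest chain of elements known to dominate it; incomparable elements then share a rank without being asserted equal. This is only a sketch, not an established formalism:

```python
from functools import lru_cache

def rank(elements, better_than):
    """better_than: set of (x, y) pairs meaning 'evidence says x > y'.
    Pairs absent in both directions are 'undecided', not 'equal'.
    Rank of e = 1 + length of the longest known chain above e, so
    B and C below A but incomparable to each other both get rank 2.
    Assumes the evidence is acyclic (a strict partial order)."""
    dominators = {e: {x for (x, y) in better_than if y == e}
                  for e in elements}

    @lru_cache(maxsize=None)
    def depth(e):
        return 1 + max((depth(d) for d in dominators[e]), default=0)

    return {e: depth(e) for e in elements}

# A > B and A > C, nothing known between B and C
print(rank(["A", "B", "C"], {("A", "B"), ("A", "C")}))  # {'A': 1, 'B': 2, 'C': 2}
```

Note that this collapses "undecided" and "equally ranked" in the output, which is exactly the distinction the question wants a formalism to preserve; subjective logic-style triples would keep it explicit.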
I already read a bit about Dempster-Shafer theory, but I did not feel it would help me. I should check your introduction. Thanks, Fernando.
- Asher Klatchko added an answer: What is the best way to handle multiple productivity rates when using workflows in Process Dashboard?
I am working with Process Dashboard v2.1.
Let's assume that ProductivityRate = Resources('flow') / Workload('flow'). Now you have to optimize PR with respect to these two variables, each of which may in turn depend on your workflow configuration, parametrically named 'flow' here. R and W may be complex functionals, but in principle, for a given workflow, there would be a way to simulate or simplify their behavior.
- Akash Singh Chauhan added an answer: Can anyone help me design a dipole for the 690-960 MHz band with balun feeding?
Can anyone help me design a dipole for the frequency band 690-960 MHz with balun feeding? How can we reduce the height of the feed?
If anybody has any documents related to this, please share them with me.
Can you tell me which antenna book explains the discone antenna? If you have a soft copy of that book, can you share it with me?
My email address is firstname.lastname@example.org
- Eric Neher added an answer: What kind of scheduling problem is this?
Consider the following scheduling model:
There is a finite set of jobs J, each consisting of a set of components. The number of elements in the component sets varies between 1 and 1000. There is a finite set of machines M. Each component of a job can be produced by a subset of M, namely M', with some negative or positive effect on the processing time. The components are independent and each component needs only one processing step, i.e., processing by a single machine, which is why the production order of the components does not matter. The objective is to minimize the makespan, which should also imply good utilization of the machines.
Reading the literature, I noticed that the above problem does not match the usual scheduling models. I think the one that fits the situation best is the concurrent open shop problem, in which the order of the operations does not matter and operations can be executed in parallel. However, as I understand it, there each job has to be processed by each machine, which is not true in my case.
My goal for now is to classify my problem in order to understand it better and to know which literature to pursue further. So can anyone help me determine the type of scheduling problem I currently face? Is it as complicated as the NP-hard open shop problems?
So I have done some more reading and thinking.
I came to the conclusion that the problem is a single-stage "Rm / Mj / Cmax?" problem. Single-stage, because the components only have to be assigned to the machines, not ordered within the assignments. The question mark in the objective field is there because I'm not yet sure whether the problem imposes one or multiple objective functions; see below.
Single- or multi-objective?
A scheduler for my problem should optimize the assignment based on the following goals:
- No machine's workload exceeds x%.
- The workload of all machines is balanced (not, say, one machine idle while another constantly runs at 100% workload).
- The components of a job are not heavily scattered across different machines (the machines are physically distributed in a production facility; the more the components of a job are spread over the machines, the more complex it is to collect the components belonging to a job afterwards).
Goal 1 is basically just a hard constraint that has to be respected while assigning components to machines. Goal 2 can be achieved with the well-known objective "minimize Cmax", as this implies a good balance of the workload. However, goal 3 conflicts with goal 2: in order to achieve a good balance of workloads, the components have to be spread out. To my knowledge, since these two goals conflict, the problem imposes two objectives and therefore has to be considered multi-objective. Am I making this more complicated than it is, or not?
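To make the single-stage assignment concrete, here is a minimal greedy list-scheduling sketch for the Rm / Mj / Cmax core: components are placed largest-first on the eligible machine that ends up least loaded. The data format is an assumption, and goal 3 (keeping a job's components together) is deliberately ignored here:

```python
def greedy_schedule(components, machines):
    """components: list of (job_id, {machine: processing_time}),
    where the dict only contains the eligible machines M' of that
    component. Returns (assignment, loads): assignment maps each
    component index to a machine, loads maps machines to total time."""
    loads = {m: 0.0 for m in machines}
    assignment = {}
    # LPT-style: place the largest components first for a better balance
    order = sorted(enumerate(components),
                   key=lambda c: -min(c[1][1].values()))
    for idx, (job, times) in order:
        # pick the eligible machine whose load after the assignment is lowest
        best = min(times, key=lambda m: loads[m] + times[m])
        assignment[idx] = best
        loads[best] += times[best]
    return assignment, loads

comps = [("J1", {"M1": 4, "M2": 5}),
         ("J1", {"M2": 3}),          # only eligible on M2
         ("J2", {"M1": 6, "M2": 7})]
assignment, loads = greedy_schedule(comps, ["M1", "M2"])
print(assignment, max(loads.values()))  # makespan of this schedule
```

Goal 3 could then be folded in by adding a penalty to the `loads[m] + times[m]` score whenever a machine holds none of the job's other components, which makes the multi-objective trade-off explicit in a single scoring function.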
I also tried to categorize my problem based on the four terms above. To do that, one must know the characteristics of my problem:
Jobs with their components may arrive and disappear at any time, i.e., they can be deleted or added. Machine breakdowns can also occur. However, preemptions are not allowed, and once a job is in production it cannot be deleted anymore. So the scheduler does not actually know how many jobs there will be during the day, which is not that big of a deal, since its task is only to allocate the components anyway. This dynamic environment pushes the problem to the stochastic side. However, the processing times of a job (which depend on the machine and on the job) are known as soon as the job arrives. This differs from all definitions of stochastic scheduling I've read so far, where the times are only known when the jobs are done. So my question is: is the problem of a deterministic or a stochastic nature?
My long-term goal is to have a reactive scheduler with some kind of recovery strategy. Based on the paper "Reactive Recovery of Job Shop Schedules - A Review" by A. S. Raheja and V. Subramaniam, I would categorize my scheduling problem as reactive scheduling (on the stochastic side).
What are your thoughts on the two questions?
- Simon Schröder added an answer: Are programming language performance benchmarks relevant for multi-platform applications?
So the question is: do benchmarks remain relevant when considering multi-platform applications?
Most of the time, benchmarks will give you skewed results. Two different computers will produce different results: how will you make sure that neither the OS, nor the CPU architecture, nor the amount and speed of RAM influences the outcome? If you don't use identical hardware, it is impossible to tell what you are actually benchmarking. What you can benchmark is how your solution scales with problem size (but only on one platform). If you have different kinds of hardware, you need to set a baseline for every platform independently. As a baseline you can, for example, provide another script written in the same language; this sets a baseline on all platforms, and you then have a chance of comparing your script to this baseline on each platform separately.
I am not sure this approach makes sense, as I have not fully thought the concept through. But for now I am really skeptical when I see comparisons across different platforms.
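The per-platform baseline idea above can be sketched as follows: time both the candidate and a fixed baseline workload on the same machine and report only their ratio, which is the number that stays meaningful across platforms. The workloads below are placeholders:

```python
import time

def best_time(fn, repeats=5):
    """Best-of-n wall-clock timing; crude, but less noisy than a single run."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - t0)
    return best

def relative_speed(candidate, baseline, repeats=5):
    """> 1.0 means the candidate is slower than the baseline on this
    machine. Compare these ratios across platforms, never raw seconds."""
    return best_time(candidate, repeats) / best_time(baseline, repeats)

# placeholder workloads standing in for 'your script' and 'the baseline script'
ratio = relative_speed(lambda: sum(i * i for i in range(200_000)),
                       lambda: sum(range(200_000)))
print(ratio)
```

This doesn't eliminate the OS/CPU/RAM confounders the answer warns about, but it confines them: each ratio is measured entirely within one platform.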
- Mariusz Janiak added an answer: What open-source frameworks implement the publish-subscribe approach over IP/UDP and suit small embedded systems?
There is a commercially available implementation of the RTPS protocol, RTI Connext Micro, which scales well to small microcontroller-based embedded systems. There is also the open-source project ORTE (http://orte.sourceforge.net), which implements the RTPS protocol, but I haven't found a port of it for microcontroller-based systems.
We are relying on the standard Ethernet hardware available in PCs and microcontrollers. At the MAC level, RTnet implements a software TDMA (Time Division Multiple Access) discipline, so the arbitration problem has already been solved. What we are looking for now is a middleware protocol that implements the pub/sub paradigm and can operate over UDP/IP. Our PCs have 1 Gbit NICs, but the microcontrollers have only 100 Mbit. Most of our systems typically operate with a 1 ms period, so the latency should be no more than tens of microseconds.
- Majid Imani added an answer: Is it easier and faster to implement data structures in Java or C++?
Students often ask whether it is easier and faster to implement data structures, such as lists, stacks, queues, trees and forests, in Java or C++.
Which is your own choice: Java or C++?
Your answers will be appreciated. Thank you in advance.
I've recently implemented some cache-oblivious algorithms and data structures in both languages (Java and C++). Based on my experience, C++ is more powerful than Java for implementing cache-oblivious data structures (support for large datasets and closeness to the hardware are some of the reasons).
- Subburaj Ramasamy added an answer: Which testing technique best covers criteria like code, path and statement coverage?
There are many types of testing techniques available; do all of them cover code, boundary and statement coverage?
You should also carry out data-flow testing.
- Peter T Breuer added an answer: Why can I not execute my software when logged in directly, but have no problem when executing it via ssh?
The problem is solved now. The reason: there is a thread that is created and runs concurrently with my reading process, whose function is to signal whether the computer is ready for the reading process. If this thread has not performed its key signalling step before my reading function runs, the main software is not able to do the reading, and it stops.
I simply make the computer sleep for 1 microsecond before the reading process starts, and the problem is solved.
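For what it's worth, a fixed sleep only hides the race; a more robust pattern is to let the signalling thread set an explicit readiness flag that the reader blocks on. A minimal sketch in Python (in the original C++, a std::condition_variable or std::atomic flag would play the same role; the function names here are placeholders):

```python
import threading

ready = threading.Event()

def hardware_init():
    """Stands in for the thread that prepares the module for reading."""
    # ... set up the hardware here ...
    ready.set()                       # signal: the reader may start

def read_data():
    # Block until init has signalled, instead of sleeping and hoping.
    # The timeout turns a hung init into an explicit error.
    if not ready.wait(timeout=5.0):
        raise TimeoutError("hardware never became ready")
    return "data"                     # placeholder for the real readout

t = threading.Thread(target=hardware_init)
t.start()
print(read_data())
t.join()
```

Unlike the 1 µs sleep, this works no matter how long initialization takes, which also explains why the failure appeared only under certain login paths: different scheduling changed who won the race.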
I wrote C++ software that reads data out from hardware modules to a PC via Gigabit Ethernet.
I have a problem executing the software when logging in to the Linux computer directly. This is the situation:
- When I log in to the computer from another one via ssh, I can execute my software successfully: the PC can access the module and read out data.
- When I log in to the computer directly, I cannot execute my software successfully: the PC can access the module but usually cannot read data out (only occasionally does it succeed).
This behavior is strange to me! I discussed it with my lab mates; they only have problems when they log in with ssh, and everything is normal with a direct login.
Of course, I want my software to run both with a direct login and via ssh.
Has anybody experienced this problem?
I would appreciate any shared experiences, suggestions and advice.
Thanks a lot.
- Emna Ben Ayed added an answer: What are analytical and empirical techniques for evaluating a technique?
I have proposed a testing technique and made some comparisons with experimental data from existing works (empirical evaluation). But it seems insufficient for publishing a research paper, so I want to add some analytical evaluation.
Is that an acceptable evaluation criterion?
How do I perform an analytical evaluation?
I have also made some qualitative comparisons with existing works. Is that worth anything?
Hi, I do not know your case exactly, but I think some papers published in the Journal of Testing and Evaluation can give you an answer.
- Alberto Sampaio added an answer: How do I construct a benchmark, taxonomy or classification in software engineering?
Are there any systematic guidelines for that?
Is there any difference among the three terms?
Emigdio, that's another question, and the answer can be found in the SWEBOK (see the answer above by James Moore). I would only add that there is also a non-free printed book about the SWEBOK on Amazon.
- Rana Majumdar added an answer: Is it possible to connect genetic algorithms with software maintainability?
How can a genetic algorithm be used for checking agile software maintainability?