
# Computing - Science topic

Explore the latest questions and answers in Computing, and find Computing experts.

Questions related to Computing

How do we select good journals? Is the Journal of Engineering, Computing, and Architecture a good journal? Does UGC approve it?

Currently, data is available in forms of text, images, audio, video and other such forms.

We are able to use mathematical and statistical modelling to identify different patterns and trends in data; machine learning, a subfield of AI, applies such models to perform different decision-making tasks. The data can be visualized in a variety of forms for different purposes.

Data Science is currently the ultimate state of Computing. For generating data we have hardware, software, algorithms, programming, and communication channels.

But, what could be next beyond this mere data creation and manipulation in Computing?

After an elaborate review of faculty development programmes, I compiled the list below. The review stressed the need for true, comprehensive, and complete space-time analysis to address semantic and space-time scales, using scalable algorithms and infrastructure for large volumes of data. Fourth, means were thought necessary to represent, utilize, and analyze data and information quality, reliability, and confidence. This implies the need to determine (1) what information is needed by particular users and the appropriate evaluation methods, and (2) how to make information on certainty or uncertainty useful, and to support reasoning with uncertainty and with heterogeneous kinds of information. Lastly, the group discussed the need to develop adaptive visual analytic methods that support a range of users, uses, and devices, across a range of interaction science issues: human-algorithm interaction, speed of response to support interaction, real-time sensitivity assessment, and uncertainty representation.

This information can be strengthened provided we can get more analytical detail:

1. Training for Social Contentedness and Inspiration

2. Environmental Geo-technology

3. Big data Analytics

4. Electric Vehicles

5. IoT (Internet of Things)

6. Waste Technology

7. Computer Science and Biology

8. Novel Materials

9. Social Enterprise Management

10. Green Technology and Sustainability Engineering

11. Telemedicine

12. Data Sciences

13. Control Systems and Sensor Technology

14. Mural

15. Wearable Devices

16. Smart Cities

17. Artificial Intelligence

18. Robotics

19. 3D Printing and Design

20. Photonics

21. Engineering Law

22. Blockchain

23. Cyber Security

24. Machine Learning and Pattern Recognition

25. Quantum Computing

26. Emotional Intelligence

27. Augmented and Virtual Reality

28. Systems Engineering

29. Innovation Management

30. Artificial Intelligence and Robotics

31. Lab on Chip

32. Gamification

33. Data Science

34. Leader Excellence and Innovation Management

35. Sustainable Engineering

36. Immersive Virtual Reality

37. Design Thinking

38. Student Centered Teaching Learning Methods and Strategies for higher Education

39. Personal Effectiveness

40. Electronics and Computer Engineering

41. Electric and Computer Engineering

42. Technology Management

43. 3D printing and design

44. Energy Engineering

45. Management Information System

46. Robotic Process Automation tools and Techniques

47. Advances in manufacturing

48. Biomedical Instrumentation

49. Construction Technology

50. Graphic Design

51. Advanced Communication Engineering

52. Event Management

53. Advances in 3d printing and future scope

54. Capacity building

55. Productivity enhancement

56. Team building and coordination

57. Heritage management

58. Synthetic biology

59. Precision Health technology

60. Manufacturing and Monitoring

61. Global Navigation Satellite System

62. Operation Management

63. CFD-Computational fluid dynamics

64. Hybrid Machining Solutions for typical complex Engineering Applications

65. Apparel Design

66. Human Centric Computing

67. Alternate Fuels

Edge computing is a research hotspot, but I cannot find any open dataset for it. Does anybody know of any big dataset available in the literature for edge computing?

I want to know which approach is better: computing the RMSE with scaled data or with unscaled data. Please share your ideas.

Thanks
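As a quick illustration (with made-up numbers, not data from the question): when the same min-max scaling is applied to both the observed and predicted series, the scaled RMSE is simply the unscaled RMSE divided by the scaling range, so model rankings are unchanged; only the interpretability of the units differs.

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Hypothetical observed and predicted values in the original units.
y_true = [100.0, 150.0, 200.0, 250.0]
y_pred = [110.0, 140.0, 210.0, 240.0]

# RMSE in original units: interpretable in the units of the variable.
print(rmse(y_true, y_pred))  # 10.0

# Min-max scaling both series with the SAME parameters (here, from y_true).
lo, hi = min(y_true), max(y_true)
scale = lambda xs: [(x - lo) / (hi - lo) for x in xs]
print(rmse(scale(y_true), scale(y_pred)))  # 10.0 / 150, i.e. ~0.0667
```

The scaled value is dimensionless and easier to compare across targets; the unscaled value is easier to interpret physically.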

Question-4: How mature are our computing platforms and programming languages for enabling autonomous software generation mostly at run-time?

I have an idea to reduce the error rate in quantum computers. We have tried simulating digital gates without proper success, so we need a revolution. I have suggested that we simulate the bases of DNA instead.

I am planning on purchasing dual 24-core Xeon Gold processors. Would that be enough for analysis of processed data from an scRNA-seq analysis?

I am working on Forest Canopy Density (FCD). There is a parameter called the Scaled Shadow Index (SSI) used in computing FCD. In most papers I found, the SSI is calculated by linearly transforming the Shadow Index (SI). I have computed the Shadow Index, but I am not sure how to compute the Scaled Shadow Index; kindly help me out. Moreover, if I am using Landsat 5 and 8 surface reflectance images for FCD mapping, and the reflectance values range from 0 to 1, is it still mandatory to normalize these surface reflectance data before calculating vegetation indices?
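For what it is worth, in the FCD literature the "linear transformation" is usually a min-max stretch of SI onto a fixed 0-100 range. The sketch below uses hypothetical SI values and assumes the scene minimum and maximum are the stretch endpoints; check your source paper for the exact convention.

```python
def scaled_index(values, new_min=0.0, new_max=100.0):
    """Linear (min-max) stretch of an index onto a fixed range, as
    commonly used to turn the Shadow Index (SI) into the Scaled
    Shadow Index (SSI) in FCD mapping."""
    lo, hi = min(values), max(values)
    return [new_min + (v - lo) * (new_max - new_min) / (hi - lo) for v in values]

si = [0.12, 0.30, 0.45, 0.60]   # hypothetical per-pixel SI values
ssi = scaled_index(si)          # stretched onto 0..100
print(ssi)                      # approximately [0.0, 37.5, 68.75, 100.0]
```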

Hi

I am developing a method for computing fuzzy similarity in WordNet. Previous work mainly focused on the similarity of synsets (concepts).

I am searching for a standard baseline for comparison. My question is: what is the standard baseline for computing the similarity of words in WordNet?

Thank you.
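To make taxonomy-based similarity concrete, here is a self-contained sketch of Wu-Palmer similarity over a tiny, entirely hypothetical taxonomy fragment; a real baseline would use the actual WordNet hierarchy (e.g., NLTK's wordnet interface with `path_similarity` or `wup_similarity`).

```python
# Toy taxonomy: child -> parent (hypothetical mini-WordNet fragment).
PARENT = {
    "dog": "canine", "wolf": "canine", "canine": "carnivore",
    "cat": "feline", "feline": "carnivore",
    "carnivore": "mammal", "mammal": "animal",
}

def path_to_root(node):
    """Walk child->parent links up to the root."""
    path = [node]
    while node in PARENT:
        node = PARENT[node]
        path.append(node)
    return path  # e.g. dog -> canine -> carnivore -> mammal -> animal

def wu_palmer(a, b):
    """Wu-Palmer similarity: 2*depth(LCS) / (depth(a) + depth(b)),
    with depth counted in nodes from the root (root depth = 1)."""
    pa, pb = path_to_root(a)[::-1], path_to_root(b)[::-1]  # root-first
    lcs_depth = 0
    for x, y in zip(pa, pb):
        if x != y:
            break
        lcs_depth += 1
    return 2.0 * lcs_depth / (len(pa) + len(pb))

print(wu_palmer("dog", "wolf"))  # LCS is "canine": 2*4/(5+5) = 0.8
print(wu_palmer("dog", "cat"))   # LCS is "carnivore": 2*3/(5+5) = 0.6
```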

I am trying to implement Multi-access Edge Computing in Vehicular Networks (V2X). I want to know how to integrate or implement MEC (probably following the ETSI standards) in V2X (coded in NS-3).

I would like to develop a system that runs on a microcontroller in a remote controlled submarine. While the submarine moves freely in the water, it should always calculate its exact position using data from an accelerometer, gyroscope and magnetometer.

For the distance measurement, orientation and position I have already gained basic knowledge about the Kalman filter, quaternions and AI systems.

The calculation must use as little computing power as possible while still being as accurate as possible.
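Since computational cost is the constraint here, a complementary filter is a common, much cheaper alternative to a full Kalman filter for orientation estimation on microcontrollers. The sketch below uses fabricated sensor values (a constant gyro bias and a noise-free accelerometer) purely to show the recursion:

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse gyro readings (fast but drifting) with accelerometer-derived
    angles (noisy but drift-free) using a complementary filter -- one
    multiply-accumulate per sample, far cheaper than a Kalman filter."""
    angle = accel_angles[0]
    out = []
    for rate, acc in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
        out.append(angle)
    return out

# Hypothetical data: stationary vehicle, gyro with a +0.5 deg/s bias,
# accelerometer reporting the true angle (0 deg) without noise.
est = complementary_filter([0.5] * 2000, [0.0] * 2000)
print(round(est[-1], 3))  # settles near 0.245 deg, the bias-induced offset
```

Tuning `alpha` trades drift rejection against accelerometer noise; note that position (as opposed to orientation) cannot be recovered drift-free from an IMU alone.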

Recently, I used a dataset with over 6,298 rows after cleaning (the Trending YouTube Videos dataset from Kaggle). While training, the model-building process failed several times. I then used the Auto Model feature, which took a long time to train and evaluate, and its final results exceeded 1 GB. How can I deal with such large data? Is there any way to minimize computing time and storage space so as to use RapidMiner effectively?

Computing the limit of a function at a point is a very important task in mathematics, and the concept is applicable across science and engineering. Who can compute the limit of the function F at (0,0)? See the attached file!

Thanks!

Can federated learning and edge computing be combined? The answer is yes. So what should we do to improve data security and privacy protection in the modelling done by each participant (terminal)?
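For concreteness, the server-side aggregation step of federated averaging (FedAvg) can be sketched in a few lines; the client weights and sample counts below are made up. Raw data stays on each edge device, and only model parameters travel to the aggregator.

```python
def fedavg(client_weights, client_sizes):
    """Federated averaging: combine locally trained parameter vectors,
    weighted by each client's local sample count."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Hypothetical local model parameters from three edge clients.
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]  # local sample counts
avg = fedavg(weights, sizes)
print(avg)  # [3.5, 4.5]
```

In practice one would add secure aggregation or differential privacy on top, since plain parameter sharing can still leak information.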

Can you have any measuring fact technique?

In general, the performance of a system is directly proportional to the system configuration. However, in the context of IoT, devices have limited computation power, storage, energy, etc.

Hello everyone.

Can anyone help me with the computation of the ray parameter (p) for seismological phases?

If code is available in MATLAB, it would be most useful!

We have a Computer Science and Communication journal at the college (Journal of Computing and Communication). We aim to publish research articles in all the disciplines of Computer Science and Communication, with two issues per year.

**Can anyone tell me the ways and means to index the journal in Google Scholar?**

Can anyone point me to resources on how to evaluate the computing curriculum at both the K-12 and college levels? Thank you.

I am doing a master's in Mobile Edge Computing, so I want to know the best simulation tools for simulating the model, and how to use them.

I have multiple measurements related to the object of interest (stationary). I want to identify the type of object.

I am computing a convex sum to strengthen my estimate of the type of object:

s(i) = weight * s(i-1) + (1 - weight) * measurement(i)

I want to know how to assign the weights to the multiple measurements so that the estimate converges.
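The recursion above is an exponentially weighted moving average. A small sketch with fabricated measurements shows the basic behaviour: the weight controls how strongly old estimates persist, trading responsiveness for smoothness.

```python
def running_estimate(measurements, weight=0.9):
    """Exponentially weighted convex combination:
    s(i) = weight * s(i-1) + (1 - weight) * m(i).
    A larger weight smooths more but converges more slowly;
    a smaller weight trusts each new measurement more."""
    s = measurements[0]
    history = [s]
    for m in measurements[1:]:
        s = weight * s + (1 - weight) * m
        history.append(s)
    return history

# Hypothetical noisy measurements of a constant property (true value 10).
data = [10.4, 9.7, 10.1, 9.9, 10.2, 10.0, 9.8, 10.1] * 25
est = running_estimate(data)
print(round(est[-1], 2))  # settles close to 10
```

For convergence with a stationary quantity, any fixed weight in (0, 1) gives a bounded, mean-tracking estimate; using a decaying weight such as 1/i turns the recursion into the running sample mean.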

Program execution time depends on the number of instructions as well as on the computing power of the machine. Does anyone have a recommendation for where to find an analytical model for estimating program execution time from program instructions and CPU, RAM, and disk characteristics?

For example, if we know the number of instructions, the CPI (cycles per instruction), and the hardware specifications of the CPU, RAM, and disk, how do we calculate (estimate) the program execution time?
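The simplest analytical model is the classic CPU performance equation; the numbers below are hypothetical. Memory and disk effects are usually folded in as extra stall cycles added to the base CPI.

```python
def exec_time_estimate(instructions, cpi, clock_hz):
    """Classic CPU performance equation:
    time = instruction_count * cycles_per_instruction / clock_rate.
    Memory/disk stalls can be modelled by inflating the effective CPI,
    e.g. cpi_eff = cpi + miss_rate * miss_penalty_cycles."""
    return instructions * cpi / clock_hz

# Hypothetical program: 2e9 instructions, CPI 1.5, on a 3 GHz CPU.
t = exec_time_estimate(2e9, 1.5, 3e9)
print(t)  # 1.0 second
```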

I have a multi-objective minimization problem and the final objective function is written as: F = f1 + f2. However, f1 and f2 are the error functions of two different quantities: orientation error (degrees) and position error (meters). I would like to minimize both of them at the same time.

The question is: how can I properly scale, or make dimensionless, f1 and f2 before computing F, in order to make them comparable (so as not to add oranges to apples)?

The main difficulty is that I do not know a-priori their range of variation (i.e. orientation and position errors). On the contrary, the ranges of variation of the "original" quantities (i.e. orientation and position) are known from the experimental data.

How is this problem commonly solved in the optimization practice?

Thank you,

Marco
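One common recipe in multi-objective practice, when the ranges of the underlying quantities are known, is to min-max normalise each error term by its possible range before summing. The ranges and weights below are placeholders, not values from the question:

```python
def scale_objective(value, lo, hi):
    """Min-max normalise an objective using the known range of the
    underlying quantity, so that errors in degrees and metres become
    comparable dimensionless numbers in [0, 1]."""
    return (value - lo) / (hi - lo)

# Hypothetical known ranges taken from the experimental data.
ORI_RANGE = (0.0, 180.0)   # orientation error, degrees
POS_RANGE = (0.0, 2.5)     # position error, metres

def combined_objective(f1_deg, f2_m, w1=0.5, w2=0.5):
    """Weighted sum of dimensionless objectives; the weights encode
    the relative importance of orientation vs position."""
    return (w1 * scale_objective(f1_deg, *ORI_RANGE)
            + w2 * scale_objective(f2_m, *POS_RANGE))

print(combined_objective(90.0, 1.25))  # 0.5*0.5 + 0.5*0.5 = 0.5
```

When ranges are genuinely unknown, alternatives include normalising by values observed at the first iteration, or avoiding the weighted sum entirely with a Pareto-based multi-objective solver.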

- I read this paper (DOI: 10.1021/jacs.8b01543) and I have the following problems.
- 1. The structure examined in the article is two-dimensional. Given that the k-points are set to 4*4*1 but the unit cell is shown as bulk, has a three-dimensional unit cell been used in the computing section?
- 2. Considering that a two-dimensional structure can be created by cutting a unit cell, adding a vacuum, and repeating, why is a two-dimensional structure not used in the calculation section?

Dear colleagues,

I need to compute the charge transfer integral J(RP), the spatial overlap S(RP), and the site energies of the dimer, H(RR) and H(PP) (two identical molecules, R and P, specifically oriented), formulated as shown in the picture, from J. Phys. Chem. B, 2009, v. 113, p. 8813.

Could you specify the keywords of the Gaussian 09 to do this?

Thanks in advance,

Andrey Khroshutin.

Accounting traditionally is presented as describing efficiently flux (what comes in, and goes out) and stock (what is held, at a given time), and as debit and credit. It is also about matching the terms of an exchange.

How can we move the model beyond the basic number-based description, into more data-rich (including metadata, descriptors, etc) frameworks, while benefiting from the deep and long experience of accounting over human history?

With Matrices of sets [1], a first endeavour was made to describe objects rather than numbers attached to them (price, quantity, measurements and features).

With Matrices of L-sets [2] we are going one step further, distinguishing actual assets (as classical sets) and wish lists, orders, needs, requirements which are not yet owned or available. We show how an operational and computable framework can be obtained with such objects.

References:

[1]

Presentation Matrices of Sets, complete tutorial with use cases

[2]

Preprint Matrices of L-sets -Meaning and use

In recent years, machine learning has been used in 2D MPM owing to its excellent computing performance in processing multi-source, non-linear geoscience datasets. However, machine learning is still rarely reported in 3D MPM. What limits the application of machine learning in 2D/3D mineral potential mapping?

A repository to aggregate a set of reproducible and reconfigurable codes and notebooks for testing various task placement policies for edge and fog server networks.

The simulation models are based on two of the most powerful python-based simulation modelling frameworks, namely salabim and simpy.

The systems that are modelled cover the basic types of task placement problems in edge computing servers. The models are useful for managing a network of edge computing servers.

There are animation and GUI options which are indispensable in simulation modelling.

The uploaded models are templates for building simulation models for a variety of edge network policies. Researchers working on task and server placement problems in edge computing should find the models useful.

I am computing a Taylor series expansion in which the distance of an object from the origin (0,0) is computed. I express the distance in terms of the object's position plus velocity*time. I only wrote the first two terms of the Taylor series expansion.

Please check whether the expansion in the attached document is correct.

I am faced with the problem of the computational cost of the step time during a sequentially coupled thermomechanical analysis in Abaqus/Standard.

I have nodal temperatures for 2340 s from the heat-transfer analysis, predefined in step 2.

For step 1, the mechanical time period is 1 s.

For step 2, the predefined time period is 2340 s, but the software did not complete the job even in 3 days because this second time period is huge.

If there is another method, please suggest it to me.

Actually, I need high computing power to execute my simulation, so I contacted ANSYS and emailed them for a free trial as described on their website, but I haven't got any reply. How can I get the ANSYS Cloud free trial? Is there anybody here on the free trial of ANSYS Cloud?

I basically haven't found any relevant research, but I wonder: for data from surgery or implantable robots, is there a requirement for real-time inference/computing? I haven't found any evidence to support my thoughts.

Hi everyone,

I have a question about calculating the RDF with LAMMPS. I want to know what exactly LAMMPS does to compute g(r); in particular, on which atoms does LAMMPS take the RDF? Are we able to identify them in our sample?

I have a panel data set of 10 years, and I am facing a problem computing the Blau value for my dummy variable CEO duality, measured as "1" if the person is both the CEO and the chairman of the board and "0" otherwise.
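For reference, Blau's heterogeneity index is 1 − Σ p_i² over the category proportions; for a binary variable it ranges from 0 (everyone in one category) to 0.5 (a 50/50 split). A sketch with a fabricated firm-year sample of the duality dummy:

```python
def blau_index(categories):
    """Blau's heterogeneity index: 1 - sum(p_i^2) over the
    proportions p_i of each category in the sample."""
    n = len(categories)
    props = [categories.count(c) / n for c in set(categories)]
    return 1.0 - sum(p * p for p in props)

# Hypothetical firm-year observations of the CEO-duality dummy.
duality = [1, 1, 0, 0, 0, 1, 0, 0, 1, 0]   # 4 ones, 6 zeros
print(blau_index(duality))  # 1 - (0.4^2 + 0.6^2), about 0.48
```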

Hi, I have questions about HLM analysis.

I got results saying '**Iterations stopped due to small change in likelihood function**'.

First of all, is this an error? I think it needs to keep iterating until it converges. How can I make it keep computing without this message? (The type of likelihood was restricted maximum likelihood, so I tried full maximum likelihood but got the same message.) Can I fix this by setting a higher '% change to stop iterating' in the iteration control settings?

Hi, I am a software engineering student currently in my final year of the degree program. I am trying to find a research topic based on **edge computing**. Your recommendations are welcome!

Can someone guide me on "*Investigation of Security Enhancement Techniques in Edge Computing using Deep Learning Models*"? Any base paper related to this topic would be helpful.

During Computing undergraduate studies, several different software projects must be submitted. In academic institutes, software development needs to follow a prescriptive process model. In group projects, the stages of the work can be shared among the members; in individual projects, as a single member, no work can be shared, but the practical problems can be solved by managing the process groups efficiently.

Hence, the selection of the prescriptive process model matters even more for individual projects than for group projects.

**When selecting a prescriptive process model, what should an undergraduate consider?**

Disclaimer: the discussion targets the perspectives of two types of participants. Software engineering students: please analyse the situation with your knowledge and answer. Software engineering professionals/scientists: please use your experience and offer advice. Ideas from both groups, as well as from the interested audience, are warmly welcome.

I'm looking to survey faculty on their research computing needs in order to advocate for researchers, and am looking for good questions, or a good example.

In some of the existing literature that solves American-style options by introducing certain constraints, the early-exercise feature and the Greeks are either not computed accurately or unavailable.

I am seeking insight and would love to inquire about various numerical, analytical, and/or analytical-approximation techniques for computing the early-exercise feature in a high-dimensional American option pricing problem.

User mode and kernel mode are two processing states of the operating system. Please suggest a very simple example that explains the differences, and related functionality such as system calls and interrupts, to a novice learner.

Further, please explain how to map the example onto the subject.

Disclaimer: the discussion targets the perspectives of two types of participants. Operating systems students: please analyse the functionalities and construct an answer. Computing professionals/teachers: please use your experience and offer answers as advice. Ideas from both groups, as well as from the interested audience, are warmly welcome.
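As one possible novice-friendly example (a sketch, not the only way to teach it), the Python snippet below contrasts pure user-mode computation with operations that must trap into the kernel through system calls:

```python
import os
import tempfile

# User mode: ordinary computation, no operating-system involvement.
total = sum(range(10))                      # runs entirely in user space

# Kernel mode: file I/O needs privileged hardware access, so each call
# below traps into the kernel via a system call, the kernel runs the
# privileged code, and control returns to the program in user mode.
fd, path = tempfile.mkstemp()               # open() syscall under the hood
os.write(fd, b"hello from user space")      # write() syscall
os.lseek(fd, 0, os.SEEK_SET)                # lseek() syscall
data = os.read(fd, 64)                      # read() syscall
os.close(fd)                                # close() syscall
os.remove(path)                             # unlink() syscall

print(total, data)  # 45 b'hello from user space'
```

A nice classroom extension is to run the script under `strace` (on Linux) so learners can see each user-to-kernel transition listed explicitly.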

Hi all, I'm using CPLEX for solving VRPTW (vehicle routing problem with time window) and observe there is a huge computing time difference even when I change the problem size by just 1. By "problem size", I mean number of nodes inside problem.

For example, a 20-node instance took only 20 seconds to solve, but a 19-node instance took more than 1 hour. I understand VRPTW is NP-hard, so such phenomena are expected.

Still, the gap is too big. I wonder if there is any technique to make computing time more consistent with problem size?

Hey, I want to develop an energy-efficient SDN and integrate it with edge-computing-based IoT applications. Which emulator/simulator would you recommend for integrating SDN, edge computing, and IoT devices? Please suggest the best open-source emulator/simulator.

I am processing data to establish norms for a 7-point Likert scale. I was advised to use z-norms, but when computing the standard scores in SPSS I got values of +2.15 and -1.96 (instead of ±3), and checking the data showed that they do not follow normality. In this case, how do I set the norms for interpretation?

[1] In this experiment, we use a very large virtual world, with a dimension of 30*30 units, total number of avatars is equal to 25000, and the number of servers, 16. The radius of the Area of interest of each avatar is equal to 0.5.

[2] To generate a simulation environment, we randomly position 100 cloudlets and 500 players on a 1,000 meter square grid. Players are partitioned into 20 regions

[1] An efficient partitioning algorithm for distributed virtual environment systems | IEEE Journals & Magazine | IEEE Xplore

[2] Delay-Sensitive Multiplayer Augmented Reality Game Planning in Mobile Edge Computing | Proceedings of the 21st ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems

Which of the following departments should a Faculty of Computing have? Or suggest any others you like.

Computer Science

Information Systems / Information Technology

Software Engineering

Computer Engineering

Data Science

Cyber Security

Bioinformatics

Robotics

As we know, machine learning and data mining techniques under a "big data framework" have been booming in the fields of intelligently controlled robots, unmanned vehicles, 5G communications, and AI chips, thanks to rapid advances in computing capabilities over the past few decades.

Currently, as far as I can see, quantum computing and cloud computing techniques are also possible ways to advance industrial processes further.

If we humans manage huge breakthroughs in these two technical directions and obtain theoretically infinite computing speed and resources, does it mean that all industrial processes could be monitored and operated at a supervised level in real time (more intelligent, safer, more reliable and available)?

The problems of "improving algorithms to reduce computing burdens" and "convenience of online deployment" would then probably no longer be the most important issues for our control strategies, fault diagnosis, and fault tolerance.

What about your ideas for this?

I sincerely hope who are interested in this topic can give your valuable comments!

I have derived a formula for computing a special Hessenberg determinant; see the picture uploaded here. My question is: can this formula be simplified more concisely, more meaningfully, and more significantly?

Three mediators are included, and the product-of-coefficients approach is used to compute the value of the indirect effect through each mediator. After computing the value of each indirect effect, a bootstrapping test will be conducted to calculate its standard error in order to study its significance.

How can I apply this last step, i.e., use the bootstrapping test to calculate the standard error of each indirect effect and study its significance?

Sorry for my language!
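As an illustrative sketch only (simple slopes with one mediator; a full mediation model would regress Y on both M and X, and the data below are fabricated), the bootstrap SE of an indirect effect a*b can be computed like this:

```python
import math
import random

def slope(u, v):
    """OLS slope of v regressed on u."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = sum((a - mu) ** 2 for a in u)
    return num / den

def bootstrap_indirect_se(x, m, y, n_boot=2000, seed=1):
    """Bootstrap SE of the indirect effect a*b, where a = slope of
    M on X and b = slope of Y on M: resample cases with replacement,
    recompute a*b each time, and take the SD of the replicates."""
    rng = random.Random(seed)
    n = len(x)
    effects = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        xs, ms, ys = [x[i] for i in idx], [m[i] for i in idx], [y[i] for i in idx]
        effects.append(slope(xs, ms) * slope(ms, ys))
    mean = sum(effects) / n_boot
    return math.sqrt(sum((e - mean) ** 2 for e in effects) / (n_boot - 1))

# Hypothetical X -> M -> Y data with mild noise.
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
m = [1.2, 2.1, 2.9, 4.2, 5.1, 5.8, 7.3, 8.1, 8.8, 10.2]
y = [2.5, 4.0, 6.2, 8.1, 10.3, 11.9, 14.4, 16.1, 17.8, 20.5]
print(bootstrap_indirect_se(x, m, y))  # small SE -> stable indirect effect
```

Percentile confidence intervals come from the same replicates: sort `effects` and read off the 2.5th and 97.5th percentiles.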

I need to know the computing requirements for SAR time-series analysis, and how many SAR rasters (images) should be analyzed.

I'm computing climate indices for precipitation and maximum and minimum temperatures using the RClimDex program. However, in the case of precipitation, the application displays the following error:

" Error in daynormm[(i - i1):(i + i1),]: subscript out of bounds"

What is the best way to fix this error?

I'm looking forward to hearing your responses soon.

Thank you in advance for your assistance.

Let's say we have an undirected graph with only weighted nodes/vertices (representing an attribute/measure) and unweighted edges (where all nodes are fully connected).

Are there any theorems for representing and computing the shortest path that traverses at least 2 nodes?
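One standard trick (a general reduction, not a theorem specific to this question) is to fold each node's weight into every edge entering it and then run ordinary Dijkstra; with non-negative node weights on a complete graph, the direct edge between two nodes is always optimal. The sketch below uses a made-up 4-node example:

```python
import heapq

def dijkstra_node_weighted(node_w, edges, start, goal):
    """Shortest path where cost lives on nodes rather than edges:
    pay the start node's weight up front, then pay the weight of
    every node entered, and run plain Dijkstra on that cost."""
    adj = {u: [] for u in node_w}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = {u: float("inf") for u in node_w}
    dist[start] = node_w[start]
    pq = [(dist[start], start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > dist[u]:
            continue
        for v in adj[u]:
            nd = d + node_w[v]          # cost of entering node v
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Hypothetical fully connected 4-node graph with node weights.
w = {"a": 3, "b": 1, "c": 5, "d": 2}
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("b", "d"), ("c", "d")]
print(dijkstra_node_weighted(w, edges, "a", "c"))  # direct a -> c: 3 + 5 = 8
```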

When using the MOE software for computing ligand interactions, which file format is best suited to save/export the interaction file: PNG, JPG, BMP, EMF, or SVG?

I want to save in a format that is accepted by academic research papers. Please advise.

Dear colleagues,

The idea that a scientific or technical field can be identified by its language was a relevant topic in the work of Jürgen Habermas. In computing fields, however, this difference may be mild or fuzzy.

I have designed a small instrument to measure the domain difference between computer science and software engineering. I would appreciate it if you answered it ( http://shorturl.at/akFS7 ) or gave me your opinion about it. Depending on the number of answers, I will post the results here.

Thank you very much

Homomorphic encryption schemes allow users' data to be protected whenever they are sent to the cloud, while keeping some of the useful properties of cloud services, such as searching for strings within a file.
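A toy way to see the homomorphic property is textbook RSA, which is multiplicatively homomorphic (and, to be clear, insecure as written; real deployments use schemes such as Paillier or CKKS). The tiny key below is the standard textbook example, far too small for any real use:

```python
# Textbook RSA toy key: n = 61 * 53 = 3233, e = 17, d = 2753.
n, e, d = 3233, 17, 2753

enc = lambda m: pow(m, e, n)     # encrypt: m^e mod n
dec = lambda c: pow(c, d, n)     # decrypt: c^d mod n

# Homomorphic step: a server multiplies two CIPHERTEXTS without ever
# seeing the plaintexts, yet the product decrypts correctly.
c1, c2 = enc(6), enc(7)
c_prod = (c1 * c2) % n
print(dec(c_prod))               # 42 == 6 * 7
```

Fully homomorphic schemes extend this idea to both addition and multiplication, which is what enables operations like encrypted search.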

Given a computerized graph structure for synthesizing and displaying data on a region's ecosystem-economic system, how could we create a causal network of the synergistic impact mechanisms among certain climate-related factors?

How can we also tackle the various synergistic effects in a thematic framework, for instance by applying Mathematica-based graph modelling?

I am looking to establish theoretical frameworks for such research on one-to-one computing projects.

For computational methods in FEA/FVM and partial differential equations using C++.

Also, some guidance on the learning path (how to get along) would be appreciated.

While computing the numerical solution of a PDE, if the exact solution is not known, we use a double-mesh technique to represent the exact solution. Are there other options to represent the exact computed solution?
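The double-mesh idea can be sketched on a stand-in solver (explicit Euler for an ODE, purely illustrative): compare the solution on mesh h with the solution on mesh h/2 to estimate the error without the exact solution, and optionally combine the two by Richardson extrapolation to get a higher-order reference value.

```python
import math

def euler_y1(n):
    """Hypothetical stand-in for a PDE solver: explicit Euler for
    y' = y, y(0) = 1 on [0, 1]; returns y(1) on an n-step mesh."""
    h, y = 1.0 / n, 1.0
    for _ in range(n):
        y += h * y
    return y

exact = math.e   # known here only so we can check the idea

# Double-mesh principle: the difference between the mesh-h and mesh-h/2
# solutions estimates the error of the coarse solution.
coarse, fine = euler_y1(100), euler_y1(200)
err_estimate = abs(fine - coarse)

# Richardson extrapolation for a first-order scheme: 2*fine - coarse.
# (For a second-order scheme the combination is (4*fine - coarse)/3.)
richardson = 2 * fine - coarse
print(err_estimate, abs(richardson - exact))
```

Other common substitutes for an unknown exact solution include manufactured solutions (pick an exact function, derive the matching source term) and a reference solution computed once on a very fine mesh.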

Good morning everyone,

I’ve got a question about computing a new variable in SPSS, based on the highest value of multiple variables for each case. Please see the example below, where I indicate the value of the new variable for each case.

**Case 1**

Score A: 3

Score B: 2

Score C: 5

Score D: 4

Score E: 2

Score F: 3

New var: Score C

**Case 2**

Score A: 5

Score B: 2

Score C: 3

Score D: 4

Score E: 2

Score F: 3

New var: Score A

**Case 3**

Score A: 3

Score B: 2

Score C: 2

Score D: 5

Score E: 2

Score F: 3

New var: Score D

The question is: how to compute this new variable?

I know that the actual values of this new variable will range from 1-4, and by labelling I can display those as Score A - Score D.

But the step of finding the highest value and then showing where this value is derived from, is unclear to me.

Any help is much appreciated!

Best,

Michiel
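In SPSS this is typically done with the MAX() function plus a DO IF cascade to record which variable supplied the maximum. The underlying logic, shown here as a Python sketch on the question's own example values, is just an argmax across the score variables:

```python
# Each case's scores, keyed by variable name (values from the question).
cases = [
    {"Score A": 3, "Score B": 2, "Score C": 5, "Score D": 4, "Score E": 2, "Score F": 3},
    {"Score A": 5, "Score B": 2, "Score C": 3, "Score D": 4, "Score E": 2, "Score F": 3},
    {"Score A": 3, "Score B": 2, "Score C": 2, "Score D": 5, "Score E": 2, "Score F": 3},
]

# New variable: the name of the highest-scoring variable per case.
# max() with a key does the "find the highest, then report where it came
# from" step in one go; ties resolve to the first variable listed, which
# mirrors how a DO IF cascade in SPSS would resolve them.
new_var = [max(case, key=case.get) for case in cases]
print(new_var)  # ['Score C', 'Score A', 'Score D']
```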

I am interested in postdoctoral studies; how can I apply? Can someone recommend some universities? Research areas: complex networks, machine learning, and transparent computing.

Container technologies such as Docker, and orchestrators such as Kubernetes, are widely used in industry for virtualization deployments. Edge computing paradigms such as MEC and fog computing intend to leverage this infrastructure for feasible deployments. My intention is to simulate and emulate the service migration process of such an edge computing scenario (migration from one edge computing node to another) to understand its security aspects. Thus, I have two questions:

- What is the best simulator I can use to implement a migration model (preferably an MDP)?

- In order to emulate, should I use existing tools like OpenShift, Kubernetes, etc.? Do they have the migration function, or do I have to implement such a platform from scratch using the Docker engine?

Results of single-case research designs (i.e., n-of-1 trials) are often evaluated by visually inspecting the time-series graph and computing quantitative indices. A question our research team is pondering is whether visual analysis or quantitative indices should be set as the criterion for determining effects from experiments. I'd appreciate thoughts from both sides of the argument.

I want to learn a new computing technology. I heard that fog computing is the current one. Are there other, more recent computing technologies available?

I am using only one physics interface in COMSOL, namely Heat Transfer in Fluids, with the Heat Transfer in Porous Media box checked, because a subdomain of the whole computing domain is porous. When solving the stationary step I get this error; please suggest possible solutions.

What does '*t*' denote in the equation (image attached) for computing risk? Is it "continuous" or "discrete" time? Please elaborate; a detailed response will be appreciated.

I use queuing theory to determine premium pricing through aggregate loss. I haven't found suitable data.

If L_n^m(x) is the generalized Laguerre polynomial of order m and degree n, is there a means of computing L_n^m(x) as

1) m goes to infinity?

2) n goes to infinity?
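For numerical experiments on large-m or large-n behaviour, the three-term recurrence is a convenient way to evaluate generalized Laguerre polynomials without symbolic algebra; asymptotics for large n with fixed argument are classically expressed via Bessel functions. The sketch below checks itself against two small closed forms:

```python
def gen_laguerre(n, m, x):
    """Generalized Laguerre polynomial L_n^m(x) via the standard
    three-term recurrence:
    (k+1) L_{k+1}^m = (2k+1+m-x) L_k^m - (k+m) L_{k-1}^m,
    with L_0^m = 1 and L_1^m = 1 + m - x."""
    if n == 0:
        return 1.0
    prev, cur = 1.0, 1.0 + m - x
    for k in range(1, n):
        prev, cur = cur, ((2 * k + 1 + m - x) * cur - (k + m) * prev) / (k + 1)
    return cur

# Sanity checks against closed forms:
print(gen_laguerre(1, 2, 0.5))  # L_1^2(x) = 1 + 2 - 0.5 = 2.5
print(gen_laguerre(2, 0, 1.0))  # L_2(x) = (x^2 - 4x + 2)/2 = -0.5 at x = 1
```

The recurrence is stable enough for moderate n to explore trends numerically; for genuinely asymptotic regimes, dedicated uniform asymptotic expansions are the safer tool.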