Science topic

Computing - Science topic

Explore the latest questions and answers in Computing, and find Computing experts.
Questions related to Computing
  • asked a question related to Computing
Question
5 answers
How do we select good journals? Is the Journal of Engineering, Computing, and Architecture a good journal? Is it approved by the UGC?
Relevant answer
Answer
Be careful when it comes to the journal "Journal of Engineering, Computing and Architecture (JECA)". According to https://ugccare.unipune.ac.in/Apps1/User/Web/CloneJournalsGroupII , the website http://www.journaleca.com corresponds to a cloned/hijacked version of a legitimate (but inactive?) journal. But even if you missed this crucial information, the only active (fake) website shows numerous red flags:
-Contact information ( http://www.journaleca.com/CONTACT/ ) is basically absent; there is no physical address, which is suspect
-The prominently mentioned impact factor is misleading, because this journal is not indexed in Clarivate's SCIE (you can check at https://mjl.clarivate.com/home ), so it is a fake impact factor
-The (inactive) legitimate version of this journal used to be Scopus-indexed, but its coverage was already discontinued in 2010: https://www.scopus.com/sourceid/11200153562
-I am pretty sure that this version of the journal is not included in the UGC CARE approved title list, since the homepage's rotating logos include notorious examples of so-called misleading metrics such as CiteFactor and ISI (see also https://beallslist.net/misleading-metrics/ ). This is a strong exclusion criterion for UGC
So, I would say stay away from this one.
Best regards.
PS. Selection of a proper journal is discussed here on RG in numerous questions and discussions:
  • asked a question related to Computing
Question
3 answers
Currently, data is available in the form of text, images, audio, video, and other formats.
We are able to use mathematical and statistical modeling to identify patterns and trends in data, which can then be exploited through machine learning, a subfield of A.I., to perform various decision-making tasks. The data can be visualized in a variety of forms for different purposes.
Data Science is currently the ultimate state of Computing. For generating data we have hardware, software, algorithms, programming, and communication channels.
But what could come next in Computing, beyond this mere creation and manipulation of data?
Relevant answer
Answer
I understand manipulation of data from the past. How will we manipulate data from the future? Furthermore, the analysis of data seems constrained by developments in science and mathematics. David Booth
  • asked a question related to Computing
Question
3 answers
After an elaborate survey of faculty development programs, I compiled the following list. In the discussion, the need for true, comprehensive, and complete space-time analysis addressing semantic and space-time scales was stressed, using scalable algorithms and infrastructure for large volumes of data. Fourth, means were thought necessary to represent, utilize, and analyze data and information quality, reliability, and confidence. This implies the need to determine (1) what information is needed by particular users and the appropriate evaluation methods, and (2) how to make information on certainty or uncertainty useful, and to support reasoning with uncertainty and with heterogeneous kinds of information. Lastly, the group discussed the need to develop adaptive visual analytic methods that support a range of users, uses, and devices, across a range of issues: interaction science; human-algorithm interaction; speed of response to support interaction; real-time sensitivity assessment; and uncertainty representation.
The information can be strengthened provided we can get more analytical information:
1. Training for Social Contentedness and Inspiration
2. Environmental Geo-technology
3. Big data Analytics
4. Electric Vehicles
5. IoT (Internet of Things)
6. Waste Technology
7. Computer Science and Biology
8. Novel Materials
9. Social Enterprise Management
10. Green Technology and Sustainability Engineering
11. Telemedicine
12. Data Sciences
13. Control Systems and Sensor Technology
14. Mural
15. Wearable Devices
16. Smart Cities
17. Artificial Intelligence
18. Robotics
19. 3D Printing and Design
20. Photonics
21. Engineering Law
22. Blockchain
23. Cyber Security
24. Machine Learning and Pattern Recognition
25. Quantum Computing
26. Emotional Intelligence
27. Augmented and Virtual Reality
28. Systems Engineering
29. Innovation Management
30. Artificial Intelligence and Robotics
31. Lab on Chip
32. Gamification
33. Data Science
34. Leader Excellence and Innovation Management
35. Sustainable Engineering
36. Immersive Virtual Reality
37. Design Thinking
38. Student-Centered Teaching-Learning Methods and Strategies for Higher Education
39. Personal Effectiveness
40. Electronics and Computer Engineering
41. Electric and Computer Engineering
42. Technology Management
43. 3D printing and design
44. Energy Engineering
45. Management Information System
46. Robotic Process Automation tools and Techniques
47. Advances in manufacturing
48. Biomedical Instrumentation
49. Construction Technology
50. Graphic Design
51. Advanced Communication Engineering
52. Event Management
53. Advances in 3D Printing and Future Scope
54. Capacity building
55. Productivity enhancement
56. Team building and coordination
57. Heritage management
58. Synthetic biology
59. Precision Health technology
60. Manufacturing and Monitoring
61. Global Navigation Satellite System
62. Operation Management
63. CFD (Computational Fluid Dynamics)
64. Hybrid Machining Solutions for typical complex Engineering Applications
65. Apparel Design
66. Human Centric Computing
67. Alternate Fuels
Relevant answer
Answer
Communications and social networking sites
  • asked a question related to Computing
Question
2 answers
Edge computing is a research hotspot, but I cannot find any open dataset for edge computing. Does anybody know of any big dataset available in the literature for edge computing?
Relevant answer
Answer
It depends on the application you are using Edge computing for.
  • asked a question related to Computing
Question
1 answer
I want to know which approach is better: computing the RMSE with scaled data or with unscaled data. Please share your ideas.
Thanks
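A quick numerical sketch of the trade-off (the values below are hypothetical, and min-max scaling is assumed): RMSE computed on scaled data is just the unscaled RMSE divided by the data range, so it is dimensionless and comparable across variables, whereas the unscaled RMSE stays in the target's physical units and is directly interpretable.

```python
import numpy as np

# Hypothetical ground truth and predictions (in some physical unit).
y_true = np.array([10.0, 12.5, 14.0, 20.0, 25.0])
y_pred = np.array([11.0, 12.0, 15.5, 19.0, 24.0])

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# RMSE in the original units: directly interpretable.
rmse_unscaled = rmse(y_true, y_pred)

# Min-max scale both series with the parameters of the true values,
# then recompute: the result is the unscaled RMSE divided by the range.
lo, hi = y_true.min(), y_true.max()
rmse_scaled = rmse((y_true - lo) / (hi - lo), (y_pred - lo) / (hi - lo))

print(rmse_unscaled, rmse_scaled, rmse_unscaled / (hi - lo))
```

So the choice depends on what you want to report: an error in physical units (unscaled), or a unit-free error for comparing across differently scaled targets (scaled).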
  • asked a question related to Computing
Question
2 answers
Question 4: How mature are our computing platforms and programming languages for enabling autonomous software generation, mostly at run-time?
Relevant answer
Answer
  • asked a question related to Computing
Question
5 answers
I have an idea for reducing the error rate in quantum computers. We have tried simulating digital gates without proper success. Then we need a revolution. I have suggested that we simulate the bases of DNA.
Relevant answer
Answer
The research does not belong to me but I am happy.
  • asked a question related to Computing
Question
2 answers
I am planning on purchasing dual 24-core Xeon Gold processors. Would that be enough for analysis of processed data from an scRNA-seq analysis?
Relevant answer
Answer
Thanks for the suggestion and the link Vicky :)
very helpful.
  • asked a question related to Computing
Question
3 answers
I am working on Forest Canopy Density (FCD). There is a parameter called the "Scaled Shadow Index (SSI)" used while computing forest canopy density. In most of the papers I found, the SSI has been calculated by linearly transforming the Shadow Index. I have computed the Shadow Index, but I do not see how to compute the Scaled Shadow Index from it. Kindly help me out. Moreover, if I am using Landsat 5 and 8 Surface Reflectance images for FCD mapping, and the reflectance values range from 0 to 1, is it still mandatory to normalize these Surface Reflectance data before calculating vegetation indices?
Relevant answer
Answer
Density clustering with wavelength clustering algorithms, and clustering by wavelet analysis, may help your work.
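On the SSI question itself: in many FCD papers the "linear transformation" is a min-max stretch of the Shadow Index to a fixed range. A minimal sketch under that assumption (the 0-100 target range is assumed here; the exact range varies by paper):

```python
import numpy as np

def scaled_index(si, new_min=0.0, new_max=100.0):
    """Linearly stretch an index image to [new_min, new_max].

    This mirrors the 'linear transformation' step commonly described
    for the Scaled Shadow Index (SSI); the 0-100 target range is an
    assumption, not prescribed by a specific paper.
    """
    si = np.asarray(si, dtype=float)
    lo, hi = np.nanmin(si), np.nanmax(si)
    if hi == lo:                      # flat image: avoid division by zero
        return np.full_like(si, new_min)
    return (si - lo) / (hi - lo) * (new_max - new_min) + new_min

# Hypothetical Shadow Index values for a few pixels
si = np.array([0.12, 0.30, 0.45, 0.60])
ssi = scaled_index(si)
print(ssi)   # 0.12 maps to 0, 0.60 maps to 100
```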
  • asked a question related to Computing
Question
4 answers
Hi
I am developing a method for computing fuzzy similarity in WordNet. Previous work has mainly focused on the similarity of synsets (concepts).
I am searching for a standard baseline for comparison purposes. My question is: what is the standard baseline for computing the similarity of words in WordNet?
Thank you.
Relevant answer
Answer
You might also try to take a look at this article:
We used a slightly more sophisticated approach than the one Olga Seminck indicated. However, I would simply try to combine both approaches.
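To illustrate the usual word-level convention (word similarity = the maximum similarity over the words' synset pairs), here is a toy path-based similarity over a hypothetical mini-taxonomy with one sense per word, so the max-over-senses step is trivial. With real WordNet you would use NLTK's wordnet corpus and its path_similarity or wup_similarity methods instead:

```python
# child -> parent (hypernym) links of a tiny invented taxonomy
HYPERNYM = {
    "dog": "canine", "wolf": "canine", "canine": "animal",
    "cat": "feline", "feline": "animal", "animal": "entity",
}

def path_to_root(node):
    """List of nodes from `node` up to the taxonomy root."""
    path = [node]
    while node in HYPERNYM:
        node = HYPERNYM[node]
        path.append(node)
    return path

def path_similarity(a, b):
    """1 / (shortest path length between a and b + 1), as in WordNet's
    path measure: the path runs through the lowest common ancestor."""
    pa, pb = path_to_root(a), path_to_root(b)
    common = set(pa) & set(pb)
    dist = min(pa.index(c) + pb.index(c) for c in common)
    return 1.0 / (dist + 1)

print(path_similarity("dog", "wolf"))  # siblings under 'canine'
print(path_similarity("dog", "cat"))   # meet only at 'animal'
```

The common baselines reported in the literature are exactly such taxonomy-based measures (path, Wu-Palmer, Leacock-Chodorow) evaluated against human judgment datasets.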
  • asked a question related to Computing
Question
3 answers
I am trying to implement Multi-access Edge Computing (MEC) in Vehicular Networks (V2X). I want to know how to integrate or implement MEC (probably following the ETSI standards) in a V2X setup coded in ns-3.
Relevant answer
Answer
Dear Harinder Kaur:
You may benefit from these valuable articles about your topic:
"A Mobile Edge Computing Approach for Vehicle to Everything Communications"
Also the attached pdf files....
I hope it will be helpful...
Best wishes...
  • asked a question related to Computing
Question
4 answers
I would like to develop a system that runs on a microcontroller in a remote controlled submarine. While the submarine moves freely in the water, it should always calculate its exact position using data from an accelerometer, gyroscope and magnetometer.
For the distance measurement, orientation and position I have already gained basic knowledge about the Kalman filter, quaternions and AI systems.
The calculation must use as little computing power as possible but still be as accurate as possible.
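If a full Kalman filter turns out to be too heavy for the microcontroller, a complementary filter is a common low-cost fallback for attitude estimation: integrate the gyro for responsiveness and gently correct with the accelerometer tilt to bound the drift. A minimal single-axis sketch (the gain 0.98 and the bias value are hypothetical):

```python
def complementary_filter(pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    """One update step of a complementary filter (cheap Kalman substitute).

    pitch       : current pitch estimate (rad)
    gyro_rate   : gyroscope pitch rate (rad/s)
    accel_pitch : pitch inferred from the accelerometer (rad)
    alpha       : blend factor; 0.98 is a typical, hypothetical choice
    """
    return alpha * (pitch + gyro_rate * dt) + (1.0 - alpha) * accel_pitch

# Simulate a stationary vehicle with a biased gyro (0.01 rad/s): the
# estimate stays bounded because the accelerometer keeps pulling it back,
# instead of drifting without limit as pure gyro integration would.
pitch = 0.0
for _ in range(1000):
    pitch = complementary_filter(pitch, gyro_rate=0.01,
                                 accel_pitch=0.0, dt=0.01)
print(pitch)  # settles near a small steady-state offset, not unbounded drift
```

The same idea extends to full orientation with quaternions; per update it costs only a few multiplies, which fits the low-compute requirement.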
Relevant answer
Answer
This is an extremely important scientific research project. Is the following scientific research article helpful or too elementary?
Ahmed E. Mahdi, Ahmed Azouz, Ahmed E. Abdalla and Ashraf Abosekeen, "A Machine Learning Approach for an Improved Inertial Navigation System Solution", Sensors 2022, 22(4), 1687; https://doi.org/10.3390/s22041687 (Electrical Engineering Branch, Military Technical College, Kobry El-Kobba, Cairo 11766, Egypt; Section: Navigation and Positioning; Received: 21 December 2021 / Revised: 31 January 2022 / Accepted: 16 February 2022 / Published: 21 February 2022).
Abstract: The inertial navigation system (INS) is a basic component to obtain a continuous navigation solution in various applications. The INS suffers from a growing error over time. In particular, its navigation solution depends mainly on the quality and grade of the inertial measurement unit (IMU), which provides the INS with both accelerations and angular rates. However, low-cost small micro-electro-mechanical systems (MEMSs) suffer from huge error sources such as bias, the scale factor, scale factor instability, and highly non-linear noise. Therefore, MEMS-IMU measurements lead to drifts in the solutions when used as a control input to the INS. Accordingly, several approaches have been introduced to model and mitigate the errors associated with the IMU. In this paper, a machine-learning-based adaptive neuro-fuzzy inference system (ML-based-ANFIS) is proposed to leverage the performance of low-grade IMUs in two phases. The first phase was training 50% of the low-grade IMU measurements with a high-end IMU to generate a suitable error model. The second phase involved testing the developed model on the remaining low-grade IMU measurements. A real road trajectory was used to evaluate the performance of the proposed algorithm.
The results showed the effectiveness of utilizing the proposed ML-ANFIS algorithm to remove the errors and improve the INS solution compared to the traditional one. Improvements of 70% in the 2D positioning and of 92% in the 2D velocity of the INS solution were attained when the proposed algorithm was applied, compared to the traditional INS solution.
Keywords: INS; MEMS-IMU; machine learning; ANFIS; positioning; navigation

1. Introduction

With the advantages of being a self-contained system and providing an uninterrupted navigation solution, the inertial navigation system (INS) has become an essential component for obtaining a robust navigation solution in several fields such as aircraft applications, autonomous navigation, and vehicle dynamic control [1]. Despite the advantage of the INS having a high short-term accuracy, it suffers from the drift accumulation of the biases over time. The accuracy of the INS's navigation solution and the ability to reduce the errors accumulated over time depend on the type of inertial measurement unit (IMU) [2,3]. Recently, the utilization of micro-electro-mechanical systems (MEMSs) has been introduced for inertial sensor systems, with the advantages of low cost, small size, and low power consumption [4]. On the other hand, the disadvantage of the high error accumulation rate of MEMSs has raised the challenge of modeling these errors to improve the accuracy of the navigation solution [5].

The difficulty of modeling these errors is due to the existence of non-linear errors. These errors cannot be modeled by traditional techniques such as the Kalman filter (KF), the extended KF (EKF), the unscented Kalman filter (UKF), or even the particle filter (PF) [3,6,7]. Accordingly, there is a great need to find an alternative to traditional methods that does not have the difficulty and complexity of error modeling. Therefore, researchers have taken advantage of the availability of a large amount of data extracted from the INS and added machine learning (ML) techniques to the navigation algorithms [6,8,9,10].

ML techniques are utilized as estimators/predictors or classifiers of the navigation parameters. These techniques are utilized to smooth the choice of the sensors as an alternative to the KF in a plug-and-play manner [6,8]. This leads to the selection of the integration process and the raw measurements. Consequently, training the ML model helps produce a robust predictive model for the INS errors during GNSS outages. In addition, it can be used to improve visual positioning, mitigating non-line-of-sight (NLOS) effects such as the multipath effect, spoofing, and jamming [11].

There is growing interest in utilizing ML techniques to improve the INS navigation solution. An approach utilizing an artificial neural network (ANN) to overcome the limitations of the KF in bridging the GPS outages during the GPS/INS integration process was introduced in [12,13,14,15]. The proposed methodology was accomplished in two phases. In the first phase, the ANN was trained to predict the INS position error and remove it from the corresponding INS position without having the initial position of the INS. Furthermore, the work in [16,17] utilized the ANN and ANFIS after the GPS/INS integration to enhance the INS navigation solution. In contrast, the work in [18] introduced a non-linear autoregressive neural network with external inputs (NARX) combined with the UKF to enhance the position and velocity accuracy of the INS/GNSS integration. Furthermore, the work in [3] proposed a fast orthogonal search (FOS) model to reduce and compensate the unmodeled residual non-linear errors of a mag/radar/RISS/GPS integration system to improve the navigation solution during GPS outages.
The work in [19] utilized the FOS model as a GPS swept anti-jamming technique to discriminate between the authentic GPS signal and the interference from a chirp frequency jammer. In [20], two approaches were introduced to overcome the drift during GNSS outages using parallel cascaded mechanization for non-linear error estimation of the INS solution. The results showed a slight improvement during the parts of the trajectory that had maneuvers such as turns, while the parts with few maneuvers had a significant improvement. The work in [21] proposed a random forest (RF) method for standstill recognition. The proposed method depends on the features generated from the IMU signals that represent the standstill state as an input for the classifier. In comparison, the work in [22] introduced a supervised machine learning technique for spoof detection. The work in [23] introduced an adaptive fuzzy extended Kalman filter (AFEKF) to enhance the prediction level of the position and velocity errors of the INS. In [6,8], the authors introduced a sensor fusion technique based on fuzzy clustering to fuse the Doppler speed from an FMCW radar and the speedometer data to improve the input speed of an RISS model. The results showed an enhancement of the navigation solution in some portions of the trajectory. On the other hand, the lack of a sensor fault detection and false reading algorithm caused a drift in some portions that had wheel slippage. The work in [24] proposed a fuzzy cluster means (FCM) technique to fuse multiple IMUs to produce a robust measurement, which was utilized in an INS mechanization integrated with GPS. The results in this work showed a significant improvement when using FCMs with a multi-IMU structure compared to using only one. Furthermore, the work in [25] utilized the ANFIS model to predict the dual-mass MEMS gyroscope's output drift caused by temperature.

The work in [26] utilized the ANFIS model to enhance the navigation solution of the INS by training the ANFIS model on a differential GPS dataset as a reference position, and evaluated the model on a raw public dataset (KITTI) with a trajectory that lasted 140-300 s. Furthermore, the work in [27] utilized the ANFIS model as a solution for the navigation problem of a mobile robot. The work in [28] utilized empirical mode decomposition threshold filtering (EMDTF) and a long short-term memory (LSTM) neural network. The EMDTF disposes of the noise generated in the INS's sensors, while the LSTM is used to predict the pseudo-GPS position during GPS outages. The presented EMDTF scheme improved the accuracy of the east velocity, north velocity, longitude, and latitude by 9.12%, 15.14%, 13.78%, and 10.72%, respectively, while the LSTM scheme reduced the RMSE by 21.79%, 14.85%, 55.03%, and 19.66% over traditional artificial neural networks. Moreover, the work in [29] overcame the dilemma of poor navigation accuracy in challenging environments by proposing a fusion scheme utilizing machine learning techniques. The proposed scheme utilizes a support vector regression-based adaptive Kalman filter (SVR-AKF) to regulate the covariance parameters of the KF. In addition, the adaptive neuro-fuzzy inference system (ANFIS) was used to predict the navigation solution errors of the INS during GNSS outages. The proposed scheme was compared to the traditional schemes using the KF and EKF over two real trajectories. The results showed an improvement in the position error of about 58.8% against the KF over Trajectory 1, and of 48% to 67.5% against the KF and 34.2% to 57.6% against the EKF over Trajectory 2. Another approach, using the FIS to adapt the fuzzy covariance matrix for the online calibration of multiple LiDAR systems, was presented in [30]. The aim of this work was to enhance the performance of low-cost laser sensors, and a minimum error of 2.8 cm in distance and 1.2 degrees in rotation was obtained.

A recent survey of ML techniques, and how they can be involved in all the fundamental steps of inertial sensing applications to improve the navigation solution obtained from the INS, was provided in [31], stating the advantages and the challenges. The authors mentioned several challenges that the use of ML faces with regard to inertial sensors, such as the nonexistence of hardware combinations of inertial sensors and ML, and the lack of work on improving the sensor measurements using ML. In addition, the authors mentioned that the use of ML along with the sensor measurements is a promising field of research, as most of the work conducted on ML has only been on the INS navigation solution.

From the discussion of the previous related work, we noticed that most of the presented research utilized ML techniques for the INS solution, but neglected the inertial sensors' measurements. Therefore, in this paper, we propose an ANFIS algorithm to be applied to the raw measurements of a commercial IMU to leverage its performance. This process was carried out using a high-end IMU as a reference to provide a suitable model for the low-end IMU in the ML structure. The model was generated in the training phase using both IMUs, then applied only to the low-end IMU in the testing phase. The proposed ML algorithm was evaluated on a real road trajectory. The results showed a significant improvement of the commercial IMU measurements, as well as of the INS navigation solution compared to the traditional INS solution.
The contributions of this research paper are summarized as follows:The development of an ML-based ANFIS algorithm as an ML technique to leverage a low-grade IMU; Comparing the low-grade IMU measurements before and after applying the proposed algorithm to the reference IMU; The validation of the proposed algorithm by applying the tested IMU data to the INS mechanization.This paper is organized into six sections. In Section 1, the introduction, related work, and paper contributions are discussed. Section 2 gives the background of the INS and the ANFIS algorithm. The methodology is explained in Section 3. The experimental setup and the utilized units are detailed in Section 4. The results are discussed in Section 5. Finally, the paper is concluded in Section 6. 2. Background 2.1. Inertial Navigation Systems The traditional inertial navigation system is composed of an IMU and a navigation processor. The IMU is composed of three accelerometers providing the specific forces and three gyroscopes providing the angular rates [1,32,33,34,35,36], as shown in Figure 1.📷Figure 1. Strap down INS block diagram.The INS depends on the knowledge of the target’s initial states (position, velocity, and attitude (PVA)) and updates its current states accordingly, as shown in Equations (1)–(3).where V is the velocity, VN is the north velocity, VE is the east velocity, and VD is the down velocity.The mechanization process of the INS can be summarized in three main steps. First, we obtained the angular rates (ωx,ωy,ωz) from the gyroscopes, the accelerations (fx,fy,fz) from the accelerometers, and the attitude angles of the pitch, roll, and yaw (p,r,y) from the angular rates after calculating the transformation matrix. Second, with the assistance of the rotation matrix, the forces in the navigation frame from the body frame can be obtained and then transformed to the local-level frame (LLF). 
Finally, the velocity was obtained by integrating the transformed forces, and the position was obtained by integrating the calculated velocity [1,37].P=[φλh]T(1)where P is the position, φ is the latitude, λ is the longitude, and h is the altitude.V=[VNVEVD]T(2)where A is the quaternion representation of the attitude and the quaternions q0,q1,q2, and q3 are the parameters of the rotation matrix.The attitude is determined by Equation (3) [35].A=⎡⎣⎢⎢⎢⎢q0q1q2q3⎤⎦⎥⎥⎥⎥(3)where ωx, ωy, and ωz are the gyroscopes’ angular rates in the x, y, and z directions, respectively.The attitude rates are calculated as Equation (4).⎡⎣⎢⎢⎢⎢q0˙q1˙q2˙q3˙⎤⎦⎥⎥⎥⎥=0.5⎡⎣⎢⎢⎢⎢0ωxωyωzωx0−ωzωy−ωyωz0−ωx−ωz−ωyωx0⎤⎦⎥⎥⎥⎥⎡⎣⎢⎢⎢⎢q0q1q2q3⎤⎦⎥⎥⎥⎥(4)where ϕ is the roll angle in radians, θ is the pitch angle in radians, ψ is the yaw angle in radians.Furthermore, the quaternion attitude can be transferred to the Euler angles of the roll, pitch, and yaw, respectively, as in Equation (5).⎡⎣⎢ϕθψ⎤⎦⎥=⎡⎣⎢⎢atan2(2q2q3+2q1q0),(q23+q20−q21−q22))−asin(2q1q3−2q2q0)atan2((2q1q2+2q0q3),(q20+q21−q22−q23))⎤⎦⎥⎥(5)Equation (6) shows the transformation matrix from the body frame to the LLF using quaternion states [38].Cnb=⎡⎣⎢⎢q21+q20−q22−q232(q1q2+q3q0)2(q1q3−q2q0)2(q1q2−q3q0)q22+q20−q21−q232(q2q3+q1q0)2(q2q3+q2q0)2(q2q3−q1q0)q23+q20−q21−q22⎤⎦⎥⎥(6)where FN, FE, and FD are the transformed specific forces in the north, east, and down frame, respectively.The specific forces can be transformed into the LLF using the transformation matrix Cnb and are obtained with Equation (7).⎡⎣⎢FNFEFD⎤⎦⎥=Cnb⎡⎣⎢fxfyfz⎤⎦⎥(7)where gWGS0=9.78032677 m/s2 is the gravity at the Equator, gWGS1=0.00193185138639 m/s2 is the gravity formula constant, and E=0.0818191908426 is the first eccentricity [1,2].The velocity rates can be obtained with Equation (8).⎡⎣⎢⎢V˙NV˙EV˙D⎤⎦⎥⎥=⎡⎣⎢⎢⎢⎢⎢1000100010(λ˙+2wesin(φ))−φ˙−(λ˙+2wesin(φ))0−(λ˙+2wecos(φ))φ˙(λ˙+2wecos(φ))0001⎤⎦⎥⎥⎥⎥⎥⎡⎣⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢FNFEFDVNVEVDg⎤⎦⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥(8)where λ˙ and φ˙ are the longitude and latitude rates, 
respectively, we=7.2921158×10−5 rad/s is the magnitude of the rotation rate of the Earth, and g is the acceleration due to gravity, which can be obtained with Equation (9).g=gWGS01+gWGS1sin(φ)[1−E2sin2(φ)]12−[3.0877×10−6−0.0044×10−6sin2(φ)]h+0.072×10−12(9)where RM and RN are the meridian radius and normal radius of the Earth’s ellipsoid model, respectively.The position rates are obtained with Equation (10).⎡⎣⎢⎢φ˙λ˙h˙⎤⎦⎥⎥=⎡⎣⎢⎢⎢VNRM+hVE(RN+h)cos(φ)−VD⎤⎦⎥⎥⎥(10)where ω˜bib is the gyroscope measurement vector, ωbib is the true angular rate velocity vector, bg is the gyroscope instrument bias vector, Sg is a matrix representing the gyro scale factor, Ng is a matrix representing the non-orthogonality of the gyro triad, and εg is the vector representing the gyro sensor noise.Unfortunately, the INS suffers from error growth over time because of the two-times integration process of the target’s acceleration. The errors in the INS can be categorized and divided into deterministic and stochastic errors. The deterministic errors include the bias offset, scale factor, and axis misalignment errors. In contrast, the stochastic errors include bias drift, bias stability, scale factor stability, noise, and axis misalignment errors. The deterministic errors can be reduced or compensated if the sensors are properly calibrated, especially high-end sensors, while the stochastic errors were modeled randomly to reduce their effect [30]. 
Therefore, the gyroscope measurement model is represented by Equation (11).ω˜bib=ωbib+bg+Sωbib+Nωbib+εg(11)where f˜b is the accelerometer measurement vector, fb is the true specific force vector, ba is the accelerometer instrument bias vector, S1 is a matrix of the linear scale factor error, S2 is a matrix of the non-linear scale factor error, Na is a matrix representing the non-orthogonality of the accelerometer’s triad, δg is the anomalous gravity vector, and ηg is a vector representing the accelerometer sensor noise.Furthermore, the accelerometers’ measurement model is represented by Equation (12).f˜b=fb+ba+S1f+S2f2+Naf+δg+ηg(12)The classification of the INS depends on the IMU’s accuracy and its ability to reduce the error growth over time. Therefore, to compensate the low-cost commercial IMU’s errors, either traditional techniques or ML techniques are applied to enhance the INS’s navigation solution. 2.2. Adaptive Neuro-Fuzzy Inference System The adaptive neuro-fuzzy inference system (ANFIS) is a fusion technique between the artificial neural network (ANN) and the fuzzy inference system (FIS). Subsequently, it provides the advantages of both techniques and compensates their disadvantages. It drives the system to adapt through the self-organizing and self-learning process [39].The main structure of the FIS is shown in Figure 2. The FIS is based on the fuzzy conditional statements (if–then rules), which are responsible for making the decisions in an uncertain environment and its influencing factors [40]. The structure of the FIS is composed of five main blocks. The fuzzification process switches the crisp inputs into datasets by applying the membership function (MF). The base rule contains the fuzzy if–then rules, and the database comprises the MF utilized in the fuzzy rules. The decision-making unit executes the inference operation on the fuzzy rules. The defuzzification process turns the fuzzy results into a crisp output [41,42,43].📷Figure 2. 
The structure of the fuzzy inference system.where μAi is the MF, X is the input to node i, and Ai is the linguistic label for input X.Similarly, ANFIS’s functionality is equivalent to the FIS, as shown in Figure 3 [44]. The ANFIS structure is composed of five layers. In the first layer, each node assigns the crisp inputs after applying the MF. The output of this layer clarifies how well the input matchesthe linguistic label, as given by Equation (13) [40,42].O1i=μAi(x)(13)📷Figure 3. The ANFIS’s structure [44].The second layer multiplies the input signals at each node to obtain the rules’ weights (firing strength), as given in Equation (14).Wi=μAi(x)×μBi(y)(14)where x,y are the two inputs, Ai is the linguistic label for input X, and Bi is the linguistic label for input y. The third layer normalizes the weights of each rule by computing the ratio of the weight of each rule to the sum of all the rules’ weights, as given in Equation (15).W˜=Wi∑W(15)where Wi˜ is the normalized weight obtained from the third layer and pi,qi, and ri are called the consequent parameters.In the fourth layer, the normalized weights of each rule are multiplied by the output of the second layer, as given in Equation (16).O4i=Wi˜fi=Wi˜(pix+qiy+ri)(16)Finally, the fifth layer sums all the incoming signals to compute the overall output Of, as given in Equation (17).Of=∑iW˜fi=∑iWifi∑iWi(17)Algorithm 1 INS Solution Improvement Using MLInputIMU’s sensor measurements of three gyroscopes and three accelerometers (ωx,ωy,ωz,ax,ax,ax) for the MEMS-IMU and the reference IMU, initial PVA states (Lat0,Long0,Att0,VN0,VE0,VD0,p0,r0,y0), and the navigation solution of the reference IMU (pos_ref,vel_ref,att_ref).Step 1Prepare and tune the ML-ANFIS options (input data, output data, type of clustering, MF type, number of Ms, F and epochs/iterations).Step 2Apply the ML-ANFIS on 50% of the input data (training phase).Step 3Generate the ML-ANFIS.Step 4Evaluate and apply the ML-ANFIS on the remaining data (testing 
phase).Step 5Evaluate the ML-ANFIS’s output (improved IMU sensor measurements (ωx,ωy,ωz,ax,ay,az).Step 6Compare the MEMS IMU’s sensor measurements and the ML-ANFIS IMU’s sensor measurements to the reference IMU’s sensor measurements to compute the percentage of improvement caused by the ML-ANFIS (RMSE). RMSE=1n∑n(Xn,Ref−Xn,ML)2−−−−−−−−−−−−−−−−−−−√ where Xn,Ref and Xn,ML are the reference IMU and trained IMU measurements, respectively.Step 7Compute the ML-ANFIS’s navigation solution (PVA) by using the output of the ML-ANFIS as the input to the INS.Step 8Compare the MEMS IMU (PVA) and the ML-ANFIS (PVA) to the reference IMU (PVA) to compute the percentage of improvement of the ML-ANFIS (PVA) using the RMSE metric.OutputThe INS solution (PVA) of the MEMS-IMU and the ML model compared to the output using the reference IMU. 3. Methodology As mentioned in the previous section, the accuracy of the INS’s navigation solution depends on the quality/grade of the IMU sensor and the ability to compensate its errors. In this paper, we exploited the capability of the ML-ANFIS technique to estimate the inertial sensors’ errors by training a low-grade IMU with a high-end one. This work aimed to boost the low-grade IMU’s performance.The proposed ML technique is composed of two phases, the training phase and the testing phase. The training phase block diagram is shown in Figure 4. In this phase, the training dataset consisted of the low-grade IMU’s sensor measurements as the input and the high-grade IMU as the output. This phase was carried out using half of the trajectory data. The triangular and Gaussian MFs were utilized. In this paper, six triangular MFs were utilized as the ANFIS input layer for each IMU measurement. The triangular MF is simpler and faster to implement compared to other MFs such as the Gaussian MF [45,46]. Subsequently, the rule base contains one rule for every input MF combination. 
The clustering method utilized was "grid partition", in which every input variable is equally distributed over the input MFs and generates a single-output Sugeno fuzzy system. The output ANFIS layer utilizes linear MFs, in which the output of every rule is linearly related to the input variables and scaled by the previous result's value. Finally, a thousand iterations were used to produce the model, which was applied later to the IMU measurements in the testing phase.

Figure 4. The block diagram of the training phase of the ML-based-ANFIS showing the model generation process.

The testing phase is shown in Figure 5. The ML model generates its predicted IMU measurements (six sensor measurements) to obtain the position, velocity, and attitude (PVA). Then, the navigation solution of the ML model is compared to the navigation solution of the reference to obtain the ΔPVA_ML of the ML model. Similarly, the navigation solution of the low-grade IMU is compared to the navigation solution of the reference to obtain the ΔPVA of the low-grade IMU. The differences in the errors in the navigation solution between the ML model and the low-grade IMU were calculated to quantify the influence of the ML model in enhancing the navigation solution of the low-grade IMU's sensor measurements, which is shown in the upcoming sections.

Figure 5. The block diagram of the testing phase of the ML-based-ANFIS showing the application of the generated model to the XBOW-IMU and comparing the produced PVA with the reference IMU.

The overall algorithm is explained in Algorithm 1 in pseudo-code form.

4. Experimental Setup

The experimental work was carried out to verify the effectiveness of the ML model through a real road test trajectory with no pre-processing steps. The IMUs utilized in the experimental work were set up inside the test van as shown in Figure 6. The testbed was installed inside the van, coinciding with its axes.
Furthermore, utilizing a standard seat chassis, the testbed was rigidly and firmly settled in the rear seat location. The low-grade IMU sensor utilized in this research was the Crossbow MEMS-grade XBOW IMU300CC, and the high-grade IMU utilized as a reference was the IMU-CPT, which includes three MEMS accelerometers and three fiber-optic gyroscopes (FOGs). The specifications of the two IMU units can be found in Table 1.

Figure 6. The utilized IMUs mounted on the testbed showing their placement and orientation inside the van.

Table 1. Utilized IMUs' performance characteristics.

5. Results and Discussion

A real road trajectory in the downtown area of Kingston, ON, Canada, was used to test the proposed ML technique's performance. The reference trajectory was the INS solution that utilized the IMU-CPT sensor measurements as a control input to the INS mechanization. Moreover, the trajectory lasted for 2300 s (almost 44 min) and contained various maneuvers at different speeds.

The application of the ML-based-ANFIS to the XBOW IMU measurements was carried out in two stages. The first stage was the training stage, in which the IMU-CPT was utilized as a learning source. This stage was applied to 50% of the data to generate the ML-based-ANFIS model. Three gyroscopes and three accelerometers were trained in this stage to produce a suitable model. The ML-based-ANFIS utilized six membership functions with an adaptive step size. In the results shown in Figures 7-15, the reference is designated in red, the XBOW-IMU in blue, and the proposed ML-based-ANFIS in green, for both the raw measurements and the INS solution comparisons.

Figure 7. The 3D gyroscope angular rates with the ML-based-ANFIS (training stage).
Figure 8. The 3D accelerometers with the ML-based-ANFIS (training stage).
Figure 9. The 3D gyroscope angular rates after applying the ML-based-ANFIS (testing stage).
Figure 10.
A zoomed-in part of the IMU gyroscope measurements.

Figure 11. The 3D accelerometers with the ML-based-ANFIS (testing stage).
Figure 12. A zoomed-in part of the IMU accelerometers.
Figure 13. Position (Lat, Long, and Alt) components' comparison.
Figure 14. Velocity (VN, VE, and VD) components' comparison.
Figure 15. Attitude (roll, pitch, and yaw) angles' comparison.

The 3D gyroscope and accelerometer measurements in the training stage are shown in Figure 7 and Figure 8, respectively.

The generated ML-based-ANFIS model was then applied to the remaining XBOW-IMU measurements. This step was the testing stage, where the generated model was applied and its performance measured.

Figure 9 shows a comparison between the raw XBOW-IMU gyroscope measurements in three directions (x, y, and z) and those with the applied ML model, compared to the reference angular rates from the IMU-CPT. A zoomed-in view of a portion of the testing data is shown in Figure 10. The results in Figure 10 show that the biases, the scale factor, and a significant part of the associated noise were removed when applying the proposed ML technique to the low-grade IMU gyro measurements.

The accelerations' comparison for the testing part is shown in Figure 11. Additionally, a zoomed-in view of the accelerations in this stage is shown in Figure 12. The results showed the proposed ML technique's ability to estimate and remove the errors associated with the low-grade IMU measurements. Furthermore, not only was the noise mostly removed, but both the bias and scale factor errors were also reduced significantly. Therefore, the IMU measurements produced by the proposed ML technique provided a more robust input to the INS mechanization, which led to a more accurate navigation solution.

To validate the resulting measurements, a comparison of the raw XBOW-IMU measurements before and after applying the proposed ML technique is shown in Table 2 using the RMSE for each measurement.
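The RMSE comparison described in Step 6 of Algorithm 1 can be sketched as follows; the sensor samples below are illustrative values, not the paper's data.

```python
# A sketch of the RMSE metric from Step 6 of Algorithm 1, applied to
# hypothetical gyroscope samples: the improvement percentage compares
# the raw and ML-corrected streams against the reference.
import math

def rmse(ref, est):
    """Root-mean-square error between reference and estimated samples."""
    return math.sqrt(sum((r - e) ** 2 for r, e in zip(ref, est)) / len(ref))

ref_wz = [0.10, 0.20, 0.15, 0.05]   # reference IMU (illustrative)
raw_wz = [0.30, 0.45, 0.40, 0.30]   # biased low-grade output
ml_wz  = [0.12, 0.21, 0.14, 0.07]   # after the trained model

improvement = 100.0 * (1.0 - rmse(ref_wz, ml_wz) / rmse(ref_wz, raw_wz))
```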
The results showed a significant improvement of the IMU measurements when using the ML-based-ANFIS.

Table 2. IMU raw measurements' RMSE comparison.

The output of this process was a new set of IMU measurements ready to be applied to the INS algorithm. Consequently, to verify the performance of the proposed ML-based-ANFIS, the modified and unmodified measurements were applied to the INS algorithm to produce the navigation information (PVA). The output PVA was then compared to the reference PVA to check the improvement and the worthiness of using ML, trained against a high-end IMU, on the raw MEMS-IMU measurements.

The results showed the INS solution produced from the XBOW IMU (low-grade) and from the corresponding modified measurements (ML-based-ANFIS) when using the proposed ML technique, compared to the reference solution from the IMU-CPT.

The position components' (latitude in radians, longitude in radians, and altitude in meters) comparison is shown in Figure 13. The results showed that the position components when using the proposed ML technique were closer to the reference position components than the ones obtained using the raw XBOW IMU measurements. Therefore, the utilization of ML to improve the IMU measurements leveraged the position solution of the unit.

The comparison of the velocity components in the navigation frame (VN, VE, and VD) is shown in Figure 14. The results showed that there was a significant improvement in all the velocity components when using the proposed ML technique. A comparison of the attitude (roll, pitch, and yaw) angles is shown in Figure 15, illustrating the significant improvement of the attitude components' solution.

The overall trajectory comparison is shown in Figure 16. The trajectory shows the 2D position information obtained by applying the reference high-end IMU-CPT, the low-end XBOW IMU, and the ML-based-ANFIS XBOW IMU to the INS mechanization, in red, blue, and green, respectively.
Furthermore, arrows show the start point and the direction of motion. The trajectory from the low-end XBOW IMU severely drifted over time compared to the one generated from the proposed ML-based-ANFIS XBOW. The result of the proposed method showed the effectiveness of using the ML-based-ANFIS technique in improving the performance of the INS navigation solution. Moreover, the proposed method's trajectory followed the reference even during maneuvers, with a smaller shift compared to the original XBOW one. This result came from the great enhancement of the XBOW IMU measurements after applying the proposed ML technique.

Figure 16. Overall trajectory comparison.

A statistical analysis of the INS solution position, velocity, and attitude components in the LLF from the testing part of the trajectory (24 min) is shown in Table 3.

Table 3. Results' analysis of the testing part of the trajectory (24 min).

A 70% overall improvement of the 2D position and a 92% improvement of the 2D velocity were achieved when using the proposed ML-based-ANFIS technique. Moreover, the attitude components showed great improvement: the roll angle RMSE was reduced from 55.6 degrees to 6 degrees, an improvement of 89.2%; the pitch angle RMSE was reduced from 42.6 degrees to 6.5 degrees, an improvement of 84.7%; the yaw angle RMSE was reduced from 86.3 degrees to 79.3 degrees, an improvement of 8%. The yaw angle's improvement percentage was lower than the other attitude components' due to the proximity of the raw ωz to the reference ωz, as shown in Figure 9 and Figure 10. The results showed the superiority of applying ML to leverage the low-grade IMU, which significantly enhanced the INS navigation solution compared to the traditional solution using the raw measurements of the low-grade IMU.

6. Conclusions and Future Work

This paper discussed the utilization of the ML-based-ANFIS to improve the raw MEMS-grade IMU measurements.
The proposed ML algorithm was applied to real data collected with a low-cost IMU. The proposed ML technique was applied to 50% of the collected data and tested on the remaining data. The output of this process was then applied to a strap-down INS to produce a navigation solution (PVA). The produced navigation solution achieved a 2D position improvement of 70% and a 2D velocity improvement of 92%. Furthermore, improvements of 89.2%, 84.7%, and 8% in the attitude components (roll, pitch, and yaw, respectively) were achieved. The work in this paper showed that using ML to boost a low-grade IMU had a significant impact on the inertial sensors' performance. Moreover, it had a great impact by producing a more accurate and robust INS navigation solution. As a future step, this work can be combined with either another ML technique or the EKF to bridge GNSS outages in challenging GNSS environments.

Author Contributions: A.E.M. proposed the algorithm, performed the experiments, and wrote the manuscript; A.A. (Ahmed Azouz) reviewed the manuscript and provided important suggestions to improve the algorithm; A.E.A. revised the manuscript; A.A. (Ashraf Abosekeen) participated in preparing the algorithm, the experimental work, and the results' analysis and wrote and revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Conflicts of Interest: The authors declare no conflict of interest.

References

Noureldin, A.; Karamat, T.B.; Georgy, J. Fundamentals of Inertial Navigation, Satellite-Based Positioning and their Integration; Springer: Berlin/Heidelberg, Germany, 2013; Volume 1, p. 313.
Titterton, D.; Weston, J. Strapdown Inertial Navigation Technology; Institution of Engineering and Technology: London, UK, 2004.
Abosekeen, A.; Iqbal, U.; Noureldin, A.; Korenberg, M.J.
A Novel Multi-Level Integrated Navigation System for Challenging GNSS Environments. IEEE Trans. Intell. Transp. Syst. 2021, 22, 4838–4852.
Li, Y.; Chen, R.; Niu, X.; Zhuang, Y.; Gao, Z.; Hu, X.; El-Sheimy, N. Inertial Sensing Meets Artificial Intelligence: Opportunity or Challenge? arXiv 2020, arXiv:2007.06727.
Abosekeen, A.; Noureldin, A.; Karamat, T.; Korenberg, M.J. Comparative Analysis of Magnetic-Based RISS using Different MEMS-Based Sensors. In Proceedings of the 30th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2017), Portland, OR, USA, 25–29 September 2017; pp. 2944–2959.
Abosekeen, A.; Iqbal, U.; Noureldin, A. Improved Navigation Through GNSS Outages: Fusing Automotive Radar and OBD-II Speed Measurements with Fuzzy Logic. GPS World 2021, 32, 36–41.
Nam, D.V.; Gon-Woo, K. Robust Stereo Visual Inertial Navigation System Based on Multi-Stage Outlier Removal in Dynamic Environments. Sensors 2020, 20, 2922.
Abosekeen, A.; Iqbal, U.; Noureldin, A. Enhanced Land Vehicles Navigation by Fusing Automotive Radar and Speedometer Data. In Proceedings of the 33rd International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2020), St. Louis, MO, USA, 21–25 September 2020; pp. 2206–2219.
Abosekeen, A.; Abdalla, A. Fusion of Low-Cost MEMS IMU/GPS Integrated Navigation System. In Proceedings of the 8th International Conference on Electrical Engineering, Cairo, Egypt, 29–31 May 2012; Volume 8, pp. 1–23.
Rashed, M.A.; Abosekeen, A.; Ragab, H.; Noureldin, A.; Korenberg, M.J. Leveraging FMCW-radar for autonomous positioning systems: Methodology and application in downtown Toronto.
In Proceedings of the 32nd International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2019), Miami, FL, USA, 16–20 September 2019; pp. 2659–2669.
Hsu, L.T. What are the roles of artificial intelligence and machine learning in GNSS positioning? Inside GNSS 2020, 1–8.
Sharaf, R.; Noureldin, A.; Osman, A.; El-Sheimy, N. Online INS/GPS Integration with a Radial Basis Function Neural Network. IEEE Aerosp. Electron. Syst. Mag. 2005, 20, 8–14.
Semeniuk, L.; Noureldin, A. Bridging GPS outages using neural network estimates of INS position and velocity errors. Meas. Sci. Technol. 2006, 17, 2783–2798.
Sharaf, R.; Noureldin, A. Sensor Integration for Satellite-Based Vehicular Navigation Using Neural Networks. IEEE Trans. Neural Netw. 2007, 18, 589–594.
Ragab, M.M.; Ragab, H.; Givigi, S.; Noureldin, A. Performance evaluation of neural-network-based integration of vision and motion sensors for vehicular navigation. In Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2019; Dudzik, M.C., Ricklin, J.C., Eds.; International Society for Optics and Photonics: Baltimore, MD, USA, 2019; Volume 11009, pp. 140–151.
Jaradat, M.A.; Abdel-Hafez, M.F.; Saadeddin, K.; Jarrah, M.A. Intelligent fault detection and fusion for INS/GPS navigation system. In Proceedings of the 2013 9th International Symposium on Mechatronics and its Applications (ISMA), Amman, Jordan, 9–11 April 2013; pp. 1–5.
Du, S.; Gan, X.; Zhang, R.; Zhou, Z. The Integration of Rotary MEMS INS and GNSS with Artificial Neural Networks. Math. Probl. Eng. 2021, 2021, 1–10.
Al Bitar, N.; Gavrilov, A. A new method for compensating the errors of integrated navigation systems using artificial neural networks. Measurement 2021, 168, 108391.
Tamazin, M.; Korenberg, M.J.; Elghamrawy, H.; Noureldin, A. GPS Swept Anti-Jamming Technique Based on Fast Orthogonal Search (FOS). Sensors 2021, 21, 3706.
Iqbal, U.; Abosekeen, A.; Georgy, J.; Umar, A.; Noureldin, A.; Korenberg, M.J. Implementation of Parallel Cascade Identification at Various Phases for Integrated Navigation System. Future Internet 2021, 13, 191.
Sánchez Morales, E.; Dauth, J.; Huber, B.; García Higuera, A.; Botsch, M. High Precision Outdoor and Indoor Reference State Estimation for Testing Autonomous Vehicles. Sensors 2021, 21, 1131.
Semanjski, S.; Semanjski, I.; De Wilde, W.; Muls, A. Use of Supervised Machine Learning for GNSS Signal Spoofing Detection with Validation on Real-World Meaconing and Spoofing Data—Part I. Sensors 2020, 20, 1171.
Sabzevari, D.; Chatraei, A. INS/GPS Sensor Fusion based on Adaptive Fuzzy EKF with Sensitivity to Disturbances. IET Radar Sonar Navig. 2021, 15, 1535–1549.
Abosekeen, A.; Abdalla, A. Improving the Navigation System of a UAV Using Multi-Sensor Data Fusion Based on Fuzzy C-Means Clustering. Int. Conf. Aerosp. Sci. Aviat. Technol. 2011, 14, 1–12.
Cao, H.; Wei, W.; Liu, L.; Ma, T.; Zhang, Z.; Zhang, W.; Shen, C.; Duan, X. A Temperature Compensation Approach for Dual-Mass MEMS Gyroscope Based on PE-LCD and ANFIS. IEEE Access 2021, 9, 95180–95193.
Duan, Y.; Li, H.; Wu, S.; Zhang, K. INS Error Estimation Based on an ANFIS and Its Application in Complex and Covert Surroundings. ISPRS Int. J. Geo-Inf. 2021, 10, 388.
Aouf, A.; Boussaid, L.; Sakly, A. TLBO-Based Adaptive Neurofuzzy Controller for Mobile Robot Navigation in a Strange Environment. Comput. Intell. Neurosci. 2018, 2018, 1–8.
Zhang, Y.
A Fusion Methodology to Bridge GPS Outages for INS/GPS Integrated Navigation System. IEEE Access 2019, 7, 61296–61306.
Yue, S.; Cong, L.; Qin, H.; Li, B.; Yao, J. A Robust Fusion Methodology for MEMS-Based Land Vehicle Navigation in GNSS-Challenged Environments. IEEE Access 2020, 8, 44087–44099.
Nam, D.V.; Kim, G.W. Online Self-Calibration of Multiple 2D LiDARs Using Line Features with Fuzzy Adaptive Covariance. IEEE Sens. J. 2021, 21, 13714–13726.
Li, Y.; Chen, R.; Niu, X.; Zhuang, Y.; Gao, Z.; Hu, X.; El-Sheimy, N. Inertial Sensing Meets Machine Learning: Opportunity or Challenge? IEEE Trans. Intell. Transp. Syst. 2021, 1–17.
Groves, P.D. Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems, 2nd ed.; Artech House: Boston, MA, USA; London, UK, 2008; p. 505.
Savage, P.G. Strapdown Inertial Navigation Integration Algorithm Design Part 2: Velocity and Position Algorithms. J. Guid. Control Dyn. 1998, 21, 208–221.
Abosekeen, A. Multi-Sensor Integration and Fusion in Navigation Systems. Master's Thesis, Military Technical College, Cairo, Egypt, 2012.
Corke, P.; Lobo, J.; Dias, J. An Introduction to Inertial and Visual Sensing. Int. J. Robot. Res. 2007, 26, 519–535.
Huang, G. Visual-Inertial Navigation: A Concise Review. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 9572–9582.
Abosekeen, A.; Noureldin, A.; Korenberg, M.J. Improving the RISS/GNSS Land-Vehicles Integrated Navigation System Using Magnetic Azimuth Updates. IEEE Trans. Intell. Transp. Syst. 2020, 21, 1250–1263.
Diebel, J. Representing attitude: Euler angles, unit quaternions, and rotation vectors. Matrix 2006, 58, 1–35.
Zhang, L.; Liu, J.; Lai, J.; Xiong, Z. Performance analysis of adaptive neuro fuzzy inference system control for MEMS navigation system. Math. Probl. Eng. 2014, 2014, 1–7.
Al-Hmouz, A.; Shen, J.; Al-Hmouz, R.; Yan, J. Modeling and Simulation of an Adaptive Neuro-Fuzzy Inference System (ANFIS) for Mobile Learning. IEEE Trans. Learn. Technol. 2012, 5, 226–237.
Bhattacharyya, S.; Dutta, P. Fuzzy Logic: Concepts, System Design, and Applications to Industrial Informatics. In Handbook of Research on Industrial Informatics and Manufacturing Intelligence: Innovations and Solutions; Khan, M.A., Ansari, A.Q., Eds.; IGI Global: Hershey, PA, USA, 2012; Chapter 3; pp. 33–71.
Sivakumar, R.; Sahana, C.; Savitha, P. Design of ANFIS based Estimation and Control for MIMO Systems. Int. J. Eng. 2012, 2, 2803–2809.
Erdem, H. Application of Neuro-Fuzzy Controller for Sumo Robot control. Expert Syst. Appl. 2011, 38, 9752–9760.
Jang, J.S. ANFIS: Adaptive-network-based fuzzy inference system. IEEE Trans. Syst. Man Cybern. 1993, 23, 665–685.
Jin, Z.; Bose, B. Evaluation of membership functions for fuzzy logic controlled induction motor drive. In Proceedings of the IEEE 2002 28th Annual Conference of the Industrial Electronics Society (IECON 02), Seville, Spain, 5–8 November 2002; Volume 1, pp. 229–234.
Prajapati, S.; Fernandez, E. Performance Evaluation of Membership Function on Fuzzy Logic Model for Solar PV array. In Proceedings of the 2020 IEEE International Conference on Computing, Power and Communication Technologies (GUCON), Greater Noida, India, 2–4 October 2020; pp. 609–613.

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Sensors, EISSN 1424-8220, Published by MDPI
  • asked a question related to Computing
Question
4 answers
Recently, I used a dataset with over 6298 rows after cleaning (the Trending YouTube Videos dataset from Kaggle). While training the model, the process failed several times. I then used the Auto Model feature, which took a long time for training and evaluating the final results and produced output of over 1 GB. How can I deal with such larger datasets? Is there any provision to minimize the computing time and storage space so as to use RapidMiner effectively?
Relevant answer
Answer
My System Config:
Edition Windows 11
Processor Intel(R) Core(TM) i7-8700T CPU @ 2.40GHz 2.40 GHz
Installed RAM 16.0 GB (15.8 GB usable)
System type 64-bit operating system, x64-based processor
  • asked a question related to Computing
Question
7 answers
Computing the limit of a function at a point is a very important task in mathematics, with applications in science and engineering. Could someone compute the limit of the function F at (0,0)? See the attached file!
Thanks!
Relevant answer
Answer
The function f(x) is used to predict the accuracy and to build the next function in the calculus.
read this link below:
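Since the attached file is not reproduced in this thread, here is a hedged illustration of checking a two-variable limit at (0,0) along straight-line paths, using the standard example f(x, y) = xy/(x² + y²) rather than the asker's actual F:

```python
# Restrict f(x, y) = x*y/(x**2 + y**2) to the path y = m*x: the value
# of the limit then depends on the slope m, so the two-variable limit
# at (0, 0) does not exist for this example function.
import sympy as sp

x, m = sp.symbols('x m')
f = (x * (m * x)) / (x**2 + (m * x)**2)    # f along the path y = m*x
path_limit = sp.limit(sp.simplify(f), x, 0)
# path_limit simplifies to m/(1 + m**2), which depends on m
```

If the path limits disagree (as here), the limit does not exist; if they all agree, a further argument (e.g. polar coordinates or the squeeze theorem) is still needed to conclude the limit exists.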
  • asked a question related to Computing
Question
4 answers
Can federated learning and edge computing be combined? The answer is yes. So how should we do it, in order to improve data security and privacy protection in the modeling at each participant (terminal)?
Relevant answer
Answer
Hi,
Yes, they are very much compatible with each other. An edge server can act as either a client or an aggregator. In particular, when collecting data from IoT sensors, the edge can work as a collaborator. You can check the following article as a reference.
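As a sketch of the federated-averaging idea behind such a combination (an illustrative one-parameter model, not a production framework): each edge client trains locally on private data and only model weights travel to the aggregator, so raw data never leaves the device.

```python
# Minimal FedAvg sketch: local gradient steps on each client,
# then size-weighted averaging of the client models.
# The one-parameter linear model and datasets are illustrative.

def local_update(w, data, lr=0.1):
    """One gradient-descent step of y = w*x on a client's local data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(client_weights, client_sizes):
    """Aggregation: average client models weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Two edge clients holding private samples of y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w_global = 0.0
for _ in range(50):  # communication rounds
    local = [local_update(w_global, d) for d in clients]
    w_global = fed_avg(local, [len(d) for d in clients])
# w_global converges toward the true slope 2.0
```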
  • asked a question related to Computing
Question
1 answer
Can you have any measuring fact technique?
Relevant answer
Answer
  • asked a question related to Computing
Question
8 answers
In general, the performance of a system is directly proportional to the system configuration. However, in the context of IoT, the devices have limited computation power, storage, energy, etc.
Relevant answer
Answer
Dear Mehbub Alam,
AI/ML techniques can be used to predict the load on processors in constrained computing devices, with the aim of adapting the processors' clock speed to the load.
AI/ML techniques can thus enable adaptive energy savings in constrained computing devices, i.e., adaptive load-dependent energy consumption of IoT devices.
For more information on reducing energy consumption in constrained computing devices, see my answer at:
Best regards
Anatol Badach
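A minimal sketch of the idea above: a lightweight load predictor whose forecast drives a clock-speed (DVFS) choice. The prediction method, thresholds, and frequency steps are illustrative assumptions, not a specific product's behavior.

```python
# Exponential-smoothing load predictor driving a DVFS decision on a
# constrained device. Thresholds and frequency steps are illustrative.

def predict_load(history, alpha=0.5):
    """Exponentially smoothed forecast of the next CPU load sample (0..1)."""
    forecast = history[0]
    for sample in history[1:]:
        forecast = alpha * sample + (1 - alpha) * forecast
    return forecast

def pick_frequency(load, freqs=(80, 160, 240)):
    """Map predicted load to one of a few clock steps (MHz, illustrative)."""
    if load < 0.3:
        return freqs[0]
    if load < 0.7:
        return freqs[1]
    return freqs[2]

loads = [0.2, 0.25, 0.6, 0.8, 0.85]       # recent load samples
f = pick_frequency(predict_load(loads))   # rising load -> highest step
```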
  • asked a question related to Computing
Question
1 answer
Hello everyone.
Can anyone help me with the computation of the ray parameter (p) for seismological phases?
If a code is available in Matlab, it would be most useful!
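The asker requests Matlab code; as a language-agnostic sketch that is trivially portable to Matlab, the ray parameter for a spherically symmetric Earth is p = r·sin(i)/v, which is conserved along the ray path (Snell's law for seismic phases). The numeric values below are illustrative.

```python
# Ray parameter for a spherically symmetric Earth: p = r*sin(i)/v,
# constant along the ray. Values below are illustrative, not a
# specific phase's tabulated parameters.
import math

def ray_parameter(r_km, incidence_deg, v_km_s):
    """Ray parameter p = r*sin(i)/v, in s/rad."""
    return r_km * math.sin(math.radians(incidence_deg)) / v_km_s

# Illustrative values: radius 6371 km, incidence 30 degrees, v = 8 km/s.
p = ray_parameter(6371.0, 30.0, 8.0)   # about 398.2 s/rad
```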
  • asked a question related to Computing
Question
10 answers
We have a Computer Science and Communication journal at our college (Journal of Computing and Communication). We aim to publish research articles in all disciplines of Computer Science and Communication, with two issues per year. Can anyone tell me the ways and means to index the journal in Google Scholar?
Relevant answer
Answer
Also check please the following very useful link: https://scholar.google.com/intl/en/scholar/publishers.html
  • asked a question related to Computing
Question
8 answers
Can anyone point me to resources on how to evaluate the computing curriculum at both the K-12 and college levels? Thank you.
Relevant answer
Answer
Dear Dr Mercy Oluwadara Jaiyeola See the following useful RG link:
  • asked a question related to Computing
Question
3 answers
I am doing a master's in Mobile Edge Computing, so I want to know the best simulation tools to simulate the model and how to use them.
Relevant answer
Answer
Dear Rania Azouz,
Greetings!
The following tools can be used:
1) NS-3
2) OMNEST
3) OpenMobstar
4) simu5G
5) Matlab
  • asked a question related to Computing
Question
1 answer
I have multiple measurements related to the object of interest (stationary). I want to identify the type of object.
I am computing a convex sum to strengthen my estimate of the type of object:
sum(i) = weight*sum(i-1) + (1 - weight)*measurement(i)
I want to know how to assign the weights to the multiple measurements such that some convergence is observed.
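The recursion in the question is an exponential moving average, and a quick sketch shows that any fixed weight in (0, 1) makes the running sum converge toward the measurements' level; a smaller weight trusts new measurements more (faster but noisier convergence). The values below are illustrative.

```python
# sum(i) = weight*sum(i-1) + (1-weight)*measurement(i): an
# exponential moving average of the measurement stream.

def fuse(measurements, weight, s0=0.0):
    s = s0
    for m in measurements:
        s = weight * s + (1 - weight) * m
    return s

# Repeated measurements of the same stationary object (true value 5.0):
# the fused estimate converges to 5.0 for any weight in (0, 1).
est = fuse([5.0] * 100, weight=0.8)
```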
  • asked a question related to Computing
Question
4 answers
Program execution time depends on the number of instructions as well as on the computing power of the machine. Does anyone have a recommendation where to find an analytical model for estimating program execution time from the program instructions and the CPU, RAM, and disk characteristics?
For example, if we know the number of instructions, CPI (cycles per instruction), as well as hardware specifications of CPU, RAM, DISK, how to calculate (estimate) program execution time?
Relevant answer
Answer
Prof. Alem Čolaković: I think that our colleague Prof. Ashraf Suyyagh has given a good answer to your valuable question.
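As a first-order sketch of the classic analytical model mentioned in the question, CPU time is instructions × CPI / clock rate, optionally extended with memory-stall and disk terms. The workload numbers below are illustrative assumptions, not benchmarks of any real program.

```python
# T = instructions * CPI / f, plus simple memory-stall and disk terms.

def exec_time(instructions, cpi, clock_hz,
              mem_accesses=0, miss_rate=0.0, miss_penalty_s=0.0,
              disk_ops=0, disk_latency_s=0.0):
    cpu = instructions * cpi / clock_hz              # core execution time
    mem = mem_accesses * miss_rate * miss_penalty_s  # cache-miss stalls
    disk = disk_ops * disk_latency_s                 # I/O wait
    return cpu + mem + disk

t = exec_time(instructions=2e9, cpi=1.5, clock_hz=3e9,
              mem_accesses=5e8, miss_rate=0.02, miss_penalty_s=60e-9,
              disk_ops=100, disk_latency_s=5e-3)
# t = 1.0 s CPU + 0.6 s memory stalls + 0.5 s disk = 2.1 s
```

Real execution times deviate from this first-order model because of pipelining, caching effects, and OS scheduling, which is why such models are usually calibrated against measurements.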
  • asked a question related to Computing
Question
1 answer
I have a multi-objective minimization problem and the final objective function is written as: F = f1 + f2. However, f1 and f2 are the error functions of two different quantities: orientation error (degrees) and position error (meters). I would like to minimize both of them at the same time.
The question is: how can I properly scale, or make dimensionless, f1 and f2 before computing F in order to make them comparable (so as not to add apples and oranges)?
The main difficulty is that I do not know a-priori their range of variation (i.e. orientation and position errors). On the contrary, the ranges of variation of the "original" quantities (i.e. orientation and position) are known from the experimental data.
How is this problem commonly solved in the optimization practice?
Thank you,
Marco
Relevant answer
I would like to think about the solution with you; what I say here is just a proposal that I hope will be beneficial to you.
You have to look at the independent parameters of the two error functions.
If the independent parameters are different for each of them, then you can optimize each error independently of the other.
If the independent and dependent parameters are mixed, then you can try the minimization process over the independent parameters of each one separately until you get an acceptable error.
If you cannot reach an acceptable error, then you can study the minimization of the two together by adjusting the common parameters.
This may be one possible scheme for solving the problem.
I think there are mathematical tools for multi objective minimization or maximization. Please follow the link:
Best wishes
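One common approach to the scaling question above is to normalize each objective by a running estimate of its observed range before summing, so that degrees and meters become comparable. This is a hedged sketch; the class, weight, and sample values are illustrative assumptions, not a standard library API.

```python
# Map each objective into [0, 1] using the range observed so far,
# then form a dimensionless weighted sum F.

class RangeNormalizer:
    """Tracks the min/max seen so far and maps values into [0, 1]."""
    def __init__(self):
        self.lo = float('inf')
        self.hi = float('-inf')

    def __call__(self, v):
        self.lo = min(self.lo, v)
        self.hi = max(self.hi, v)
        span = self.hi - self.lo
        return 0.0 if span == 0 else (v - self.lo) / span

n1, n2 = RangeNormalizer(), RangeNormalizer()

def F(f1_deg, f2_m, w=0.5):
    """Dimensionless scalarization of an angle error and a length error."""
    return w * n1(f1_deg) + (1 - w) * n2(f2_m)
```

Because the ranges are not known a priori, the normalizer updates them online; alternatives include z-score scaling estimated from a pilot run, or treating the problem as truly multi-objective (e.g. Pareto-based methods) instead of scalarizing.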
  • asked a question related to Computing
Question
3 answers
I read this paper (DOI: 10.1021/jacs.8b01543) and I have the following questions.
1- The structure examined in the article is two-dimensional. Given that the k-points are set to 4×4×1 but the unit cell is shown as a bulk, has a three-dimensional unit cell been used in the computational section?
2- Considering that a two-dimensional structure can be created by cutting a unit cell, adding a vacuum layer, and repeating it, why is a two-dimensional structure not used in the calculation section?
Relevant answer
Answer
Thank you, Professor Dalibor Matýsek
I realized that these structures are actually quasi-two-dimensional.
  • asked a question related to Computing
Question
1 answer
Dear colleagues,
I need to compute the charge transfer integral J(RP), the spatial overlap S(RP), and the site energies of the dimer H(RR) and H(PP) (two identical molecules, R and P, specifically oriented), formulated as shown in the picture, from J. Phys. Chem. B, 2009, v. 113, p. 8813.
Could you specify the keywords of the Gaussian 09 to do this?
Thanks in advance,
Andrey Khroshutin.
Relevant answer
Answer
Dear Andrey, please look at the ADF software, but note that it is not free.
Another way is to use the tool proposed by Joshua Brown: https://github.com/JoshuaSBrown/QC_Tools.
Good luck
Kind regards
  • asked a question related to Computing
Question
3 answers
Accounting traditionally is presented as describing efficiently flux (what comes in, and goes out) and stock (what is held, at a given time), and as debit and credit. It is also about matching the terms of an exchange.
How can we move the model beyond the basic number-based description into more data-rich frameworks (including metadata, descriptors, etc.), while benefiting from the deep and long experience of accounting over human history?
With matrices of sets [1], a first endeavour was made to describe objects rather than the numbers attached to them (price, quantity, measurements, and features).
With Matrices of L-sets [2] we are going one step further, distinguishing actual assets (as classical sets) and wish lists, orders, needs, requirements which are not yet owned or available. We show how an operational and computable framework can be obtained with such objects.
References:
Relevant answer
Answer
Hi! Some people say that blockchain will change the traditional accounting model. There is more information in my latest published paper; please see attached.
  • asked a question related to Computing
Question
8 answers
In recent years, machine learning has been used in 2D MPM owing to its excellent computing performance in processing multi-source and non-linear geoscience datasets. Nowadays, machine learning is rarely reported in 3D MPM. What limits the application of machine learning in 2D/3D mineral potential mapping?
Relevant answer
Answer
Dear all,
I am working on an open-source machine learning solution (https://github.com/italo-goncalves/geoML) for spatial modeling in general, including 2D, 3D, implicit modeling, and compositional data. I hope you find it useful.
  • asked a question related to Computing
Question
3 answers
A repository aggregating reproducible, reconfigurable code and notebooks for testing task placement policies in edge and fog server networks.
The simulation models are built on two of the most powerful Python-based simulation modelling frameworks, salabim and simpy.
The systems modelled cover the basic types of task placement problems on edge computing servers, and the models are useful for managing a network of edge servers.
There are animation and GUI options, which are indispensable in simulation modelling.
The uploaded models are templates for building simulation models for a variety of edge network policies. Researchers working on task and server placement problems in edge computing should find them useful.
Relevant answer
Answer
I have just uploaded a model, load_balancing_with_mobllity_aware_task_placement.py. It models an edge network of three edge servers with mobile and stationary user equipment. The task placement policy assigns computing requests to (a) the nearest edge server and (b) the least utilized server. This implements a policy that reduces network bandwidth usage while ensuring load balancing within the edge network.
The model with other examples is on GitHub
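The policy described above can be sketched in a few lines. This is an illustrative toy, not the code from the GitHub repository; all names and the tie-breaking tolerance are my own assumptions.

```python
# Toy sketch of the placement policy described above: send each request
# to the nearest edge server, breaking near-ties by picking the least
# utilized candidate. All names here are illustrative.
import math

def place_task(task_pos, servers, tolerance=1.0):
    """servers: dicts with 'name', 'pos' (x, y) and 'load' (task count)."""
    def dist(s):
        return math.hypot(task_pos[0] - s['pos'][0], task_pos[1] - s['pos'][1])
    nearest = min(dist(s) for s in servers)
    # Servers roughly as close as the nearest one compete on load.
    candidates = [s for s in servers if dist(s) <= nearest + tolerance]
    chosen = min(candidates, key=lambda s: s['load'])
    chosen['load'] += 1
    return chosen

servers = [
    {'name': 'edge-1', 'pos': (0, 0), 'load': 5},
    {'name': 'edge-2', 'pos': (0, 1), 'load': 1},
    {'name': 'edge-3', 'pos': (9, 9), 'load': 0},
]
best = place_task((0, 0.5), servers)['name']
print(best)  # -> edge-2 (as close as edge-1, but far less loaded)
```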
  • asked a question related to Computing
Question
5 answers
I am computing a Taylor series expansion in which the distance of an object from the origin (0,0) is computed. I express the distance as a sum involving the object's position and velocity*time, keeping only the first two terms of the Taylor series expansion.
Please check whether the expansion in the attached document is correct.
Relevant answer
Answer
The two-term Taylor expansion in equation (2) is not clear.
Would you clearly specify the distance function you have considered in the expansion, and clearly state the dependent and independent variables?
In fact, it is not a big deal; many calculus books cover Taylor expansions of functions of one, two, three... variables. But what is the Taylor series expansion for here?
Thanks!
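If the intended distance function is d(t) = ||p + v·t|| (my assumption, since the attached document is not reproduced here), the two-term expansion around t = 0 is d(0) + (p·v / ||p||)·t, which is easy to sanity-check numerically:

```python
# Numerical check of a two-term Taylor expansion of the distance of a
# moving object from the origin, assuming d(t) = ||p + v*t||.
# Around t = 0: d(t) ~ ||p|| + (p . v / ||p||) * t.
import math

def distance(p, v, t):
    return math.hypot(p[0] + v[0] * t, p[1] + v[1] * t)

def taylor2(p, v, t):
    d0 = math.hypot(p[0], p[1])
    dot = p[0] * v[0] + p[1] * v[1]
    return d0 + (dot / d0) * t

p, v = (3.0, 4.0), (1.0, -0.5)
for t in (0.01, 0.1):
    print(t, distance(p, v, t), taylor2(p, v, t))
```

For small t the two values agree closely; the discrepancy grows like t², which is exactly the truncated second-order term.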
  • asked a question related to Computing
Question
2 answers
I am facing a problem with the computational cost per time step during a sequentially coupled thermomechanical analysis in Abaqus/Standard.
I have nodal temperatures for 2340 s from the heat transfer analysis, applied as a predefined field in step 2.
For step 1, the mechanical time period is 1 s.
For step 2, the predefined-field time period is 2340 s, and the software did not complete the job even after 3 days because the second time period is so long.
If there is another method, please suggest it.
Relevant answer
Answer
Interesting
  • asked a question related to Computing
Question
3 answers
Actually, I need high computing power to run my simulation. As suggested by ANSYS, I contacted them and emailed about the free trial advertised on their website, but I haven't received any reply. How can I get an ANSYS Cloud free trial? Is anybody here on the ANSYS Cloud free trial?
Relevant answer
Answer
@Muhammad Shariq Khan, did you get the free access?
If yes, how long did they take to verify the account?
  • asked a question related to Computing
Question
3 answers
I basically haven't found any relevant research, but I wonder: for data from surgery or implantable robots, is there a requirement for real-time inference/computing? I haven't found any evidence to support my thoughts.
Relevant answer
Answer
Not sure exactly what you're looking for, but remote patient monitoring, increasingly common, uses not only wall-mounted/mobile-cart cameras and mobile devices but also advanced algorithms to detect events of concern, along with Bluetooth-aware devices for real-time physical assessment and data collection from devices ranging from smart stethoscopes to "smart beds." Telestroke programs use point-of-care cameras so that the remote clinician can clearly visualize pupillary reflex changes, for example. This real-time data collection can take place with a tablet or phone as well.
  • asked a question related to Computing
Question
1 answer
Hi everyone,
I have a question about calculating the RDF with LAMMPS. I want to know what exactly LAMMPS does when computing g(r); in particular, over which atoms LAMMPS takes the RDF. Are we able to identify them in our sample?
Relevant answer
Answer
Hi,
It is often easier to calculate the RDF with the VMD software rather than LAMMPS; VMD has an explicit GUI with controllable parameters.
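Conceptually, an RDF computation histograms all pairwise distances in the chosen group and normalizes each shell by the count an ideal gas of the same density would give. LAMMPS's `compute rdf` averages over all atoms of the selected types (and handles periodic images, which this toy sketch ignores); it is not taken from one reference atom. A minimal illustration:

```python
# Toy g(r): histogram pairwise distances, then divide each bin by the
# ideal-gas expectation (shell volume * density). No periodic boundary
# handling; for illustration only.
import math

def rdf(positions, box, r_max, nbins):
    n = len(positions)
    rho = n / box ** 3
    dr = r_max / nbins
    hist = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(positions[i], positions[j])
            if d < r_max:
                hist[int(d / dr)] += 2  # each pair counted once per atom
    shell = lambda k: 4 * math.pi * ((k + 1) ** 3 - k ** 3) * dr ** 3 / 3
    return [h / (n * rho * shell(k)) for k, h in enumerate(hist)]

# Two atoms 1.0 apart: only the bin containing r = 1.0 is populated.
g = rdf([(0, 0, 0), (1, 0, 0)], box=10, r_max=2, nbins=4)
print([round(v, 3) for v in g])
```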
  • asked a question related to Computing
Question
4 answers
I have a panel data set covering 10 years, and I am facing a problem computing the Blau index for my dummy variable CEO duality, coded "1" if the same person is both the CEO and the chairman of the board and "0" otherwise.
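For reference, Blau's index of heterogeneity is 1 − Σ pᵢ², computed from the category proportions within a group. It is normally computed across the members of a group (e.g. a board, or firms within an industry-year — the appropriate grouping here is an assumption, since the question does not specify it); for a binary variable it peaks at 0.5 when the split is 50/50. A minimal sketch:

```python
# Blau's index, 1 - sum(p_i^2), from a list of category codes.
from collections import Counter

def blau(values):
    n = len(values)
    counts = Counter(values)
    return 1 - sum((c / n) ** 2 for c in counts.values())

print(blau([1, 1, 0, 0]))  # -> 0.5 (maximum diversity for two categories)
print(blau([1, 1, 1, 1]))  # -> 0.0 (no diversity)
```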
  • asked a question related to Computing
Question
2 answers
Hi, I have questions about an HLM analysis.
I got results saying "Iterations stopped due to small change in likelihood function".
First of all, is this an error? I thought it needed to keep iterating until it converges. How can I make it keep computing without this message? (The likelihood type was restricted maximum likelihood, so I tried full maximum likelihood, but I got the same message.) Can I fix this by setting a higher "% change to stop iterating" in the iteration control settings?
Relevant answer
Answer
In general, convergence is when there is no meaningful change in the likelihood!
I cannot tell whether this is a warning or whether estimation was successful.
You may want to have a look at this:
This is the HLM 8 manual; I suggest you search it for the word "convergence".
  • asked a question related to Computing
Question
8 answers
Hi, I am a software engineering student currently in my final year of the degree program. I am trying to find a research topic based on Edge computing. Your recommendations are welcome!
Relevant answer
Answer
Hi, edge computing has become a research hotspot in recent years for providing fast, convenient services at the edge of networks. To start, I think it would be best to read some surveys and summaries of the topic; you can find them in IEEE Communications Surveys & Tutorials, IEEE Network, etc.
Hope that helps!
  • asked a question related to Computing
Question
8 answers
Which simulator is more helpful in doing research in Edge Computing
Relevant answer
Answer
Authentication and key exchange
  • asked a question related to Computing
Question
6 answers
Can someone guide me on "Investigation of Security Enhancement Techniques in Edge Computing using Deep Learning Models."
Any base paper which is related to this topic is helpful
  • asked a question related to Computing
Question
15 answers
During Computing undergraduate studies, several different software projects must be submitted. In academic institutes, software development needs to follow a prescriptive process model. In group projects the stages of work can be shared among the members, but in an individual project no work can be shared, so the practical problems must be solved by managing the process efficiently as a single member.
Hence, for individual projects even more than group projects, the selection of the prescriptive process model is important.
When selecting a prescriptive process model, what should an undergraduate consider?
Disclaimer
The discussion targets the perspectives of two types of participants.
  • Software Engineering Student: please analyse the situation with your knowledge and answer.
  • Software Engineering professional / scientists: please use your experience and insert advice.
The ideas of both groups, as well as any interested audience, are warmly welcome.
Relevant answer
Answer
When selecting a prescriptive process model, there are some factors to be considered: scope of validity, impact of the process, degree of confidence, and tailorability of the process.
Consider a flash flood prediction system as an example:
Scope of validity: a well-defined scope helps set clear goals, reducing cycle time and improving code quality. For this system, the goal would be to deliver accurate prediction reports on time.
Impact of the process: this reflects the weight that should be given to each context of the project. If the impact is not known, the prescriptive process will lead to unpredictable results, so this is a very important part.
Degree of confidence: the effectiveness of the application when deployed in different areas where floods can occur.
Tailorability: the ability to adapt the process described by the prescriptive model to the specific goals of the project when requirements change. In this system, adding new databases, data sets or new functions would be examples.
  • asked a question related to Computing
Question
1 answer
I'm looking to survey faculty on their research computing needs to advocate for
researchers, and am looking for good questions, or a good example.
Relevant answer
Answer
James J Coyle, I did my postdoc in a research institute with HPC facilities (HZDR in Germany), and from my experience you can ask:
1) Do people need to process/store data locally, or is the cloud better for them?
2) Do researchers use GPU or CPU?
3) What software do they use (R, Python, MATLAB, etc.)?
4) What OS do they use/prefer?
I love to run models on GPU, but I personally found it cumbersome at times, as extra work is needed to prepare a model for running on GPU (and sometimes the difference in computation time is negligible). A good CPU with plenty of RAM is more than enough for most researchers. Also, HPC cloud computing, despite being more powerful, can be troublesome, as extra work is needed to connect to it and get used to it. Finally, I am a Windows user (not because I like it), but everyone loves Linux, so we had problems running some Python libraries made for Linux in a Windows environment, and vice versa; hence the OS can be important.
  • asked a question related to Computing
Question
4 answers
In much of the existing literature on American-style options, once certain constraints are introduced, the early exercise feature and the Greeks are either not computed accurately or are unavailable.
I am seeking insight into numerical, analytical, and/or analytical-approximation techniques for computing the early exercise feature in high-dimensional American option pricing problems.
Relevant answer
Answer
Thank you, J. Rafiee, Lilian Mboya and Paul A Agbodza for the great suggestion and insight. I will go through all your suggestions accordingly.
  • asked a question related to Computing
Question
83 answers
User mode and kernel mode are the two processing states of an operating system. Please suggest a very simple example that explains the difference, along with related functionality such as system calls and interrupts, to a novice learner.
Further, explain how to map the example onto the subject matter.
Disclaimer
The discussion targets the perspectives of two types of participants.
  • Operating System subject's Student: please analyse the functionalities and construct an answer.
  • Computing professionals/teachers: please use your experience and insert answers as advice.
The ideas of both groups, as well as any interested audience, are warmly welcome.
Relevant answer
Answer
As an example, consider depositing money into a bank account. The depositor is user mode, with the goal of depositing money. In user mode the depositor can't execute the process himself; he or she has to make a system call to kernel mode, which acts as the bank officer. Handing over the money and the account details is the system call. Kernel mode can access the hardware directly, execute the process, and return the outcome to the user: the officer does the processing to complete the depositor's request and tells him or her that it is done, which is the system call returning. There is no user process without kernel mode; kernel mode does the processing.
The depositor - user mode
Depositing money - the system call
The bank officer - kernel mode
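The analogy maps directly onto real system calls. In the sketch below (Python on a POSIX system), each `os.*` call traps from user mode into kernel mode, which performs the privileged work and returns the result:

```python
# User code cannot move bytes between file descriptors by itself; each
# os.* call below is a system call that switches to kernel mode (the
# "bank officer"), does the work, and returns to user mode.
import os

fd_read, fd_write = os.pipe()          # ask the kernel for a pipe
n = os.write(fd_write, b"deposit")     # user mode -> kernel mode -> back
data = os.read(fd_read, n)             # another trip into the kernel
os.close(fd_read)
os.close(fd_write)
print(n, data)  # -> 7 b'deposit'
```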
  • asked a question related to Computing
Question
6 answers
Hi all, I'm using CPLEX to solve the VRPTW (vehicle routing problem with time windows) and observe a huge difference in computing time even when I change the problem size by just one. By "problem size", I mean the number of nodes in the problem.
For example, a 20-node instance took only 20 seconds to solve, while a 19-node instance took more than an hour. I understand that the VRPTW is NP-hard, so such behaviour is to be expected.
Still, the gap seems too big. Is there any technique to make computing time more consistent with problem size?
Relevant answer
Answer
Ondřej Benedikt Michael Patriksson Alexandre Frias Faria Adam N. Letchford Thanks all for your valuable insights. I have experimented more with my instances and found that it is mostly related to the structure of my model formulation.
My model must determine both the vehicle routes and the customers' time windows, given a set of scenarios each of which contains parameters such as travel times. So it is rather harder than the classic VRPTW, where the time windows are known.
Only some specific problem sizes are hard. If I simply ignore these "bad instances", the solving time does indeed increase as the problem size grows.
  • asked a question related to Computing
Question
3 answers
Hey, I want to develop an energy-efficient SDN and integrate it with edge-computing-based IoT applications. Which emulator/simulator would you recommend for integrating SDN, edge computing and IoT devices? Please suggest the best open-source emulator/simulator.
Relevant answer
Answer
I recommend EXata Network Simulator/Emulator. Its new versions have models that support SDN like EXata 5G and EXata Cyber. For sure, it is not open source as you requested, because it is a powerful commercial tool. However, sometimes they provide trial licenses for students and discounts for academic licenses. You may refer to the simulator website for more details:
  • asked a question related to Computing
Question
3 answers
I am working with data to establish norms for a 7-point Likert scale. I was advised to use z-norms, but when computing standard scores in SPSS I got values of +2.15 and -1.96 (instead of ±3), and checking the data showed that it does not follow a normal distribution. In this case, how do I set the norms for interpretation?
Relevant answer
Answer
More can be found by looking at David L Morgan's work in the attached screenshot. Only Morgan can really be trusted here. Good luck, David Booth
  • asked a question related to Computing
Question
3 answers
[1] In this experiment, we use a very large virtual world, with a dimension of 30*30 units, total number of avatars is equal to 25000, and the number of servers, 16. The radius of the Area of interest of each avatar is equal to 0.5.
[2] To generate a simulation environment, we randomly position 100 cloudlets and 500 players on a 1,000 meter square grid. Players are partitioned into 20 regions
[1] An efficient partitioning algorithm for distributed virtual environment systems | IEEE Journals & Magazine | IEEE Xplore
[2] Delay-Sensitive Multiplayer Augmented Reality Game Planning in Mobile Edge Computing | Proceedings of the 21st ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems
Relevant answer
A good job there
  • asked a question related to Computing
Question
6 answers
Which of the following departments should a Faculty of Computing have? Or suggest any others you like:
Computer Science
Information Systems / Information Technology
Software Engineering
Computer Engineering
Data Science
Cyber Security
Bioinformatics
Robotics
Relevant answer
Answer
I am not sure what your question is. I think you want to subdivide your faculty into a number of departments, and your question is how that should be done. How many staff are there? My experience is that you should burn the candle from both ends. What kinds of expertise do you have today? What are your strong positions? The other side is: where do you want to be in, say, the next 5 to 10 years? It should fit the strategic plans of the faculty, which depend on funding opportunities, relationships with industry, and so on. I hope this
helps you somehow.
  • asked a question related to Computing
Question
2 answers
As we know, machine learning and data mining techniques under a big-data framework have been booming in the fields of intelligently controlled robots, unmanned vehicles, 5G communications and AI chips, thanks to rapid advances in computing capability over the past few decades.
Currently, as far as I can see, quantum computing and cloud computing techniques are also possible ways to advance industrial processes further.
If we humans manage huge breakthroughs in these two technologies and obtain theoretically unlimited computing speed and resources, does that mean all industrial processes could be monitored and operated at a supervised level in real time (more intelligent, safer, more reliable and available)?
The problems of "improving algorithms to cope with computing burdens" and "convenience of online deployment" would then probably no longer be the most important issues for our control strategies, fault diagnosis and fault-tolerance work.
What are your ideas on this?
I sincerely hope those who are interested in this topic will give their valuable comments!
Relevant answer
Answer
I think it will be an extension of the digital revolution, which began in the 1940s but first took off with the Intel microprocessor (1971). Quantum computing will be the first significant paradigm change, but I think the entire miniaturization process will be most important; the inspiration comes from the 1959 lecture "There's Plenty of Room at the Bottom" by Richard Feynman, which showed the importance of the forthcoming digital revolution and of nanotechnology. My expectation is that a new paradigm may appear as we go to the atomic level (below one nanometer). I expect quantum computing to fit into this story in the next 10 to 20 years.
  • asked a question related to Computing
Question
19 answers
I have derived a formula for computing a special Hessenberg determinant; see the picture uploaded here. My question is: can this formula be simplified more concisely, more meaningfully, and more significantly?
Relevant answer
Answer
Till now, I do not get the book
J. M. Hoene-Wro\'nski, \emph{Introduction \`a la Philosophie des Math\'ematiques: Et Technie de l'Algorithmie}, Paris, 1811.
  • asked a question related to Computing
Question
4 answers
Three mediators are included, and the product-of-coefficients approach is used to compute the indirect effect through each mediator. After computing the indirect effects, a bootstrapping test will be conducted to calculate the standard error of each indirect effect and test its significance.
How can I apply this last step, i.e. use the bootstrapping test to calculate the standard error of each indirect effect and test its significance?
Relevant answer
Answer
Refer to the following research paper:
  • asked a question related to Computing
Question
4 answers
After computing the value of the indirect effect, a bootstrapping test will be conducted to calculate the standard error of each indirect effect and test its significance.
I need to know how I can do that.
Apologies for my English!
Relevant answer
Answer
By the way, that may mean the software continues to the bootstrapping automatically. Read the manual. As Kansas sang: "Carry on my Wayward Son". David Booth
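For what it's worth, the mechanics of a percentile bootstrap for one indirect effect a·b can be sketched as below. This is a simplified illustration on synthetic data: it uses simple regression slopes for brevity, whereas a proper mediation model would estimate b from a regression of Y on M controlling for X, and tools such as PROCESS automate all of this.

```python
# Bootstrap SE and 95% CI for an indirect effect a*b: resample cases
# with replacement, re-estimate both slopes, summarize the a*b products.
import random
import statistics

def slope(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def boot_indirect(x, m, y, reps=2000, seed=1):
    random.seed(seed)
    n, effects = len(x), []
    for _ in range(reps):
        idx = [random.randrange(n) for _ in range(n)]
        xb = [x[i] for i in idx]
        mb = [m[i] for i in idx]
        yb = [y[i] for i in idx]
        effects.append(slope(xb, mb) * slope(mb, yb))
    effects.sort()
    se = statistics.stdev(effects)
    ci = (effects[int(0.025 * reps)], effects[int(0.975 * reps) - 1])
    return se, ci

random.seed(0)
x = [float(i) for i in range(40)]
m = [2 * xi + random.gauss(0, 1) for xi in x]   # path a, roughly 2
y = [3 * mi + random.gauss(0, 2) for mi in m]   # path b, roughly 3
se, ci = boot_indirect(x, m, y)
print(se, ci)  # the CI stays well above 0, so a*b is significant here
```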
  • asked a question related to Computing
Question
5 answers
I need to know the computing requirements for SAR time series analysis, and how many SAR rasters (images) should be analyzed.
Relevant answer
Answer
Well, it depends on how you are doing the SAR data processing.
If you are using software like SNAP, you need a powerful computer.
As of now, Google Earth Engine is a suitable alternative if you have programming skills.
The number of images required depends entirely on what you are doing with the SAR data.
  • asked a question related to Computing
Question
5 answers
I'm computing climate indices for precipitation and maximum and minimum temperatures using the RClimDex program. However, in the case of precipitation, the application displays the following error:
"Error in daynormm[(i - i1):(i + i1),]: subscript out of bounds"
What is the best way to fix this error?
I look forward to hearing your responses soon.
Thank you in advance for your assistance.
Relevant answer
Answer
This is a common error; for a discussion of the possible reason(s), please consider the following MATLAB page:
Please verify the size of your array daynormm and the min/max values of (i - i1) and (i + i1).
The best way to fix the error is to clarify why the program is selecting out-of-bounds indices.
  • asked a question related to Computing
Question
3 answers
Let's say we have an undirected graph with only weighted nodes/vertices (representing an attribute/measure) and unweighted edges (where all nodes are fully connected).
Are there any theorems for representing and computing the shortest path that traverses at least 2 nodes?
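One common reduction (an assumption about what is wanted, since the question is terse): fold each node's weight into the cost of entering it, i.e. give every edge (u, v) the weight w(v) and add the start node's own weight up front; then any shortest-path algorithm applies. With non-negative weights and free edges on a complete graph, the cheapest path through at least two nodes is simply the two lightest nodes, which Dijkstra on the transformed graph confirms:

```python
# Node-weighted shortest path via the "pay to enter a node" transform,
# then Dijkstra. On a complete graph with free edges this reduces to
# picking the two lightest nodes (for non-negative weights).
import heapq

def dijkstra_node_weights(weights, start):
    """Complete graph; cost to enter node v is weights[v]."""
    n = len(weights)
    dist = [float('inf')] * n
    dist[start] = weights[start]
    pq = [(dist[start], start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v in range(n):
            if v != u and d + weights[v] < dist[v]:
                dist[v] = d + weights[v]
                heapq.heappush(pq, (dist[v], v))
    return dist

w = [7, 3, 9, 1, 4]
best_pair = sum(sorted(w)[:2])                     # two lightest nodes
best_dijkstra = min(min(dijkstra_node_weights(w, s)[v]
                        for v in range(len(w)) if v != s)
                    for s in range(len(w)))
print(best_pair, best_dijkstra)  # -> 4 4
```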
  • asked a question related to Computing
Question
4 answers
When using the MOE software for computing ligand interactions, which file format is best suited for saving/exporting the interaction figure: PNG, JPG, BMP, EMF or SVG?
I want to save in a format accepted by academic research papers. Please advise.
Relevant answer
Answer
Different publishers, different guidelines. I would save the pictures in PNG and SVG formats.
  • asked a question related to Computing
Question
3 answers
Dear colleagues,
The idea that a scientific or technical field can be identified by its language was a relevant topic in the work of Jürgen Habermas. In computing fields, however, this difference may be mild or fuzzy.
I have designed a small instrument to measure the domain difference between computer science and software engineering. I would appreciate it if you answered it ( http://shorturl.at/akFS7 ) or left me your opinion about it. Depending on the number of answers, I will post the results here.
Thank you very much
Relevant answer
Answer
Filled the survey. I come from an Information technology undergraduate background.
  • asked a question related to Computing
Question
4 answers
Homomorphic encryption schemes allow users’ data to be protected anytime it is sent to the cloud while keeping some of the useful properties of cloud services like searching for strings within a file.
Relevant answer
Answer
Although at a preliminary stage, homomorphic encryption over the cloud means that data can be manipulated in its ciphertext form in the cloud, which saves bandwidth for manipulation/editing purposes. At present, data in numeric form can be operated on (added/subtracted), but complex alphanumeric data cannot.
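A tiny, self-contained illustration of a homomorphic property: unpadded ("textbook") RSA is multiplicatively homomorphic, i.e. Enc(m1)·Enc(m2) mod n = Enc(m1·m2). Real schemes (Paillier, BGV/CKKS) extend this idea to addition or to both operations; textbook RSA itself is insecure and the toy key below is for demonstration only.

```python
# Textbook RSA multiplicative homomorphism with a toy key: multiply two
# ciphertexts and decrypt the product without ever decrypting the
# individual operands. Demonstration only, NOT secure.
p, q, e = 61, 53, 17
n = p * q                               # 3233
d = pow(e, -1, (p - 1) * (q - 1))       # private exponent (Python 3.8+)

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

c = (enc(7) * enc(6)) % n               # computation on ciphertexts only
print(dec(c))  # -> 42
```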
  • asked a question related to Computing
Question
10 answers
Given a computerized graph structure for synthesizing and displaying data on a region's ecosystem-economic system, how could we create a causal network of the synergistic impact mechanisms among certain climate-related factors? And how can we tackle the various synergistic effects within a thematic framework, for instance by applying Mathematica-based graph modelling?
Relevant answer
Answer
We may abate GHG emissions by applying abatement methods (like scrubbers) to control GHG emissions or by reducing the carbon content of the fuels used or using instead of fossil fuels various alternatives energy sources like renewables.
  • asked a question related to Computing
Question
2 answers
I am looking to establish theoretical frameworks for such research on one to one computing projects.
  • asked a question related to Computing
Question
2 answers
For computational methods in FEA/FVM and Partial DIff Eqn's using C++.
Also, some guidance with the learning path (how to get along) will be appreciated.
Relevant answer
Answer
Mr. Shibajyoti,
I suggest you read the following textbooks on the finite element method:
1. 'A First Course in the Finite Element Method' (2012), by Daryl L. Logan, published by Cengage Learning, 200 First Stamford Place, Suite 400, Stamford, CT 06902, USA.
2. 'Concepts and Applications of Finite Element Analysis' (2002), by Robert D. Cook et al., published by John Wiley & Sons Inc.
All the best.
  • asked a question related to Computing
Question
9 answers
While computing the numerical solution of a PDE, if the exact solution is not known we use a double-mesh technique to stand in for the exact solution.
Are there other options for representing the exact computed solution?
Relevant answer
Answer
If your numerical method is, say, of second order accuracy, then the discretisation errors will be quartered when the intervals are halved. So if you have a sequence of meshes of that type, such as 50x50, 100x100, 200x200 and so on, and if one monitors, say, the middle value in the centre of the domain, then the absolute error will soon become apparent. I'll illustrate in the following table.
h        x_centre   R.E.       error
0.2      1.814054              0.041600
0.1      1.782554   1.772054   0.010100
0.05     1.774960   1.772429   0.002506
0.025    1.773079   1.772452   0.000625
0.0125   1.772610   1.772454   0.000154
h is the steplength, x_centre is the value we are monitoring, R.E. is a Richardson extrapolate, (4 × more accurate value − less accurate value)/3, which should now be 4th-order accurate, and error is the error in the x_centre data using 1.772454 as the proxy for the exact solution.
So this tells us that a step of h=0.025 has a relative error 3.5 x 10^{-4}, and therefore may now decide if that is accurate enough for our purposes. The point here is that RE values have converged to just about the 7th significant figure and quite obviously so, which is why we were able to decide that 1.772454 is almost exactly correct.
Some methods (such as those which use upwinding) are somewhere between 1st and 2nd order accurate and therefore this RE approach can't work. But something like Poisson's equation will work very well.
In my computations, I spend a lot of time using ever finer grids for a case close to an extreme configuration (e.g. the largest Rayleigh number that I intend to use), and I try to iterate to machine accuracy so that I have the best possible solution of the discretised equations; any remaining errors are then discretisation errors, not errors caused by declaring convergence too early. That serves as my baseline case for deciding the number of intervals AND the convergence criterion. Generally, residuals are the best way to go because they do not rely on the speed of convergence; slow convergence can cause a premature declaration of convergence when a criterion based on the relative change between iterations is used. Yes, this is very time-consuming, distressingly so, but it is essential to know everything about the accuracy of one's work. The statement "our work compares well with..." is very poor. The statement "our work is correct to four significant figures" tells the readership that you know what you are doing and have taken care.
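The Richardson extrapolate described above is simple to mechanize. For a method of order p with the mesh refined by a factor r, it is (r^p · fine − coarse)/(r^p − 1); the sketch below reproduces the first R.E. entry from the table:

```python
# Richardson extrapolation: cancel the leading error term of a p-th
# order method, given results on a coarse mesh and a mesh refined by
# the given ratio.
def richardson(coarse, fine, order=2, ratio=2):
    f = ratio ** order
    return (f * fine - coarse) / (f - 1)

# Table above, h = 0.2 -> 0.1: (4 * 1.782554 - 1.814054) / 3
print(richardson(1.814054, 1.782554))  # -> 1.772054
```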
  • asked a question related to Computing
Question
3 answers
Good morning everyone,
I’ve got a question about computing a new variable in SPSS, based on the highest value of multiple variables for each case. Please see the example below, where I indicate the value of the new variable for each case.
Case 1
Score A: 3
Score B: 2
Score C: 5
Score D: 4
Score E: 2
Score F: 3
New var: Score C
Case 2
Score A: 5
Score B: 2
Score C: 3
Score D: 4
Score E: 2
Score F: 3
New var: Score A
Case 3
Score A: 3
Score B: 2
Score C: 2
Score D: 5
Score E: 2
Score F: 3
New var: Score D
The question is: how do I compute this new variable?
I know that the actual values of this new variable will range from 1-4, and by labelling I can display those as Score A - Score D.
But the step of finding the highest value and then recording which variable it came from is unclear to me.
Any help is much appreciated!
Best,
Michiel
Relevant answer
Answer
I suggest you review the menu below; you can make the changes you want from there.
Transform -> Recode into Different Variables -> "Old and New Values"
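Outside SPSS, the requested computation is just an argmax across the score variables. A plain-Python sketch using the three example cases from the question (in SPSS itself, one route is COMPUTE with the MAX() function followed by IF statements, but that is a separate recipe):

```python
# For each case, find which score variable holds the maximum value.
cases = [
    {'Score A': 3, 'Score B': 2, 'Score C': 5, 'Score D': 4, 'Score E': 2, 'Score F': 3},
    {'Score A': 5, 'Score B': 2, 'Score C': 3, 'Score D': 4, 'Score E': 2, 'Score F': 3},
    {'Score A': 3, 'Score B': 2, 'Score C': 2, 'Score D': 5, 'Score E': 2, 'Score F': 3},
]
new_var = [max(case, key=case.get) for case in cases]
print(new_var)  # -> ['Score C', 'Score A', 'Score D']
```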
  • asked a question related to Computing
Question
7 answers
I am interested in postdoctoral studies; how can I apply? Can someone recommend some universities? My research areas are complex networks, machine learning and transparent computing.
Relevant answer
Answer
check this website: https://www.findapostdoc.com
I hope it will help
  • asked a question related to Computing
Question
3 answers
Container technologies such as Docker and Kubernetes are widely used in industry for virtualization deployments. Edge computing paradigms such as MEC and fog intend to leverage such infrastructure for feasible deployments. My intention is to simulate and emulate the service migration process of such an edge computing scenario (migration from one edge computing node to another) to understand its security aspects. Thus, I have two questions:
- What is the best simulator I can use to implement a migration model (preferably an MDP)?
- For emulation, should I use existing tools like OpenShift, Kubernetes, etc., and do they have a migration function? Or do I have to implement such a platform from scratch using the Docker engine?
Relevant answer
Answer
Thank you for sharing this question
  • asked a question related to Computing
Question
9 answers
Results of single-case research designs (i.e., n-of-1 trials) are often evaluated by visually inspecting the time-series graph and computing quantitative indices. A question our research team is pondering is whether visual analysis or quantitative indices should be set as the criterion for determining effects from experiments. I'd appreciate thoughts from both sides of the argument.
Relevant answer
Answer
Clinically, I think visual analysis can be very important, though even there you're seeing a movement toward more objective, quantitative means of making decisions (e.g., Roane et al., 2013; DBI). Given that it is generally unreliable (even among experts), I think the days of visual analysis as a tool for assessing research are numbered. The WWC no longer considers it to be the gold standard for SCDs (2020a, 2020b). Attempts to make visual analysis more transparent, systematic, and reliable result in procedures that can usually be performed without looking at a graphic display (e.g., Lane & Gast, 2014). Some argue that visual analysis detects unique characteristics of the data (e.g., consistency, immediacy), but more recent work has quantified many of these features (e.g., Tanious et al., 2019). The within-subject replication rule provides an additional rationale for the visual analysis, but the legitimacy of replication rule (as currently conceived) is somewhat questionable (Lanovaz et al., 2019) and in any event is probably amenable to quantification. Sure, there will be questions on which formulae/cutoffs should be used, but these are all questions that exist regardless of method.
Hope this helps!
Lane, J. D., & Gast, D. L. (2014). Visual analysis in single case experimental design studies: Brief review and guidelines. Neuropsychological Rehabilitation, 24(3-4), 445-463.
Lanovaz, M. J., Turgeon, S., Cardinal, P., & Wheatley, T. L. (2019). Using single-case designs in practical settings: Is within-subject replication always necessary?. Perspectives on Behavior Science, 42(1), 153-162.
Roane, H. S., Fisher, W. W., Kelley, M. E., Mevers, J. L., & Bouxsein, K. J. (2013). Using modified visual‐inspection criteria to interpret functional analysis outcomes. Journal of Applied Behavior Analysis, 46(1), 130-146.
Tanious, R., Manolov, R., & Onghena, P. (2019). The assessment of consistency in single-case experiments: Beyond ABAB designs. Behavior Modification, 0145445519882889.
What Works Clearinghouse. (2020a). What Works Clearinghouse Procedures Handbook, Version 4.1. Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance. This report is available on the What Works Clearinghouse website at https://ies.ed.gov/ncee/wwc/handbooks.
What Works Clearinghouse. (2020b). What Works Clearinghouse Standards Handbook, Version 4.1. Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance. This report is available on the What Works Clearinghouse website at https://ies.ed.gov/ncee/wwc/handbooks.
  • asked a question related to Computing
Question
11 answers
I want to learn a new computing technology. I heard that fog computing is the current one. Are there other, more recent computing technologies available?
Relevant answer
Answer
Dew computing is very trendy nowadays. You can check this very recent paper on the topic.
  • asked a question related to Computing
Question
4 answers
I am using only one physics interface in COMSOL, i.e., Heat Transfer in Fluids, with the "Heat transfer in porous media" box checked, because a subdomain of the whole computational domain is porous. When solving the stationary step I get this error. Please suggest possible solutions.
Relevant answer
Answer
I'm not sure about this application.
  • asked a question related to Computing
Question
6 answers
What does 't' denote in the equation (image attached) for computing risk? Is it "continuous" time or "discrete" time? Please elaborate. A detailed response will be appreciated.
Relevant answer
Answer
In the context of that paper, t is continuous time, at least formally. The key is noting that X_{t,f} is used as a "given" in the conditional probability. The paper mentions t as a "time period" later on, but I think that's really a reference to a brief interval around t, like [t, t+dt]. You can discretize t as long as the distribution of states X_{t,f} is approximately constant within each time interval, so that the risk estimate remains within a margin acceptable to you. Risk in this context is a hazard function: an instantaneous snapshot of the rate of increase in expected severity over time. Accumulated risk over [0, t] will then be a function of the integral of this quantity, F(∫₀ᵗ Risk(τ) dτ). The nature of that function F depends on the severity model. An incremental (low-level) severity may be largely additive (F(x) = x), but a "total loss" severity should take into account that it can only happen once (F(x) = 1 − exp(−x)).
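To make the two severity models concrete, here is a minimal numerical sketch. The hazard function risk(t) below is hypothetical (not from the paper); it only illustrates accumulating a hazard over [0, T] and mapping it through F.

```python
import math

def risk(t):
    """Hypothetical instantaneous hazard rate (expected severity per unit time)."""
    return 0.1 + 0.05 * t

def accumulated_risk(T, total_loss=True, steps=10_000):
    """Integrate the hazard over [0, T] with the trapezoidal rule, then map it
    through F: identity for additive severity, 1 - exp(-x) for a 'total loss'
    severity that can occur at most once."""
    dt = T / steps
    integral = sum(
        0.5 * (risk(i * dt) + risk((i + 1) * dt)) * dt for i in range(steps)
    )
    return 1.0 - math.exp(-integral) if total_loss else integral

# The total-loss form saturates toward 1 as exposure grows;
# the additive form keeps growing without bound.
print(accumulated_risk(1.0))                     # ≈ 1 - exp(-0.125)
print(accumulated_risk(10.0, total_loss=False))  # ≈ 3.5
```

Note the design point this makes visible: the choice of F, not the hazard itself, is what keeps a once-only severity bounded by 1.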
  • asked a question related to Computing
Question
2 answers
I use queuing theory to determine premium pricing through aggregate loss, but I haven't found suitable data.
Relevant answer
Answer
Dear sir,
Could you elaborate a bit more?
Thank you
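While the question's data set isn't specified, a common setup for premium pricing through aggregate loss is the compound Poisson model: S = X₁ + … + X_N with N ~ Poisson(λ) claims and i.i.d. severities, so the pure premium is E[S] = λ·E[X]. A minimal simulation sketch (all parameters below are hypothetical placeholders):

```python
import math
import random

def poisson_sample(rng, lam):
    """Draw a Poisson(lam) claim count (Knuth's multiplication method)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def estimate_pure_premium(lam, mean_severity, n_sims=100_000, seed=42):
    """Monte Carlo estimate of E[S], with exponential claim severities."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        n_claims = poisson_sample(rng, lam)
        total += sum(rng.expovariate(1.0 / mean_severity) for _ in range(n_claims))
    return total / n_sims

# Theory gives E[S] = lam * mean_severity = 2 * 100 = 200;
# the simulated estimate should land close to that.
print(estimate_pure_premium(lam=2.0, mean_severity=100.0))
```

With real data, the Poisson rate and the severity distribution would be fitted rather than assumed, but the estimator's structure stays the same.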
  • asked a question related to Computing
Question
4 answers
If L_n^(m)(x) is the Laguerre polynomial of order m and degree n, is there a means of computing L_n^(m)(x) as
1) m goes to infinity?
2) n goes to infinity?
Relevant answer
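Not a closed-form asymptotic, but as a numerical probe of either limit, the generalized (associated) Laguerre polynomial L_n^(m)(x) can be evaluated for large degree via the standard three-term recurrence, which is far more stable than the explicit series. This sketch only computes finite n and m, so the limiting behavior has to be read off the trend:

```python
def laguerre(n, m, x):
    """Generalized Laguerre polynomial L_n^(m)(x) via the three-term
    recurrence (k+1) L_{k+1} = (2k+1+m-x) L_k - (k+m) L_{k-1}."""
    if n == 0:
        return 1.0
    prev, curr = 1.0, 1.0 + m - x  # L_0 and L_1
    for k in range(1, n):
        prev, curr = curr, ((2 * k + 1 + m - x) * curr - (k + m) * prev) / (k + 1)
    return curr

# Probe the n -> infinity trend at fixed order m and argument x.
for n in (10, 100, 1000):
    print(n, laguerre(n, 2, 1.5))
```

The same loop with n fixed and m increasing probes the other limit; for genuine asymptotic formulas (e.g., the known large-n expansions), a reference on orthogonal polynomials is the place to look.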