Science topic
Data Processing - Science topic
Explore the latest questions and answers in Data Processing, and find Data Processing experts.
Questions related to Data Processing
To give my next research project more impact, I would like to use advanced graphs and diagrams for geochemical data processing.
Hi everyone
I'm facing a real problem when trying to export results from ImageJ (Fiji) to Excel for later processing.
The problem is that I have to change the dots (.) and commas (,) manually, even after changing the properties in Excel (from , to .), so that the numbers are not read as thousands. Let's say I have 1,302 (one point three zero two): it is read as 1302 (one thousand three hundred and two) when I transfer it to Excel...
Recently I found a nice plugin (Localized copy...) that can change the number format locally in ImageJ so it can be used easily by Excel.
Unfortunately, this plugin has some bugs: it can only copy one line of the huge dataset that I have, and only once (so I have to close and reopen the image again).
Has anyone faced this problem? Can anyone please suggest another solution?
Thanks in advance
Problem finally solved... I got the new version of the 'Localized copy' plugin from its author, Mr Wolfgang Gross (not sure if I have permission to upload it here).
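A possible workaround, independent of the plugin: the ImageJ Results table can be saved as a tab-separated text file and re-parsed with pandas, which accepts a comma as the decimal separator. A minimal sketch; the file names are placeholders and openpyxl is needed for the Excel export.
```python
# Minimal sketch: parse an ImageJ results export that uses ',' as the decimal
# separator and write a clean Excel file that needs no manual find-and-replace.
import pandas as pd

results = pd.read_csv("Results.txt", sep="\t", decimal=",")
results.to_excel("Results_converted.xlsx", index=False)
```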
Good day everyone.
I have been doing some GRACE data processing in GEE, but from what I can tell, only the first mission's data (from 2002/04 to 2017/01) are accessible through the catalog for import.
Are there any recommendations on how I can access more recent data from the GRACE-FO mission for analysis in GEE?
Any feedback is greatly appreciated.
Best wishes.
CV
As a professional urban planner, I want to specialize in data processing and programming skills. Therefore, I would like to learn from experienced experts which programming language best suits the urban planning context and urban analysis.
I have a two-factor experiment. We investigated the effect of a drug on erythrocyte levels after surgery. One factor is the presence of the operation, the second is the presence of the drug, and the third term is the interaction of these factors. I have difficulty interpreting the resulting data. We have one time point. Example: we obtained a significant effect of the operation factor (P < 0.01) on the erythrocyte level after the operation, and of the interaction operation x drug (P < 0.05), but I did not get a significant effect of the drug factor. Can I conclude that the drug affects the level of red blood cells after surgery?
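For readers with the same interpretation question, a minimal sketch (synthetic data, not the original experiment) of a two-way ANOVA in statsmodels, followed by the usual next step when the interaction is significant but the drug main effect is not: testing the drug as a simple effect within the operated group.
```python
# Two-way ANOVA with interaction, then a simple-effects check of the drug
# within the operated animals. Data are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
df = pd.DataFrame({"operation": np.repeat([0, 1], 20), "drug": np.tile([0, 1], 20)})
df["rbc"] = 4.5 - 0.6 * df["operation"] + 0.4 * df["operation"] * df["drug"] \
            + rng.normal(0, 0.2, 40)

model = ols("rbc ~ C(operation) * C(drug)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# A significant interaction means the drug effect depends on operation status,
# so it is examined separately within the operated group:
operated = df[df["operation"] == 1]
print(ols("rbc ~ C(drug)", data=operated).fit().summary())
```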
Dear Professors and student friends,
I am an undergraduate student and I want to design an experimental graduation project with one of my friends using EEG. However, we do not have any experience with data acquisition and processing. Can you give me some recommendations from your experience?
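As a starting point on the data side, a minimal sketch with MNE-Python, assuming the recordings are available as an EDF file (the file name is a placeholder): load, band-pass filter, and inspect.
```python
# First-pass EEG handling with MNE-Python: load an EDF recording, band-pass
# filter to remove drift and high-frequency noise, and inspect visually.
import mne

raw = mne.io.read_raw_edf("subject01.edf", preload=True)  # placeholder file
raw.filter(l_freq=1.0, h_freq=40.0)
print(raw.info)
raw.plot(duration=10.0)
```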
If you had the opportunity, what artificial intelligence would you design and create to be helpful in the research, analytical, editorial, other work you do in conducting your scientific research and/or describing its results?
In your opinion, how would it be possible to improve the processes of conducted research and analytical work, processing of the results of conducted research through the use of artificial intelligence in combination with certain technologies typical of the current fourth technological revolution, technologies categorised as Industry 4.0, including analytics conducted on large sets of data and information, on Big Data Analytics platforms?
The development of artificial intelligence technologies has accelerated in recent years. New applications of specific information systems, various ICT information technology solutions combined with technologies typical of the current fourth technological revolution, technologies categorised as Industry 4.0, including machine learning, deep learning, artificial intelligence and analytics performed on large data and information sets, on Big Data Analytics platforms, are emerging. Particularly in the field of ongoing research work, where large sets of both qualitative information and large sets of quantitative data are produced, the aforementioned technologies are particularly useful in facilitating analytics, processing, elaboration of research results and their preparation for presentation at scientific conferences and in scientific publications. In the analytics of large quantitative data sets, analytical platforms built using integrated information systems, computers characterised by high performance computing power, equipped with servers, high-capacity memory disks, on which Big Data Analytics platforms are built, are used. On the other hand, artificial intelligence technology can also be useful for aggregating, multi-criteria processing and elaboration of large sets of qualitative information. In addition to this, certain IT applications, including statistical and business intelligence applications, are also useful for processing the results of studies carried out, presenting them in scientific publications, statistically processing large data sets, generating descriptions and drawing graphs based on them. As part of the digital representation of researched, complex, multi-faceted processes, digital twin technology can be useful. Within the framework of improving online data transfer, remote communication conducted between researchers and scientists, for example, Blockchain technology and new cyber security solutions may be helpful.
Probably many researchers and scientists would like to have state-of-the-art ICT information technologies and Industry 4.0. including Big Data Analytics, artificial intelligence, deep learning, digital twins, Business Intelligence, Blockchain, etc. Many researchers would probably like to improve the processes of the research and analytical work carried out, the processing of the results of the research carried out, through the use of artificial intelligence in combination with certain technologies typical of the current fourth technological revolution, technologies categorised as Industry 4.0, including the use of artificial intelligence and analytics carried out on large sets of data and information, on Big Data Analytics platforms.
The construction of modern laboratories, research and development centres in schools, colleges, universities, equipped with the above-mentioned new ICT information technologies and Industry 4.0 is therefore probably an important factor for the development of scientific and research and development activities of a particular scientific institution. However, it is usually limited by the financial resources that schools, colleges, universities are able to allocate for these purposes. However, should these financial resources appear, the questions formulated above would probably be valid. In such a situation, as part of a systemic approach to the issue, the construction of modern laboratories, research and development centres in schools, colleges and universities, equipped with the above-mentioned new information technologies, ICT and Industry 4.0, would also be determined by determining the priority directions of research work, the specific nature of the research carried out in relation to the directions of the teaching process, the mission adopted by the scientific institution in the context of its research, scientific work, the achievement of specific social objectives, etc.
In view of the above, I would like to address the following questions to the esteemed community of scientists and researchers:
In your opinion, how would it be possible to improve the processes of conducted research and analytical work, processing of the results of conducted research through the use of artificial intelligence in combination with certain technologies typical of the current fourth technological revolution, technologies classified as Industry 4.0, including analytics conducted on large sets of data and information, on Big Data Analytics platforms?
If you had the opportunity, what artificial intelligence would you design and create to be helpful in the research, analytical, editorial, other work you carry out as part of your scientific research and/or describing its results?
What artificial intelligence would you design and create to be helpful in the research, analytical, data processing, editorial, other work you are doing?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Counting on your opinions and an honest approach to this scientific discussion, rather than ready-made answers generated by ChatGPT, I deliberately used the phrase "in your opinion" in the question.
The above text is entirely my own work written by me on the basis of my research.
I have not used other sources or automatic text generation systems such as ChatGPT in writing this text.
Copyright by Dariusz Prokopowicz
Thank you very much,
Best wishes,
Dariusz Prokopowicz

I want to access the future simulations of temperature and precipitation in the CMIP6 dataset (scenarios ssp126, ssp245, ssp370 and ssp585). I only need to obtain the data for the Chinese region. Finally, the annual average rainfall and temperature will be calculated for each province according to the regional boundaries of the Chinese provinces. How do I implement this?
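A hedged sketch of one possible workflow, assuming the CMIP6 NetCDF files have already been downloaded (e.g. from ESGF) and a shapefile of Chinese provinces is available; the file patterns, the lat/lon box and the "NAME" attribute column are placeholders, and an unweighted spatial mean is used for simplicity. The same steps apply to precipitation ("pr") and to each scenario.
```python
# Subset China from monthly CMIP6 temperature files, compute annual means with
# xarray, and average per province using a regionmask built from a shapefile.
# Assumes ascending latitudes and 0-360 longitudes, as in most CMIP6 output.
import xarray as xr
import geopandas as gpd
import regionmask

ds = xr.open_mfdataset("tas_Amon_*_ssp245_*.nc")            # placeholder pattern
ds = ds.sel(lat=slice(15, 55), lon=slice(70, 140))          # rough box around China
annual = ds["tas"].resample(time="YS").mean()               # annual mean per year

provinces = gpd.read_file("china_provinces.shp")            # placeholder shapefile
regions = regionmask.from_geopandas(provinces, names="NAME")
mask = regions.mask_3D(annual)                               # (region, lat, lon) booleans
per_province = annual.where(mask).mean(dim=("lat", "lon"))   # (time, region)
```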
If the answer is YES, does the use of artificial intelligence for statistical data processing have to be stated in the methods section?
I want to transfer healthcare data, including demographics and multimedia such as medical imaging, from Asia and the US to Europe and vice versa. The healthcare data will be collected from various hospitals and clinical centers in India, the US, and EU countries.
The questions are as per below:
Q1. I know that I can transfer demographic data in anonymized or pseudonymized form, but how can I generate anonymized or pseudonymized versions of multimedia data that contain patient identity?
Q2. How can I use healthcare data in European countries when consent was obtained in other parts of the world? The patient consents to data use under the legal regulations of other countries, such as the US and India, while EU countries have data protection laws such as the GDPR and the US has health data protection laws like HIPAA.
I am looking for some solutions to these two questions.
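For the imaging part of Q1, a minimal sketch (not a complete de-identification profile) of how the DICOM header fields that carry patient identity could be replaced with a pseudonym code using pydicom. A GDPR/HIPAA-grade workflow also has to follow the DICOM de-identification profiles and remove any identity burned into the pixel data.
```python
# Replace identifying DICOM header fields with a pseudonym and strip private tags.
# File names and the pseudonym scheme are placeholders.
import pydicom

ds = pydicom.dcmread("image.dcm")
ds.PatientName = "PSEUDO-0001"
ds.PatientID = "PSEUDO-0001"
ds.PatientBirthDate = ""
ds.remove_private_tags()
ds.save_as("image_pseudo.dcm")
```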
Consider the natural case where we have an imbalanced dataset, possibly with missing values and containing categorical features. We need to impute the missing values, do some sort of balancing, and encode the categorical variables. In the train-test split case, we need to put aside a test set and never perform any learning on it (for example, never use sklearn's fit or fit_transform methods on it, only transform). Generally, what is the appropriate order of steps in the case of K-fold cross-validation, to avoid estimators learning from the test fold during the process?
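The usual answer is to keep every fitted preprocessing step inside the cross-validation by wrapping it in a pipeline, so that imputation, encoding and oversampling are fitted on the training folds only and merely applied to the held-out fold. A minimal sketch with scikit-learn and imbalanced-learn; the toy data, column names and classifier are placeholders.
```python
# Imputation + one-hot encoding + SMOTE + classifier, all inside one pipeline
# that is cross-validated, so no step ever learns from the test fold.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold
from imblearn.pipeline import Pipeline as ImbPipeline
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "age": rng.normal(40, 10, 200),
    "income": rng.normal(50_000, 10_000, 200),
    "city": rng.choice(["A", "B", "C"], 200),
})
X.loc[::20, "age"] = np.nan                       # some missing numeric values
y = rng.choice([0, 1], 200, p=[0.85, 0.15])       # imbalanced labels

preprocess = ColumnTransformer([
    ("num", SimpleImputer(strategy="median"), ["age", "income"]),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("ohe", OneHotEncoder(handle_unknown="ignore"))]), ["city"]),
])
model = ImbPipeline([
    ("prep", preprocess),
    ("smote", SMOTE(random_state=0)),             # balancing on training folds only
    ("clf", RandomForestClassifier(random_state=0)),
])
print(cross_val_score(model, X, y, cv=StratifiedKFold(5), scoring="f1"))
```
The imbalanced-learn pipeline applies SMOTE only when fitting on the training folds, which is exactly the behaviour needed to keep the test fold untouched.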
Main ideas in the processing theory of learning:
** Be clear, not clever, because the bad kind of clever leads to bad results.
** Be specific, because specificity leads to clarity.
Hello, I am currently processing 300 kV cryo-EM data in RELION for a small membrane protein (~100 kDa) that has only a transmembrane domain. However, during 3D classification I found that the protein model did not stack well; there appear to be multiple slices in the model. The model also looks not very compact but a little scattered. Has anyone met this problem before, or does anyone have suggestions for the data processing?
Can Big Data Analytics technology be helpful in forecasting complex multi-faceted climate, natural, social, economic, pandemic, etc. processes?
Industry 4.0 technologies, including Big Data Analytics technology, are used in multi-criteria processing, analyzing large data sets. The technological advances taking place in the field of ICT information technology make it possible to apply analytics carried out on large sets of data on various aspects of the activities of companies, enterprises and institutions operating in different sectors and branches of the economy.
Before the development of ICT information technologies, IT tools, personal computers, etc. in the second half of the 20th century as part of the 3rd technological revolution, computerized, partially automated processing of large data sets was very difficult or impossible. As a result, building multi-criteria, multi-article, big data and information models of complex structures, simulation models, forecasting models was limited or impossible. However, the technological advances made in the current fourth technological revolution and the development of Industry 4.0 technology have changed a lot in this regard. More and more companies and enterprises are building computerized systems that allow the creation of multi-criteria simulation models within the framework of so-called digital twins, which can present, for example, computerized models that present the operation of economic processes, production processes, which are counterparts of the real processes taking place in the enterprise. An additional advantage of this type of solution is the ability to create simulations and study the changes of processes fictitiously realized in the model after the application of certain impact factors and/or activation, materialization of certain categories of risks. When large sets of historical quantitative data presenting changes in specific factors over time are added to the built multi-criteria simulation models within the framework of digital twins, it is possible to create complex multi-criteria forecasting models presenting potential scenarios for the development of specific processes in the future. Complex multi-criteria processes for which such forecasting models based on computerized digital twins can be built include climatic, natural, social, economic, pandemic, etc. processes, which can be analyzed as the environment of operating specific companies, enterprises and institutions.
In view of the above, I address the following question to the esteemed community of researchers and scientists:
In forecasting complex multi-faceted climate, natural, social, economic, pandemic, etc. processes, can Big Data Analytics technology be helpful?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz

I am a beginner in this field. I want to learn basic audio deep learning for classifying audio. If you have articles or tutorial videos, please send me the link. Thank you very much.
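A minimal sketch of the most common first step in audio classification: turning the waveform into a log-mel spectrogram with librosa, which can then be fed to a CNN or a classical classifier. The file name is a placeholder.
```python
# Waveform -> log-mel spectrogram, the typical input representation for
# audio classification models.
import numpy as np
import librosa

y, sr = librosa.load("example.wav", sr=22050)      # placeholder audio file
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
log_mel = librosa.power_to_db(mel, ref=np.max)     # shape: (64, time_frames)
print(log_mel.shape)
```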
Hello there,
After some research on Brillouin sensors, I couldn't understand the data processing that follows acquisition of the optical signal.
How is the data processed in conventional BOTDA and BOTDR after detection of the backscattered light on the photodetector?
Is the analysis done in the time domain or the frequency domain?
I would really appreciate your help.
Thanks in advance,
Best Regards.
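In both BOTDA and BOTDR the detected traces are time-domain signals (time maps to position along the fiber); at each position, the values collected across the scanned pump-probe frequencies form a Brillouin gain spectrum, and the Brillouin frequency shift is then extracted by fitting it, typically with a Lorentzian. A minimal, hedged sketch of that fitting step on synthetic data:
```python
# Fit a Lorentzian to a synthetic Brillouin gain spectrum at one fiber position
# to estimate the Brillouin frequency shift (BFS).
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, g0, f_b, dnu):
    return g0 / (1.0 + ((f - f_b) / (dnu / 2.0)) ** 2)

f = np.linspace(10.70, 10.95, 120)                        # scanned frequency, GHz
gain = lorentzian(f, 1.0, 10.82, 0.03) + 0.02 * np.random.randn(f.size)

popt, _ = curve_fit(lorentzian, f, gain, p0=[1.0, 10.80, 0.05])
print(f"Estimated BFS: {popt[1]:.4f} GHz")
```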
Dear scholars,
I have a database with 100 geolocated samples in a given area, each sample contains 38 chemical elements that were quantified.
Some of these samples contain values below the detection limit (BDL) of the instrument. Clearly, when 100% of the samples for an element are BDL there is not much to do, but what can be done when, for example, only 20% are BDL? What do we do with them, and with what value do we replace a BDL sample?
Some papers replace a BDL value with a fraction of the detection limit for that element (e.g. one quarter of the DL), others use half of the DL... What would you do in each case, and is there any literature you would recommend? If it matters, I am mostly interested in copper and arsenic.
Regards
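A minimal sketch (hypothetical detection limits and column names) of the simple substitution approach; for elements with a larger censored fraction, methods designed for left-censored data, such as Kaplan-Meier or regression on order statistics (see Helsel's work on nondetects), are generally preferred over substitution.
```python
# Replace "<DL" entries with a fixed fraction of the element's detection limit.
import pandas as pd

dl = {"Cu": 0.5, "As": 1.0}                                  # detection limits (placeholders)
df = pd.DataFrame({"Cu": ["<0.5", "3.2", "1.1"], "As": ["2.4", "<1.0", "5.7"]})

for element, limit in dl.items():
    censored = df[element].astype(str).str.startswith("<")
    values = pd.to_numeric(df[element].where(~censored), errors="coerce")
    df[element] = values.fillna(limit / 2)                   # substitute DL/2
```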
I would like to calculate strain rates from the horizontal velocities obtained from data processing of permanent GPS stations.
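A minimal sketch, assuming the horizontal velocities have already been interpolated onto a regular grid (real GNSS networks are irregular, so interpolation or a least-squares/triangulation approach comes first): the 2D strain-rate components are the spatial gradients of the east and north velocity fields.
```python
# Strain-rate components from gridded east (ve) and north (vn) velocities.
# The velocity fields here are random placeholders.
import numpy as np

x = np.linspace(0, 100e3, 50)              # east coordinate, m
y = np.linspace(0, 100e3, 50)              # north coordinate, m
ve = np.random.randn(50, 50) * 1e-3        # east velocity, m/yr (placeholder)
vn = np.random.randn(50, 50) * 1e-3        # north velocity, m/yr (placeholder)

dve_dy, dve_dx = np.gradient(ve, y, x)     # axis 0 varies with y, axis 1 with x
dvn_dy, dvn_dx = np.gradient(vn, y, x)

exx = dve_dx                               # normal strain rate, east
eyy = dvn_dy                               # normal strain rate, north
exy = 0.5 * (dve_dy + dvn_dx)              # shear strain rate
```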
ATEX is a very simple EBSD data processing software package. I want to use this software to further process XRD data, for example for texture analysis, but I don't know where to start or what steps are required.
I have a problem. I obtained FTIR data for my thin films, but it looks like only part of the spectrum has a valid imaginary part. Is that possible?
I am using a 26th-order Savitzky-Golay filter to smooth the data, but I'm pretty sure that I did not corrupt my data.
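A minimal SciPy sketch of the filter, mainly to show the two parameters that matter: the window length and the polynomial order (which must be smaller than the window). A polynomial order as high as 26 follows the noise closely and smooths very little, so comparing it against a lower order is a quick sanity check.
```python
# Savitzky-Golay smoothing at a low and a very high polynomial order.
import numpy as np
from scipy.signal import savgol_filter

x = np.linspace(0, 10, 500)
noisy = np.sin(x) + 0.1 * np.random.randn(x.size)

smooth_low = savgol_filter(noisy, window_length=31, polyorder=3)
smooth_high = savgol_filter(noisy, window_length=31, polyorder=26)  # barely smooths
```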

Is there any free online software to process fluxomics raw data files?
1. To calculate the 12C/13C ratio
2. Pathway mapping
If I have multiple scenarios with multiple variables changing, and I want to conduct a full factorial analysis, how do I graphically show the results?
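A minimal sketch (hypothetical two-level factors) of the most common display for full-factorial results: an interaction plot of the mean response for each factor combination. With more than two factors, one such panel per factor pair, or a main-effects plot per factor, is typical.
```python
# Interaction plot: mean response vs factor A, one line per level of factor B.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "factor_a": np.repeat(["low", "high"], 8),
    "factor_b": np.tile(np.repeat(["low", "high"], 4), 2),
    "response": rng.normal(10, 1, 16),
})

means = df.groupby(["factor_a", "factor_b"])["response"].mean().unstack()
means.plot(marker="o")
plt.ylabel("mean response")
plt.show()
```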
I am currently attending a PhD program in innovative urban leadership, and I am interested in how church leadership responded to the pandemic of the past two years. I want to learn how most leaders responded to the scenario created by the COVID-19 pandemic. What innovative leadership techniques have been employed by church pastors, and how were those techniques, principles and values employed to address the dire situation of churchgoers? What models have been employed to address the holistic needs of church members? How did church leadership overcome its vulnerability in this dire situation while at the forefront of fighting the pandemic? What learning have most institutions generated, and how do we recreate and use those learnings in the future when similar incidents happen?
I'm using XPS these days to analyze a set of nanomaterials that I am synthesizing. I'm getting familiar with the Avantage data processing tool installed on the computer attached to the XPS instrument. To learn the processing tool at my own pace, I would like to know whether I can install it on my personal computer. Suggestions will be highly appreciated.
Hi all,
I am currently trying to quantify the levels of ergosterol in soil samples for a small part of my PhD work. I have completed the necessary extractions and have run them via HPLC. I am a complete novice with HPLC and cannot seem to find any literature on how to convert the data into a usable format.
When extracting, I spiked some of my samples to quantify the extraction efficiency. I also created a series of standards for the calibration curves and ran a middle standard as a drift check every 10 samples.
The data I have are as follows:
A chromatogram including run information (time, injection volume, etc.), retention time, area (mAU*min), height (mAU), and relative area/height (%) (see the attached Word document).
Any advice or literature references would be extremely useful.
Thanks in advance,
Dan
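A minimal sketch (hypothetical numbers) of the usual conversion: fit a linear calibration curve of peak area against standard concentration, back-calculate the sample concentration from its peak area, and correct it with the recovery measured from the spiked samples.
```python
# Peak area -> concentration via a linear calibration curve, plus a
# spike-recovery correction. All numbers are placeholders.
import numpy as np

std_conc = np.array([0.5, 1.0, 2.5, 5.0, 10.0])      # standards, ug/mL
std_area = np.array([4.8, 10.1, 24.6, 50.3, 99.5])   # peak areas, mAU*min

slope, intercept = np.polyfit(std_conc, std_area, 1)

sample_area = 33.0                                    # sample peak area
sample_conc = (sample_area - intercept) / slope       # ug/mL in the extract

recovery = 0.85                                       # from spiked samples
print(round(sample_conc / recovery, 2))
```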
I am trying to understand how multivariate data preprocessing works, but I have some questions.
For example, I can do data smoothing, transformation (Box-Cox, differencing), and noise removal on univariate data (for any machine learning problem, not only time series forecasting). But what if one variable is noisy and another is not? Or one is smooth and another is not (so I need a sliding-window average for one variable but not the other)? What is the right approach in that case? What should I do?
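The usual approach is to preprocess each variable, or group of variables, separately. A minimal scikit-learn sketch with ColumnTransformer, where a noisy column gets its own transform while the other is only scaled; the column names are hypothetical, and a sliding-window smoother would be written as a small custom transformer in the same slot.
```python
# Different preprocessing per column with ColumnTransformer.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import PowerTransformer, StandardScaler

df = pd.DataFrame({
    "noisy_var": [1.2, 5.3, 0.4, 9.8, 2.2],
    "clean_var": [0.9, 1.1, 1.0, 1.2, 0.95],
})

preprocess = ColumnTransformer([
    ("noisy", PowerTransformer(method="yeo-johnson"), ["noisy_var"]),
    ("clean", StandardScaler(), ["clean_var"]),
])
X = preprocess.fit_transform(df)
```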
I am interested in the usefulness of zero knowledge proof in verifying an algorithm (for bias, privacy, data processing, and general deployment process). Have you come across examples of it in regulatory compliance?
I am working on 2D pre-stack marine seismic sections and I need to perform Surface-Related Multiple Elimination (SRME) using REVEAL. Unfortunately, this is the first time I have dealt with seismic processing and with REVEAL. I finished the first five steps: importing a SEGY, survey QC, creating the project binning, sorting to CMP, and velocity analysis.
Now I am trying to apply the SRME flow. I started with SRMENearInsert, but every time I submit the flow an error appears. I need someone with experience in seismic processing to help me solve this problem.
I have attached some screenshots of a seismic section sorted to CMP, the input and parameters for SRMENearInsert, and the error.
Hi, I'm trying to start protein NMR again. When I was doing protein NMR 10 years ago, I used NMRPipe for data processing, but I find NMRPipe not very user-friendly. Please suggest other software similar to NMRPipe; free software with a user-friendly GUI that runs on Windows would be best.
Hi,
there is something wrong on one of my pages:
"Definition von individueller Datenverarbeitung (IDV)
January 2019
DOI: 10.1007/978-3-658-25696-8_1
In book: Sustainable Agriculture and Agribusiness in Iran"
It has nothing to do with Iran but it should be:
"Individuelle Datenverarbeitung in Zeiten von Banking 4.0: Regulatorische Anforderungen, Aktueller Stand, Umsetzung der Vorgaben "
Regards,
Holger
I am looking for a book detailing the application of convolutional neural networks (CNNs) to building efficient computational frameworks for modelling complex/turbulent flows and processing their data.
Hey!
I tried out KNIME to save some time, because some of my evaluation processes involve a lot of copy-pasting of data into the right format.
I created a workflow that helps me now. What would be great is if the results could be written into the original Excel file as a new worksheet. Can anyone explain to me how to do this? Is it possible at all? Right now I have an Excel Writer node create a new file with the results.
Also, I usually have several files, and I already found that if they are in the same folder I can execute the same routine on all files in the folder. But then it cannot be processed, because 1) I only have one output file and 2) some data sets are different (too good, i.e. without error-message rows) and give the error that the respective rows are missing.
So if you have an idea how I can make the respective changes, please help.
Attached is a file showing how my original data looks ("Results") and the results I got with KNIME copied into the next worksheet (sorting, removing duplicates and rows I don't need). I also tried to export the workflow so you can have a look. I usually execute the writer at the bottom once I have seen at the machine that the duplicates are OK.
Hello everyone
I have fitted data in the XPSpeak41 software, but when I try to export the spectrum (in .dat format), it only exports the raw spectrum and not the fitted curves. I tried plotting the .dat file in Origin, but it only shows the raw spectrum, as I said above... Any help in this regard would be highly appreciated if anyone knows how to save the fitted curves from XPSpeak41.
Does it make sense to develop compression methods for large matrices used in chemometrics for multivariate calibration?
The main argument of opponents of this approach is that "the increasing computational power and speed of computers for data processing and the unlimited cloud data storage available" make compression unnecessary, since compression slightly reduces the accuracy of multivariate calibration (cited from personal communication).
Hello there!
I am a Master's student working at EPFL with the Empatica E4. As both the device and its data are very new to me, I am currently testing the quality of the signals.
At first look, the EDA signal looks very noisy, and filtering with Ledalab doesn't seem to solve the problem. However, as I said, I don't have experience processing EDA signals, so I would like to know if anyone can walk me through the basic, most common and effective processing steps. Any suggestion is highly appreciated!
Thank you in advance to anyone who is willing to help!
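A minimal sketch of a common first cleaning step for E4 EDA (which the device samples at 4 Hz): a low-pass Butterworth filter to suppress high-frequency noise. This is only denoising, not a Ledalab-style tonic/phasic decomposition; the signal below is synthetic, where real data would come from the E4's EDA.csv export.
```python
# Low-pass filtering of a (synthetic) 4 Hz EDA signal with a Butterworth filter.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 4.0                                              # E4 EDA sampling rate, Hz
t = np.arange(0, 300, 1 / fs)
eda = 2 + 0.3 * np.sin(2 * np.pi * 0.01 * t) + 0.05 * np.random.randn(t.size)

b, a = butter(N=4, Wn=1.0, btype="low", fs=fs)        # 1 Hz cut-off
eda_clean = filtfilt(b, a, eda)
```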
I want to know about the theory of upward continuation: why do we use upward continuation and downward continuation? And what is the relationship between filtering and upward/downward continuation?
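Upward continuation is itself a wavenumber-domain filter: the measured field is multiplied by exp(-dz*|k|), which attenuates short-wavelength (shallow, noisy) components and so acts like a low-pass filter, while downward continuation multiplies by exp(+dz*|k|) and therefore amplifies noise and must be regularized. A minimal sketch on a synthetic grid; the grid spacing and continuation height are placeholders.
```python
# Upward-continue a gridded potential-field map by dz using the FFT operator exp(-dz*|k|).
import numpy as np

ny, nx, dx, dy, dz = 128, 128, 100.0, 100.0, 500.0    # metres
field = np.random.randn(ny, nx)                       # placeholder anomaly grid

kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)             # angular wavenumbers, rad/m
ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)

continued = np.real(np.fft.ifft2(np.fft.fft2(field) * np.exp(-dz * k)))
```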
The interactive wavelet plot that was once available on the University of Colorado webpage (C. Torrence and G. P. Compo, 1998) no longer exists. Are there any other trusted sites to compare our plots against? Also, in what cases do we normalize our data by the standard deviation before performing a continuous wavelet transform (Morlet)? I have seen that it is not necessary all the time. A few researchers also transform the time series into a series of percentiles, believing that the transformed series reacts "more linearly" to the original signal. So what should we actually do? I expect an explanation mainly focusing on data-processing techniques (standardization, normalization, or leaving the data as is).
I am currently working on a thesis on "analysis of public perception and acceptance of the COVID-19 vaccination process using the Structural Equation Modeling method". There are 6 variables used in the research: Behavioral Beliefs, Attitudes towards Vaccination, Perceived Norms, Motivation to Comply, Perceived Behavioral Control, and Intentions to Receive Vaccination.
However, these results seem to make no sense to me:
- attitudes towards vaccination have a significantly negative relationship with motivation to comply;
- attitudes towards vaccination have a significantly negative relationship with perceived norms;
- behavioral beliefs have a significantly negative relationship with attitudes towards vaccination.
I used this paper (Bridging the gap: Using the theory of planned behavior to predict HPV vaccination intentions in men, 2013, Daniel Snipes) as a reference for the research.
Hi All
I'm looking for some papers on self-driving cars that discuss current capabilities and ongoing work: how much data processing these cars do, and what steps and techniques are applied to improve them.
Drones, being fast and agile platforms, are a natural first choice for detecting cracks on railway tracks.
I am aware of the approach of capturing images and then post-processing the data.
However, I am looking for a sensor, apart from a normal RGB camera, that can be mounted on a drone and can detect the cracks.
Thank You for all the possible help.
Regards,
Garvit
I recently submitted a paper for publication in which I describe the species of zooplankton identified in water samples using metabarcoding, as well as their percent composition. The percent composition was estimated as the number of reads of a species over total reads. These results were compared for different markers. A reviewer made the comment below, but I do not understand exactly what they think I should have done.
Comment by reviewer:
"Are DNA metabarcoding data processing steps, i.e., standardization or rarefaction, data transformations, etc. performed? The methodology lacked this vital data analysis information which makes it difficult to comment on whether proper data processing was explored in the study to support a reliable interpretation of the results."
I'm in the process of doing a meta-analysis and have encountered some problems with the RCT data. One of my outcomes is muscle strength. In one study, I have three different measurements of muscle strength for the knee joint (isometric, concentric, eccentric), and I wonder how to enter these data into the meta-analysis. If I enter them separately, I artificially inflate the sample size (n). The best approach would probably be to combine them within this one study, because in the other studies included in the meta-analysis the authors report only one strength measurement.
Thank you all for any help.
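One option consistent with that idea is the composite-outcome approach described by Borenstein et al. in Introduction to Meta-Analysis: average the three effect sizes from the study and compute the variance of that mean, which requires an assumed correlation between the outcomes (often taken from prior data or varied in a sensitivity analysis). A minimal sketch with placeholder numbers:
```python
# Composite effect size for three correlated outcomes from one study.
import numpy as np

effects = np.array([0.40, 0.55, 0.30])       # isometric, concentric, eccentric (placeholders)
variances = np.array([0.040, 0.050, 0.045])
r = 0.7                                       # assumed between-outcome correlation

m = len(effects)
combined_effect = effects.mean()
cov_sum = sum(r * np.sqrt(variances[i] * variances[j])
              for i in range(m) for j in range(m) if i != j)
combined_var = (variances.sum() + cov_sum) / m**2
print(combined_effect, combined_var)
```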
Hello, everyone!
I would like to discuss with you the methods you are using for GC-MS data processing in R.
I tried to study this question and looked at a lot of literature, but unfortunately I did not find a reliable answer. Maybe there are people among us who use R for untargeted data processing.
At the moment, I have 88 profiles of various samples (in mzXML format) and my task is to find the differences between them. I loaded the data using the readMSData() method and then used the findChromPeaks() method to detect peaks with CentWaveParam(). But I am not sure about the correctness of the CentWaveParam() settings. Maybe someone can suggest a tuning approach for these settings, or at least examples that were used for GC-MS (quadrupole) data.
Thanks for your help!
Hello,
I need help with SNAP, and specifically with the Sen2Cor plugin (v2.8).
I need to do BOA correction for Sentinel-2 data from Level-1C to Level-2A.
I read that it is impossible to run Sen2Cor in SNAP's Graph Builder as a batch process for many scenes simultaneously;
you have to run Sen2Cor from the command-line interpreter (e.g. cmd in Windows). Only then can you make corrections for all scenes in a directory.
Sen2Cor works for single scenes; no problem with this on my computer,
but it doesn't work for many scenes, and here is where my problem comes in.
I'm using "for / d% i in (*) for L2A_Process.bat% i"
I have the data in a simple path, i.e. C:\snap\S2A_MSIL1C_20170531T100031_N0205_R122_T33TTG_20170531T100536.SAFE,
but I get the message that the "Product metadata file cannot be read".
Please help; where could the problem be? Any suggestions?
I also have a question: using Sen2Cor in SNAP, you can do the correction for all resolutions simultaneously by selecting the Resolution: ALL option.
From what I have read, using Sen2Cor in cmd, I can only make the correction for one resolution at a time; it is suggested to do 60 m first, then 20 m and 10 m. Is that true? Can't you do all of them at once?
I don't have any information about the acquisition of the data, except the year of acquisition!
Please see the attachment.
I want to find out the relationships among genes (n=5) that cause low to high mortality in test organisms.
How can I visualize the relationships among the genes compared with the mortality rate of the test organism?
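A minimal sketch (hypothetical expression values) of a common first visualization: a correlation heatmap of the five genes together with mortality, which shows at a glance which genes co-vary and which track the mortality rate.
```python
# Correlation heatmap of gene expression values and mortality.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(30, 5)),
                  columns=[f"gene_{i}" for i in range(1, 6)])
df["mortality"] = 0.5 * df["gene_1"] - 0.3 * df["gene_4"] + rng.normal(0, 0.5, 30)

sns.heatmap(df.corr(), annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.show()
```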
What are the methodological differences in the processes of examining economic effectiveness or specific selected issues, aspects in the scope of analyzing the effectiveness of a given business activity in a situation of comparison of analyzes carried out for small enterprises and large business entities conducting diversified economic activities?
For small business entities representing the SME sector, those operating in one area of economic activity, the simplest solution is to select economic and financial indicators relevant to the needs, which determine specific issues of efficiency, eg fixed assets, current assets or other classified capital categories, production factors. It is also possible to analyze and measure the effectiveness of specific processes in an enterprise, the effectiveness of measures, specific investment projects, efficiency of logistics processes, work efficiency of employees, etc. For each of the mentioned types of effectiveness tests other economic or financial indicators are used.
However, when analyzing complex, multi-factorial processes realized in economic entities, multifaceted processes covering various spheres of activity of a specific enterprise, or the entirety of a large enterprise operating in various business areas, with much larger financial resources devoted to the economic efficiency analyses, complex indicator models built from many interrelated economic, financial and other indicators should be used.
A good solution in this situation is the involvement of Business Intelligence technology using large data sets describing the functioning of a specific large enterprise, gathered in Big Data database systems. In addition, advanced data processing and analysis can be made using cloud computing technology. In addition, access to data, data update and commissioning of specific analyzes of economic performance research can be carried out from the level of mobile devices, i.e. through the use of the Internet of Things technology.
Do you agree with me on the above matter?
In the context of the above issues, I am asking you the following question:
What are the methodological differences in the processes of examining economic effectiveness or specific selected issues, aspects in the scope of analyzing the effectiveness of a given business activity in a situation of comparison of analyzes carried out for small enterprises and large business entities conducting diversified economic activities?
Please reply
I invite you to the discussion
Thank you very much
Best wishes

With the availability of huge amounts of remotely sensed data comes the issue of big data processing. I am wondering whether any new statistical image processing and/or information extraction algorithms have been developed for processing big data. I have searched the net and not much is there. Any suggested readings on the subject are highly appreciated as well. Thanks.
Suppose somebody wants to mathematically model data, information, and knowledge, where data represent the raw material that processing and service delivery solutions turn into information, and knowledge is acquired by experts handling such information in a special field such as computer science, psychology, mathematics, or statistics. How can mathematical models be developed to describe the knowledge acquired by an individual, a population, or a community?
Considering the specifics of the increasingly common IT systems and computerized advanced data processing in Internet information systems, connected to the internet database systems, data processing in the cloud, the increasingly common use of the Internet of Things etc., the following question arises:
What do you think about the security of information processing in Big Data database systems?
Please reply
Best wishes

Greetings,
I am planning to work with open source paleo-climate data for a thesis, so far the only source for these kind of data i know is : https://www.ncdc.noaa.gov/paleo-search/
Are there any other sources that provide a good amount of paleo-climate data, or is this the most comprehensive source currently available?
That being said, would any paleo-scientists like to tell me what special considerations you keep in mind when dealing with such data, especially because they come from the past and are mostly climatic reconstructions or proxies? If you go through that link, you will see that most of the files are in (.txt) format, so what software or programming languages have you used, or are planning to use, that would be helpful in this regard? Or would you process them like any usual data (e.g. netCDF (.nc) files are very popular in climate studies, but unfortunately I don't know whether they exist for paleo-climate data)?
Any advice or suggestions, in addition to my question would be deeply appreciated.
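On the software side, the NOAA paleo text files can usually be read directly with pandas, since they typically consist of a block of '#'-commented metadata followed by whitespace-separated columns. A hedged sketch; the file name, column names and missing-value codes are placeholders and vary between records.
```python
# Read a NOAA-style paleo proxy text file into a DataFrame.
import pandas as pd

df = pd.read_csv("proxy_record.txt", comment="#", sep=r"\s+",
                 names=["age_BP", "d18O"], na_values=[-999, -999.9])
print(df.describe())
```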
Where can I buy a GPU for COVID-19 image and data processing with deep learning?
X-ray images
CT-scan images
How many cores are needed?
Which configuration is best?
What is the optimum configuration within 10K USD?
Either online or in Saudi Arabia?
Is any assembled configuration available from a vendor?
Does Stata allow for error tracking in data processing?
Does the indirect effect (in a simple mediation model) of your mediator only consider the IV, or does it also take into account the control variables in your model?
I am trying to check the robustness of the indirect effect in my simple mediation model (with 6 control variables), with the Sobel/Aroian/Goodman Test. Do I need to include my controls to check the indirect effect or not?*
*I have tried both. The indirect effect is significant as I include my controls, but insignificant as I exclude them. In my initial methodology, my indirect effect is shown to be insignificant (using bootstrapping method).
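A minimal sketch of the Sobel test computed directly from the two path coefficients; whether the controls "count" is decided by which regressions the coefficients a, b and their standard errors are taken from (with controls, use the models that include them). The numbers are placeholders.
```python
# Sobel z and two-sided p-value for the indirect effect a*b.
import numpy as np
from scipy.stats import norm

a, se_a = 0.35, 0.10        # IV -> mediator
b, se_b = 0.42, 0.12        # mediator -> DV, controlling for the IV

sobel_z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
p_value = 2 * (1 - norm.cdf(abs(sobel_z)))
print(sobel_z, p_value)
```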
I am a beginner in the field of GPU computing and big data processing. I want to perform research or an experiment that integrates both technologies. Please give me some suggestions.
Hi all, I am doing global profiling of a plant using an LC-MS (QTRAP 6500) method. I am using XCMS as my data analysis tool. For each m/z corresponding to a metabolite, the Rt varies from 1 to 30. How is this possible, and which Rt should I select for my metabolite?
I'm working with stream data processing, and I'm confused about how to define and test a hypothesis for stream data processing. I need some sample examples.
Is the progressive increase in the digitization of education process instruments a feature of the current technological revolution known as Industry 4.0?
Measuring the impact of digital technology, including new online media on the process of learning and the effects of the education process can be based on a comparison of assessments at a specific time, a specific educational process supported by the use of new online media, social media portals and other technologies typical of the digital age. These technologies currently include mainly new technological solutions, streamlining improvements, innovations etc. regarding, among others, advanced data processing, including data obtained from the Internet, data processing in the cloud, Big Data database systems, using artificial intelligence, etc. These new technologies of advanced information processing co-create the current fourth technological revolution called Industry 4.0.
In view of the above, the current question is: Is the progressive increase in the digitization of education process instruments a feature of the current technological revolution known as Industry 4.0?
Please, answer, comments. I invite you to the discussion.

I have a spectrum recovery algorithm based on OMP (orthogonal matching pursuit). I want to use it for wireless sensors to optimize their data processing and data transfer, as they generate large amounts of sensed parametric data.
My algorithm is in MATLAB, and I need to reduce its execution time so that my sensor nodes can learn and adapt faster.
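Besides MATLAB-level optimizations (profiling, vectorization, MEX), one hedged option is to benchmark the custom routine against an existing optimized implementation; scikit-learn ships one. A minimal sketch with a random dictionary and a sparse signal as placeholders:
```python
# Orthogonal matching pursuit with scikit-learn as a speed/accuracy baseline.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
D = rng.normal(size=(256, 1024))                   # dictionary: measurements x atoms
x_true = np.zeros(1024)
x_true[rng.choice(1024, 10, replace=False)] = rng.normal(size=10)
y = D @ x_true

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10)
omp.fit(D, y)
x_rec = omp.coef_
```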
I use StatSoft Statistica 12 for data processing and for developing an algorithm for building confidence intervals, and I need test arrays of pseudo-random numbers to check the algorithm's efficiency.
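A minimal sketch of generating reproducible pseudo-random test arrays with NumPy: the fixed seed means the same arrays can be regenerated for every check, and the CSV can be imported into Statistica. Distributions and sizes are placeholders.
```python
# Reproducible pseudo-random test arrays, exported to CSV.
import numpy as np

rng = np.random.default_rng(seed=12345)
normal_sample = rng.normal(loc=0.0, scale=1.0, size=1000)
uniform_sample = rng.uniform(low=0.0, high=1.0, size=1000)
np.savetxt("test_arrays.csv",
           np.column_stack([normal_sample, uniform_sample]),
           delimiter=",", header="normal,uniform", comments="")
```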
Dear all,
I have a huge database of raw hourly meteorological data, and I am looking for tutorials on processing it: filling gaps, detecting outliers, etc., preferably using R (but I could also use Python or Julia).
Any suggestion?
Thanks in advance!
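Since Python is also acceptable, a minimal pandas sketch of two typical cleaning steps for hourly data: flagging outliers with a rolling z-score and filling short gaps by time interpolation. The file and column names are placeholders.
```python
# Outlier flagging and short-gap filling on an hourly temperature series.
import pandas as pd

df = pd.read_csv("hourly_station.csv", parse_dates=["time"], index_col="time")
temp = df["temperature"].asfreq("h")                    # enforce a regular hourly index

roll_mean = temp.rolling("24h").mean()
roll_std = temp.rolling("24h").std()
temp_clean = temp.mask((temp - roll_mean).abs() > 4 * roll_std)

temp_filled = temp_clean.interpolate(method="time", limit=6)   # fill gaps up to 6 h
```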
My recorded FACS data only include FSC-A and SSC-A (and my color channels), with no FSC-H or SSC-H.
Normally I gate for singlets first with FSC-A vs FSC-H.
I can not do this now.
Did I forget to record the FSC and SSC -H?
Is there a way around this? Can I get the height values by data processing?
How bad would it be to skip the singlet gating step?
Thank you and best regards,
David
The relative standard curve method in qPCR is a standard-curve-based method for relative real-time PCR data processing. Can anyone explain the calculations of this method to me? I read something about pooling cDNA samples and using them as a calibrator; is that right?
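Yes, a pool of cDNA is commonly used both to build the dilution series and as the calibrator sample. A minimal sketch (hypothetical Ct values) of the calculation: each standard curve gives a Ct versus log10(quantity) line, sample quantities are read off it for the target and the reference gene, the target is divided by the reference, and the result is expressed relative to the calibrator.
```python
# Relative standard curve method: quantities from standard curves, then
# normalization to a reference gene and to a calibrator sample.
import numpy as np

log_dilution = np.log10([1, 0.2, 0.04, 0.008, 0.0016])     # pooled-cDNA dilutions

def quantity(ct, ct_standards):
    slope, intercept = np.polyfit(log_dilution, ct_standards, 1)
    return 10 ** ((ct - intercept) / slope)

ct_std_target = np.array([20.1, 22.5, 24.8, 27.2, 29.5])   # placeholder standards
ct_std_ref = np.array([18.0, 20.3, 22.7, 25.1, 27.4])

target_q = quantity(24.0, ct_std_target)                    # sample, target gene
ref_q = quantity(21.0, ct_std_ref)                          # sample, reference gene
calib_target_q = quantity(25.5, ct_std_target)              # calibrator, target gene
calib_ref_q = quantity(21.2, ct_std_ref)                    # calibrator, reference gene

print(round((target_q / ref_q) / (calib_target_q / calib_ref_q), 2))
```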