Questions related to Information Retrieval
To handle the second-order factor, I used the latent variable scores calculated by SmartPLS. After attaching that data file with my path, I tried to create groups on the basis of a categorical variable. But it simply doesn't work, and it doesn't give an error either. Has anyone faced a similar problem?
I have done Rietveld refinement on PXRD data, but I do not know where to find the Final R indices [I>2sigma(I)], R indices (all data), extinction coefficient, and largest diff. peak and hole.
Please help me with this issue.
I used Delvetool.com to analyse my qualitative data after I found my edited transcripts were lost in NVivo and no cause could be found. I also felt that NVivo is quite complicated to use, while Delve is much more straightforward. However, the ethics committee at my university had never heard of Delve and raised a concern about data storage in Delve. Delve uses third-party processors, which means that data uploaded to Delve is stored on a number of external servers. The Delve developers argue that they have listed all third-party processors on their website and that they are all well known and trusted (https://delvetool.com/subprocessors).
I have searched and found a few published papers that used Delve for thematic analysis. It is really hard for me to go back to NVivo now. Could you please advise me whether there would be serious ethical concerns about uploading data into Delve and using it for thematic analysis?
Thanks a lot!
Question: How do I get a Mach number contour in CFD post-processing?
- Solution computed using HPC
- Cannot open the files back in Fluent with Case & Data to export the Mach number variable.
- However, with the data file alone, I can open the solution in CFD-Post, but no Mach number variable is available.
What could I try?
Any help is appreciated.
I was reviewing a paper for a journal, and the authors stated that they randomly selected company reports from several databases to give them a sample size of 600 firms.
I was of the opinion that “sampling” is inherently associated with primary data collection or am I mistaken?
My suggestion would be that the authors state that they selected 600 “cases” for their analysis from the databases (using whatever criteria) and not make reference to the word “sampling”…?
Laptop/PC recommendations for efficient writing, EndNote, SPSS, and data storage?
% Create an empty array to store the electron density data
ne_array = [];

% Loop through the files in the folder
folder_path = 'D:\ionPrf_prov1_2020_002'; % replace with the path to your folder
file_list = dir(fullfile(folder_path, '*.0001_nc')); % replace '.0001_nc' with the extension of your data files

for i = 1:numel(file_list)
    % Read the electron density data from the file
    filepath = fullfile(folder_path, file_list(i).name);
    fid = fopen(filepath, 'r');
    line = fgetl(fid);
    while ischar(line) % scan every line of the file, not just the first one
        if startsWith(line, 'ELEC_dens') % look for the lines starting with 'ELEC_dens'
            ne_data = strsplit(line);
            ne_data = str2double(ne_data(2:end)); % extract the Ne values as doubles
            ne_array = [ne_array; ne_data]; % append the Ne data (assumes each ELEC_dens line has the same number of values)
        end
        line = fgetl(fid);
    end
    fclose(fid); % close the file before moving to the next one
end

% Save the electron density data to a text file
output_filename = 'ne_data.txt';
dlmwrite(output_filename, ne_array, 'delimiter', '\t', 'precision', '%.3e');
I frequently use the LigParGen website to generate an OPLS-AA data file for LAMMPS by uploading a .pdb file.
Generally it does not take long to get the LAMMPS file.
Now, however, a long time after the .pdb file upload, I receive an out-of-time error message.
Is anybody experiencing the same problem?
Basically, I want to generate a mapping between URIs (RDF, RDFS, OWL) and natural-language keywords for a distributed meta-meta information system, to assist SPARQL query construction from natural-language queries using a controlled vocabulary such as WordNet. For that I have to crawl Linked Open Data to get URIs describing entities and their best-matched keywords (e.g. rdfs:label).
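As a rough illustration of the URI-to-keyword step, here is a minimal pure-Python sketch that pulls `rdfs:label` literals out of N-Triples data. The regex and the sample triple are my assumptions for illustration only; a real crawler would use an RDF library (e.g. rdflib) against Linked Open Data endpoints.

```python
import re

LABEL_PREDICATE = "<http://www.w3.org/2000/01/rdf-schema#label>"

# Matches simple N-Triples of the form: <subject> <predicate> "literal"(@lang)? .
TRIPLE_RE = re.compile(r'^<([^>]+)>\s+(<[^>]+>)\s+"([^"]*)"(?:@(\w+))?\s*\.\s*$')

def extract_labels(ntriples_lines, lang="en"):
    """Return a dict mapping entity URIs to their rdfs:label keywords."""
    mapping = {}
    for line in ntriples_lines:
        m = TRIPLE_RE.match(line.strip())
        if m and m.group(2) == LABEL_PREDICATE:
            uri, _, literal, tag = m.groups()
            if tag is None or tag == lang:  # keep untagged or matching-language labels
                mapping[uri] = literal
    return mapping

sample = [
    '<http://dbpedia.org/resource/Berlin> '
    '<http://www.w3.org/2000/01/rdf-schema#label> "Berlin"@en .',
]
print(extract_labels(sample))  # {'http://dbpedia.org/resource/Berlin': 'Berlin'}
```

The resulting URI-to-keyword dictionary could then be matched against WordNet synsets when translating natural-language terms into SPARQL.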
I'm looking for some classified partial discharge datasets (Corona Discharge, Arcing Discharge, Surface Discharge, Internal Discharge) for a pattern-recognition study. I'd really appreciate it if you could kindly mention some data sources from which I could download the datasets.
I am wondering if Google Scholar has preset preferences with respect to location, race, etc. when displaying search results. I noticed that when some of my scholarly articles are searched for on the platform using THE EXACT ARTICLE TITLE, they do not come up in the results, whereas closely related and dissimilar articles by authors from some other parts of the globe do. Yet my articles are indexed on Google Scholar and visible on my profile. Some only come up when BOTH my name and the article title are searched together on the platform. What could be responsible? I will appreciate any insight, thanks!
I am seeking a reliable data source or database that can provide current and past affiliations for authors of scientific publications.
While I am familiar with platforms such as Scopus and Web of Science, I am unable to use them due to the requirement of an institutional subscription or the need to search for authors individually. Ideally, I am looking for a database where I can extract information on all authors affiliated with a specific institution.
My research is focused on computer science, so a data source that specializes in this field would be perfect, but I'm open to any suggestions you might have. Thanks!
I am simulating nylon 6 using LAMMPS.
I encounter this error when running:
I have been looking for the limitations of the ConSurf server. I have a data file containing 95k sequences, but my ConSurf job has been processing for the last 3 weeks. So I want to know whether there is any limitation on data file size (number of sequences) in ConSurf.
Given the issue of copyright of source publications, does the use of ChatGPT for the automated creation of new texts used in specific practical and commercial applications raise specific ethical issues?
Given the issue of copyright in source publications, the use of ChatGPT for the automated creation of new texts for practical, business and commercial purposes may raise specific ethical problems if the generated texts draw on source publications downloaded from the Internet without adequate acknowledgement in footnotes. In such a situation copyright may not be respected, which is a serious drawback, a breach of current standards for the use of source publications, and a serious limitation on the practical use of texts created in this way.

As a standard, ChatGPT does not provide a list of data sources in its responses. It is possible to ask for these sources separately, and it then provides them, but there is no certainty that it provides them all. Sometimes, for general concepts, it lists sources such as textbooks and industry monographs, adding that it has also used so-called 'own sources', i.e. sources drawn from a knowledge base of several tens of terabytes obtained from the Internet in 2021 and selected contextually in relation to the question asked and, possibly, the preceding description of context.

The ethical issues related to using ChatGPT to create texts for practical, profit-making applications by freelancers, where a certain amount of creative work is required, are determined, among other things, by the attitude of the person, company, institution or other entity using this tool towards data available on the Internet. Not all users of Internet resources treat the openness of online data and information in the same way; approaches to indicating data sources, using them, and respecting copyright differ.

According to the applicable legal and ethical regulations, even for data published under the open-access formula, texts that make use of such data must indicate their sources of data and inspiration in footnotes that allow the specific source to be identified. If this important step is omitted, and the sources of data, information, inspiration, specific statements, theses and explanations of concepts are not shown in what should be a new creative text, serious drawbacks arise: copyright may not be respected, the development of research in the given field is hindered, and it becomes impossible to verify the veracity of information that ChatGPT originally took from the Internet (as of 2021, and from only part of the data available online). If the issue of copyright is treated with such discretion, certain ethical principles are not complied with.

Precise indication of data sources also matters for verifying the veracity of the data and information contained in a ChatGPT-generated answer, i.e. the automatically generated text. The importance of this issue is directly proportional to the scale of errors and fictitious information, non-existent "facts", appearing in the texts generated by this system. And, unfortunately, the scale of these errors and fabricated "data and information", created within the "free creation" of this system, is not small.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Considering the issue of copyright of source publications, does the use of ChatGPT for the automated creation of new texts used in specific practical and profit-making applications generate specific ethical problems?
What do you think about it?
What is your opinion on this subject?
I invite you all to discuss,
Thank you very much,
Hi everyone, I'm trying to convert WRF data to ARL format to run trajectories using the HYSPLIT software. But I get the following error:
child killed: segmentation violation
child killed: segmentation violation
"exec $exec_dir/arw2arl.exe $data_file >STDOUT"
(procedure "xtrct_data" line 37)
invoked from within
"xtrct_data $gdir $gbase"
invoked from within
("uplevel" body line 1)
invoked from within
"uplevel #0 [list $w invoke]"
(procedure "tk::ButtonUp" line 24)
invoked from within
(command bound to event)
Previously, I converted the WRF data successfully!
I would like to learn the ANN process (for IC-engine-related applications). I have seen many videos related to ANNs on YouTube, and I have read many articles on the topic, but the articles provide only results. I can't understand how the input/output parameters are assigned/simulated in the ANN for IC engine problems. If possible, kindly guide me to any reference data file or videos. Thanks in advance.
When I click on Import Data, I choose the SPSS data file (.sav) option and then select the file. In the next step, I click the Save button for the program to save the file as LISREL data (.LSF). Then an error message comes up stating that "Error importing, ... [location and name of the file] does not exist." I am repeatedly receiving this error. I changed the name of the file and reduced the number of variables in the dataset to shrink the file size, but these have not helped so far. Is there anything I missed, and how can I work around this problem?
Thank you in advance,
I am looking at candidate genes identified for isolated orofacial clefts in various populations to see if the associations are replicated in a UK population (along with environmental exposures).
I will be using a secondary data source only. I am aware of papers that explain what analysis is needed for primary data, but I haven't seen whether this differs for secondary data.
I have successfully applied the non-linear Langmuir and Freundlich isotherms in Origin. However, the fit does not converge for the Sips model, and I am unable to work out why. I would be really thankful for help in this regard.
PS: I have attached the data file here.
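Since a non-converging Sips fit is most often a starting-value problem, here is a minimal sketch of the common three-parameter Sips form together with a crude seeding heuristic. The function names and the seeding rule are my assumptions, not anything Origin prescribes:

```python
def sips(Ce, qm, Ks, n):
    """Sips isotherm: qe = qm * (Ks*Ce)**n / (1 + (Ks*Ce)**n)."""
    x = (Ks * Ce) ** n
    return qm * x / (1.0 + x)

def initial_guess(Ce_data, qe_data):
    """Crude starting values for a non-linear Sips fit (a heuristic, not a rule)."""
    qm0 = 1.1 * max(qe_data)                   # plateau slightly above the largest uptake
    Ce_sorted = sorted(Ce_data)
    Ks0 = 1.0 / Ce_sorted[len(Ce_sorted) // 2]  # half-saturation near mid-range Ce
    n0 = 1.0                                    # start at the Langmuir limit
    return qm0, Ks0, n0
```

At n = 1 the Sips equation reduces exactly to the Langmuir isotherm, so the converged Langmuir parameters (with n seeded at 1) are usually safe starting values for the Sips fit in Origin.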
task # 0
from pw_readfile : error # 1
error opening xml data file
task # 2
from pw_readfile : error # 1
error opening xml data file
task # 3
from pw_readfile : error # 1
error opening xml data file
Can anyone suggest a free platform or API to retrieve data from sensors (in real time, if possible)?
The data can be about weather, agriculture or similar.
Hope you're doing well.
Is it possible to download data from the NASA website and use it in research papers written for international journals? If so, is the data source authentic enough to use for research work?
For your information, the NASA website from which I want to download and use data is given below.
Please have a look and give your kind suggestions regarding the data's authenticity.
Thanks in advance
Has anybody used GOplot in R? I have the code running with the EC.rda data file supplied with the code, but I cannot run it with my own data (using a .csv file). Could someone share their raw data file for me to look at, or suggest how I can make it work?
I would like to know what statistical analytical tool/model can be used on a dataset where price comparison data of daily necessities are recorded weekly from primary and secondary data sources.
The data are collected over a 30-week time frame, every Wednesday.
I have considered using trend analysis and time series, but I am looking for any analytical tool/model that goes beyond these two.
If you would kindly let me know what kind of interpretation or results I am likely to get with the analytical tool/model in question, that would also be great.
If you have, for example, 5 different samples that were sequenced using 3 different regions, but have realised that one gene region is not powerful enough to resolve them to species level, would you add two more genes to improve the topology of the phylogenetic tree?
Would you please share your experience of where I can collect firm-level green patent citations for green innovation research? Please mention the name of the source (I prefer a free one).
Hello everyone. I am doing my dissertation on factors affecting SME internationalization in Bangladesh, a comparative insight between the UK and Bangladesh, and I decided to complete my paper using secondary data sources. But I am stuck in deciding which data analysis method I should use. I am using an interpretivist philosophy and an inductive approach, and my strategy is the case study.
Is it possible to conduct a moderation analysis when all variables (X, Y and W) are categorical (dichotomous)? I believe I have identified a moderation effect by splitting the data file on my moderator (W) and observing different effects of the predictor (X) on the outcome (Y), but when I test the interaction in a PROCESS moderation model (Model 1), the interaction term is not significant. Am I missing something? All variables have been dummy coded.
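For intuition (this is a sketch, not PROCESS itself): with dichotomous X, Y and W, the split-file comparison amounts to comparing the X-to-Y odds ratio within each level of W, and the logistic interaction coefficient estimates the log of their ratio. The counts below are made up for illustration:

```python
import math

def odds_ratio(a, b, c, d):
    """OR for a 2x2 table: a = X1&Y1, b = X1&Y0, c = X0&Y1, d = X0&Y0."""
    return (a * d) / (b * c)

# Hypothetical 2x2 tables, one per moderator level
table_w0 = (30, 20, 25, 25)   # OR near 1: weak X effect when W = 0
table_w1 = (40, 10, 20, 30)   # larger OR: stronger X effect when W = 1

or_w0 = odds_ratio(*table_w0)   # 1.5
or_w1 = odds_ratio(*table_w1)   # 6.0

# The logistic interaction coefficient corresponds to this difference in log-odds
interaction_log_or = math.log(or_w1) - math.log(or_w0)
print(or_w0, or_w1, interaction_log_or)
```

A stratified difference in odds ratios can look convincing while the interaction test stays non-significant, because the interaction term is estimated with far less power than each within-stratum effect; the cell sizes, not just the ORs, determine significance.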
I am looking for nitrogen, potassium, phosphorus, organic carbon stock, pH value, and added-nutrient features in a soil dataset.
Kindly suggest a relevant data source.
I want to run MD simulations with LAMMPS on calcite, magnesite, silica and other minerals to study their properties. But first I need to create a data file that contains coordinates, bonds, angles, charges and other coefficients. I tried Atomsk to convert CIF files into LAMMPS data files, but no parameters for bonds, angles or charges were generated. I need help solving this problem. Could you suggest any software or direction by which I can make the data files with those coefficients? Thank you.
I am simulating a centrifugal compressor using the single passage method in Ansys.
The case file works correctly in Ansys 19; however, when I open the converged case and data files with Ansys 2021 R1 and continue the iteration process, a sudden jump occurs in the residuals, and the final results also change and are wrong.
Can anyone help in this regard?
Thanks in advance.
I urgently need the reference XRD data file for beta molybdenum carbide. I also need a free source to get diffraction data other than COD.
I am wondering whether there is any available EUV bright-point database from the SOHO and SDO period. The catalogue could be a data file, FITS files, or JPG images. I want to make EUV bright-point synoptic maps. I am also interested in plage, filament, facula and/or prominence databases.
If someone knows of these types of databases, would you be so kind as to post related links?
Thank you in advance.
I want to know how to run an MD simulation on a metal or other substance to learn about its stress, strength, energy, deformation, etc., in order to do research on it.
I obtained EIS data while doing research on biopolymers for corrosion inhibition, but I cannot find a good equivalent-circuit fit. Could anybody help me with it?
I have attached the data file, in which the first column is the frequency, the second column is Z', and the third column is Z''.
Hi everyone. I have a problem with WordSmith Tools when collecting 4-word lexical bundles: I can only see the counts, and I cannot inspect the underlying data, whereas with AntConc the 4-word bundle types can be displayed. How can I check the data source of the 4-word bundles? For example, Text 1 is reported as having 135 LBs; where can I see the full list of LB types? I spent a whole day trying to find out, but nothing helped, and I am very confused by this tool now. If anyone can help me, I will appreciate it.
Does anyone know how to generate LAMMPS data files from Materials Studio coarse-grained beads?
I generated a CG model using Materials Studio's Mesocite module, but I am not able to generate the LAMMPS data file from it. When I export the file, it does not contain any information.
Any suggestion would be highly appreciated.
I have a data set that contains a free-text field for more than 3,000 records, all of which contain doctors' notes. I need to extract specific information from each of them, for example the doctor's final decision and the classification of the patient. What is the most appropriate way to analyse these texts: information retrieval, information extraction, or would a Q&A system be fine?
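For the decision/classification fields you describe, this is an information *extraction* task rather than retrieval (retrieval only finds relevant notes; extraction pulls structured fields out of them). A minimal rule-based sketch in Python; the field names and phrasings are invented for illustration, and real notes would need patterns tuned to the actual wording, or a clinical NLP library:

```python
import re

# Hypothetical patterns; adjust to how the doctors actually phrase these fields
DECISION_RE = re.compile(r"(?:final decision|plan)\s*:\s*(.+?)(?:\.|$)", re.I)
CLASS_RE = re.compile(r"(?:classification|class)\s*:\s*(\w+)", re.I)

def extract_fields(note):
    """Pull the decision and classification out of one free-text note."""
    decision = DECISION_RE.search(note)
    patient_class = CLASS_RE.search(note)
    return {
        "decision": decision.group(1).strip() if decision else None,
        "class": patient_class.group(1) if patient_class else None,
    }

note = "Patient stable. Classification: B2. Final decision: discharge with follow-up in 2 weeks."
print(extract_fields(note))
```

Running the extractor over all 3,000 records and counting how many return `None` for each field is a quick way to judge whether rules suffice or a trained extraction model is needed.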
From the same data source/work, is it okay to publish a correspondence/letter to the editor first, using preliminary findings, and then, after completing the whole work, publish the original article using a more in-depth analysis of the data, with different angles of presentation and a detailed discussion?
We are conducting a study of the COVID-19 burden in Romania in 2020-2021, and we need the number of cases treated in hospitals in Romania. Can you indicate an open-source dataset in electronic format for this parameter?
Greetings. How can we generate the data file for the mobility plugin in Cooja that contains the location data for node movements?
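For what it's worth, a commonly described positions-file layout for Cooja mobility plugins is one whitespace-separated record per line, `time node_id x y`. That layout, and the random-walk generator below, are assumptions that should be checked against the plugin's own parser before use:

```python
import random

def random_walk_trace(n_nodes=3, steps=10, dt=1.0, size=100.0, seed=42):
    """Yield 'time node x y' lines for a crude bounded random walk in a size x size area."""
    rng = random.Random(seed)
    pos = {n: [rng.uniform(0, size), rng.uniform(0, size)] for n in range(n_nodes)}
    lines = []
    for step in range(steps):
        t = step * dt
        for node, (x, y) in sorted(pos.items()):
            lines.append(f"{t:.2f} {node} {x:.2f} {y:.2f}")
            # take a bounded random step for the next interval
            pos[node][0] = min(size, max(0.0, x + rng.uniform(-5, 5)))
            pos[node][1] = min(size, max(0.0, y + rng.uniform(-5, 5)))
    return lines

trace = random_walk_trace()
print("\n".join(trace[:3]))
```

Writing `"\n".join(trace)` to a text file would then give a movement trace to point the plugin at, once the column order is confirmed.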
I need an epigenetic data file run through the Horvath clock (DNA Methylation Age Calculator), but I did not find any description of how this should be done.
Please can you help me find a link that describes the process of running the data?
I developed an approach for extracting aspects from reviews in different domains, and now I have the aspects. I would like suggestions on how to use these aspects in different applications or tasks, such as an aspect-based recommender system.
Note: an aspect usually refers to a concept that represents a topic of an item in a specific domain; price, taste, service and cleanliness, for example, are relevant aspects in the restaurant domain.
I'm doing data analysis on data that have two sources of variability, with replicated data (the data sources are distributed in groups). Unfortunately, the data are not normally distributed. The alternative test I found was the Friedman test, but the Friedman test is only for non-replicated data. Are there any other non-parametric tests I can use?
As many of you are aware, Industry 4.0 and smart manufacturing is widely gaining traction among many manufacturing practitioners.
One of the aims of smart manufacturing is to bring in more real-time intelligence to shop-floor and manufacturing planners, by employing different shop-floor data sources (For eg, smart products, advanced equipment, and machine sensors, collated through the Industrial Internet of things, RFID gateways, etc). The data collected from these shop-floor data sources can be processed by means of Artificial intelligence-based algorithms ( machine learning, evolutionary algorithms, etc) to produce optimized production schedules, process plans, service schedules, and maintenance plans. The application of such techniques has been researched by several researchers and numerous publications are already available.
However, at the same time, central to the functioning of any manufacturing firm are enterprise resource planning (ERP) packages, encompassing all the functions of a manufacturing business, ranging from procurement to production planning and control to service management, and even auxiliary support functions such as accounting.
As a result, there are two different software systems that can benefit manufacturing firms:
1. AI-based smart manufacturing tools which seem promising in improving production efficiency: Such packages are more manufacturing operation centric
2. Tried and tested ERP work packages, developed by numerous software firms: Such packages can focus on both the business planning and manufacturing operations management.
The questions that I often wonder about in this regard are :
- Have AI- algorithm-based smart manufacturing tools been integrated into existing ERP software?
- What is the level of maturity of ERP work packages with respect to AI-based intelligence algorithms ( particularly with respect to integration and hosting of AI-based work packages, API access to databases, software architecture)?
Many ERP work packages support multiple scheduling rules for a host of production scenarios such as flow manufacturing, make to stock, make to order, job shop- etc.
- But do they actually employ AI Solvers, such as genetic algorithms, search-based algorithms, Neural network models, reinforcement learning-based algorithms?
From my initial assessment of the market, I understand that we are staring at a situation where development of these two kinds of work packages is progressing in two non-intersecting planes, with two separate software packages being the only way out: one for AI-enabled manufacturing execution and the other for ERP systems.
In that case, I'd be keen to know the market feedback-
- Are industrial engineers of today willing to accept the need for two separate software packages for manufacturing scheduling and control, considering that liaising with software systems is not their core job ?
I've found a few articles on Intelligent ERP systems, but they are pretty generic, and mainly focus on the need for ERP systems to integrate cloud and mobile-based support and automated inspections, but do not discuss much on AI solvers.
A search on Google Scholar does not reveal much either, with many articles proposing architectures for the ERP of the future, integration with smart agents, etc., without discussing much about existing maturity levels and capabilities.
Looking forward to your valuable answers!
The main problem I always encounter with external packages or internal commands is that bonds that pass through the boundary of the simulation cell, or atoms that sit exactly on the wall of the cell (super common for inorganic structures, polymers, MOFs, coordination polymers, and all the interesting stuff...), always become problematic in some way. Bonds that pass through the wall usually break; other times, atoms that sit on the wall get duplicated when they really shouldn't.
I have used TopoTools, lammps-interface, Atomsk, OVITO and other external software, with TopoTools and lammps-interface seeming the most promising from a topological point of view, but again I get the same problems with them.
I am new to molecular dynamics. I am currently using Materials Studio to prepare the structure and then convert the .car file into a LAMMPS data file.
But when I use the Amorphous Cell module, some molecules actually cross the boundary of the cell.
I want to confine the molecules inside the cell boundary,
or to know how to make the cell bigger.
I know I can add a vacuum or revise the length of the cell, but that only adds space on one side.
I appreciate any response.
In the past I used MASCOT to search, identify and explore proteomics data. I am now looking for a free Windows-based software package to analyze my large proteomics data files, mostly in MGF format.
I need the CIF diffraction data file for Nb2O5. The specifications for this compound are given below:
Chemical name: Nb2O5
Crystal structure: pseudo-hexagonal
a = 3.607, b = 3.607, c = 3.925
I cannot find the CIF file in FindIt or the ICSD. I would be really grateful if anyone could share it.
Thanks a lot.
My name is Huiyeon, and I am using the STRUCTURE 2.3.3 software for population structure analysis, but I have a problem.
I have genotype data scored as 0 and 1 for SSR markers, in the formats available in the NTSYSpc program.
First, I converted my Excel files into TXT (tab-delimited) format.
Next, I tried to import the file into a new project.
I specified the settings according to the instructions; the number of my samples is 105.
When I finish the settings and try to proceed, the error "Bad Format in Data Source: Expect 27 data entries at line 1" occurs, and after clicking "OK", a window says "The data file is expected to have following format: 1 row with 24 entries (marker name)/1 rows with 27 entries (data)".
Though I went back to the settings and modified the number of loci several times, almost the same error occurs.
How can I convert my genotype data file into a STRUCTURE input data file for population structure analysis?
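The "Expect 27 entries" message usually means the data rows carry extra leading columns (e.g. individual label, population) beyond the loci, so the marker-name row must have exactly num_loci entries while each data row has num_loci plus the extra columns declared in the project settings. A hedged Python sketch of writing such a file; the column layout here is an assumption and must be matched to your own settings:

```python
def to_structure(marker_names, individuals):
    """Write a tab-delimited STRUCTURE-style input.

    individuals: list of (label, population, [0/1 per locus]) tuples.
    """
    lines = ["\t".join(marker_names)]          # header row: one entry per locus
    for label, pop, genotypes in individuals:
        row = [label, str(pop)] + [str(g) for g in genotypes]
        lines.append("\t".join(row))           # num_loci + 2 entries per data row
    return "\n".join(lines)

# Tiny hypothetical example: 3 SSR loci, 2 individuals
markers = ["SSR1", "SSR2", "SSR3"]
indivs = [("ind001", 1, [0, 1, 1]), ("ind002", 2, [1, 0, 1])]
print(to_structure(markers, indivs))
```

If STRUCTURE expects 27 entries per data row against your 24 markers, check which 3 extra columns (label, population, flags, etc.) your project settings have declared and make sure each data row carries exactly those.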
I am trying to use the InfoCrop model for climate-change effects. I have imported a CSV file of daily weather data from 1971-2002 into InfoCrop, and it has successfully created the 32 files and the CLI file. However, when I create the project and use the weather file, the years are not listed in the pull-down menu. Please tell me what the reason could be and how to solve it, or whether there is any other way to import the weather file.
Thanks in advance.
There are a few steps to make a heatmap of your qRT-PCR data (fold change or relative quantification) using R.
Data file preparation:
Make an Excel file of your data in which you place your genes of interest in columns and your treatments or conditions in rows.
Save the file with a *.csv extension.
Import data file in R:
Using the following code, import your data file into R:
data2 <- read.csv("data1.csv")
~ data1.csv is the file name of the data file you created in Excel, and data2 is the name of your data in R. You can use your own names instead of data1 or data2, and you can even give your data a single name in both places.
When you import the data, you will see that the first column is composed of serial numbers. We need to replace these numbers with the names from the column of your data that contains your genes of interest. To do this, use this code:
rownames(data2) <- data2$Name
~ Name is the name of the first column
This replaces the serial numbers with your first column. But now your genes of interest appear twice. To remove the duplicate column, use this code:
data2$Name <- NULL
Now your data is ready to create heatmap.
First create matrix of your data by using following code:
data2 <- as.matrix(data2)
Now install the package used to create the heatmap, "pheatmap", with the following code:
install.packages("pheatmap")
After installing, call the package each time you want to use it with the following code:
library(pheatmap)
Then give the command to make a heatmap of your data:
pheatmap(data2)
Usually we show the fold change/relative quantification values inside the heatmap; to add them, modify your code in the following way:
pheatmap(data2, display_numbers = TRUE)
- You can customize your heatmap in many ways. Contact me any time if you need any help.