Science topic

RDF - Science topic

Explore the latest questions and answers in RDF, and find RDF experts.
Questions related to RDF
  • asked a question related to RDF
Question
4 answers
Suppose there are different fertigation levels: 50% RDF, 75% RDF, 100% RDF and 125% RDF.
Fertilizer use efficiency (FUE) for 50% was 115 kg/kg NPK, for 75% 102 kg/kg NPK, for 100% 95 kg/kg NPK and for 125% 85 kg/kg NPK. How should these results be interpreted?
Relevant answer
Answer
In your case, the FUE for 50% RDF (Recommended Dose of Fertilizer) was highest at 115 kg/kg NPK, and it decreased as the RDF increased. This suggests that at lower fertilizer levels, the plants were able to utilize a greater proportion of the applied nutrients, resulting in higher FUE. Conversely, at higher fertilizer levels, a larger proportion of the applied nutrients were likely lost or remained unabsorbed, resulting in lower FUE.
It’s important to note that while reducing fertilizer levels can increase FUE, it’s crucial to ensure that the plants are still receiving adequate nutrition for optimal growth. Balancing these factors is key to sustainable and efficient agricultural practices.
  • asked a question related to RDF
Question
5 answers
Hello,
I'm trying to simulate the pyrolysis process of an RDF sample in a vertical tube furnace.
I would like to include this reaction in my simulation using COMSOL Multiphysics:
CmHnOl + (m/2 - l/2) O2 => m CO + (n/2) H2
Should I define (m, n, l) as variables, and if so, how can I do it?
Much appreciated
Relevant answer
Answer
Simulating a chemical reaction in COMSOL Multiphysics involves using the Reaction Engineering interface. This interface allows you to set up and solve reaction systems involving multiple species and reactions. Here's a general guide on how to simulate a chemical reaction in COMSOL:
  1. Launch COMSOL Multiphysics: Start by opening COMSOL Multiphysics and create a new model or open an existing one.
  2. Choose Physics: In the Model Builder window, select the "Add Physics" button, and then choose "Chemical Engineering" -> "Reaction Engineering" from the list of available physics interfaces.
  3. Define the Geometry: Set up the geometry of your simulation by importing or creating the relevant 2D or 3D geometry in the geometry section.
  4. Set up Species and Reactions: In the "Reaction Engineering" section, define the chemical species involved in the reaction by adding species and their properties. Then, specify the reactions by adding them and setting the reaction rate expressions, stoichiometry, and reaction kinetics.
  5. Define Initial Conditions: In the "Study" section, set the initial concentrations and other initial conditions for the species involved in the reaction.
  6. Boundary Conditions: Define appropriate boundary conditions for the reactor, which may include concentration, temperature, pressure, or other relevant parameters.
  7. Choose Solver and Mesh: In the "Study" section, select a solver for the simulation, such as the "Transient" solver for time-dependent simulations, and generate an appropriate mesh for your geometry.
  8. Run the Simulation: Click on the "Compute" button to start the simulation. COMSOL will solve the reaction system and provide results.
  9. Analyze and Visualize Results: After the simulation is complete, you can analyze and visualize the results using various tools available in COMSOL, such as plot groups, 1D/2D/3D plots, animations, and exporting data for further analysis.
Remember that the specific steps and settings required for simulating a chemical reaction may vary depending on the complexity of the reaction system and the specific physics involved. Make sure to refer to the COMSOL documentation and tutorials related to reaction engineering for more detailed information on setting up your specific chemical reaction simulation.
  • asked a question related to RDF
Question
1 answer
For the completion of RDF graphs, do you recommend keeping only triples whose expressiveness corresponds to the RDF framework itself, or also those of a more expressive language such as RDFS or OWL?
Relevant answer
Completion in a real-world application is probably a strongly domain-dependent task. Since I am not aware of the specific knowledge represented here, I would speculate that OWL should be a good option, considering that its deductive capabilities can help in completion, at least by obtaining new items that are not explicitly defined, and by preserving consistency through regular checking.
It can be argued that complexity becomes a concern as the graph order/size grows, but you can always move from OWL to another representation, keeping those nodes/edges obtained by deduction. On the other hand, you could consider applying lightweight ontology languages, but that again depends on the domain.
I suggest being a bit more descriptive and, by the way, including other keywords, such as graph completion, in order to attract the attention of specialists related to the problem.
  • asked a question related to RDF
Question
1 answer
Hi,
I have a water simulation box with 116 water molecules and used the TIP4P water model. Now I need to calculate the water-water RDF, the oxygen-oxygen RDF and the hydrogen-oxygen RDF.
I tried to calculate the water-water RDF and ended up with a wrong plot, which I did not expect.
My index file order is like this: O “System” 1 “Other” 2 “HO4”
I used the following command to calculate the RDF:
gmx rdf -f traj_comp.xtc -s run01.tpr -n index.ndx -o Water.xvg, and for sel and ref I chose option “O”
How can I use the gmx rdf options to calculate the RDF data correctly?
Cheers, Kal
Relevant answer
  • asked a question related to RDF
Question
3 answers
I want to query multiple SPARQL endpoints in real time and visualize the resulting RDF triples as a graph.
How can RDF be visualized well? Are there better RDF visualizers? Requirements:
- Open source.
- Supports RDF triples, not just ontologies.
- Good visualization quality.
- Can be used programmatically, ideally from Java or via HTTP.
- Scales to larger numbers of RDF triples.
I found some RDF visualizers, but they don't fit:
- [Apache ECharts](https://echarts.apache.org/en/): Free but not an RDF style.
- [Graphviz](https://graphviz.org/): Free but the style is not very attractive.
- [rdf-visualizer](https://issemantic.net/rdf-visualizer): Nice style but can't be used programmatically.
- [WebVOWL](http://vowl.visualdataweb.org/webvowl.html): Nice style but only the ontology can be shown, not all RDF triples.
Relevant answer
Answer
Anelia Kurteva Thank you for your advice.
But I just want to visualize RDF triples, and I don't want to use Neo4j to store them.
My data is stored in RDF databases and queried via a SPARQL endpoint.
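If no ready-made viewer fits, one pragmatic option is to render the query results yourself. Below is a minimal sketch, assuming rdflib, networkx and matplotlib are available; the inline Turtle data is a placeholder, and in practice you would parse the output of your SPARQL CONSTRUCT queries instead.
import matplotlib.pyplot as plt
import networkx as nx
from rdflib import Graph

TTL = """
@prefix ex: <http://example.org/> .
ex:Alice ex:knows ex:Bob .
ex:Bob ex:worksFor ex:ACME .
"""

g = Graph()
g.parse(data=TTL, format="turtle")  # or parse the result of a CONSTRUCT query

nxg = nx.DiGraph()
edge_labels = {}
for s, p, o in g:  # every triple becomes a labelled, directed edge
    nxg.add_edge(str(s), str(o))
    edge_labels[(str(s), str(o))] = str(p).split("#")[-1].split("/")[-1]

pos = nx.spring_layout(nxg, seed=42)
nx.draw(nxg, pos, with_labels=True, node_color="lightsteelblue", font_size=8)
nx.draw_networkx_edge_labels(nxg, pos, edge_labels=edge_labels, font_size=8)
plt.show()
This is not as polished as a dedicated viewer, but it is fully scriptable and works on arbitrary triples, not only ontologies.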
  • asked a question related to RDF
Question
3 answers
When we think about semantic software, descriptive RDF files of HTML pages and mapping of relational databases to RDF immediately come to mind, however, does semantic software development only include those aspects?
Relevant answer
Answer
We at USW Ltd work on creating smart systems, which is obviously related to semantics processing.
We have worked with graph-organized data since 1997. This experience helped us to define what we call the Semantic Network Based Architecture - https://www.researchgate.net/publication/334680146_Semantic_Network_Based_Architecture. Using this architecture we developed our Unified Platform for Innovations (UPI). So far we have built two UPI-based workflow management systems: one for administration (Smart DOCMAN) and a second for industry. A general presentation can be found on www.usw.bg.
Currently we work on tools and technologies in the UPI environment that allow us to make our systems “smart”. While clarifying our understanding of semantics, concepts, data and so on, we arrived at our definition of a KG and the idea that the KG has to be unified - a Unified Knowledge Graph (uKG) - https://www.researchgate.net/publication/361532200_Unified_Knowledge_Graph.
Our next step was to clarify our understanding of semantic constructions, which led us to the concept of the Ontology as a Unified Knowledge Graph construction - https://www.researchgate.net/publication/361813986_The_Ontology_as_a_Unified_Knowledge_Graph_construction.
We think that smart systems should not be trained but educated, as described in Root Language and Machine Education - https://www.researchgate.net/publication/361814428_Root_Language_and_Machine_Education.
  • asked a question related to RDF
Question
2 answers
SPARQL property paths only return the start and end nodes of a path and do not allow variables in property path expressions.
SPARQL property paths return neither the intermediate nodes nor the traversed triples.
SPARQL property paths cannot retrieve and naturally represent a path in SPARQL's tabular result format.
Considering RDF-Star and SPARQL-Star current specifications, is it possible to bind the RDF triples from a property path query to embedded triples of RDF-Star triples?
For example:
SELECT ?since ?t WHERE {
  ns:p3 ns:FRIENDS_OF+ ns:p2 AS path (?s ?p ?o)
  <<?s ?p ?o>> ns:date_of_start ?since .
  BIND(<<?s ?p ?o>> AS ?t)
  FILTER (?since > 2016) .
}
Similar to the Cypher query below:
WITH 2016 AS since
MATCH rels = (p1:Person)-[:FRIENDS_OF*1..2]->(p2:Person)
WHERE ALL(k1 IN relationships(rels) WHERE k1.date_of_start > since)
RETURN rels;
RDF-star and SPARQL-star
Final Community Group Report 17 December 2021
Relevant answer
Answer
You can't bind results of property paths to triples with standard SPARQL.
Probably you could try something like this:
SELECT ?since ?t {
?f1 ns:FRIENDS_OF+ ?f2 .
ns:p3 ns:FRIENDS_OF* ?f1 .
?f2 ns:FRIENDS_OF* ns:p2 .
BIND(<<?f1 ns:FRIENDS_OF ?f2>> AS ?t) .
?t ns:date_of_start ?since . FILTER (?since > 2016) .
}
First it generates all possible combinations of friends ?f1 and ?f2. Then it only keeps those that are related to ns:p3 and ns:p2.
Afterwards the RDF-star triple is created via a binding and used for testing the start date.
  • asked a question related to RDF
Question
1 answer
Hi, is there any way to count the number of gas molecules adsorbed on graphene?
I used an RDF diagram but it wasn't useful. How can I count the number of adsorbed gas molecules from the RDF?
Relevant answer
Answer
Hello
I think this article will be helpful
  • asked a question related to RDF
Question
1 answer
It has apparently been proven that SQL is Turing-complete via cyclic tag systems (see [1]).
There are also approaches that convert relational structures to OWL (e.g. [2], [3]).
Can one conclude that one can define any algorithm in OWL or in one of its derivatives?
Does anyone know a paper?
Thanks!
Best,
Felix Grumbach
Relevant answer
Answer
Hi,
The W3C Web Ontology Language (OWL) is a Semantic Web language designed to represent rich and complex knowledge about things, groups of things, and relations between things.
OWL Full allows free mixing of OWL with RDF Schema and, like RDF Schema, does not enforce a strict separation of classes, properties, individuals and data values. OWL DL puts constraints on the mixing with RDF and requires disjointness of classes, properties, individuals and data values.
Kindly refer to this link:
Best wishes..
  • asked a question related to RDF
Question
3 answers
For learning SPARQL it might be useful to have full control over both the query text and the data (RDF triples). While there are many public SPARQL endpoints available, their data is typically read-only for obvious reasons. To actively apply SPARQL queries to one's own data, a local triple store might be useful, e.g. for reproducing the examples from https://www.w3.org/TR/rdf-sparql-query/.
However, setting up such an infrastructure with all its dependencies might be complicated.
Question: What is the simplest¹ way to setup a local triple store with SPARQL endpoint on a usual PC?
(¹: The meaning of "simplest" depends on one's system configuration and prior knowledge, which can be reflected by different answers.)
If one has already an up-to-date Python environment, then https://github.com/vemonet/rdflib-endpoint provides a simple solution with only two commands
  • pip install rdflib-endpoint (run once)
  • rdflib-endpoint serve <path_to_your_triple-file(s)>
  • →Access the YASGUI SPARQL editor on http://localhost:8000
However, I am interested which alternative solutions there are.
Relevant answer
Answer
My favourite is Blazegraph (not maintained anymore though)
Download the jar file and from command line launch it with one line
NB. It requires Java to be installed!
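For completeness, an even lower-threshold alternative with no server at all: load the triples into an in-memory rdflib Graph and run SPARQL in-process. This is not an HTTP endpoint, but it is often enough for working through the W3C examples. A minimal sketch (the file name is a placeholder):
from rdflib import Graph

g = Graph()
g.parse("my_triples.ttl", format="turtle")  # path to your own triple file

results = g.query("""
    SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10
""")
for s, p, o in results:
    print(s, p, o)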
  • asked a question related to RDF
Question
7 answers
The term "Semantic Web" goes back to at least 1999 and the idea – enable machines to "understand" information they process and enable them to be more useful – is much older. But still we do not have universal expert systems, despite that they would be very advantageous, especially in the context of (interdisciplinary) research and teaching.
My impression is that from the beginning semantic web technologies were dominated by Java-based tools and libraries and that the situation has barely changed until today (2022): e.g. most practical ontology development/usage seems to happen inside Protégé or by using OWLAPI.
However, in the same time span we have seen a tremendous success of numerical AI (often called "machine learning") technologies and here we see a much greater diversity of involved languages and frameworks. Also, the numerical AI community has grown significantly in the last decade.
I think this is, to a large part, because it is simple to get started with those technologies, and Python (interfaces) and Jupyter notebooks contribute significantly to this. Also, Python greatly simplifies programming (calling a library function and piping results to the next) for people who are not programmers by training, such as physicists, engineers etc.
On the other hand getting started with semantic technologies is (in comparison) much harder: E.g. a lot of (seemingly) outdated documentation and the lack of user-friendly tools to achieve quick motivating results must be overcome in this process.
Therefore, so my thesis, having an ecosystem of low-threshold Python-based tools available could help to unleash the vast potential of semantic technologies. It would help to grow the semantics community and enable more people to contribute content such as (patches to) domain ontologies, sophisticated queries and innovative applications, e.g. combining Wikidata and SymPy.
Over the past months I collected a number of semantic-related Python projects, see https://github.com/pysemtec/semantic-python-overview. So the good news is: There is something to use and grow. However, the amount of collaboration between those projects seems to be low and something like a semantic-python-community (i.e. people who are interested in both semantic technologies and python programming) is still missing.
Because complaining alone rarely leads to improvement of the situation, I try to spawn such a community, see https://pysemtec.org.
What do you think, can Python help to generate more useful applications of semantic technology, especially in science? What has to happen for this? What are possible counter-arguments?
Relevant answer
Answer
@ J. Rafiee
Thanks for your reply. While "Python is just another language" it has from my experience some advantages which make it favorable in a scientific context:
- possibility of interactive usage (e.g. via Jupyter-Notebooks)
- plenty of science-relevant libraries (numpy, sympy, pytorch, ...)
- comparatively low entrance barrier to getting started (e.g. for undergraduate-students from subjects other than computer science)
To prevent misunderstandings: I do not claim that Python is the perfect language. It definitely has weaknesses (such as execution speed).
My point is: If there were more (and better supported) python-tools for semantic-related tasks available and the existing ones were better known, this would significantly foster the development and the applicability of semantic technologies such as ontologies, reasoners and rule-processors.
I expect the scientific world would benefit from such a development because suitable management of available knowledge is one of its core-challenges and having more and easier tools available to tackle that challenge would consequently result in more and better research results.
Even if we discount for "hype-factor": Machine Learning (or numerical AI) techniques have had tremendous success (both in quality and quantity) in many research disciplines and Python-based libraries and interfaces take a large part of the credit for that. I think a similar development could happen with semantic technologies (aka symbolic AI).
  • asked a question related to RDF
Question
6 answers
I have been working with ontologies (RDF/OWL) for a long time, using them mostly as an engineer, essentially because they permit SPARQL and rules.
It's only recently, this year, that I started to really pay attention to the theoretical grounding of OWL. This led me to dive into the zoo of Description Logics and their desirable or undesirable properties.
I think there are some serious issues in the multiplication of work on DLs, which is almost never considered from the perspective of actual usefulness, i.e. of their ability to describe the specific structures that are at the core of many domains (law, clinical science, computer science...).
Quite a lot of the theoretical work in DL and logic seems to formally study and prove properties about languages (DLs are languages) that nobody is speaking or will ever speak. This is quite salient when considering the very small number of working reasoners (which cover only a small fragment of the DLs described formally).
It seems to me that, after the incredibly fecund period that started with Frege, Russell, Tarski, Hilbert, Gödel, Carnap... the theoretical work was somewhat considered to be done and less attention was focused on formal languages for domain description.
On the other hand, questions related to problem solving (planners) came to be treated only as SAT problems needing optimisation, with almost no reference to first-order logic and thus a poor link with DL.
Finally, on the third hand, modal logic, which clearly has deep links with first-order logic (the box/diamond operators and the existential/universal quantifiers in particular), has been abandoned by computer scientists and has become, more or less explicitly, a field of philosophy.
I think this state of affairs isn't satisfying and that there is work to do on conceptual clarification and on a revision of the foundations of mathematics that would integrate these developments.
To that end, something that does seem absolutely essential is to give each other easy access to reasoners. By easy access, I don't mean a program written in some obscure language whose source must be compiled on a specific Linux distribution.
I mean access to the reasoning service through a (loosely standardized) REST API. These services should be accompanied by websites giving relevant examples of using the reasoner, with an "online playground".
I think this could be done for classic DLs such as EL or SHOIQ, but also for modal logic in its various kinds (epistemic, deontic), and it could also be done for planning based on first-order logic.
I'm currently cogitating about the engineering questions that would arise from such a logic zoo, and about a grammar that would be usable for every reasoning problem description involving these kinds of logic.
If you are interested in the question and/or have skills in modern full-stack architecture and Dockerisation, I'd be interested to have your opinion about the current situation and the feasibility of such a logic zoo, which would be a useful tool for clarifying the domain.
Relevant answer
Answer
Your idea sounds very interesting and I would really like to play around with those reasoners (ideally based on some well documented examples). Maybe a good start would be to setup a repo and create a dockerfile for each reasoner which should be supported. Once those backends behave well-defined it should not be too hard to develop a frontend providing the REST-API.
Some time ago I did something loosely related: https://github.com/ackrep-org/ackrep_deployment.
  • asked a question related to RDF
Question
3 answers
According to the RDF* specification, is it possible to have the same triple pattern with different qualifiers?
Example
( << s1, p1, o1>>, q1, v1 )
( << s1, p1, o1>>, q1, v2 )
( <<SPinera, president, Chile>>, start, 2010)
( <<SPinera, president, Chile>>, end, 2014)
( <<SPinera, president, Chile>>, start, 2018)
RDF* definition
An RDF* triple is a 3-tuple that is defined recursively as follows: 1. Any RDF triple t ∈ (I ∪ B) × I × (I ∪ B ∪ L) is an RDF* triple; and 2. Given RDF* triples t and t′, and RDF terms s ∈ (I ∪ B), p ∈ I and o ∈ (I ∪ B ∪ L), then the tuples (t, p, o), (s, p, t) and (t, p, t′) are RDF* triples.
Reference for RDF* definition
Hartig, Olaf. “Foundations of RDF⋆ and SPARQL⋆ (An Alternative Approach to Statement-Level Metadata in RDF).” AMW (2017).
Relevant answer
Answer
Sales Aribe Jr. thank you for the reply.
My question is specifically about RDF* or RDF-Star. It is "an extension of RDF's conceptual data model and concrete syntaxes, which provides a compact alternative to standard RDF reification. This model and its syntaxes enable the creation of concise triples that reference other triples as subject and object resources."
  • asked a question related to RDF
Question
1 answer
Hi
Relevant answer
Answer
Without knowing how the system is built, what potential is used, there's no way of providing any useful comments.
  • asked a question related to RDF
Question
3 answers
I have a mixture of DMF and water at 300 K simulated with the OPLS-AA force field. The hydrogen bonds between the oxygen atoms of DMF and the H atoms of water have a very short length, around 1.26 Å. I know that H-bonds are typically < 3.5 Å, but isn't 1.26 Å in the range of covalent bonds?
I want to know if that's possible or if there's something wrong with my force field parameters/charges?
The RDF curve of oxygen atoms of DMF around H atoms of water (H-bonds) is shown below.
Kind regards,
Ehsan
Relevant answer
Answer
Dear Ehsan
If you carefully study the literature on hydrogen bonds, usually three classes are distinguished: weak, moderate, and strong bonds, with energetic boundaries at about 2 and 15 kcal/mol.
Weak hydrogen bonds involve less polar X-H groups as proton donors. The weakest hydrogen bonds considered in the literature are about 0.5 kcal/mol, with distances of up to 4.5 Å.
Moderate hydrogen bonds occur in the range of 2.0 to 3.5 Å, while strong hydrogen bonds occur in the range of 1.3 to 2.0 Å. Strong hydrogen bonds generally involve highly electronegative atoms.
Since you are using GROMACS, which is based on molecular mechanics, it would be better to use quantum-based techniques like DFT for a better understanding of the behaviour of the hydrogen atoms around the oxygen atom of DMF.
DMF has partial double-bond character for the C-N and C-O bonds,
so there is a possibility that the highly electronegative oxygen atom may form a strong hydrogen bond with a hydrogen atom of water.
Hope it will help.
regards
Tanuj
  • asked a question related to RDF
Question
3 answers
For the construction of a knowledge graph (KG), which model is better for RDF triplet extraction or knowledge extraction?
Relevant answer
Answer
While building a knowledge graph from text, it is important to make sure our machine understands natural language. This is where natural language processing (NLP) comes into the picture. A knowledge graph can be built from unstructured text by using NLP techniques such as sentence segmentation, dependency parsing, part-of-speech tagging, and entity recognition. A rough sketch of the triple-extraction step is shown below.
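As an illustration of those steps only, here is a minimal sketch with spaCy; the model name and the naive subject-verb-object heuristic are assumptions, and real pipelines need far more robust extraction.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def extract_triples(text):
    """Very naive subject-verb-object triples from dependency parses."""
    triples = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
                objects = [c for c in token.children if c.dep_ in ("dobj", "attr")]
                for s in subjects:
                    for o in objects:
                        triples.append((s.text, token.lemma_, o.text))
    return triples

print(extract_triples("Marie Curie discovered polonium. She also studied radioactivity."))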
  • asked a question related to RDF
Question
1 answer
Hi everyone,
I have a question about calculating the RDF with LAMMPS. I want to know what exactly LAMMPS does when computing g(r); in particular, with respect to which atoms LAMMPS takes the RDF. Are we able to specify them for our sample?
Relevant answer
Answer
Hi
It is recommended to calculate the RDF by VMD software rather than LAMMPS. It has an explicit GUI with controllable parameters.
  • asked a question related to RDF
Question
2 answers
I need to store my RDF multidimensional data and I have to select the most appropriate strategy for that. I tested some existing solutions like AllegroGraph, GraphDB, etc., but they did not address the problem well.
  • asked a question related to RDF
Question
1 answer
The Gromacs manual has the following regarding exclusions:
"If a run input file is supplied (-s) and -rdf is set to atom, exclusions defined in that file are taken into account when calculating the RDF. The option -cut is meant as an alternative way to avoid intramolecular peaks in the RDF plot."
The -cut option isn't appropriate for my case as I have intermolecular peaks (which I want to keep) in the same intramolecular pair distances that I want to exclude. How exactly can I add RDF "exclusions" to my input file after? Or, are there other ways to filter out intramolecular pairs from my RDF curve? The command I am using:
gmx rdf -f file1.xtc file2.tpr -n index.ndx -o rdf.xvg -cn coord.xvg -b t_initial -e t_final -bin 0.01 -rmax 1 -ref base_atom_index -sel distributed_atom_index
Relevant answer
  • asked a question related to RDF
Question
6 answers
Is an ontology required to create a knowledge graph, or is the schema derived from the relationships in the graph? What I am wondering about is whether the predicate is defined in a knowledge graph, and if so, how. It seems that knowledge graphs can be created without defining relationships between the nodes in advance.
Relevant answer
Answer
Well, let's be clear about the definition of the term "ontology" first. Following Tom Gruber, "An ontology is a specification of a conceptualization." (drawn from http://www-ksl.stanford.edu/kst/what-is-an-ontology.html) "That is, an ontology is a description (like a formal specification of a program) of the concepts and relationships that can exist for an agent or a community of agents." Borst defined an ontology as a “formal specification of a shared conceptualization” (from https://iaoa.org/isc2012/docs/Guarino2009_What_is_an_Ontology.pdf), in order to be precise that the community of agents should agree on the conceptualisation. Studer, Benjamins & Fensel extended this by the term "formal", but we can ignore this aspect for the moment.
Anyway, a knowledge graph - put aside how it is defined - should represent knowledge about the world. Without any ontological definition of the concepts the nodes in the graph belong to, and of the relations between them, a knowledge graph is nothing more than a semantic network, for which Woods' criticism "What's in a link?" applies, i.e. he criticised that the links in a semantic network lack a proper meaning.
This can be seen more or less as the starting point of research on description logics, which by the way build the foundation of OWL. An important contribution of this research was the distinction of T-Box and A-Box, i.e. the terminological and the assertional components of knowledge representation. In that sense the T-Box represents an ontology following all the definitions above. The point now is that the T-Box specifies our conceptualisation and hence at the same time fixes the terminology we use to describe a domain of discourse. The A-Box contains the contingent knowledge of the domain of discourse. In other words, if our domain of discourse is information about the world, then a knowledge graph to a large part belongs to the assertional component and is described in terms of the terminology of the T-Box.
Those are the reasons why I would say: if a knowledge graph should express knowledge (and not only information) about the world, an ontology is an essential precondition for sharing the knowledge in the KG between agents.
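To make the T-Box/A-Box distinction concrete, here is a tiny illustrative sketch with rdflib (the ex: vocabulary is invented for the example): the first two triples are terminological, the last two are assertional and only meaningful relative to that terminology.
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# T-Box: the conceptualisation -- what a Person is, what worksFor may link.
g.add((EX.Person, RDF.type, RDFS.Class))
g.add((EX.worksFor, RDFS.domain, EX.Person))

# A-Box: contingent knowledge about the world, stated in the T-Box terminology.
g.add((EX.alice, RDF.type, EX.Person))
g.add((EX.alice, EX.worksFor, EX.ACME))

print(g.serialize(format="turtle"))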
  • asked a question related to RDF
Question
2 answers
I was trying to calculate the radial distribution function in LAMMPS between a moving particle (say, an ion) and a static particle (say, a wall). However, I couldn't get any result. I was wondering why the RDF didn't work here, although it worked perfectly for the calculation between two moving particles (say, ion and water).
It would be really helpful if anybody could help with explanations or references. Thanks.
Relevant answer
Answer
I think it is possible to calculate but the result will be the same as the ion concentration profile. For more info you can see this book:
Karniadakis G, Beskok A, Aluru N. Microflows and nanoflows: fundamentals and simulation. Springer Science & Business Media; 2006 Feb 9.
  • asked a question related to RDF
Question
5 answers
I was searching for the endpoint of an ion's (e.g. Na+/Cl-) hydration boundary. I realized most researchers talk about the endpoint of the first hydration shell; some even go up to the third hydration shell (using the RDF curve). However, I was wondering where these hydration layers end and up to which order hydration shells can be formed.
I would be really grateful if anybody could give me any suggestions or a research paper related to my question. Thank you.
Relevant answer
Answer
As an answer to the question, I am sending pdf files for two publications.
  • asked a question related to RDF
Question
1 answer
Currently, I want to merge several SKOS files (.rdf files) into one, taking into account mappings between them, made previously.
Is there any tool that allows me to do this?
Relevant answer
Answer
This article using a SKOS tool might be helpful; have a look:
Hope this helps.
Kind Regards
Qamar Ul Islam
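If no dedicated tool turns up, a pragmatic fallback is to merge the files programmatically. A minimal sketch with rdflib (file names are placeholders); note that it simply unions the triples, so previously created skos:exactMatch/skos:closeMatch mapping triples are carried over as ordinary triples rather than being resolved:
from rdflib import Graph

files = ["vocab_a.rdf", "vocab_b.rdf", "mappings.rdf"]  # placeholder file names

merged = Graph()
for f in files:
    merged.parse(f, format="xml")  # SKOS files serialized as RDF/XML

merged.serialize(destination="merged_skos.rdf", format="xml")
print(len(merged), "triples in the merged graph")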
  • asked a question related to RDF
Question
3 answers
Dear researchers,
In the literature, so far I am seeing that downdraft gasifiers are usually used for small-scale purposes (10 kW-10 MW). One of the reasons given is that "they do not allow for uniform distribution of flow and temperature in the constricted area (throat)". What can the other reasons be?
And the main question is: is there any way to upscale it, for instance to a 30 t/h feed rate? Is it feasible and possible?
I would be grateful for any insight!
Relevant answer
  • asked a question related to RDF
Question
3 answers
While there are multiple Java implementations for managing semantic knowledge bases (HermiT, Pellet...), there seem to be almost none in pure JavaScript.
I would prefer to use JS rather than Java in my project, since I find JS much cleaner, more practical and easier to maintain. Unfortunately, there seems to be almost nothing to handle RDF data together with rule inference in JavaScript. Although there is some work to handle RDF alone (https://rdf.js.org/), it's unclear what the status of these works is with regard to the W3C specifications.
Relevant answer
Answer
Prasath Sivasubramanian Thank you for your answer. It seems like all the JavaScript tools listed on w3.org aren't maintained anymore, though. For example, Hercules was last released in 2009, and OAT is not reachable anymore...
The Eye (https://github.com/josd/eye) engine looks quite interesting I wonder if it's comparable to https://github.com/ucbl/HyL ?
  • asked a question related to RDF
Question
6 answers
I designed and developed a domain ontology for solid waste collection management and have an OWL/RDF version of my OntoWM domain ontology. Can I use a quantitative or qualitative method to evaluate the OntoWM domain ontology? Any sample thesis or article would be appreciated.
Regards
Abdul
Relevant answer
Answer
Thank You Luis Ramos I will definitely read this article.
  • asked a question related to RDF
Question
2 answers
In an incubation study of soil with moisture content at field capacity, a combined treatment of NPK and vermicompost results in lower N and K availability as compared to the treatment with NPK alone, both at RDF. Can there be any relevant explanation to this?
Relevant answer
Answer
This might be due to a high multiplication of microorganisms that need nitrogen as a building block for their bodies, and that also need it to lower the C:N ratio of the crop residue used for making the vermicompost.
  • asked a question related to RDF
Question
4 answers
Hi all,
Is there any software or web server that can read a molecular dynamics (MD) simulation trajectory obtained from an ab initio MD simulation and calculate the atom-atom radial distribution function (RDF)?
Thanks in advance.
Relevant answer
Answer
VMD is freely available software that can calculate the RDF and many more properties.
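If you prefer scripting, MDAnalysis in Python is another option, provided the AIMD trajectory is in a format it can read (e.g. an XYZ file). A hedged sketch; the file names, atom selections and the results layout (recent MDAnalysis versions) are assumptions:
import MDAnalysis as mda
from MDAnalysis.analysis.rdf import InterRDF

u = mda.Universe("system.pdb", "trajectory.xyz")  # topology + AIMD trajectory (placeholders)
oxygens = u.select_atoms("name O")
hydrogens = u.select_atoms("name H")

rdf = InterRDF(oxygens, hydrogens, nbins=200, range=(0.0, 10.0))
rdf.run()

for r, g_r in zip(rdf.results.bins, rdf.results.rdf):
    print(f"{r:8.3f} {g_r:10.5f}")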
  • asked a question related to RDF
Question
3 answers
I am trying to run a Radial Distribution Function (RDF) trajectory analysis between water (WAT) hydrogen atoms and ligand (UNN) nitrogen atoms using the production trajectory (prod1.crd) and the parameter file (solvated.prmtop).
When I use the rdf.in file as:
parm solvated.prmtop
trajin prod1.crd
radial rdf.xmgr 0.1 15.0 :WAT@H:UNN@N
The calculation ends in less than 1 second, and no output file gets generated.
However, if I modify the rdf.in to:
parm solvated.prmtop
trajin prod1.crd
trajout rdf.xmgr
radial rdf.xmgr 0.1 15.0 :WAT@H:UNN@N
and run the analysis using "cpptraj rdf.in", it starts to generate an output file (rdf.xmgr) that is 60+ GB in size, which cannot be read using xmgrace ("Column count incorrect"), as it seems to be a coordinates file (picture enclosed).
Will you please help me troubleshoot the cause and find a proper solution?
Relevant answer
Answer
You have missed 'out' in this command; the command should have been 'radial out rdf.xmgr'.
  • asked a question related to RDF
Question
2 answers
Dear all
I want to calculate the coordination number of water around an N+ group. I first calculated the area of the RDF up to the first minimum and it gave me a wrong coordination number (my RDF is correct). Then I used this command: gmx rdf -f file.xtc -s file.tpr -b -e -cn coord.xvg -n index.ndx -ref “group N” -sel “group OW”
But the output was not the coordination number of water around N+ (as in the photo uploaded here).
What is the correct way to calculate coordination number?
Thanks in advance
Relevant answer
Answer
Dear Wojciech Kopec,
Hi,
Thank you very much for your reply.
I compared my RDF with the results of a well-known article on the same system and figured out my problem. Actually, the plot is correct, but I didn't know how to extract the results from it.
Thank you again for your reply
Mohammad
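For anyone else stuck at the same point: the coordination number is obtained by integrating the RDF weighted by the bulk number density, n(R) = 4*pi*rho * integral from 0 to R of g(r) r^2 dr, evaluated at the first minimum of g(r). A minimal numpy sketch, assuming an .xvg written by gmx rdf and that rho (here taken as the bulk number density of water oxygens, about 33.4 nm^-3) is known:
import numpy as np

rho = 33.4    # bulk number density of water oxygens in nm^-3 (assumption for pure water)
r_min = 0.35  # position of the first minimum of g(r) in nm (read off your own plot)

data = np.loadtxt("rdf.xvg", comments=("#", "@"))  # gmx rdf output: r [nm], g(r)
r, g = data[:, 0], data[:, 1]

mask = r <= r_min
n = 4.0 * np.pi * rho * np.trapz(g[mask] * r[mask] ** 2, r[mask])
print(f"coordination number up to {r_min} nm: {n:.2f}")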
  • asked a question related to RDF
Question
6 answers
I had modeled processes in my ontology like sale, purchase, etc. I want to implement these modeling constructs in OWL or RDF or any other semantic language. Can anybody suggest any language for the proper implementation of the aforementioned process?
Thanks
Relevant answer
Answer
Frank Haferkorn Thanks, I will read it and try it.
  • asked a question related to RDF
Question
2 answers
I have studied the structural changes of a protein in 3 different concentrations of ionic liquids (50 mM, 500 mM, and 1M). On analysis of the RDF plots, I find that the magnitude of the RDF plot for the anion at 0.35 nm from the protein followed the order 50 mM > 500 mM > 1M (as seen in the attached images). The magnitude of the plot is also very high at 50 mM (~ 60). However, on visualizing the trajectories, I can see that only at 1 M concentration, there is an increased concentration of the ions in the solvation shell of the protein, and at 50 mM, there are very few ions surrounding the protein. What could be the reason for such high g(r) values at lesser concentrations of 50 mM and 500 mM?
I used "gmx_mpi rdf com rdf mol_com -s em.tpr -f md_Dhp1M.xtc -cn -o ranion_Run2.xvg -tu ns -dt 100 -cut 0.35" for the analysis.
Thanks.
Relevant answer
Answer
Azadeh Kordzadeh
Thanks for your reply.
Yes, you are right these are the plots of the anions. The protein is insulin which has only 51 amino acids and I have calculated the RDF for a specific N-terminal residue (Phenylalanine). I have also calculated the RMSD, RMSF etc.
I would like to know the reason for the discrepancy between the RDF and the concentration of the anions.
  • asked a question related to RDF
Question
11 answers
Hello,
I am trying to use the -surf option of gmx rdf in order to compute the RDF of certain molecules from the surface (nearest atoms) of my reference molecule.
My script is: gmx rdf -s tpr.tpr -ref ‘resname URA’ -sel ‘resname OCS’ -f xtc.xtc -surf mol -seltype whole_mol_com -bin 0.03
Unfortunately, I get the following error: “Inconsistency in user input: -surf only works with -ref that consists of atoms”
Obviously my molecule consists of atoms. And different scripts that don’t have the -surf flag work perfectly fine.
Does anyone have an idea what the problem is?
Here are things I’ve tried but didn’t change the error:
  1. I tried adding -selrpos whole_mol_com, same error.
  2. I tried changing the molecules that I put under “resname X” but for any X it’s the same error.
  3. I tried changing the order in which all my flags appear.
  4. I tried doing -surf mol and -surf res.
  5. I tried running this script on different gromacs versions.
Relevant answer
Answer
I have tried adding the index file, but it says I have multiple groups called MEO and URE. This might indicate that my tpr/pdb files already know that there are groups called MEO and URE, no?
So I tried the following script and it worked:
gmx rdf -s tpr.tpr -ref MEO -sel URE -f xtc.xtc -surf mol -bin 0.03
However, I got the same result: an unnormalized radial distribution.
What am I still doing wrong? Maybe I still don't understand the groups you suggested.
Thanks again for your help,
Ilan
  • asked a question related to RDF
Question
1 answer
Dear all,
I have a problem with Protégé at the moment. Protégé can only write Turtle or RDF/XML syntax with an OWL part in it; you cannot "turn off" the OWL syntax. My problem is that I'm not very good at coding, so Protégé is very useful with its graphical interface, but I really need the code in RDF/RDFS only.
If you have any ideas for me, I'd love to hear them.
Thank you in advance.
Relevant answer
Answer
You may use code generation functionality to generate code. It will generate at least a skeleton of the code which can be converted to a working code by inserting instances.
  • asked a question related to RDF
Question
1 answer
I simulated a two-phase water-ionic liquid interface system for 10 ns. Then I calculated g(r) between the O of water and the C2 of the ring of the ionic liquid. I did it with TRAVIS. However, I don't know how to interpret the resulting figures. Do you think the data is valid? I guess I should calculate the RDF in distinct regions. What are these regions, and with which program can I specify them?
Relevant answer
Answer
  • asked a question related to RDF
Question
2 answers
I am trying to add rdf triples to Jena Fuseki Server. When running the code:
import requests
from bs4 import BeautifulSoup
import pandas as pd
import numpy as np
import re
from rdflib import Graph, Literal, URIRef
import rdflib
from rdflib.plugins.stores import sparqlstore
page = requests.get(url)
response = requests.get(url)
response.raise_for_status()
results = re.findall('\"Address ID: (GAACT[0-9]+)\"', response.text)
address1=results[0]
new_url=a+address1
r = requests.get(new_url).content
store = sparqlstore.SPARQLUpdateStore()
store.open((query_endpoint, update_endpoint))
g = rdflib.Graph()
g.parse(r, format='turtle')
store.add_graph(g)
I got the error like:
/Users/mac/anaconda3/lib/python3.6/site-packages/SPARQLWrapper-1.8.1-py3.6.egg/SPARQLWrapper/Wrapper.py:510: UserWarning: keepalive support not available, so the execution of this method has no effect
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-26-279fa93014e1> in <module>()
9
10 g = rdflib.Graph()
---> 11 g.parse(r, format='turtle')
12
13 store.add_graph(g)
~/anaconda3/lib/python3.6/site-packages/rdflib-4.2.2-py3.6.egg/rdflib/graph.py in parse(self, source, publicID, format, location, file, data, **args)
1032 source = create_input_source(source=source, publicID=publicID,
1033 location=location, file=file,
-> 1034 data=data, format=format)
1035 if format is None:
1036 format = source.content_type
~/anaconda3/lib/python3.6/site-packages/rdflib-4.2.2-py3.6.egg/rdflib/parser.py in create_input_source(source, publicID, location, file, data, format)
169 else:
170 raise Exception("Unexpected type '%s' for source '%s'" %
--> 171 (type(source), source))
172
173 absolute_location = None # Further to fix for issue 130
Exception: Unexpected type '<class 'bytes'>' for source 'b'@prefix dct: <http://purl.org/dc/terms/> .\n@prefix geo: <http://www.opengis.net/ont/geosparql#> .\n@prefix gnaf: <http://gnafld.net/def/gnaf#> .\n@prefix prov: <http://www.w3.org/ns/prov#> .\n@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .\n@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .\n@prefix xml: <http://www.w3.org/XML/1998/namespace> .\n@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .\n\n<http://gnafld.net/address/GAACT714846009> a gnaf:Address ;\n rdfs:label "Address GAACT714846009 of Unknown type"^^xsd:string ;\n gnaf:gnafType <http://gnafld.net/def/gnaf/code/AddressTypes#Unknown> ;\n gnaf:hasAddressSite <http://gnafld.net/addressSite/710446495> ;\n gnaf:hasDateCreated "2004-04-29"^^xsd:date ;\n gnaf:hasDateLastModified "2018-02-01"^^xsd:date ;\n gnaf:hasGnafConfidence <http://gnafld.net/def/gnaf/GnafConfidence_2> ;\n gnaf:hasLocality <http://gnafld.net/locality/ACT570> ;\n gnaf:hasNumber [ a gnaf:Number ;\n gnaf:gnafType <http://gnafld.net/def/gnaf/code/NumberTypes#FirstStreet> ;\n prov:value 4 ] ;\n gnaf:hasPostcode 2615 ;\n gnaf:hasState <http://www.geonames.org/2177478> ;\n gnaf:hasStreet <http://gnafld.net/streetLocality/ACT3884> ;\n geo:hasGeometry [ a gnaf:Geocode ;\n rdfs:label "Frontage Centre Setback"^^xsd:string ;\n gnaf:gnafType <http://gnafld.net/def/gnaf/code/GeocodeTypes#FrontageCentreSetback> ;\n geo:asWKT "<http://www.opengis.net/def/crs/EPSG/0/4283> POINT(149.03747828 -35.20190973)"^^geo:wktLiteral ] .\n\n<http://gnafld.net/def/gnaf/GnafConfidence_2> rdfs:label "Confidence level 2"^^xsd:string .\n\n<http://www.geonames.org/2177478> rdfs:label "Australian Capital Territory"^^xsd:string .\n\n''
It seems that the error comes from the graph parser (g.parse()). If anyone has an idea about how to solve this, it would be highly appreciated. Thanks in advance.
Relevant answer
Good Answer Debasis Dhak
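For readers hitting the same exception: rdflib 4.x rejects raw bytes passed positionally to parse(); one way around it is to hand over the downloaded Turtle as text via the data= keyword. A minimal, self-contained sketch (the inline Turtle stands in for requests.get(new_url).content from the question):
import rdflib

# stand-in for the bytes returned by requests.get(new_url).content
r = b"""@prefix gnaf: <http://gnafld.net/def/gnaf#> .
<http://gnafld.net/address/GAACT714846009> a gnaf:Address ;
    gnaf:hasPostcode 2615 .
"""

g = rdflib.Graph()
# g.parse(r, format="turtle")                     # raises "Unexpected type <class 'bytes'>"
g.parse(data=r.decode("utf-8"), format="turtle")  # pass the payload as text instead
print(len(g), "triples parsed")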
  • asked a question related to RDF
Question
9 answers
I am simulating SiC at different temperatures to study its structural properties, but no chemical reaction (bond breakage or formation) occurred in the final structure, and the radial distribution function shows a straight line instead of any peaks for the homo- and hetero-bonds (C-C, Si-C, Si-Si). So please tell me: should I do equilibration before the MD simulation, or is there a problem with the method used?
Relevant answer
Answer
There are two ways to have ReaxFF force field parameters:
1. Develop this force field for your system through quantum mechanical calculations.
2. Find articles that have already studied structures similar to yours (in terms of temperature, pressure, phase and almost the same species). In this case, you can use the force field parameters given in the Supporting Information section of those articles.
Like this:
  • asked a question related to RDF
Question
1 answer
In the older versions of GROMACS (before 5.1.1) we could use the instruction gmx rdf -com -f traj.xtc -n index.ndx -o out.xvg. The -com flag could calculate RDF with respect to the center of mass of first group. I haven't found how to calculate it in the latest versions of GROMACS.
Relevant answer
Answer
  • asked a question related to RDF
Question
6 answers
Hey all,
I have just started researching ontologies. I need a good tutorial on ontology development, OWL, RDF, RDFS, Description Logic and the role of machine learning in ontologies. Is there any available online course or any kind of good material? Please suggest some to me.
Relevant answer
Answer
Hello!
It's a good question.
You can look at my book: "Ontologies et web sémantique" (Ontologies and the Semantic Web).
And I'm here for any questions.
Thank you.
  • asked a question related to RDF
Question
3 answers
Agronomists conduct trials on nutrient (NPK and others) management in which 150% and even 200% of RDF (recommended dose of fertilizer) are included as treatments. Research findings show higher productivity with nutrient levels above RDF. Further, STCR-based trials show higher doses of nutrients, sometimes >200% of RDF. My question is:
"Is it the right time to relook into fertilizer recommendation for crops for more precise nutrient management?"
Relevant answer
Answer
Thanks to all who agreed with me. We should look into this and I feel it's the right time to guide future agriculture / crop production.
  • asked a question related to RDF
Question
2 answers
Overall, I would like to know how much waste (if possible, broken down by material) is needed to make RDF production viable. Does somebody know where I could find this information?
Thanks
Relevant answer
Answer
Based on the economics in the US, a plant of about 1,000 tons/day would be the minimum for sustained operations.
  • asked a question related to RDF
Question
3 answers
I'm trying to generate CGRs, and in order to do so I want to use the Marvin Sketch auto-mapper function on a bunch of reactions stored as reaction SMARTS. But in order to use the auto-mapper function from Marvin Sketch, I have to pass the reactions as RXN files or as an RDF file.
Relevant answer
Answer
Alright, thank you guys!
  • asked a question related to RDF
Question
5 answers
Hi all,
I am performing MD simulations to investigate the interaction of a ligand with a graphene surface using the GROMACS package.
How can I calculate the RDF between "surface" of graphene and center of mass (COM) of ligand?
Relevant answer
Answer
Dear Shaid,
The output of this command is the com-com RDF, and I am trying to calculate the surf-com RDF. In GROMACS, when I calculate the RDF with respect to the surface, the g(r) values are strangely high.
  • asked a question related to RDF
Question
1 answer
A long-term experiment was initiated in kharif 1985 with four fertility levels (0%, 50%, 100% and 150% of RDF) and 6 replications in an RBD. However, after 10 cropping cycles there was severe deficiency of Zn in the 150% RDF treated plots. Hence, out of the 6 replications, four were superimposed with different Zn and OM treatments, while two replications were left as they were. So now the question is how to analyse the data for a valid conclusion.
Relevant answer
Answer
Since you have change across time, I would try a longitudinal model. I would see this as somewhat like a simple clinical trial of one experimental drug versus standard treatments. A really good introduction to longitudinal models is Prof. Marie Davidian's class notes, available at this link.
Good luck, D. Booth
  • asked a question related to RDF
Question
4 answers
Dear all,
I'm looking at a system composed of a CNT surrounded by ionic liquid.
I would like to look at the radial distribution function of the ionic liquid inside the carbon nanotube.
The atoms forming my CNT are not frozen so the shape of the CNT is not a cylinder.
Please find attached the pictures.
I do appreciate your help !
Relevant answer
Dear Candy, Try to make an index of the CNT and ionic liquid atoms, then do the RDF between the two groups you want. Use the gmx make_ndx tool to do the index and gmx rdf to do the RDF.
  • asked a question related to RDF
Question
4 answers
My simulation is briefly described as follows. The water slab is 6×6×7 nm³ with two surfactant (SDS) monolayers in the z direction; two vacuum regions with a height of 10 nm were then added below and above the slab to create the air/water surfaces. After 100 ns, the RDF of water oxygen around the sulfur atom of the SDS head group was calculated, but the g(r) value approached 2.5 rather than 1 after normalization. Could you give me some help or suggestions? Thank you.
Relevant answer
Answer
How did you remove that vacuum layer? My RDF doesn't go to 1; it's stuck at 1.1.
  • asked a question related to RDF
Question
2 answers
..
Relevant answer
Answer
What is the purpose of your MD simulations? What questions do you have that you want to answer by running and analyzing MD simulations? Ask these questions to yourself, and then think what kinds of measurements could give you the answers.
(Complement the structural analysis with energetical analysis. The energies will help you with the interpretation of the structures, and vice versa.)
  • asked a question related to RDF
Question
4 answers
There are several weighting factors which can be included into word embedding models in order to get useful and accurate semantic representations of terms. But when you have small data and synonyms or homonyms in your corpus you generate noisy results. You can leverage this problem by making use of already known semantic information as it is available in ontologies (RDF, OWL) or in terminological databases (TBX, SKOS). I would be interested to read your feedback on the best approaches to include existing semantic information into vector space algorithms (LSA, LDA) and models (GloVe, Word2Vec...) possibly using libraries like Gensim.
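One low-tech direction, sketched below under the assumption that gensim 4.x is available: train word2vec as usual and then nudge the vectors of synonym pairs declared in the ontology or terminology (e.g. skos:exactMatch links) toward each other, in the spirit of retrofitting. The corpus, the synonym pair and the parameters are purely illustrative:
from gensim.models import Word2Vec

corpus = [["rdf", "stores", "triples"],
          ["triples", "describe", "resources"],
          ["ontologies", "define", "concepts"]]
synonym_pairs = [("rdf", "triples")]  # e.g. taken from skos:exactMatch links (toy example)

model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=100, seed=1)

for a, b in synonym_pairs:
    ia, ib = model.wv.key_to_index[a], model.wv.key_to_index[b]
    mean = (model.wv.vectors[ia] + model.wv.vectors[ib]) / 2.0
    # pull each synonym halfway toward the shared mean vector (one crude pass)
    model.wv.vectors[ia] = 0.5 * (model.wv.vectors[ia] + mean)
    model.wv.vectors[ib] = 0.5 * (model.wv.vectors[ib] + mean)

print(model.wv.similarity("rdf", "triples"))
A proper implementation would iterate this update over the full synonymy graph (retrofitting) rather than applying it once.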
  • asked a question related to RDF
Question
3 answers
How can we apply a dimensionality-reduction technique to a matrix of common-sense knowledge that has been built using RDF/XML, e.g. SenticNet 3?
Relevant answer
Dear Omar,
In addition to PCA and ICA, you may also try the non-negative matrix factorization (NMF) technique, which is another method for projecting data from a higher-dimensional space into a lower-dimensional space, thus reducing the dimension of the original dataset. A minimal sketch is given below.
I hope you find these tips helpful.
Best wishes.
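A minimal scikit-learn sketch of NMF-based reduction; the matrix X stands in for the (non-negative) concept-feature matrix extracted from the RDF/XML knowledge base, and the shapes and component count are illustrative only:
import numpy as np
from sklearn.decomposition import NMF

X = np.abs(np.random.rand(1000, 500))  # placeholder: 1000 concepts x 500 features, non-negative
nmf = NMF(n_components=50, init="nndsvd", max_iter=500, random_state=0)

W = nmf.fit_transform(X)  # 1000 x 50 low-dimensional concept representations
H = nmf.components_       # 50 x 500 basis expressed in the original features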
  • asked a question related to RDF
Question
3 answers
Hi,
I am studying the stabilization of proteins using different solvents using GROMACS. Attached is the RDF of the cation (black) and anion (red) around the protein.
I used "gmx_mpi rdf com rdf mol_com -s em.tpr -f md_Dhp_2system.xtc -cut 0.5 -cn -o radialdist_cation_Dhp_2_com.xvg -tu ns" to calculate the RDF. The plot does not show any peaks. Does this mean that the cation and anion are uniformly distributed right from 0.5nm of the protein?
Thanks.
Relevant answer
Answer
Thanks Dr. Kranz and Dr. Sadeghi for such detailed explanations. The RDF plots make a lot more sense to me now. Earlier, I did not consider the effects of concentration while analyzing the plots. Through my simulations, I was expecting a double layer with the bulkier cations as the first layer.
There are just few more queries which I would like to clarify.
1. Could you please explain more on " your peaks are already pretty strong"? Can the plot be inferred as strong if the g (r) is approximately 1?
2. Does the plot mean that the both the cations and anions intercalate the protein backbone and are able to penetrate into the core of the protein?
3. Attached is the RDF of my second set of anions (acetate) with the same cation. Is it right to conclude that the acetate anions are interacting with the protein's surface much more than that of the anion 1? This combination of acetate and cation has indeed led to an increased RMSD.
  • asked a question related to RDF
Question
7 answers
I've been searching for ontologies (RDF) representing EDXL standards, with emphasis on CAP and SitRep.
The only one I found available comes from COncORDE project:
. Overview:
. File:
There are other projects, but I couldn't find their RDF files:
- The RESCUER project:
- EPISECC project:
- US Department of Defense:
This last one is quite interesting since it uses BFO as a foundational ontology. Anyone know if the RDF is available?
Relevant answer
Answer
I remember there were some papers in the Semantic Web conferences in the past, but I don't recall specifically what these were about and which conference that was. A quick search leads me to a couple of potential starting points for further search:
Nguyen, D., Kopena, J., Loo, B., & Regli, W., Ontologies for Distributed Command and Control Messaging, Formal Ontology in Information Systems, Proceedings of the 6th International Conference, May 2010, doi: 10.3233/978-1-60750-534-1-373
Simas, F., Barros, R., Salvador, L., Weber, M. and Amorim, S., 2017, November. A Data Exchange Tool Based on Ontology for Emergency Response Systems. In Research Conference on Metadata and Semantics Research (pp. 74-79). Springer, Cham.
Steel, J., Iannella, R. and Lam, H.P., 2008, May. Using ontologies for decision support in resource messaging. In Proceedings of the 5th International ISCRAM Conference. F. Fiedrich and B. Van de Walle, eds. Washington, USA (pp. 276-284).
Kantorovitch, J., Giakoumaki, A., Korakis, A., Papadopoulos, H., Milis, G., Kolios, P. and Staykova, T., 2015, November. Knowledge modelling framework. In Information and Communication Technologies for Disaster Management (ICT-DM), 2015 (pp. 145-151). IEEE.
Cheers
Markus
  • asked a question related to RDF
Question
3 answers
I want to use the semantic web to make a website about breast cancer.
Relevant answer
Answer
Thank you a lot Gilles-Antoine Nys.
But these materials aren't useful for me, as they are concerned with breast cancer itself,
while I want to make a website about breast cancer that gets its information from other websites that support semantics.
Can you help me?
  • asked a question related to RDF
Question
4 answers
I need recent statistics showing the growth of semantic data and RDF triple stores, like the one shown in this link: https://www.quora.com/How-fast-is-semantic-web-and-or-linked-data-growing-per-year
Relevant answer
Answer
Apart from the LOD Cloud, which primarily consists of datasets published in Linked Data format by contributors to the Linking Open Data community project (individuals or organisations), there exist other sources/use cases of Linked Data which are private.
Many industries, in scenarios including finance, blockchain, automation and wireless sensor networks, are adopting semantic technologies for smart applications.
  • asked a question related to RDF
Question
3 answers
I have not been able to query an RDF document using XPath, since it does not work with the RDF namespace, although RDF is basically an XML document. SPARQL is used for querying RDF documents. What is the difference between XPath and SPARQL?
Relevant answer
Answer
Hi there,
RDF documents are named directed graphs: an RDF document is a series of subject,predicate,object statements that you can think of as a set of graph edges, named after the predicate and running from the subject to the object.
XML is a way to serialize RDF so that it can be stored or transferred, and it is not the only possible serialization for RDF. TTL, for instance, can be used instead of XML.
If your RDF data happens to be serialized as XML, it is possible to use XPath to query it, although it's not very advisable. If you use XPath you will spend too much effort and energy worrying about the syntax of the serialization instead of worrying about what it is you want to retrieve from the document.
SPARQL is a SQL-like language that is a much more intuitive way to query RDF documents, regardless of how they are stored. With SPARQL you will query the data using patterns over the edges of the graph, so that getting the names of X's grandchildren will be as straightforward as
?X ex:child ?Y .
?Y ex:child ?Z .
?Z foaf:name ?Name .
Now about your concrete problem, and if haven't been persuaded yet to use SPARQL instead of XPath, you should post the minimal data and the query needed to reproduce the problem, so people can help.
Best,
Stasinos
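To make the comparison concrete, here is a minimal, self-contained rdflib sketch of the grandchildren query above; the ex:/foaf: data is invented for illustration:
from rdflib import Graph

TTL = """
@prefix ex:   <http://example.org/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
ex:X ex:child ex:Y .
ex:Y ex:child ex:Z .
ex:Z foaf:name "Zoe" .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

q = """
PREFIX ex:   <http://example.org/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?Name WHERE {
  ex:X ex:child ?Y .
  ?Y ex:child ?Z .
  ?Z foaf:name ?Name .
}
"""
for row in g.query(q):
    print(row.Name)  # -> Zoe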
  • asked a question related to RDF
Question
5 answers
As far as I know, for arriving at an RDF for a given crop, scientists conducted hundreds of multi-location and multi-temporal crop trials with selected ranges of N, P and K, and the NPK inputs corresponding to the highest physical yields were called RDFs.
Please guide further in this regard.
Thank you in advance.
MKD
Relevant answer
Answer
Unfortunately, this RDF for a given crop, when you replicate it on the same field after 10 years, doesn't hold good; this is my first take. My other take on the issue is: how does this RDF cater to the needs of different soil types and other associated agro-pedological conditions for the same crop? How does the magnitude of the crop response vary? In that case, do we need to strengthen the soil-crop response models more stringently, especially with respect to the interpretation of the soil-crop relationship, since in the field it is multiple nutrients (at deficient or optimum levels) that a given crop has to respond to?
  • asked a question related to RDF
Question
9 answers
The semantic web employs RDF and ontologies for storing structured data, in contrast to HTML, which does not have any structure. The data on the web is mostly in HTML format, and there is no standardization. How can HTML data be converted into a standardized RDF format or an ontology for integrating data from different existing resources?
Relevant answer
Answer
@Ivan: There is more on this earth (or better said web) than just Google ;-)
Of course, Google makes visual use of its knowledge graph, but I wouldn't call it "semantic web", since it does not follow the principles of the semantic web, i.e. the use of (de facto) standards and URIs. You can easily inspect the box you mentioned, and what you see there appears to be proprietary. Thus, in order to make this content machine-processable you have to understand it and hack your own interpreter.
However, sites which make use of the semantic web often do it under the hood, without any visual hint for human users. Inspect, for example, the source code of the pages:
Hence, the semantic web is actually imperceptible, and that's probably why it has difficulty being adopted more broadly: its use is not directly visible. That is why I found the statistics about the use of RDF and microdata that I mentioned above very helpful.
Of course you are right: transforming HTML into RDF/microformats isn't the problem; the problem is, as you pointed out, understanding the content. But I am pretty sure that sites which use the semantic web under the hood today do not interpret their content. With high probability they have already integrated the semantic web markup into their CMS or HTML generation algorithms.
I would be happy to receive any confirmation about my last assumption.
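To illustrate that "under the hood" markup route, here is a minimal sketch (Python, rdflib plus BeautifulSoup) of harvesting the JSON-LD that many content management systems already embed in their HTML and loading it as RDF triples. The URL is an illustrative placeholder, rdflib >= 6.0 is assumed for its built-in JSON-LD parser, and real pages may expose RDFa or microdata instead:

# Minimal sketch: harvest embedded JSON-LD from a page and load it as RDF.
# The URL is illustrative; rdflib >= 6.0 is assumed for format="json-ld".
import urllib.request
from bs4 import BeautifulSoup
from rdflib import Graph

html = urllib.request.urlopen("https://example.org/some-page").read()
soup = BeautifulSoup(html, "html.parser")

g = Graph()
for script in soup.find_all("script", type="application/ld+json"):
    g.parse(data=script.get_text(), format="json-ld")

print(g.serialize(format="turtle"))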
  • asked a question related to RDF
Question
4 answers
Can anyone recommend peer-reviewed scientific papers on a comparative study of available graph databases?
Relevant answer
Answer
Dear Mengist,
Please follow the reference papers given below:
1. Bollacker, K., Evans, C., Paritosh, P., Sturge, T., & Taylor, J. (2008, June). Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data (pp. 1247-1250). ACM.
2. Angles, R., & Gutierrez, C. (2008). Survey of graph database models. ACM Computing Surveys (CSUR), 40(1), 1.
3. Vicknair, C., Macias, M., Zhao, Z., Nan, X., Chen, Y., & Wilkins, D. (2010, April). A comparison of a graph database and a relational database: a data provenance perspective. In Proceedings of the 48th annual Southeast regional conference (p. 42). ACM.
4. Angles, R. (2012, April). A comparison of current graph database models. In Data Engineering Workshops (ICDEW), 2012 IEEE 28th International Conference on (pp. 171-177). IEEE.
5. Patil, N. S., Kiran, P., Kiran, N. P., & KM, N. P. (2018). A Survey on Graph Database Management Techniques for Huge Unstructured Data. International Journal of Electrical and Computer Engineering (IJECE), 8(2).
Thanks,
Sobhan
  • asked a question related to RDF
Question
4 answers
I am working on a project that focuses on mental health conversations between counsellors and at-risk individuals. Although multiple mediums are being used (e.g. audio, SMS text, email), I am focusing on audio conversations only. Once I have created transcriptions of the conversations, I want to be able to contextualize them, and the best way I have discovered from my research thus far is to transpose the conversations and their utterances into knowledge graphs, using a combination of techniques and tools (e.g. ML, NLP, LIWC, etc.).
However, when exploring what kind of graph to use, I come across two primary forms to choose from: 1) property graphs (e.g. Neo4j) and 2) RDF triple stores (e.g. GraphDB).
What I am trying to do is put the conversational data into a format that allows me to explore areas in greater depth with clinical psychologists, such as emotional states, temporal changes in mood, rough sentiment analysis across individuals from different localities, and the general influences that are creating states of depression across a population.
Given the nature of human conversation and the domain within which these conversations are taking place (e.g. a depression or suicide hotline), what guidance might you have as to which graph database approach to use and why?
Or if you have thoughts on other approaches or avenues to explore, let me know.
Relevant answer
Answer
Dear Neil Movold,
Have a look at the link; it may be useful.
Regards, Shafagat
  • asked a question related to RDF
Question
3 answers
Ontologies such as PROV-O manage the provenance of the resources in an RDF graph at the instance level, but I need to manage the provenance of each individual triple. Is there any good reference on how to do this?
Relevant answer
Answer
You can use named graphs.
In particular, Nanopublications (http://nanopub.org/guidelines/working_draft/) allow you to specify both provenance of triples and provenance of provenance statements.
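As a minimal sketch of the named-graph idea with Python's rdflib (all IRIs below are illustrative placeholders), each assertion can be placed in its own tiny named graph, and PROV-O statements can then be made about that graph's IRI:

# Minimal sketch: per-triple provenance via named graphs with rdflib.
# All IRIs below are illustrative placeholders.
from rdflib import Dataset, URIRef, Literal, Namespace
from rdflib.namespace import XSD

EX = Namespace("http://example.org/")
PROV = Namespace("http://www.w3.org/ns/prov#")
ds = Dataset()

# The assertion itself, placed in its own (single-triple) named graph.
g_id = URIRef("http://example.org/graphs/assertion-1")
ds.graph(g_id).add((EX.alice, EX.knows, EX.bob))

# Provenance statements about that graph, i.e. about the single triple in it.
prov = ds.graph(URIRef("http://example.org/graphs/provenance"))
prov.add((g_id, PROV.wasAttributedTo, EX.someCurator))
prov.add((g_id, PROV.generatedAtTime,
          Literal("2019-01-01T00:00:00Z", datatype=XSD.dateTime)))

print(ds.serialize(format="trig"))   # TriG serialization keeps the graph names

Nanopublications follow the same pattern, but standardise the split into separate assertion, provenance and publication-info graphs.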
  • asked a question related to RDF
Question
3 answers
According to the site, there are three types of RDF data that occur in triples: IRIs, literals and blank nodes.
1. When do we use an IRI rather than an ordinary value in a triple?
Relevant answer
Answer
Hi Amany,
It seems to me that you are asking not just how the fundamentals of RDF work, but what RDF /is/. Is that right? I think that because the other answers, however correct, didn't seem to satisfy you.
RDF is the Resource Description Framework. The name is important because it gives a clue to the original thinking of the inventors. RDF is a framework (that is, a model) to describe resources. Which resources? Originally, and still often, they were resources on the Web. That is, Web pages. These days, the (very general) RDF data model is used to describe all sorts of things whether they are on the Web or not.
Here is an important, but subtle and underused, point: you cannot say anything in RDF with less than one triple's worth of information. That is, a single triple is the least content possible in an RDF graph.
RDF is therefore an odd sort of data model in that you cannot just name a resource explicitly. You can only name it implicitly by assigning it an IRI used in a triple. The triples that refer to a resource can collectively provide enough information about the resource that everyone (should) agree on what the resource is.
A triple has three components, as Keven and Harsh described. The easiest way to think about an RDF triple is to note that triples are one of two kinds:
1) A triple may describe two /things/ and the relationship between them. Here is an example of a single RDF triple that names two things and links them together with a relationship:
<an IRI representing me> <an IRI representing the verb "authored"> <an IRI representing this answer>
(the first thing, me) (the relationship, in this case "authored") (the second thing, this answer)
You can read that triple as "David Hyland-Wood authored this answer".
In this case, IRIs are used to name the two things and also to name the relationship. That's almost always the way that RDF triples are written.
2) A triple might also provide a piece of data about a resource. In this case, the second thing has some form of data type, such as a string or a number or a date. That ability allows the RDF model to contain measurements, human-readable descriptions, and such. It is very useful when you want to describe your resources in great detail. For example:
<an IRI representing me> <an IRI representing the concept "has birthday"> <a date representing my birthday>
(the first thing, me) (the relationship, in this case "has birthday") (the date "1963-08-28")
You can read that triple as "David Hyland-Wood was born on 28 August 1963".
So to get back to your original question, we use IRIs to represent things (nouns) and relationships/concepts (verbs). We use other non-IRI data types such as strings, numbers, and dates to represent data.
If we have a lot of RDF triples describing resources, they form a conceptual graph because we presume the IRIs are unique in the world. If there are two RDF triples that both start with the same IRI, then we say they are giving two pieces of information about the same thing.
The data values sit at the edges of an RDF graph, because the RDF model doesn't allow you to make any further statements about a string, date, number, etc. used in a data field. You can make all the statements you want about things that are named with IRIs.
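To make the two kinds of triples concrete, here is a minimal sketch in Python with rdflib; the IRIs are illustrative placeholders and the birthday literal is typed as xsd:date:

# Minimal sketch of the two kinds of triples described above.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import XSD

EX = Namespace("http://example.org/")
g = Graph()

# 1) thing -- relationship --> thing: all three positions are IRIs.
g.add((EX.DavidHylandWood, EX.authored, EX.thisAnswer))

# 2) thing -- relationship --> data value: the object is a typed literal.
g.add((EX.DavidHylandWood, EX.hasBirthday,
       Literal("1963-08-28", datatype=XSD.date)))

print(g.serialize(format="turtle"))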
Does that help?
Regards,
Dave
  • asked a question related to RDF
Question
7 answers
I tried to calculate the RDF using the 'compute rdf' command, but it returns zero at all distances.
I tried it for a simple periodic monatomic metal crystal, but it returns zero too.
Does the RDF calculation depend on the pair style (not in the simulation itself, but only in the post-processing calculation)?
If yes, which pair style should I use?
Relevant answer
Answer
I guess you can use OVITO to calculate the radial distribution function via its coordination analysis modifier; it gives you both a table and a graphical representation.
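For instance, a minimal scripting sketch along those lines, assuming OVITO 3.x with its Python module installed; the dump file name, the cutoff and the number of bins are illustrative and need to be adapted to your system:

# Minimal sketch (OVITO 3.x Python scripting): compute g(r) from a dump file.
from ovito.io import import_file
from ovito.modifiers import CoordinationAnalysisModifier

pipeline = import_file("dump.lammpstrj")                       # illustrative file name
pipeline.modifiers.append(
    CoordinationAnalysisModifier(cutoff=6.0, number_of_bins=200))

data = pipeline.compute()
rdf_table = data.tables['coordination-rdf']                    # columns: r, g(r)
for r, g_of_r in rdf_table.xy():
    print(r, g_of_r)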
  • asked a question related to RDF
Question
5 answers
We are working on a new (contextual) enterprise semantics methodology, which appears more powerful and scalable than set theory-based methods, including RDF. I pose this question to check how new it really is.
Relevant answer
Answer
Hi Paul,
Even as an "RDF person", I understand your comment that some find it difficult to accept that RDF is not universal. There are certainly both philosophical stances in design and trade-offs in implementation and semantics.
There have been many, many papers published at the International Semantic Web Conference (ISWC, but not the wearable computing ISWC!) and the International World Wide Web Conference (WWW) over the last 10-12 years on logical extensions and different types of logics. 
The most advanced logic query engine for alternative logics that I know is Stardog (http://www.stardog.com).
  • asked a question related to RDF
Question
1 answer
I want to calculate the COM RDFs between my peptide and water, for which I have used the -com option in GROMACS (I also tried -com together with -rdf res_com).
It generates a weird RDF that does not look correct. I want to take the first minimum and use the -cn option to get the coordination number.
The images of the RDFs are attached. What could possibly be wrong? The atom-atom RDFs come out well, but not the COM ones.
Additionally:
If I use the -cn option of GROMACS, it shows the cumulative CN, right? That means that if I read off the value at 0.257 (say that's the cut-off), the number corresponding to it would be the coordination number. Am I right?
What if I integrate the RDF using the g_analyze -integrate option?
Any ideas on how to do the integration in GROMACS?
Relevant answer
Answer
Making an index file and supplying it to the RDF calculation is the easiest way to do it. Selecting the whole protein when you create the index file causes its centre of mass to be taken automatically in the RDF calculation.
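If you prefer to do the integration yourself rather than inside GROMACS, here is a minimal sketch (Python/numpy) of computing the running coordination number n(r) = 4*pi*rho * integral of g(r)*r^2 dr from the rdf.xvg output; the file name and the number density rho are assumptions you must adapt to your system:

# Minimal sketch: running coordination number from a gmx rdf .xvg file.
import numpy as np

data = np.loadtxt("rdf.xvg", comments=("#", "@"))   # column 0: r [nm], column 1: g(r)
r, g = data[:, 0], data[:, 1]

rho = 33.4   # e.g. bulk water number density in nm^-3 -- replace with your system's value
integrand = 4.0 * np.pi * rho * g * r**2
n_of_r = np.concatenate(([0.0], np.cumsum(
    0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))))     # trapezoidal rule

cutoff = 0.257   # first-minimum position mentioned in the question (nm)
print("coordination number up to %.3f nm: %.2f" % (cutoff, np.interp(cutoff, r, n_of_r)))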
  • asked a question related to RDF
Question
3 answers
Hello all,
I have one question about RDF calculation in SMD simulation.
I was trying to perform an SMD simulation of a system containing a lactic acid molecule and a cyclic peptide nanotube, in which the lactic acid moves through the nanotube with constant velocity. I have analyzed the radial distribution function (RDF) of the hydrogen atom of lactic acid at distance r from the oxygen atoms of the nanotube (there are 80 oxygen atoms along the nanotube). It indicates that the H atom has a first peak at 2 Å with a value of 60, and a second peak at 6 Å with a value of 150. I did not expect the values to be so high.
Can anyone help me interpret these high values?
Thanks in advance,
Relevant answer
Answer
Dear Marzieh,
Knowledge is always for expansion; please pursue it in life. When one learns things from scratch it gives more enjoyment.
  • asked a question related to RDF
Question
9 answers
I have aligned nucleotide sequences in MEGA format, and I need to construct a median-joining network in the Network software. Can anyone tell me how to make an *.rdf file and how to construct an MJ network in the Network software?
Relevant answer
Answer
You can open the FASTA file in DnaSP and generate a haplotype data file in .rdf format for further analysis in NETWORK.
  • asked a question related to RDF
Question
4 answers
Apart from commercial frameworks like Oracle or SQL Server, I am looking for academic business intelligence data on which to conduct experimental research, such as applying data mining or optimization algorithms. Can you suggest links to such data?
Regards,
Rafid Sagban 
Relevant answer
Answer
Hi Rafid,
I work with AI methods. They apply excellently to function generation and data prognosis. I am not familiar with any application of these methods to enterprise business application testing. My suggestion is to build one yourself, based on AI methods (fuzzy logic, c-means clustering, neural networks, ANFIS).
Good luck!
  • asked a question related to RDF
Question
20 answers
Can you please recommend a good, possibly native, triple store database for RDF?
Relevant answer
Answer
Carlo, I would also consider graph databases such as Titan or Neo4j. Briefly speaking, triple stores are suitable for reasoning, while graph databases are suitable for graph operations and queries (finding the shortest path, etc.)...
  • asked a question related to RDF
Question
5 answers
I use the RAP API for PHP to query my knowledge base (RDF, OWL, etc.), but I cannot find a way to apply the SWRL rules that I designed in Protege. Somebody suggested that I use Jena.
Relevant answer
Answer
In the recent past I've used Jena to integrate a set of SWRL rules (pre-authored in Protege 3.4.4) for artificial intelligence and automated reasoning mechanisms. The programming language was, once more, Java, and the only prerequisite was to include the Pellet reasoner. Both the Apache Jena framework and Pellet (now on GitHub) have extensive tutorials and examples, so give it a try instead of using PHP, which has limited or no support in the Semantic Web area...
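If you would rather avoid writing Java altogether, one possible alternative to the Jena route described above is the owlready2 Python library, which can load an ontology authored in Protege, attach SWRL rules and call the Pellet reasoner (a Java runtime is still required under the hood). A minimal sketch, with the ontology path, the classes Person/Adult, the property hasAge and the rule all being illustrative assumptions:

# Minimal sketch (owlready2): load an ontology, add a SWRL rule, run Pellet.
# The path, classes and property names are illustrative assumptions.
from owlready2 import get_ontology, Imp, sync_reasoner_pellet

onto = get_ontology("file:///path/to/my_ontology.owl").load()   # placeholder path

with onto:
    rule = Imp()
    rule.set_as_rule("Person(?p), hasAge(?p, ?a), greaterThan(?a, 17) -> Adult(?p)")

sync_reasoner_pellet(infer_property_values=True, infer_data_property_values=True)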