Science topic
RDF - Science topic
Explore the latest questions and answers in RDF, and find RDF experts.
Questions related to RDF
Suppose there are different fertigation levels: 50% RDF, 75% RDF, 100% RDF and 125% RDF.
Fertilizer use efficiency for 50% was 115 kg/kg NPK, for 75% 102 kg/kg NPK, for 100% 95 kg/kg NPK and for 125% 85 kg/kg NPK.
Hello,
I'm trying to simulate the pyrolysis process of an RDF sample in a vertical tube furnace.
I would like to include this reaction in my simulation using COMSOL Multiphysics:
CmHnOl + (m/2 - l/2) O2 => m CO + (n/2) H2
Should I define (m, n, l) as variables? If so, how can I do it?
Much appreciated
For the completion of RDF graphs, do you recommend keeping only triples whose expressiveness corresponds to the RDF framework, or also a more expressive language such as RDFS or OWL?
Hi,
I have a water simulation box with 116 water molecules, using the TIP4P water model. Now I need to calculate the water-water RDF, the oxygen-oxygen RDF and the hydrogen-oxygen RDF.
I tried to calculate the water-water RDF and ended up with a wrong plot that I did not expect.
My Index file order is like this:
O “System”
1 “Other”
2 “HO4”
I used the following code to calculate the RDF:
gmx rdf -f traj_comp.xtc -s run01.tpr -n index.ndx -o Water.xvg
and for the sel and ref I chose option "O".
How can I use gmx rdf to calculate the RDF data correctly?
Cheers, Kal
I want to query multiple SPARQL endpoints in real time and visualize the RDF triples as a graph.
How can I visualize RDF well? Are there better RDF visualizers? Requirements:
- Open source.
- Supports RDF triples, not just ontologies.
- Good visualization quality.
- Can be embedded in a program, ideally via Java or an HTTP API.
- Scales to a large number of RDF triples.
I found some RDF visualizers, but they don't fit:
- [Apache ECharts](https://echarts.apache.org/en/): Free, but not RDF-oriented.
- [Graphviz](https://graphviz.org/): Free, but the style is not very attractive.
- [rdf-visualizer](https://issemantic.net/rdf-visualizer): Nice style, but can't be embedded in a program.
- [WebVOWL](http://vowl.visualdataweb.org/webvowl.html): Nice style, but only the ontology can be shown, not all RDF triples.
When we think about semantic software, descriptive RDF files for HTML pages and mappings of relational databases to RDF immediately come to mind. However, does semantic software development include only those aspects?
SPARQL property paths only return the start and end nodes of a path and do not allow variables in property path expressions. They return neither the intermediate nodes nor the traversed triples, and they cannot retrieve or naturally represent a path in SPARQL's tabular result format.
Considering RDF-Star and SPARQL-Star current specifications, is it possible to bind the RDF triples from a property path query to embedded triples of RDF-Star triples?
For example
SELECT ?since ?t
WHERE {
ns:p3 ns:FRIENDS_OF+ ns:p2 AS path (?s ?p ?o)
<<?s ?p ?o>> ns:date_of_start ?since.
BIND(<<?s ?p ?o>> AS ?t)
FILTER (?since > 2016) .
}
Similar to the Cypher query below:
WITH 2016 AS since
MATCH rels= (p1:Person) - [:FRIENDS_OF*1..2]->(p2:Person)
WHERE ALL(k1 in relationships(rels) WHERE k1.date_of_start > since)
RETURN rels;
RDF-star and SPARQL-star
Final Community Group Report 17 December 2021
Hi, is there any way to count the number of gas molecules adsorbed on graphene?
I used an RDF diagram, but it wasn't useful. How can I count the number of adsorbed gas molecules using the RDF?
It has apparently been proven that SQL with cyclic tags is Turing-complete (see [1]).
There are also approaches that convert relational structures to OWL (e.g. [2], [3]).
Can one conclude that one can define any algorithm in OWL or in one of its derivatives?
Does anyone know a paper?
Thanks!
Best,
Felix Grumbach
For learning SPARQL it might be useful to have full control over both the query text and the data (RDF triples). While there are many public SPARQL endpoints available, their data is typically read-only, for obvious reasons. To actively apply SPARQL queries to one's own data, a local triple store might be useful, e.g. for reproducing the examples from https://www.w3.org/TR/rdf-sparql-query/.
However, setting up such an infrastructure with all its dependencies might be complicated.
Question: What is the simplest¹ way to set up a local triple store with a SPARQL endpoint on a usual PC?
(¹: The meaning of "simplest" depends on one's system configuration and prior knowledge, which can be reflected by different answers.)
If one already has an up-to-date Python environment, then https://github.com/vemonet/rdflib-endpoint provides a simple solution with only two commands:
- pip install rdflib-endpoint (run once)
- rdflib-endpoint serve <path_to_your_triple-file(s)>
- →Access the YASGUI SPARQL editor on http://localhost:8000
However, I am interested in which alternative solutions there are.
The term "Semantic Web" goes back to at least 1999 and the idea – enable machines to "understand" information they process and enable them to be more useful – is much older. But still we do not have universal expert systems, despite that they would be very advantageous, especially in the context of (interdisciplinary) research and teaching.
My impression is that, from the beginning, semantic web technologies were dominated by Java-based tools and libraries, and that the situation has barely changed to this day (2022): e.g. most practical ontology development/usage seems to happen inside Protégé or by using the OWL API.
However, in the same time span we have seen a tremendous success of numerical AI (often called "machine learning") technologies and here we see a much greater diversity of involved languages and frameworks. Also, the numerical AI community has grown significantly in the last decade.
I think this is largely because it is simple to get started with those technologies, and Python (interfaces) and Jupyter notebooks contribute significantly to this. Also, Python greatly simplifies programming (calling a library function and piping results to the next) for people who are not programmers by training, such as physicists, engineers etc.
On the other hand, getting started with semantic technologies is (in comparison) much harder: e.g. a lot of (seemingly) outdated documentation and a lack of user-friendly tools for achieving quick, motivating results must be overcome in the process.
Therefore, my thesis is that having an ecosystem of low-threshold Python-based tools available could help unleash the vast potential of semantic technologies. It would help grow the semantics community and enable more people to contribute content such as (patches to) domain ontologies, sophisticated queries and innovative applications, e.g. combining Wikidata and SymPy.
Over the past months I have collected a number of semantics-related Python projects; see https://github.com/pysemtec/semantic-python-overview. So the good news is: there is something to use and grow. However, the amount of collaboration between those projects seems to be low, and something like a semantic-python community (i.e. people who are interested in both semantic technologies and Python programming) is still missing.
Because complaining alone rarely improves the situation, I am trying to start such a community; see https://pysemtec.org.
What do you think, can Python help to generate more useful applications of semantic technology, especially in science? What has to happen for this? What are possible counter-arguments?
I have been working with ontologies (RDF/OWL) for a long time, mostly using them as an engineer, essentially because they enable SPARQL and rules.
It is only recently, this year, that I started to really pay attention to the theoretical grounding of OWL. This led me to dive into the zoo of Description Logics and their desirable or undesirable properties.
I think there are some serious issues in the multiplication of work on DLs, which is almost never considered from the perspective of actual usefulness, i.e. of their ability to describe the specific structures that are at the core of many domains (law, clinical science, computer science...).
Quite a lot of the theoretical work in DL and logic seems to formally study and prove properties of languages (DLs are languages) that nobody speaks or will ever speak. This is quite salient when considering the very small number of working reasoners (which cover only a small fragment of the DLs described formally).
It seems to me that, after the incredibly fecund period that started with Frege, Russell, Tarski, Hilbert, Gödel, Carnap..., the theoretical work was somewhat considered to be done, and less attention was focused on formal languages for domain description.
On the other hand, questions related to problem solving (planners) came to be treated only as SAT problems needing optimisation, with almost no reference to first-order logic and thus a poor link with DL.
Finally, on the third hand, modal logic, which has clear, deep links with first-order logic (the box/diamond operators and the existential/universal quantifiers in particular), has been abandoned by computer scientists and has become, more or less explicitly, a field of philosophy.
I think this state of affairs isn't satisfying, and that there is work to be done on conceptual clarification and on revising foundations so as to integrate these developments.
To that end, something that seems absolutely essential is to give each other easy access to reasoners. By easy access, I don't mean a program written in some obscure language whose source must be compiled on a specific Linux.
I mean access to the reasoning service through a (loosely standardized) REST API. These services should be accompanied by websites giving relevant examples of using the reasoner, with an "online playground".
I think this could be done for classic DLs such as EL or SHOIQ, but also for modal logics of various kinds (epistemic, deontic), and it could also be done for planning based on first-order logic.
I'm currently thinking about the engineering questions that would arise from such a logic zoo, and about a grammar that would be usable for describing every reasoning problem involving these kinds of logic.
If you are interested in the question and/or have skills in modern full-stack architecture and Dockerisation, I'd be interested in your opinion on the current situation and on the feasibility of such a logic zoo, which would be a useful tool for clarifying the domain.
According to the RDF* specification, is it possible to have the same triple pattern with different qualifiers?
Example
( << s1, p1, o1>>, q1, v1 )
( << s1, p1, o1>>, q1, v2 )
( <<SPinera, president, Chile>>, start, 2010)
( <<SPinera, president, Chile>>, end, 2014)
( <<SPinera, president, Chile>>, start, 2018)
RDF* definition
An RDF* triple is a 3-tuple that is defined recursively as follows:
1. Any RDF triple t ∈ (I ∪ B) × I × (I ∪ B ∪ L) is an RDF* triple; and
2. Given RDF* triples t and t′, and RDF terms s ∈ (I ∪ B), p ∈ I and o ∈ (I ∪ B ∪ L), then the tuples (t,p,o), (s,p,t) and (t,p,t′) are RDF* triples.
Reference for RDF* definition
Hartig, Olaf. “Foundations of RDF⋆ and SPARQL⋆ (An Alternative Approach to Statement-Level Metadata in RDF).” AMW (2017).
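As far as I can tell from this definition, nothing forbids the same embedded triple from occurring with different qualifiers, or with a repeated qualifier and different values: each tuple above is simply a distinct RDF* triple. A toy sketch of the recursive definition (plain Python tuples standing in for RDF terms; the modelling is my own and purely illustrative):

```python
# Illustrative only: model RDF* triples as nested 3-tuples and check the
# recursive definition. Term sets (I, B, L) are simplified to plain strings.
def is_rdf_star_triple(t):
    """A 3-tuple is an RDF* triple if subject and object are each either a
    plain term (rule 1) or themselves an RDF* triple (rule 2)."""
    if not (isinstance(t, tuple) and len(t) == 3):
        return False
    s, p, o = t
    ok_s = isinstance(s, str) or is_rdf_star_triple(s)
    ok_p = isinstance(p, str)  # predicates must be plain IRIs
    ok_o = isinstance(o, str) or is_rdf_star_triple(o)
    return ok_s and ok_p and ok_o

base = ("SPinera", "president", "Chile")
graph = {
    (base, "start", "2010"),
    (base, "end", "2014"),
    (base, "start", "2018"),  # same embedded triple, same qualifier, new value
}
print(all(is_rdf_star_triple(t) for t in graph))  # -> True
```

Whether a given store deduplicates or rejects such statements is then a question of the implementation, not of the definition above.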
I have a mixture of DMF and water at 300 K, simulated with the OPLS-AA force field. The hydrogen bonds between the oxygen atoms of DMF and the H atoms of water have a very short length, around 1.26 Å. I know that H-bonds are typically < 3.5 Å, but isn't 1.26 Å in the range of covalent bonds?
I want to know if that's possible, or if there's something wrong with my force-field parameters/charges.
The RDF curve of oxygen atoms of DMF around H atoms of water (H-bonds) is shown below.
Kind regards,
Ehsan
For the construction of a knowledge graph, which model is better for RDF triple (knowledge) extraction?
Hi everyone,
I have a question about calculating the RDF with LAMMPS. I want to know what exactly LAMMPS does to compute g(r); in fact, I mean with respect to which atoms LAMMPS takes the RDF. Are we able to identify them in our sample?
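For what it's worth, conceptually an RDF computation such as LAMMPS's compute rdf histograms the distances between all selected atom pairs and divides each bin by the pair count expected for an ideal gas at the same density. A plain-Python sketch of that algorithm (illustrative only, not LAMMPS internals):

```python
import math

def radial_distribution(positions, box, r_max, n_bins):
    """Naive O(N^2) g(r) for a cubic periodic box.
    positions: list of (x, y, z) tuples; box: cubic box length."""
    n = len(positions)
    dr = r_max / n_bins
    hist = [0] * n_bins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for a in range(3):
                d = positions[i][a] - positions[j][a]
                d -= box * round(d / box)  # minimum-image convention
                d2 += d * d
            r = math.sqrt(d2)
            if r < r_max:
                hist[int(r / dr)] += 2  # count the pair once for each atom
    rho = n / box ** 3  # number density
    g = []
    for k in range(n_bins):
        # Ideal-gas expectation in this shell: n * rho * V_shell
        shell = (4.0 / 3.0) * math.pi * (((k + 1) * dr) ** 3 - (k * dr) ** 3)
        g.append(hist[k] / (n * rho * shell))
    return g
```

In other words, every atom of the first selected type acts as an origin, and the atoms of the second type are binned around it; identifying "which atoms" therefore means identifying the two groups passed to the compute.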
I need to store my RDF multidimensional data and have to select the most appropriate strategy for that. I tested some existing solutions like AllegroGraph, GraphDB, etc., but they did not address the problem well.
The Gromacs manual has the following regarding exclusions:
"If a run input file is supplied (-s) and -rdf is set to atom, exclusions defined in that file are taken into account when calculating the RDF. The option -cut is meant as an alternative way to avoid intramolecular peaks in the RDF plot."
The -cut option isn't appropriate for my case, as I have intermolecular peaks (which I want to keep) at the same pair distances as the intramolecular ones I want to exclude. How exactly can I add RDF "exclusions" to my input file afterwards? Or are there other ways to filter intramolecular pairs out of my RDF curve? The command I am using:
gmx rdf -f file1.xtc -s file2.tpr -n index.ndx -o rdf.xvg -cn coord.xvg -b t_initial -e t_final -bin 0.01 -rmax 1 -ref base_atom_index -sel distributed_atom_index
Is an ontology required to create a knowledge graph, or is the schema derived from the relationships in the graph? What I am wondering about is whether the predicate is defined in a knowledge graph, and if so, how? It seems that knowledge graphs can be created without defining relationships between the nodes in advance.
I was trying to calculate the radial distribution function with LAMMPS between a moving particle (say, an ion) and a static particle (say, a wall). However, I couldn't get any result. I wonder why the RDF calculation didn't work, although it worked perfectly for two moving particles (say, an ion and water).
It would be really helpful if anybody could help with explanations or references. Thanks.
I was searching for the outer boundary of an ion's (e.g. Na+/Cl-) hydration shell. I realized that most researchers talk about the endpoint of the first hydration shell; some even go up to the 3rd hydration shell (using the RDF curve). However, I wondered where these hydration layers end and up to which order hydration shells can be formed.
I would be really grateful if anybody could give me suggestions or a research paper related to my question. Thank you.
Currently, I want to merge several SKOS files (.rdf files) into one, taking into account mappings between them, made previously.
Is there any tool that allows me to do this?
Dear researchers,
In the literature, so far I have seen that downdraft gasifiers are usually used for small-scale purposes (10 kW-10 MW). One of the reasons given is that "they do not allow for uniform distribution of flow and temperature in the constricted area (throat)". What can the other reasons be?
And the main question is: is there any way to upscale it, for instance to a 30 t/h feed rate? Is it feasible and possible?
I would be grateful for any insight!
While there are multiple Java implementations for managing semantic knowledge bases (HermiT, Pellet...), there seem to be almost none in pure JavaScript.
I would prefer to use JS rather than Java in my project, since I find JS much cleaner, more practical and easier to maintain. Unfortunately, there seems to be almost nothing that handles RDF data together with rule inference in JavaScript. Although there is some work handling RDF alone (https://rdf.js.org/), it's unclear what the status of that work is with regard to the W3C specifications.
I designed and developed a domain ontology for solid waste collection management and have an OWL/RDF version of my OntoWM domain ontology. Can I use quantitative or qualitative methods to evaluate the OntoWM domain ontology? Any sample thesis or article would be appreciated.
Regards
Abdul
In an incubation study of soil with moisture content at field capacity, a combined treatment of NPK and vermicompost results in lower N and K availability as compared to the treatment with NPK alone, both at RDF. Can there be any relevant explanation for this?
Hi all,
Is there any software or web server that can read a molecular dynamics (MD) trajectory obtained from an ab initio MD simulation and calculate the atom-atom radial distribution function (RDF)?
Thanks in advance.
I am trying to run a Radial Distribution Function (RDF) trajectory analysis between water (WAT) hydrogen atoms and ligand (UNN) nitrogen atoms using the production trajectory (prod1.crd) and the parameter file (solvated.prmtop).
When I use the rdf.in file as:
parm solvated.prmtop
trajin prod1.crd
radial rdf.xmgr 0.1 15.0 :WAT@H:UNN@N
The calculation ends in less than 1 second, and no output file gets generated.
However, if I modify rdf.in to:
parm solvated.prmtop
trajin prod1.crd
trajout rdf.xmgr
radial rdf.xmgr 0.1 15.0 :WAT@H:UNN@N
and run the analysis using "cpptraj rdf.in", it starts to generate an output file (rdf.xmgr) that is 60+ GB in size and cannot be read using xmgrace ("Column count incorrect"), as it seems to be a coordinates file (picture enclosed).
Will you please help me troubleshoot the cause and find a proper solution?
Dear all
I want to calculate the coordination number of water around an N+ group. I first calculated the area under the RDF up to the first minimum, and it gave me a wrong coordination number (my RDF is correct). Then I used this command:
gmx rdf -f file.xtc -s file.tpr -b -e -cn coord.xvg -n index.ndx -ref “group N” -sel “group OW”
However, the output was not the coordination number of water around N+ (see the photo uploaded here).
What is the correct way to calculate the coordination number?
Thanks in advance
I have modeled processes in my ontology, such as sale, purchase, etc. I want to implement these modeling constructs in OWL, RDF or another semantic language. Can anybody suggest a suitable language for the proper implementation of the aforementioned processes?
Thanks
I have studied the structural changes of a protein at 3 different concentrations of ionic liquids (50 mM, 500 mM and 1 M). On analysis of the RDF plots, I find that the magnitude of the RDF plot for the anion at 0.35 nm from the protein follows the order 50 mM > 500 mM > 1 M (as seen in the attached images). The magnitude of the plot is also very high at 50 mM (~60). However, on visualizing the trajectories, I can see that only at 1 M concentration is there an increased concentration of ions in the solvation shell of the protein, and at 50 mM there are very few ions surrounding the protein. What could be the reason for such high g(r) values at the lower concentrations of 50 mM and 500 mM?
I used "gmx_mpi rdf com rdf mol_com -s em.tpr -f md_Dhp1M.xtc -cn -o ranion_Run2.xvg -tu ns -dt 100 -cut 0.35" for the analysis.
Thanks.
Hello,
I am trying to use the -surf option in gmx rdf in order to compute the RDF of certain molecules from the surface (nearest atoms) of my reference molecule.
My script is:
gmx rdf -s tpr.tpr -ref ‘resname URA’ -sel ‘resname OCS’ -f xtc.xtc -surf mol -seltype whole_mol_com -bin 0.03
Unfortunately, I get the following error:
“Inconsistency in user input:
-surf only works with -ref that consists of atoms”
Obviously my molecule consists of atoms, and different scripts that don't have the -surf flag work perfectly fine.
Does anyone have an idea what the problem is?
Here are things I’ve tried but didn’t change the error:
- I tried adding -selrpos whole_mol_com, same error.
- I tried changing the molecules that I put under “resname X” but for any X it’s the same error.
- I tried changing the order in which all my flags appear.
- I tried doing -surf mol and -surf res.
- I tried running this script on different gromacs versions.
Dear all,
I have a problem with Protégé at the moment. Protégé can only write Turtle or RDF/XML syntax with an OWL part in it; you cannot "turn off" the OWL syntax. My problem is that I'm not very good at coding, so Protégé is very useful with its graphical interface, but I really need the code in RDF/RDFS only.
If you have any ideas for me, I'd love to hear them.
Thank you in advance.
I simulated a two-phase water-ionic liquid interface system for 10 ns. Then I calculated g(r) between the O of water and the C2 of the ring of the ionic liquid. I did it with TRAVIS. However, I don't know how to interpret the resulting figures. Do you think the data is valid? I guess I should calculate the RDF in distinct regions. What are these regions, and with which program can I specify them?
I am trying to add RDF triples to a Jena Fuseki server. When running the code:
import requests
from bs4 import BeautifulSoup
import pandas as pd
import numpy as np
import re
from rdflib import Graph, Literal, URIRef
import rdflib
from rdflib.plugins.stores import sparqlstore
page = requests.get(url)
response = requests.get(url)
response.raise_for_status()
results = re.findall('\"Address ID: (GAACT[0-9]+)\"', response.text)
address1=results[0]
new_url=a+address1
r = requests.get(new_url).content
query_endpoint = 'http://localhost:3030/ds/query'
update_endpoint = 'http://localhost:3030/ds/update'
store = sparqlstore.SPARQLUpdateStore()
store.open((query_endpoint, update_endpoint))
g = rdflib.Graph()
g.parse(r, format='turtle')
store.add_graph(g)
I got the following error:
/Users/mac/anaconda3/lib/python3.6/site-packages/SPARQLWrapper-1.8.1-py3.6.egg/SPARQLWrapper/Wrapper.py:510: UserWarning: keepalive support not available, so the execution of this method has no effect
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-26-279fa93014e1> in <module>()
9
10 g = rdflib.Graph()
---> 11 g.parse(r, format='turtle')
12
13 store.add_graph(g)
~/anaconda3/lib/python3.6/site-packages/rdflib-4.2.2-py3.6.egg/rdflib/graph.py in parse(self, source, publicID, format, location, file, data, **args)
1032 source = create_input_source(source=source, publicID=publicID,
1033 location=location, file=file,
-> 1034 data=data, format=format)
1035 if format is None:
1036 format = source.content_type
~/anaconda3/lib/python3.6/site-packages/rdflib-4.2.2-py3.6.egg/rdflib/parser.py in create_input_source(source, publicID, location, file, data, format)
169 else:
170 raise Exception("Unexpected type '%s' for source '%s'" %
--> 171 (type(source), source))
172
173 absolute_location = None # Further to fix for issue 130
Exception: Unexpected type '<class 'bytes'>' for source 'b'@prefix dct: <http://purl.org/dc/terms/> .\n@prefix geo: <http://www.opengis.net/ont/geosparql#> .\n@prefix gnaf: <http://gnafld.net/def/gnaf#> .\n@prefix prov: <http://www.w3.org/ns/prov#> .\n@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .\n@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .\n@prefix xml: <http://www.w3.org/XML/1998/namespace> .\n@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .\n\n<http://gnafld.net/address/GAACT714846009> a gnaf:Address ;\n rdfs:label "Address GAACT714846009 of Unknown type"^^xsd:string ;\n gnaf:gnafType <http://gnafld.net/def/gnaf/code/AddressTypes#Unknown> ;\n gnaf:hasAddressSite <http://gnafld.net/addressSite/710446495> ;\n gnaf:hasDateCreated "2004-04-29"^^xsd:date ;\n gnaf:hasDateLastModified "2018-02-01"^^xsd:date ;\n gnaf:hasGnafConfidence <http://gnafld.net/def/gnaf/GnafConfidence_2> ;\n gnaf:hasLocality <http://gnafld.net/locality/ACT570> ;\n gnaf:hasNumber [ a gnaf:Number ;\n gnaf:gnafType <http://gnafld.net/def/gnaf/code/NumberTypes#FirstStreet> ;\n prov:value 4 ] ;\n gnaf:hasPostcode 2615 ;\n gnaf:hasState <http://www.geonames.org/2177478> ;\n gnaf:hasStreet <http://gnafld.net/streetLocality/ACT3884> ;\n geo:hasGeometry [ a gnaf:Geocode ;\n rdfs:label "Frontage Centre Setback"^^xsd:string ;\n gnaf:gnafType <http://gnafld.net/def/gnaf/code/GeocodeTypes#FrontageCentreSetback> ;\n geo:asWKT "<http://www.opengis.net/def/crs/EPSG/0/4283> POINT(149.03747828 -35.20190973)"^^geo:wktLiteral ] .\n\n<http://gnafld.net/def/gnaf/GnafConfidence_2> rdfs:label "Confidence level 2"^^xsd:string .\n\n<http://www.geonames.org/2177478> rdfs:label "Australian Capital Territory"^^xsd:string .\n\n''
It seems that the error comes from the graph parser (g.parse()). If anyone has an idea how to solve this, it would be highly appreciated. Thanks in advance.
I am simulating SiC at different temperatures to study its structural properties, but no chemical reaction (bond breakage or formation) occurred in the final structure, and the radial distribution function shows a straight line instead of peaks for the homo- and hetero-bonds (C-C, Si-C, Si-Si). So please tell me: should I do an equilibration before the MD simulation, or is there a problem with the method used?
In older versions of GROMACS (before 5.1.1) we could use the command gmx rdf -com -f traj.xtc -n index.ndx -o out.xvg. The -com flag calculated the RDF with respect to the center of mass of the first group. I haven't found how to calculate this in the latest versions of GROMACS.
Hey all,
I have just started researching ontologies. I need a good tutorial on ontology development, OWL, RDF, RDFS, Description Logic, and the role of machine learning in ontologies. Is there any available online course or other good material you could suggest?
Agronomists conduct trials on nutrient (NPK and others) management in which 150% and even 200% of RDF (recommended dose of fertilizer) are considered as treatments. Research findings show higher productivity with nutrient levels above RDF. Further, STCR-based trials indicate higher doses of nutrients, sometimes >200% of RDF. My question is:
"Is it the right time to relook into fertilizer recommendation for crops for more precise nutrient management?"
Overall, I would like to know how much waste (if possible, depending on the material) is needed to make RDF production viable. Does somebody know where I could find this information?
Thanks
I'm trying to generate CGRs, and in order to do so I want to use the Marvin Sketch auto-mapper function on a bunch of reactions stored as reaction SMARTS. But in order to use the auto-mapper function from Marvin Sketch, I have to pass the reactions as RXN files or as an RDF file.
Hi all,
I am performing an MD simulation to investigate the interaction of a ligand with a graphene surface using the GROMACS package.
How can I calculate the RDF between the "surface" of the graphene and the center of mass (COM) of the ligand?
A long-term experiment was initiated in kharif 1985 with four fertility levels (0%, 50%, 100% and 150% of RDF) with 6 replications in an RBD. However, after 10 cropping cycles there was a severe deficiency of Zn in the 150% RDF plots. Hence, out of the 6 replications, four were superimposed with different Zn and OM treatments, and two replications were left as such. So, now the question is how to analyse the data for a valid conclusion.
Dear all,
I'm looking at a system composed of a CNT surrounded by ionic liquid.
I would like to look at the radial distribution function of the ionic liquid inside the carbon nanotube.
The atoms forming my CNT are not frozen, so the shape of the CNT is not a perfect cylinder.
Please find attached the pictures.
I do appreciate your help!
My simulation is briefly described as follows. A water slab of 6*6*7 nm3 has two surfactant (SDS) monolayers in the z direction; two vacuum regions with a height of 10 nm were then added below and above the slab to create the air/water surfaces. After 100 ns, the RDF of water oxygen around the sulfur atom in the SDS head group was calculated, but the g(r) value approached 2.5 rather than 1 after normalization. Could you give me some help or suggestions? Thank you.
There are several weighting factors which can be included in word embedding models in order to get useful and accurate semantic representations of terms. But when you have small data and synonyms or homonyms in your corpus, you generate noisy results. You can alleviate this problem by making use of already known semantic information, as available in ontologies (RDF, OWL) or in terminological databases (TBX, SKOS). I would be interested in your feedback on the best approaches for including existing semantic information in vector space algorithms (LSA, LDA) and models (GloVe, Word2Vec...), possibly using libraries like Gensim.
How can we apply a dimensionality-reduction technique to a matrix of common-sense knowledge that has been built using RDF/XML, e.g. SenticNet 3?
Hi,
I am studying the stabilization of proteins in different solvents using GROMACS. Attached is the RDF of the cation (black) and anion (red) around the protein.
I used "gmx_mpi rdf com rdf mol_com -s em.tpr -f md_Dhp_2system.xtc -cut 0.5 -cn -o radialdist_cation_Dhp_2_com.xvg -tu ns" to calculate the RDF. The plot does not show any peaks. Does this mean that the cation and anion are uniformly distributed beyond 0.5 nm from the protein?
Thanks.
I've been searching for ontologies (RDF) representing the EDXL standards, with emphasis on CAP and SitRep.
The only one I found available comes from COncORDE project:
. Overview:
. File:
There are other projects, but I couldn't find their RDF files:
- The RESCUER project:
- EPISECC project:
- US Department of Defense:
This last one is quite interesting since it uses BFO as a foundational ontology. Does anyone know if the RDF is available?
I want to use the Semantic Web to make a website about breast cancer.
I need recent statistics showing the growth of semantic data and RDF triple stores, like the one shown in this link: https://www.quora.com/How-fast-is-semantic-web-and-or-linked-data-growing-per-year
I have not been able to query an RDF document using XPath, since XPath does not work well with the RDF namespace, although RDF/XML is basically an XML document. SPARQL is used for querying RDF documents. What is the difference between XPath and SPARQL?
As far as I know, to arrive at an RDF for a given crop, scientists conducted hundreds of multi-location and multi-temporal crop trials with selected ranges of N, P and K. The NPK inputs corresponding to the highest physical yields were called RDFs.
Please guide me further in this regard.
Thank you in advance.
MKD
The Semantic Web employs RDF and ontologies for storing structured data, in contrast to HTML, which does not carry such structure. The data on the web is overwhelmingly in HTML format; further, there is no standardization. How can HTML data be converted into a standardized RDF format or an ontology, in order to integrate data from different existing resources?
Can anyone recommend peer-reviewed scientific papers presenting a comparative study of available graph databases?
I am working on a project that focuses on mental health conversations between counsellors and at-risk individuals. Although there are multiple mediums being used (i.e. audio, SMS txt, email), I am focusing on audio conversations only. Once I have created transcriptions of the conversations, I want to be able to contextualize them and the best way I have discovered from my research thus far is to transpose the conversations and their utterances into knowledge graphs, using a combination of techniques and tools (e.g. ML, NLP, LIWC etc.).
However, when exploring what kind of graph to use, I come across two primary forms to choose from: 1) Property graphs (e.g Neo4j) and 2) RDF triple stores (e.g. GraphDB).
What I am trying to do is put the conversational data in a format that allows me to explore areas at greater depth with clinical psychologists such as emotional states, measuring temporal changes in moods, rough sentiment analysis across individuals from different localities, gaining greater clarity of the general influences that are creating states of depression across a population.
Given the nature of human conversation and the domain within which these conversations are taking place (e.g. a depression or suicide hotline), what guidance might you have as to which graph database approach to use and why?
Or if you have thoughts on other approaches or avenues to explore, let me know.
Ontologies such as PROV-O manage the provenance of the resources in an RDF graph at the instance level, but I need to manage the provenance of each triple. Is there any good reference for doing this?
According to the site, it is mentioned that there are three types of RDF data that occur in triples: IRIs, literals and blank nodes.
1. When do we use an IRI rather than an ordinary literal or blank node?
I tried to calculate the RDF using the 'compute rdf' command, but it returns zero at all distances.
I tried it for a simple periodic monatomic metal crystal, but it returns zero too.
Does the RDF calculation depend on the pair style (not in the simulation, but only in the post-processing)?
If yes, which pair style should I use?
We are working on a new (contextual) enterprise semantics methodology, which appears more powerful and scalable than set theory-based methods, including RDF. I pose this question to check how new it really is.
I want to calculate the COM RDFs between my peptide and water, for which I have used the -com option in GROMACS (also tried with -com and -rdf res_com).
It generates a weird RDF which does not look correct. I want to take the first minimum and use the -cn option to get the coordination number.
The images of the RDFs are attached herein. What could possibly be wrong? The atom-atom RDFs are coming out well, but not the COM ones.
Additionally, if I use the -cn option of GROMACS, it shows the cumulative CN, right? Which means that if I read off the number at 0.257 (say that's the cut-off), the corresponding value would be the coordination number. Am I right?
What if I integrate the RDF using the g_analyze -integrate option?
Any ideas on how to do the integration in GROMACS?
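As an independent cross-check on -cn (a generic numerical sketch, not GROMACS code): the cumulative coordination number is n(r) = 4πρ ∫₀ʳ g(r') r'² dr', where ρ is the number density of the -sel group. Integrating the RDF curve directly with the trapezoidal rule and reading off the value at the first minimum should reproduce what -cn reports:

```python
import math

def coordination_number(r, g, rho):
    """Cumulative n(r) = 4*pi*rho * integral of g(r') * r'^2 dr',
    computed with the trapezoidal rule.
    r, g: equal-length lists (the RDF curve); rho: number density."""
    n = [0.0]
    for i in range(1, len(r)):
        f0 = g[i - 1] * r[i - 1] ** 2
        f1 = g[i] * r[i] ** 2
        n.append(n[-1] + 4.0 * math.pi * rho * 0.5 * (f0 + f1) * (r[i] - r[i - 1]))
    return n  # read off n(r) at the first minimum of g(r)
```

For an ideal gas (g = 1, ρ = 1) this reduces to (4/3)πr³, which is a convenient sanity check for the implementation.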
Hello all,
I have one question about an RDF calculation in an SMD simulation.
I was trying to perform an SMD simulation of a system containing a lactic acid molecule and a cyclic peptide nanotube, in which the lactic acid moves through the nanotube with constant velocity. I analyzed the radial distribution function (RDF) of the hydrogen atom of lactic acid at distance r from the oxygen atoms of the nanotube (there are 80 oxygen atoms along the nanotube). It indicates that the H atom has a first peak at 2 Å with a probability value of 60, and a second peak at 6 Å with a probability of 150. I did not expect the probability values to be so high.
Can anyone help me interpret the high probability values?
Thanks in advance,
I have aligned nucleotide sequences in MEGA format, and I need to construct a median-joining network in the Network software. Can anyone tell me how to make a *.rdf file and how to construct an MJ network in the Network software?
Apart from commercial frameworks like Oracle or SQL Server, I am looking for academic business intelligence data to conduct experimental research, such as applying data mining or optimization algorithms. Can you suggest links for such data?
Regards,
Rafid Sagban
Can you please recommend a good, possibly native, triple store database for RDF?
I use the RAP API for PHP to query my knowledge base (RDF, OWL, etc.), but I cannot find a way to apply the SWRL rules that I designed in Protégé. Somebody suggested that I use Jena.