Science topic
Semantic Web - Science topic
Researching the Web of Data
Questions related to Semantic Web
Hello Everyone,
I have a task set by our professor: to write an extended abstract on semantic network analysis. I cannot figure out how to narrow down my topic for making posters and extended abstracts on this subject. If anyone has experience with this topic, please share your thoughts.
Thanks in advance.
Regards
Ashish Kumar
It has apparently been proven that SQL is Turing-complete, via simulation of a cyclic tag system (see [1]).
There are also approaches that convert relational structures to OWL (e.g. [2], [3]).
Can one conclude that any algorithm can be defined in OWL or in one of its derivatives?
Does anyone know of a paper on this?
Thanks!
Best,
Felix Grumbach
The term "Semantic Web" goes back to at least 1999 and the idea – enable machines to "understand" information they process and enable them to be more useful – is much older. But still we do not have universal expert systems, despite that they would be very advantageous, especially in the context of (interdisciplinary) research and teaching.
My impression is, that from the beginning semantic web technologies was dominated by Java-based tools and libraries and that the situation barely changed until today (2022): E.g. most of practical ontology development/usage seems to happen inside Protegé or by using OWLAPI.
However, in the same time span we have seen a tremendous success of numerical AI (often called "machine learning") technologies and here we see a much greater diversity of involved languages and frameworks. Also, the numerical AI community has grown significantly in the last decade.
I think this is largely because it is simple to get started with those technologies, and Python (interfaces) and Jupyter notebooks contribute significantly to this. Also, Python greatly simplifies programming (calling a library function and piping the results to the next) for people who are not programmers by training, such as physicists, engineers etc.
On the other hand, getting started with semantic technologies is (in comparison) much harder: e.g. a lot of (seemingly) outdated documentation and the lack of user-friendly tools for achieving quick, motivating results must be overcome in the process.
My thesis is therefore that having an ecosystem of low-threshold Python-based tools available could help to unleash the vast potential of semantic technologies. It would help to grow the semantics community and to enable more people to contribute content such as (patches to) domain ontologies, sophisticated queries and innovative applications, e.g. combining Wikidata and SymPy.
Over the past months I have collected a number of semantics-related Python projects, see https://github.com/pysemtec/semantic-python-overview. So the good news is: there is something to use and grow. However, the amount of collaboration between those projects seems to be low, and something like a semantic-python community (i.e. people who are interested in both semantic technologies and Python programming) is still missing.
Because complaining alone rarely improves the situation, I am trying to start such a community, see https://pysemtec.org.
What do you think: can Python help to generate more useful applications of semantic technology, especially in science? What has to happen for this? What are possible counter-arguments?
P1: Ontology + Data = Knowledge Graph (KG)
P2: If a KG is the sum of these two summands, it follows:
C: An ontology is a framework for a KG
As a framework, an ontology consists of individuals (e.g. person data), classes, properties, relations and axioms.
Individuals: Tom
Classes: Interim project manager
Properties: takes over the project
Relationships: Tom is the successor of Bernd
Axioms: Tom takes over the project from Bernd
KG: We ask a question and the knowledge graph makes connections to the individual elements of the Ontology. It brings them to life, so to speak.
If you compare the KG with our neural network, you can see similarities. If I ask Tom a question, he will use his neural network to answer this question and generate new ideas.
That's why Knowledge Graphs are also defined as a kind of semantic network.
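For illustration, here is a minimal sketch in Java with Apache Jena (namespace and names are hypothetical, mirroring the example above): the ontology part declares a class and a property, the data part adds the individuals, and the combined model is the knowledge graph.
import org.apache.jena.rdf.model.*;
import org.apache.jena.vocabulary.RDF;
import org.apache.jena.vocabulary.RDFS;

public class TinyKnowledgeGraph {
    public static void main(String[] args) {
        String ns = "http://example.org/";
        Model kg = ModelFactory.createDefaultModel();
        // Ontology part: a class and a property (the "framework")
        Resource managerClass = kg.createResource(ns + "InterimProjectManager");
        managerClass.addProperty(RDF.type, RDFS.Class);
        Property successorOf = kg.createProperty(ns + "isSuccessorOf");
        // Data part: the individuals Tom and Bernd
        Resource tom = kg.createResource(ns + "Tom", managerClass);
        Resource bernd = kg.createResource(ns + "Bernd");
        tom.addProperty(successorOf, bernd);
        // Ontology + data together: the knowledge graph
        kg.write(System.out, "TURTLE");
    }
}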
Does the community agree with this approach? Please give feedback! Thanks!
In Linked Open Data there are many links between resources: some resources are linked directly and others are linked indirectly, through a third resource. Some similarity measures for resources depend on the numbers of direct and indirect links, and I am asking if there is any evidence or justification for giving more importance (weight) to direct links than to indirect links when computing similarity.
Actually, I am working in the solid waste management area. Although there are many indicators, data availability is a concern. Kindly help me with indicators that I can easily work on with the available data. I am working on the smart cities of northeast India. Thank you.
Advancing Informatics for healthcare and healthcare applications has become an international research priority. There is increased effort to transform reactive care to proactive and preventive care, clinic-centric to patient-centered practice, training-based interventions to globally aggregated evidence, and episodic response to continuous well-being monitoring and maintenance.
ICSH 2015 (International Conference for Smart Health 2015) is intended to provide a forum for the growing smart health research community to discuss the principles, algorithms and applications of intelligent data acquisition, processing, and analysis of healthcare data.
The conference proceedings will be published by Springer Lecture Notes in Computer Science (LNCS). The electronic conference proceedings will be provided at the time of registration. The published proceedings will be sent to each registrant later by organizers. Selected papers will also be considered for IEEE Intelligent Systems and ACM Transactions on Management Information Systems.
---------------------------------------------------------------------------------------
Important Dates:
Paper Submission: SEPTEMBER 6, 2015 (EXTENDED)
Notification of acceptance: SEPTEMBER 26, 2015
Conference dates: NOVEMBER 17-18, 2015
------------------------------------------------------------------------------------
Topics of Interest for the conference include, but are not limited to (see the conference website at http://icsh2015.org for additional suggestions):
I. Information sharing, integration and extraction
♦ Patient education, learning and involvement
♦ Consumer and clinician health information needs, seeking, sharing, and use
♦ Healthcare knowledge abstraction, classification and summarization
♦ Effective Information retrieval for healthcare applications
♦ Natural language processing and text mining for biomedical and clinical applications, EHR, clinical notes, and health consumer texts
♦ Intelligent systems and text mining for electronic health records
♦ Health and clinical data integrity, privacy and representativeness for secondary use of data
II. Clinical practice and training
♦ Virtual patient modeling for learning, practicing and demonstrating care practices
♦ Medical recommender systems
♦ Text mining clinical text for innovative applications (patient monitoring, recommender systems for clinicians, adverse effects monitoring)
♦ Mental and physical health data integration
♦ Computer-aided diagnosis
♦ Computational support for patient-centered and evidence-based care
♦ Disease profiling and personalized treatment
♦ Visual analytics for healthcare
♦ Transdisciplinary healthcare through IT
III. Mining clinical and medical data
♦ Data augmentation and combination for evidence-based clinical decision making
♦ Biomarker discovery and biomedical data mining
♦ Semantic Web, linked data, ontologies for healthcare applications
♦ Software infrastructure for biomedical applications (text mining platforms, semantic web, workflows, etc)
♦ Intelligent Medical data management
♦ Computational intelligence methodologies for healthcare
IV. Assistive, persuasive and intelligent devices for medical care and monitoring
♦ Assistive devices and tools for individuals with special needs
♦ Intelligent medical devices and sensors
♦ Continuous monitoring and streaming technologies for healthcare
♦ Computer support for surgical intervention
♦ Localized data for improving emergency care
♦ Localization, persuasion and mobile approaches to increasing healthy life styles and better self-care
♦ Virtual and augmented reality for healthcare
V. Global systems and large-scale health data analysis and management
♦ Global spread of disease: models, tools and interventions
♦ Data analytics for clinical care
♦ Systems for Telemedicine
♦ Pharmacy informatics systems and drug discovery
♦ Collaboration technologies for healthcare
♦ Healthcare workflow management
♦ Meta-studies of community, national and international programs
------------------------------------------------------------------------------
MORE INFORMATION: see the conference website at http://icsh2015.org.
To survey a particular fact, such as the perception of a phenomenon, one often resorts to an interview questionnaire. In the digital age, tools such as Google Forms, KoboCollect and many others have been developed. From a scientific point of view, is it preferable to administer an interview questionnaire through communication channels (SMS, email, social networks, ...), or is it better to administer it in the field?
While there are multiple Java implementations for managing semantic knowledge bases (HermiT, Pellet, ...), there seem to be almost none in pure JavaScript.
I would prefer to use JS rather than Java in my project, since I find JS much cleaner, more practical and easier to maintain. Unfortunately, there seems to be almost nothing to handle RDF data together with rule inference in JavaScript. Although there is some work handling plain RDF (https://rdf.js.org/), it is unclear what the status of this work is with respect to the W3C specifications.
I actually want to enable users to define rules for their resources on a social network. These rules will be used by the proposed model to share users' resources across different social networks.
In modern-day knowledge engineering solutions, what are the importance, necessity and feasibility of using ontologies to protect cultural heritage?
CFP - Call for Papers for the 2nd Iberoamerican Knowledge Graph and Semantic Web Conference @KGSWC2020 @KGSWC
July 28-31, 2020, Mérida, Yucatán, Mexico
Do you know of research papers dealing with the reasons for the Semantic Web's failure, the disinterest in it, or the shift of interest toward knowledge graphs?
How can a triple store be converted into a quad store?
How is a default graph converted into a named graph?
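As a sketch of one possible route with Apache Jena (file name and graph URI are hypothetical): read the existing triples into a model, then attach that model to a dataset under a graph name; the dataset holds quads, with the former default-graph content now in a named graph.
import org.apache.jena.query.Dataset;
import org.apache.jena.query.DatasetFactory;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class TriplesToQuads {
    public static void main(String[] args) {
        // Load the triple-store content (the default graph)
        Model triples = ModelFactory.createDefaultModel();
        triples.read("data.ttl");
        // Attach it to a dataset under a graph name: each triple becomes a quad
        Dataset ds = DatasetFactory.create();
        ds.addNamedModel("http://example.org/graphs/g1", triples);
        System.out.println("Named graph: " + ds.listNames().next());
    }
}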
These knowledge representation forms can be considered traditional, because they date back to the early, mainly symbolic, period of Artificial Intelligence. Are they applied widely enough to justify their inclusion in a general undergraduate course for Computer Science students?
I work on graph-based knowledge representation. I would like to know how we can apply Deep Learning to Resource Description Framework (RDF) data and what we can infer this way. Thanks in advance for your help!
In my academic research on "Estimating an optimal waiting time of insurance claims in customer retention using queuing models", I am stuck on where I can source data. The data I am looking for has to have the dates the claims were submitted, settled and paid out, and whether a client has cancelled the insurance policy or not. This assignment is due on the 14th of June, 2019. Is there anyone who is willing to help with the data or links?
Since 2000, beginning with Mizoguchi and Bourdeau's seminal publication "Using Ontological Engineering to Overcome Common AI-ED Problems" and other researchers' publications, ontologies and the Semantic Web have been applied to education, mainly in virtual learning environments. Have those applications reached the expected results? What is your opinion?
Hey all,
I am trying to fine-tune DeconvNet (a learned deconvolutional network for semantic segmentation, trained on 20-class data on PASCAL VOC). While fine-tuning with my own data (4 classes), the validation accuracy achieved is around 99 percent. However, when inferencing/testing the trained model, the output image comes out all black. The data is equally distributed across all available classes. I am doing this in Caffe. Can I get help if anyone has faced this issue?
I'm developing an ontology for the OPC UA standard and I would like to know if there are already existing versions of such an ontology.
I am working on the Semantic Web: I generate a triple dataset in Protégé, load it into Eclipse and use Jena inference rules to query the data. Everything goes well, but I am not able to get results at the interface; when I enter a query in the search box I get the error
WARN [AWT-EventQueue-0] (Log.java:80) - setResultVars(): no query pattern
I have investigated the code carefully and did not find any bug. When a query is searched, it is converted and matched in the back end, but the output is not shown.
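For what it's worth, that Jena warning is typically logged when a SELECT query is parsed without a WHERE pattern, so check the query string your interface builds. A minimal sketch of a well-formed query over an inference model (file name and property URI are hypothetical):
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.*;
import org.apache.jena.reasoner.ReasonerRegistry;

public class InfQueryDemo {
    public static void main(String[] args) {
        Model base = ModelFactory.createDefaultModel();
        base.read("data.owl"); // ontology/data exported from Protege
        InfModel inf = ModelFactory.createInfModel(ReasonerRegistry.getOWLReasoner(), base);
        // The WHERE clause supplies the query pattern; without it the warning appears
        String q = "SELECT ?s ?o WHERE { ?s <http://example.org/p> ?o }";
        try (QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(q), inf)) {
            ResultSetFormatter.out(qe.execSelect());
        }
    }
}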
I want to use Semantic Web technologies to make a website about breast cancer.
I need some new statistics showing the growth of semantic data and RDF triple stores, like the ones shown in this link: https://www.quora.com/How-fast-is-semantic-web-and-or-linked-data-growing-per-year
The Semantic Web employs RDF and ontologies for storing structured data, in contrast to HTML, which does not carry any such structure. The data on the web is overwhelmingly in HTML format, and there is no standardization. How can HTML data be converted into a standardized RDF format or ontology for integrating data from different existing resources?
I want to extract data from the internet containing specific medical terms. Which available tools offer the best accuracy or output?
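One low-tech route to the HTML-to-RDF question, sketched here with jsoup for scraping and Jena for RDF output (URL and vocabulary choice are illustrative only): extract the needed fields from the HTML and re-state them as RDF triples under a standard vocabulary such as Dublin Core.
import org.apache.jena.rdf.model.*;
import org.apache.jena.vocabulary.DCTerms;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class HtmlToRdf {
    public static void main(String[] args) throws Exception {
        String url = "http://example.org/page"; // hypothetical page
        Document page = Jsoup.connect(url).get();
        Model m = ModelFactory.createDefaultModel();
        Resource r = m.createResource(url);
        // Re-state an extracted HTML field as an RDF triple
        r.addProperty(DCTerms.title, page.title());
        m.write(System.out, "TURTLE");
    }
}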
What do you think about it?
In my opinion, this is possible thanks to the Semantic Web but, to get this to work, you must first insert data so that it can be processed, and usually these data are inserted by people. But what would happen if the computer itself wrote its own questions and managed to answer them correctly? It would be an evolutionary step of great magnitude.
Hi,
SIREn, a semantic information retrieval search engine, worked on my university computer from September to November 2013, but after that it stopped working due to security and privacy issues. I am wondering how I can resolve this privacy issue, because the maintenance team could not do it at the time.
As you all know from me on Facebook in 2013 and here on ResearchGate, SIREn is a semantic information retrieval search engine built on Lucene and Solr.
Thanks & Best Wishes
Osman
I am unable to run HiTeX with the UMLS database using the GATE GUI; any suggestions or external resources which I could refer to will be helpful.
Thanks in Advance...
I am using Protégé 3.4. I built my ontology and extended it with SWRL rules using SWRL and SQWRL built-ins. The rules classify instances of a class of the ontology; the RHS is a class assertion for an instance. It works successfully, but when I change the values of the clauses in the LHS, no reclassification is done: the instance stays asserted to the same class.
Example SWRL rules:
Message(?m) ^ hasInterest(?m,?i) ^ hasCategory(?m,?c) ^ sqwrl:makeSet(?s1,?i) ^ sqwrl:makeSet(?s2,?c) ^ sqwrl:intersection(?s3,?s1,?s2) ^ sqwrl:size(?n,?s3) ^ swrlb:greaterThan(?n,0) -> Ham(?m)
Message(?m) ^ hasInterest(?m,?i) ^ hasCategory(?m,?c) ^ sqwrl:makeSet(?s1,?i) ^ sqwrl:makeSet(?s2,?c) ^ sqwrl:difference(?s3,?s1,?s2) ^ sqwrl:size(?n,?s3) ^ swrlb:greaterThan(?n,0) -> Spam(?m)
So once the message instance (m1) is classified as Ham, for example with i = sports and c = sports, it will always remain Ham even when I change the value of i (interests) to movies for that instance. I understand that this is because the class type is asserted. So my questions are: why does this happen, and how can I reclassify instances, as I need a dynamic way of classifying messages?
I am trying to get all 'subclass-of' axioms of an ontology, using the following statement:
myOntology.getAxioms(AxiomType.SUBCLASS_OF);
It does return the ontology's 'subclass-of' axioms, except for the first 'subclass-of' axiom, which links owl:Thing with my first ontology class.
I cannot understand why this link is not taken into account in this case.
Is there any way to get all 'subclass-of' axioms, including those linking owl:Thing with other classes?
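One likely explanation: getAxioms() returns only asserted axioms, and the link between a top-level class and owl:Thing is usually entailed rather than asserted, so it never appears in that list. A hedged sketch that asks a reasoner instead (OWLAPI 4 method names, hypothetical file path):
import java.io.File;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import org.semanticweb.owlapi.reasoner.structural.StructuralReasonerFactory;

public class AllSuperClasses {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager man = OWLManager.createOWLOntologyManager();
        OWLOntology onto = man.loadOntologyFromOntologyDocument(new File("onto.owl"));
        OWLReasoner reasoner = new StructuralReasonerFactory().createReasoner(onto);
        for (OWLClass c : onto.getClassesInSignature()) {
            // direct = true: immediate superclasses; owl:Thing shows up for top-level classes
            for (OWLClass sup : reasoner.getSuperClasses(c, true).getFlattened()) {
                System.out.println(c + " subClassOf " + sup);
            }
        }
    }
}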
Are there any NLP libraries that are capable of extracting RDF-style triples {subject, predicate, object} from text?
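Stanford CoreNLP's OpenIE annotator is one option; a minimal sketch (the input sentence is illustrative):
import java.util.Properties;
import edu.stanford.nlp.ie.util.RelationTriple;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.naturalli.NaturalLogicAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;

public class TripleExtraction {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize,ssplit,pos,lemma,depparse,natlog,openie");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
        Annotation doc = new Annotation("Tim Berners-Lee proposed the Semantic Web.");
        pipeline.annotate(doc);
        for (CoreMap sentence : doc.get(CoreAnnotations.SentencesAnnotation.class)) {
            // Each RelationTriple is a subject-predicate-object candidate
            for (RelationTriple t : sentence.get(NaturalLogicAnnotations.RelationTriplesAnnotation.class)) {
                System.out.println(t.subjectGloss() + " | " + t.relationGloss() + " | " + t.objectGloss());
            }
        }
    }
}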
Hello. Suppose you have a user with a list of friends, a list of favorite resources, and a list of evaluations of these resources. What data mining technique allows me to answer these questions: 1) do two friends have the same favorite resource list? 2) do these two friends evaluate the same resources? Does the Apriori algorithm allow me to answer this?
If not, what is the technique that allows me to do this?
Thank you in advance for your answers.
I want to modify an existing ontology in the Protégé tool and then add individuals to it. After developing the framework in Protégé, I want to add a large number of individuals (more than a thousand).
So, for adding the individuals, I am planning to use the Jena RDF API.
Is it possible to extend the already built ontology from the Jena API?
Please provide an example.
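A minimal sketch of the idea with Jena's ontology API (file names, namespace and class name are hypothetical): load the ontology built in Protégé, look up an existing class, and create individuals against it in a loop.
import java.io.FileOutputStream;
import org.apache.jena.ontology.OntClass;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.rdf.model.ModelFactory;

public class AddIndividuals {
    public static void main(String[] args) throws Exception {
        String ns = "http://example.org/onto#";
        OntModel m = ModelFactory.createOntologyModel();
        m.read("file:existing-onto.owl"); // the ontology developed in Protege
        OntClass person = m.getOntClass(ns + "Person");
        // Create many individuals of the existing class
        for (int i = 0; i < 1000; i++) {
            m.createIndividual(ns + "person_" + i, person);
        }
        m.write(new FileOutputStream("extended-onto.owl"), "RDF/XML");
    }
}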
Hi ,
I know the GATE library has some support for non-English ontologies, such as Arabic. I am wondering if there is another library package for Arabic ontologies?
Arabic plugin
How do I create RDF with GATE? documentation
Thanks
How could we build a framework, check its performance and ensure the framework is better than previous ones? Are there specific criteria? My topic is related to the Semantic Web, specifically semantic annotation.
I am looking for metadata structures suitable to describe historical tattoos. Are you aware of any projects which developed a metadata schema for such a purpose? Many thanks for your help.
Does anyone know about some existing stop-word vocabularies?
I am interested in doing some keyword text mining work and I was wondering if there are some existing stop-word vocabularies that I can use to reduce the noise from the data.
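If Java is an option: Lucene ships a default English stop-word set that can serve as a starting point (a sketch; package names as in recent Lucene versions, and NLTK and most IR toolkits bundle similar lists):
import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.en.EnglishAnalyzer;

public class StopWordsDemo {
    public static void main(String[] args) {
        // Lucene's built-in English stop-word list
        CharArraySet stops = EnglishAnalyzer.getDefaultStopSet();
        System.out.println(stops.size() + " stop words; contains 'the': " + stops.contains("the"));
    }
}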
I want to test the efficiency of a web-based system. Some tasks would determine the result of the evaluation in a post-test design.
I would also like to evaluate the system using TAM, but not by survey (in a post-test-only design).
The web-based system which I want to evaluate is attached below:
I'm building a system that handles millions of requests from other vendors. A request comes in from a vendor, the system processes it, calls other services and then responds. The problem I have is that our business rules are complex, so our system goes down when it gets too many requests. Currently it can handle 20~25 requests per second.
We are using Java Vert.x, RabbitMQ, Memcached, a MySQL database and jOOQ, following a microservice approach.
Can you give some ideas to increase the request throughput?
I'm building an ontology and I need to create the same semantic relation (the name of the relation is the same as well as the meaning in the domain) between different classes of elements. For example:
o:ClassA o:hasSemanticRelation xsd:string
o:ClassB o:hasSemanticRelation xsd:string
o:ClassC o:hasSemanticRelation xsd:string
My first approach was to create multiple domains for the property, but this actually means the intersection of the concepts, which is not correct in the domain. My second approach was to have a super-property:
owl:Thing o:hasSemanticRelation xsd:string
o:hasSemanticRelationA owl:subPropertyOf o:hasSemanticRelation
o:ClassA o:hasSemanticRelationA xsd:string
Because of the meaning of hasSemanticRelation, I want every use of it to be traceable to the same property, i.e., o:hasSemanticRelation.
Could anyone give ideas on how I can best represent this situation?
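For what it's worth, the second approach can be stated so that the super-property carries no domain axiom at all (then no unintended intersection is inferred), with domains only on the sub-properties. A minimal Jena sketch (hypothetical namespace):
import org.apache.jena.ontology.DatatypeProperty;
import org.apache.jena.ontology.OntClass;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.rdf.model.ModelFactory;

public class SubPropertyPattern {
    public static void main(String[] args) {
        String ns = "http://example.org/o#";
        OntModel m = ModelFactory.createOntologyModel();
        OntClass classA = m.createClass(ns + "ClassA");
        DatatypeProperty top = m.createDatatypeProperty(ns + "hasSemanticRelation");
        DatatypeProperty forA = m.createDatatypeProperty(ns + "hasSemanticRelationA");
        forA.addSuperProperty(top); // every hasSemanticRelationA is a hasSemanticRelation
        forA.addDomain(classA);     // domain constraint only on the sub-property
        m.write(System.out, "TURTLE");
    }
}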
I am looking for any ontologies/RDF schema that describe the capabilities/dynamic nature of QR codes.
I'm looking for any surveys related to Semantic relatedness/similarity algorithms/measures or methods for RDF graphs.
Thanks in advance,
I am in the very initial stages of learning the Semantic Web. I would like to create a standard eCommerce website via WooCommerce and then apply Semantic Web technologies to it. I am looking for a pre-built ontology for any product published online, for example a shoe with its variations.
Hi, I am trying to prepare a program for PPSWR sampling (probability proportional to size, with replacement) with distinct units, using the cumulative total method, for my Ph.D. work. Please help me write this type of program.
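A minimal sketch of the cumulative total method in Java (the size measures are made up): build cumulative totals, draw uniform random numbers over the grand total, select the unit whose cumulative interval contains each draw, and keep the distinct units.
import java.util.LinkedHashSet;
import java.util.Random;
import java.util.Set;

public class PpswrCumulative {
    public static void main(String[] args) {
        double[] size = {10, 25, 5, 40, 20};     // auxiliary size measures (hypothetical)
        double[] cum = new double[size.length];
        double total = 0;
        for (int i = 0; i < size.length; i++) { total += size[i]; cum[i] = total; }

        int n = 3;                                // number of draws, with replacement
        Set<Integer> distinct = new LinkedHashSet<>();
        Random rng = new Random();
        for (int d = 0; d < n; d++) {
            double r = rng.nextDouble() * total;  // uniform draw in [0, total)
            int j = 0;
            while (cum[j] < r) j++;               // unit whose cumulative interval contains r
            distinct.add(j);
        }
        System.out.println("Selected distinct units: " + distinct);
    }
}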
Ontologies are recognized as a means of knowledge modeling and representation in the Semantic Web. Moreover, they have been the subject of several works on the visualization of their content. My question concerns the way in which it would be possible to evaluate a data visualization system in general, and an ontology visualization system in particular.
In the Semantic Web arena, Big Data is recognized as a truly big challenge. So, we have a pressing need to store and manage huge amounts of data in the form of trillions of triples. It is more flexible to use triple stores than classic databases, since the data are connected to each other in a graph-like form. In this context, the notion of a Semantic Data Lake has emerged. My question is about a concrete definition of a Semantic Data Lake.
I am developing an ontology for semantic searching of products and services for the financial sector. I need to know how to embed the ontology in the search interface to verify the search results; more precisely, how to apply the ontology in web pages for real-time queries.
I have an RDF graph stored in Fuseki. This graph has resources describing metadata fields from research articles, such as title, authors, abstract, etc. I also have the corresponding PDF file for each resource in the RDF graph. So, I need to build a query over the RDF graph and retrieve the corresponding PDF files, or vice versa. Is there any way to do that?
PS: The RDF graph is stored in Fuseki, and the PDF files are indexed in Apache Solr.
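One way to sketch this (endpoint, property and identifiers are hypothetical): store a triple on each article resource pointing to the identifier under which its PDF is indexed in Solr, query Fuseki for that identifier via SPARQL, then use it as the key for the Solr lookup.
import org.apache.jena.query.*;

public class PdfLookup {
    public static void main(String[] args) {
        String endpoint = "http://localhost:3030/ds/sparql"; // Fuseki SPARQL service
        String q = "SELECT ?pdfId WHERE { <http://example.org/article/1> "
                 + "<http://example.org/hasPdfId> ?pdfId }";
        try (QueryExecution qe = QueryExecutionFactory.sparqlService(endpoint, q)) {
            ResultSet rs = qe.execSelect();
            while (rs.hasNext()) {
                String pdfId = rs.next().get("pdfId").toString();
                // pdfId is the key for the corresponding document indexed in Solr
                System.out.println("Fetch from Solr with id: " + pdfId);
            }
        }
    }
}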
Semantic web for e-learning systems.
Ontology for e-learning systems.
Is there any tool which can convert accumulated knowledge from a domain expert into a machine-readable format (RDF, OWL)?
Thanks for sharing your valuable information.
I'm working with Linked Data (DBpedia in my case) and want to know whether this measure is the best among others such as edge counting, information content, or hybrid measures.
I need:
1- A set of natural language (NL) rules or a legal text made of NL sentences
2- The formal requirements extracted from these NL rules (or from the legal document)
3- Related resources (e.g. terminologies, ontologies) required for the transformation NL -> formal expression
I am working on a problem where Semantic Web Services (SWS) could possibly help, and was surprised to see how little has changed in this area since 2008. SWS used to be a relatively hot topic back then, and now, it seems, it has mostly fizzled out. I don't see a single paper at the upcoming ISWC conference related to SWS.
What is the maturity of existing tools for SWS (OWL-S in particular) composition/matchmaking? Is there any active user group/mailing list devoted to SWS?
Thanks,
Jakub
Hello!
Does anyone know a free-of-charge PaaS that can be used to develop semantic applications using CNL (Controlled Natural Language), OWL and RDF? Once developed, the application would be hosted in the Cloud. Something similar to what Ontorion Fluent Editor provides, but for the Cloud. Basically my requirements are:
- users should be able to easily define IF/THEN rules from a web browser
- an API should be available so that I can load about 100,000 instances into the Knowledge Base through a program
- an existing ontology described in OWL or RDF can be imported
- an inference engine should exist and allow users to easily run queries (SWRL, SPARQL, etc.) from the same web browser
- a web interface for users can be developed and customized
- all the functionalities described above are available in the Cloud and can be easily configured in terms of CPUs, RAM, etc.
Thank you
Regards,
Sorin
Big Data is a big challenge for the Semantic Web. We need to store trillions of triples/quads in a store. It is more than a data warehouse, as the data are connected to each other to form a graph; hence the term semantic data lake was coined. Query answering and inferencing at scale then become a real challenge. Hadoop is a good choice for achieving scalability using commodity hardware. But my question focuses on the concrete technique for implementing a semantic data lake using Hadoop.
Dear all,
I am working on a research project related to semantic personalization.
In this context, I am wondering about the main differences between the two knowledge bases in terms of technical strengths and weaknesses.
In other words, which knowledge base is more mature than the other, and which one has more available APIs to be used and explored?
Thank you in advance!
I am doing a project on composite web services, for which I need to measure, monitor and report on QoS in order to improve it for web services.
Keeping semantic heterogeneity in mind, what are all the possible approaches?
I would like to know about alternative algorithms to semantic spreading activation that work in the same field of information retrieval.
As OWL is based on the open-world assumption, it will classify entities rather than validate them in the classic way, since it assumes an incomplete knowledge base.
This characteristic has caused me great problems when considering using OWL for MDE (Model-Driven Engineering) to generate a form-based knowledge management system.
With forms, the user WANTS validation, which must be rather strict and real-time. The domain and range conditions won't help:
"The fact that domain and range conditions do not behave as constraints and the fact that they can cause ‘unexpected’ classification results can lead problems and unexpected side effects." (A Practical Guide To Building OWL Ontologies Using Protégé 4 and CO-ODE Tools, p36)
I've found Pellet Integrity Constraints (http://clarkparsia.com/pellet/icv/) which adds stricter constraint features to OWL.
This is where the Semantic Web has lost me. Do I need to extend an already complex system in order to achieve something as simple and common as classic validation?
Have I missed something obvious, or is OWL really unsuitable (or at least very painful) to model closed world systems?
I can see why OWL and the OWA work the way they do and that knowledge bases that aggregate information from different sources can benefit from this concept. However, most systems are (for good reasons) closed world.
How can one generate a key from a web resource for linking with a semantic approach? And how can normal web content like HTML and XML be converted to RDF/OWL? I need help developing this semantic linking of web content using self-generated keys, and with implementing this paper: KD2R?
my code:
String sparqlQueryString8 =
    "PREFIX sumo: <http://www.ontologyportal.org/SUMO.owl#> " +
    "PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> " +
    "SELECT ?c WHERE { ?c rdf:type sumo:City }";
OntModel modelcity = ModelFactory.createOntologyModel();
modelcity.read("SUMO.owl");
Query querycity = QueryFactory.create(sparqlQueryString8);
// pass the parsed Query object (the original passed the raw string and never used querycity)
QueryExecution qexeccity = QueryExecutionFactory.create(querycity, modelcity);
ResultSet resultcity = qexeccity.execSelect();
while (resultcity.hasNext()) {
    QuerySolution sltcity = resultcity.next();
    System.out.println(sltcity.toString());
}
qexeccity.close(); // release resources when done
please help me.
Actually, I'm getting lost with domain and range semantics when a subsumption exists, in addition to restriction inheritance between class taxonomy members. Please see the following cases.
Let's consider
(1) hasCar Domain driver
(2) driver subClassOf human
Then, can we infer that
hasCar Domain human?
Take any x with hasCar(x, y): from (1), driver(x), and from (2), human(x).
So, whatever x is, hasCar(x, y) => driver(x) => human(x), hence
(3) hasCar Domain human
First Question: Is this conclusion correct? Why is neither Protégé 5 with HermiT (nor Pellet, nor even Jena with some reasoner) inferring that?
------------------------------------------------------------------------------------------------
Let's consider
(1) hasAudiCar Range AudiCar
(2) AudiCar subClassOf Car
In a similar fashion, we can infer that
(3) hasAudiCar Range Car
Second Question: Is this conclusion correct? Why is neither Protégé 5 with HermiT (nor Pellet, nor even Jena with some reasoner) inferring that?
-------------------------------------------------------------------------------------------------
Let's consider
(1) hasAudiCar Domain driver
(2) hasAudiCar Range audiCar
(3) driver hasAudiCar min 1 audiCar
(4) audiCar subClassOf car
Then, we can infer that
driver hasAudiCar min 1 car
Third Question: Is this conclusion correct? Why is neither Protégé 5 with HermiT (nor Pellet, nor even Jena with some reasoner) inferring that?
Surprisingly, using Jena with the specification OntModelSpec.OWL_DL_MEM_RULE_INF gives my expected results!
Hi,
I have a set of ontologies related to the Cultural Heritage domain created by technical experts, and a textual corpus written by archaeological experts. My problem is that the ontologies need to be filled with archaeological knowledge (which I don't know much about), so I'll use the archaeological texts to try to extract the needed information.
I need your recommendations about methods of information extraction.
As for the ontologies, are there any heuristics to fill an ontology automatically? (I have the T-Box and have to generate the A-Box.)
Thank you for your interest,
Best regards.
How can I say that a particular tweet is a rumor? I don't want to use any supervised knowledge to identify rumors.
SUMO OWL = (T-Box) + (A-Box),
but I want only the T-Box.
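A hedged sketch of one way to do this with the OWL API (version-4 method names; file paths hypothetical): copy only the T-Box axioms into a fresh ontology and save it.
import java.io.File;
import java.io.FileOutputStream;
import java.util.Set;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;
import org.semanticweb.owlapi.model.parameters.Imports;

public class TboxOnly {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager man = OWLManager.createOWLOntologyManager();
        OWLOntology sumo = man.loadOntologyFromOntologyDocument(new File("SUMO.owl"));
        // Keep schema-level (T-Box) axioms only; A-Box assertions are dropped
        Set<OWLAxiom> tbox = sumo.getTBoxAxioms(Imports.EXCLUDED);
        OWLOntology tboxOnly = man.createOntology(tbox);
        man.saveOntology(tboxOnly, new FileOutputStream("SUMO-tbox.owl"));
    }
}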