Science topic

Semantic Web - Science topic

Researching the Web of Data
Questions related to Semantic Web
  • asked a question related to Semantic Web
Question
5 answers
Hello Everyone,
I have a task provided by our professor: we want to write an extended abstract on semantic network analysis. I am unable to understand how I can narrow down my topic to produce posters and extended abstracts on it. If anyone has experience with this topic, please share your thoughts.
Thanks in advance.
Regards
Ashish Kumar
Relevant answer
Answer
Then I suggest starting a discussion. For example, your headline is a combination of semantic networks and network analysis. Both are distinct areas. Semantic networks have vertices and edges with assigned meaning, whereas network analysis deals with mathematical objects within a network, such as hubs, shortest paths, betweenness, cliques, clusters, percolation and random walks.
Regards,
Joachim
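To make the distinction concrete, here is a minimal sketch (a hypothetical mini-network, not from the thread) in which the labeled edges carry the "semantic network" meaning, while networkx computes classic network-analysis measures on the same graph:
import networkx as nx

g = nx.Graph()
g.add_edge("dog", "mammal", relation="is_a")
g.add_edge("cat", "mammal", relation="is_a")
g.add_edge("mammal", "animal", relation="is_a")
g.add_edge("dog", "fur", relation="has_part")

print(nx.betweenness_centrality(g))          # which vertices act as hubs/brokers
print(nx.shortest_path(g, "cat", "animal"))  # path between two concepts
A poster could, for instance, contrast what the edge labels tell you with what the purely structural measures tell you about the same graph.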
  • asked a question related to Semantic Web
Question
1 answer
It has apparently been proven that SQL with cyclic tags is Turing-complete (see [1]).
There are also approaches that convert relational structures to OWL (e.g. [2], [3]).
Can one conclude that one can define any algorithm in OWL or in one of its derivatives?
Does anyone know a paper?
Thanks!
Best,
Felix Grumbach
Relevant answer
Answer
hi,
The W3C Web Ontology Language (OWL) is a Semantic Web language designed to represent rich and complex knowledge about things, groups of things, and relations between things.
OWL Full allows free mixing of OWL with RDF Schema and, like RDF Schema, does not enforce a strict separation of classes, properties, individuals and data values. OWL DL puts constraints on the mixing with RDF and requires disjointness of classes, properties, individuals and data values.
Kindly refer to this link:
Best wishes..
  • asked a question related to Semantic Web
Question
7 answers
The term "Semantic Web" goes back to at least 1999 and the idea – enable machines to "understand" information they process and enable them to be more useful – is much older. But still we do not have universal expert systems, despite that they would be very advantageous, especially in the context of (interdisciplinary) research and teaching.
My impression is, that from the beginning semantic web technologies was dominated by Java-based tools and libraries and that the situation barely changed until today (2022): E.g. most of practical ontology development/usage seems to happen inside Protegé or by using OWLAPI.
However, in the same time span we have seen a tremendous success of numerical AI (often called "machine learning") technologies and here we see a much greater diversity of involved languages and frameworks. Also, the numerical AI community has grown significantly in the last decade.
I think, to a large part this is, because it is simple to getting started with those technologies and Python (-Interfaces) and Jupyter-Notebook contribute significantly to this. Also, Python greatly simplifies programming (calling a library function and piping results to the next) for people who are not programmers by training such as physicists, engineers etc.
On the other hand getting started with semantic technologies is (in comparison) much harder: E.g. a lot of (seemingly) outdated documentation and the lack of user-friendly tools to achieve quick motivating results must be overcome in this process.
Therefore, so my thesis, having an ecosystem of low-threshold Python-based tools available could help to unleash the vast potential of semantic technologies. It would help to grow the semantics community and to enable more people to contribute contents such as (patches to) domain-ontologies, sophiticated queries and innovative applications e.g. combining Wikidata and SymPy.
Over the past months I collected a number of semantic-related Python projects, see https://github.com/pysemtec/semantic-python-overview. So the good news is: There is something to use and grow. However, the amount of collaboration between those projects seems to be low and something like a semantic-python-community (i.e. people who are interested in both semantic technologies and python programming) is still missing.
Because complaining alone rarely leads to improvement of the situation, I try to spawn such a community, see https://pysemtec.org.
What do you think, can Python help to generate more useful applications of semantic technology, especially in science? What has to happen for this? What are possible counter-arguments?
Relevant answer
Answer
@J. Rafiee
Thanks for your reply. While "Python is just another language" it has from my experience some advantages which make it favorable in a scientific context:
- possibility of interactive usage (e.g. via Jupyter-Notebooks)
- plenty of science-relevant libraries (numpy, sympy, pytorch, ...)
- comparatively low entrance barrier to getting started (e.g. for undergraduate-students from subjects other than computer science)
To prevent misunderstandings: I do not claim that Python is the perfect language. It definitely has weaknesses (such as execution speed).
My point is: If there were more (and better supported) python-tools for semantic-related tasks available and the existing ones were better known, this would significantly foster the development and the applicability of semantic technologies such as ontologies, reasoners and rule-processors.
I expect the scientific world would benefit from such a development because suitable management of available knowledge is one of its core-challenges and having more and easier tools available to tackle that challenge would consequently result in more and better research results.
Even if we discount for "hype-factor": Machine Learning (or numerical AI) techniques have had tremendous success (both in quality and quantity) in many research disciplines and Python-based libraries and interfaces take a large part of the credit for that. I think a similar development could happen with semantic technologies (aka symbolic AI).
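As a small illustration of the "low-threshold" workflow argued for above, here is a minimal sketch (assuming the SPARQLWrapper package; the query is only an example that lists a few instances of "human", wd:Q5) of querying Wikidata from Python and handing the results to other code:
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://query.wikidata.org/sparql",
                         agent="pysemtec-example/0.1")
endpoint.setQuery("""
    SELECT ?item ?itemLabel WHERE {
      ?item wdt:P31 wd:Q5 .                      # instances of "human"
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    } LIMIT 5
""")
endpoint.setReturnFormat(JSON)
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["item"]["value"], row["itemLabel"]["value"])
The point is not this particular query but that the result arrives as ordinary Python data, ready to be combined with numpy, SymPy or any other scientific library in the same notebook.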
  • asked a question related to Semantic Web
Question
3 answers
P1: Ontology + Data = Knowledge Graph (KG)
P2: If a KG is the sum of these two summands, it follows:
C: An ontology is a framework for a KG
As a framework, an ontology consists of individuals (e.g. personal data), classes, properties, relations and axioms.
Individuals (personal data): Tom
Classes: Interim project manager
Properties: takes over the project
Relationships: Tom is the successor of Bernd
Axioms: Tom takes over the project from Bernd
KG: We ask a question and the knowledge graph makes connections to the individual elements of the Ontology. It brings them to life, so to speak.
If you compare the KG with our neural network, you can see similarities. If I ask Tom a question, he will use his neural network to answer this question and generate new ideas.
That's why Knowledge Graphs are also defined as a kind of semantic network.
Does the community agree with this approach? Please give feedback! Thanks!
Relevant answer
Answer
" In computing, an ontology is then a concrete, formal representation — a convention—on what terms mean within the scope in which they are used (e.g., a given domain). Like all conventions, the usefulness of an ontology depends on how broadly and consistently it is adopted and how detailed it is. Knowledge graphs that use a shared ontology will be more interoperable. Given that ontologies are formal representations, they can further be used to automate entailment."
Extracted from
Aidan Hogan, Eva Blomqvist, Michael Cochez, Claudia D’amato, Gerard De Melo, Claudio Gutierrez, Sabrina Kirrane, José Emilio Labra Gayo, Roberto Navigli, Sebastian Neumaier, Axel-Cyrille Ngonga Ngomo, Axel Polleres, Sabbir M. Rashid, Anisa Rula, Lukas Schmelzeisen, Juan Sequeda, Steffen Staab, and Antoine Zimmermann. 2021. Knowledge Graphs. ACM Comput. Surv. 54, 4, Article 71 (May 2022), 37 pages. DOI:https://doi.org/10.1145/3447772
Ontology rules can be used for reasoning in order to deduce implicit knowledge from existing KG data.
But an ontology itself can contain data (called individuals or instances or concrete objects). So what is the difference between an ontology and an ontology-based KG? Maybe data size.
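For what it is worth, the Tom/Bernd example from the question can be written down directly as triples; here is a minimal rdflib sketch (the ex: namespace and property names are illustrative assumptions) in which the vocabulary declarations play the role of the ontology and the facts about Tom and Bernd play the role of the data:
from rdflib import Graph, Namespace, RDF, OWL

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

# "ontology" part: the vocabulary
g.add((EX.InterimProjectManager, RDF.type, OWL.Class))
g.add((EX.isSuccessorOf, RDF.type, OWL.ObjectProperty))
g.add((EX.takesOverProjectFrom, RDF.type, OWL.ObjectProperty))

# "data" part: individuals and their relations
g.add((EX.Tom, RDF.type, EX.InterimProjectManager))
g.add((EX.Tom, EX.isSuccessorOf, EX.Bernd))
g.add((EX.Tom, EX.takesOverProjectFrom, EX.Bernd))

# the resulting knowledge graph can be queried ("asking Tom a question")
q = "SELECT ?who WHERE { ?who <http://example.org/isSuccessorOf> <http://example.org/Bernd> }"
for row in g.query(q):
    print(row.who)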
  • asked a question related to Semantic Web
Question
2 answers
In Linked Open Data there are many links between resources; some resources are linked directly and others indirectly through a third resource. Some similarity measures for resources depend on the number of direct and indirect links, and I am asking whether there is any evidence or justification for giving more importance (weight) to direct links than to indirect links when computing similarity.
Relevant answer
Answer
Andre Valdestilhas Thank you for the answer. The properties that you mentioned are not always available.
An example of my question: if I need to know the movies related or similar to the initial movie <http://dbpedia.org/resource/The_Terminator>,
there are direct links via the wikiPageWikiLink property to other movies, and other indirect links via an intermediate resource. So maybe giving importance to the existence of such direct links over indirect links is better than depending on both kinds of links equally.
thanks.
regards.
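For readers who want to reproduce the example, here is a minimal sketch (assuming the public DBpedia SPARQL endpoint and the SPARQLWrapper package) that counts the direct dbo:wikiPageWikiLink links from The_Terminator, i.e. the links the question proposes to weight more heavily:
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT (COUNT(?target) AS ?directLinks) WHERE {
      dbr:The_Terminator dbo:wikiPageWikiLink ?target .
    }
""")
sparql.setReturnFormat(JSON)
result = sparql.query().convert()
print(result["results"]["bindings"][0]["directLinks"]["value"])
A common heuristic is to down-weight indirect links by path length, but whether that weighting is actually justified for a given similarity measure is exactly the empirical question raised above.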
  • asked a question related to Semantic Web
Question
3 answers
Actually, I am working in the solid waste management area. Although there are many indicators, data availability is a concern. Kindly suggest indicators that I can easily work with given the available data. I am working on the smart cities of northeast India. Thank you.
Relevant answer
Answer
This scientific contribution will be useful in your research.
  • Selected socio-economic factors affecting the willingness to minimise solid waste in Dhaka city, Bangladesh
Abstract
This paper examines the factors that influence the waste generation and willingness to minimise solid waste in Dhaka city, Bangladesh. Information on waste generation, willingness to minimise, socio-economic characteristics, and behaviour of the households towards solid waste management were obtained from interviews with 402 households in Dhaka city. Of these, 103 households regularly practised recycling activities. Ordinary least square (OLS) regression and logistic regression analysis were used to determine the dominant factors that might influence the waste generation and households’ willingness to minimise solid waste, respectively. The results found that the waste generation of the households in Dhaka city was significantly affected by environmental consciousness, income groups, particularly the middle-income earners, and willingness to separate. The significant factors for willingness to minimise solid waste were environmental consciousness, income groups particularly the middle-income earners, young adults mainly those aged between 25 to 35 years and storage facility. Establishment of a solid waste management programme could be an effective strategy for implementing sustainable waste management in Bangladesh. For this strategy to succeed, however, active partnership between the respondents and waste management service department is required. The respondents’ behaviour toward solid waste management practices should be taken into consideration, as should the results of this study, which are important indicators of respondents’ positive attitudes toward sustainable waste management in Dhaka city.
  • asked a question related to Semantic Web
Question
3 answers
 Advancing Informatics for healthcare and healthcare applications has become an international research priority. There is increased effort to transform reactive care to proactive and preventive care, clinic-centric to patient-centered practice, training-based interventions to globally aggregated evidence, and episodic response to continuous well-being monitoring and maintenance.
ICSH 2015 (International Conference for Smart Health 2015) is intended to provide a forum for the growing smart health research community to discuss the principles, algorithms and applications of intelligent data acquisition, processing, and analysis of healthcare data.
The conference proceedings will be published by Springer Lecture Notes in Computer Science (LNCS). The electronic conference proceedings will be provided at the time of registration. The published proceedings will be sent to each registrant later by organizers. Selected papers will also be considered for IEEE Intelligent Systems and ACM Transactions on Management Information Systems.
---------------------------------------------------------------------------------------
Important Dates:
Paper Submission: SEPTEMBER 6, 2015 (EXTENDED)
Notification of acceptance: SEPTEMBER 26, 2015
Conference dates: NOVEMBER 17-18, 2015
------------------------------------------------------------------------------------
Topics of Interest for the conference include, but are not limited to (see the conference website at http://icsh2015.org for additional suggestions):
I. Information sharing, integrating and extraction
♦       Patient education, learning and involvement
♦       Consumer and clinician health information needs, seeking, sharing, and use
♦       Healthcare knowledge abstraction, classification and summarization
♦       Effective Information retrieval for healthcare applications
♦       Natural language processing and text mining for biomedical and clinical applications, EHR, clinical notes, and health consumer texts
♦       Intelligent systems and text mining for electronic health records
♦       Health and clinical data integrity, privacy and representativeness for secondary use of data
II. Clinical practice and training
♦       Virtual patient modeling for learning, practicing and demonstrating care practices
♦       Medical recommender systems
♦       Text mining clinical text for innovative applications (patient monitoring, recommender systems for clinicians, adverse effects monitoring)
♦       Mental and physical health data integration
♦       Computer-aided diagnosis
♦       Computational support for patient-centered and evidence-based care
♦       Disease profiling and personalized treatment
♦       Visual analytics for healthcare
♦       Transdisciplinary healthcare through IT
III. Mining clinical and medical data
♦       Data augmentation and combination for evidence-based clinical decision making
♦       Biomarker discovery and biomedical data mining
♦       Semantic Web, linked data, ontologies for healthcare applications
♦       Software infrastructure for biomedical applications (text mining platforms, semantic web, workflows, etc)
♦       Intelligent Medical data management
♦       Computational intelligence methodologies for healthcare
IV. Assistive, persuasive and intelligent devices for medical care and monitoring
♦       Assistive devices and tools for individuals with special needs
♦       Intelligent medical devices and sensors
♦       Continuous monitoring and streaming technologies for healthcare
♦       Computer support for surgical intervention
♦       Localized data for improving emergency care
♦       Localization, persuasion and mobile approaches to increasing healthy life styles and better self-care
♦       Virtual and augmented reality for healthcare
V. Global systems and large-scale health data analysis and management
♦       Global spread of disease: models, tools and interventions
♦       Data analytics for clinical care
♦       Systems for Telemedicine
♦       Pharmacy informatics systems and drug discovery
♦       Collaboration technologies for healthcare
♦       Healthcare workflow management
♦       Meta-studies of community, national and international programs
------------------------------------------------------------------------------
MORE INFORMATION:
Relevant answer
Answer
19th Annual IEEE International Conference on Intelligence and Security Informatics (ISI)
We are excited to welcome participants to the 19th Annual IEEE International Conference on Intelligence and Security Informatics (ISI). This year's conference will be held in San Antonio, Texas USA on November 2-3, 2021.
IMPORTANT DATES:
Submission Deadline: August 7, 2021
Notification of Acceptance to Authors: October 1, 2021
Camera-ready Deadline: October 15, 2021
Conference: November 2-3, 2021
Intelligence and security informatics (ISI) is an interdisciplinary field involving academic researchers in information technology, social and behavioral sciences, computer science, law, and public policy. The field also includes industry consultants, practitioners, security managers, and chief information security officers who support physical and cybersecurity missions at the individual, organizational, national, and international levels (e.g., anticipation, interdiction, prevention, preparedness, and response to threats).
Over the past 18 years, the IEEE ISI Conference has evolved from its traditional orientation towards the intelligence and security domain to a more integrated alignment of multiple domains, including technology, humans, organizations, and security. The scientific community has increasingly recognized the need to address intelligence and security threats by understanding the interrelationships between these different components, and by integrating recent advances from different domains. ​
Submission Guidelines
We invite academic researchers in the field of Intelligence and Security Informatics and related areas as well as IT, security, and analytics professionals, intelligence experts, and industry consultants and practitioners in the field to submit papers and workshop proposals. ISI 2021 submissions may include empirical, behavioral, systems, methodology, test-bed, modeling, evaluation, and policy papers. Research should be relevant to informatics, organizations, public policy, or human behavior in applications of security or protection of local/national/international security in the physical world, cyber-physical systems, and/or cyberspace.
All papers must be original and not simultaneously submitted to another journal or conference. Each manuscript must clearly articulate their data (e.g., key metadata, statistical properties, etc.), analytical procedures (e.g., representations, algorithm details, etc.), and evaluation set up and results (e.g., performance metrics, statistical tests, case studies, etc.). Making data, code, and processes publicly available to facilitate scientific reproducibility is not required, but is strongly encouraged.
We accept three types of paper submissions on proposed tracks and related topics: long paper (maximum of 6 pages), short paper (maximum of 3 pages), and poster (1 page). The submission format is PDF. The accepted paper will be presented by an IEEE member.
Consult the IEEE publications page at https://www.ieee.org/conferences/publishing/templates.html for formatting information.
Papers will be submitted through the EasyChair submission and review system. The accepted papers from ISI 2021 and its affiliated workshops will be published by the IEEE Press in formal proceedings.
List of Topics
Security Analytics and Threat Intelligence
Threat pattern models and modeling tools
Real-time situational awareness
Intrusion and cybersecurity threat detection and analysis
Cyber-physical-social system security and incident management
Computing and networking infrastructure protection
Crime analysis and prevention
Forecasting threats and measuring the impact of threats
Surveillance and intelligence through unconventional means
Information security management standards
Information systems security policies
Mobile and cloud computing security
Big data analytics for cybersecurity
Machine learning for cybersecurity
Artificial Intelligence for cybersecurity
Resilient Cyber Infrastructure Design and Protection
Data science and analytics in security informatics
Data representation and fusion for security informatics
Criminal/intelligence information extraction
Data sharing and information visualization for security informatics
Web-based intelligence monitoring and analysis
Spatial-temporal data analysis for crime analysis and security informatics
Criminal/intelligence machine learning and data mining
Cyber attack and/or bio-terrorism tracking, alerting, and analysis
Digital forensics and computational criminology
Financial and accounting fraud analysis
Consumer-generated content and security-related social media analytics
Security-related social network analysis (radicalization, fund-raising, recruitment, conducting operations)
Authorship analysis and identification
Security-related analytical methodologies and software tools
Human Behavior and Factors in Security Applications
Behavior issues in information systems security
HCI and user interfaces of relevance to intelligence and security
Social impacts of crime, cybercrime, and/or terrorism
Board activism and influence
Measuring the effectiveness of security interventions
Citizen and employee education and training
Understanding user behavior such as compliance, susceptibility, and accountability
Security risks related to user behaviors/interactions with information systems
Human behavior modeling, representation, and prediction for security applications
Organizational, National, and International Security Applications
Best practices in security protection
Information sharing policy and governance
Privacy, security, and civil liberties issues
Emergency response and management
Disaster prevention, detection, and management
Protection of transportation and communications infrastructure
Communication and decision support for research and rescue
Assisting citizens' responses to cyber attacks, terrorism, and catastrophic events
Accounting and IT auditing and fraud detection
Corporate governance and monitoring
Election fraud and political use and abuse
Machine learning for the developing world
Artificial Intelligence methods and applications for humanitarian efforts
  • asked a question related to Semantic Web
Question
10 answers
To survey a particular fact, such as the perception of a phenomenon, one often resorts to an interview questionnaire. In the digital age, tools such as Google Forms, KoboCollect and many others have been developed. From a scientific point of view, is it preferable to submit an interview questionnaire through communication channels (SMS, email, social networks ...), or is it better to administer it in the field?
Relevant answer
Answer
First-hand data collection is always preferred, as the researcher has the option to get more information through questions which are appropriate but were not thought of beforehand. Also, you are sure to get a response.
In digital form, a questionnaire must be well thought out, well prepared and exhaustive. Even then, it is difficult to get responses from all the addressees.
The first approach is time consuming and requires a lot of effort, but is more scientific and secure than the second one. The second approach is preferred nowadays because of time constraints and ease of access.
  • asked a question related to Semantic Web
Question
3 answers
While there are multiple Java implementations for managing semantic knowledge bases (HermiT, Pellet...), there seem to be almost none in pure JavaScript.
I would prefer to use JS rather than Java in my project, since I find JS much cleaner, more practical and easier to maintain. Unfortunately, there seems to be almost nothing to handle RDF data together with rule inference in JavaScript. Although there is some work on handling plain RDF (https://rdf.js.org/), it is unclear what the status of this work is with regard to the W3C specifications.
Relevant answer
Answer
Prasath Sivasubramanian Thank you for your answer; it seems like all the JavaScript tools listed on w3.org aren't maintained anymore, though. For example, Hercules' last release was in 2009, and OAT is not reachable anymore...
The Eye (https://github.com/josd/eye) engine looks quite interesting I wonder if it's comparable to https://github.com/ucbl/HyL ?
  • asked a question related to Semantic Web
Question
5 answers
I actually want to enable users to define rules for their resources on a social network. These rules will be used by the proposed model to share users' resources across different social networks.
Relevant answer
This link may be useful for you:
  • asked a question related to Semantic Web
Question
4 answers
In modern-day knowledge engineering solutions, what are the importance, necessity and feasibility of using ontologies to protect cultural heritage?
Relevant answer
Answer
Dear Deepjyoti Kalita, starting from these reflections that you propose, it would also be worthwhile to ask about the role of philosophy in relation to heritage in current times and in the post-pandemic era.
  • asked a question related to Semantic Web
Question
1 answer
CFP - Call for Papers for the 2nd Iberoamerican Knowledge Graph and Semantic Web Conference @KGSWC2020 @KGSWC
July 28-31, 2020, Mérida, Yucatán, Mexico
Relevant answer
Answer
Thanks for sharing!
  • asked a question related to Semantic Web
Question
5 answers
Do you know of research papers dealing with the reasons for the semantic web's failure or the disinterest in it, or the shift of interest toward knowledge graphs?
Relevant answer
Answer
I just heard in a podcast about a database that maps words in their semantic context between different languages: CLICS, Cross-Linguistic Colexifications. I think that with such mappings it could become possible to combine networks from different companies.
Regards,
Joachim
  • asked a question related to Semantic Web
Question
5 answers
How can a triple store be converted into a quad store?
How is a default graph converted into a named graph?
Relevant answer
Answer
Apache Jena also supports quads.
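To complement the Jena suggestion, here is a minimal rdflib sketch (the input file and the graph name are assumptions) of the usual conversion: every triple from the default graph is re-asserted into a named graph, which turns the triples into quads that a quad store such as Jena TDB/Fuseki can hold:
from rdflib import Dataset, Graph, URIRef

triples = Graph()
triples.parse("data.ttl", format="turtle")        # the existing triple data

ds = Dataset()
named = ds.graph(URIRef("http://example.org/graphs/legacy"))  # chosen graph name
for s, p, o in triples:
    named.add((s, p, o))                          # triple -> quad (s, p, o, graph)

ds.serialize(destination="data.nq", format="nquads")
The only real decision is which graph IRI to use for the formerly default graph; everything else is a mechanical copy.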
  • asked a question related to Semantic Web
Question
9 answers
Those knowledge representation forms can be considered traditional, because they are associated with the historically earlier, mainly symbolic period of Artificial Intelligence. Are they applied widely enough to justify their inclusion in a general undergraduate course for Computer Science students?
Relevant answer
Answer
Greetings professor and great question!
I do hope to see interesting answers here. I will contribute based on my brief experience in the area.
I think that we can see several classical knowledge representation forms in modern applications; for example, semantic networks are really popular. On the other hand, vector representation seems to be the most widespread representation form for machine learning. Vectorial knowledge representation is a different (bottom-up) approach compared to semantic networks. Fusing the top-down approach of semantic networks with the conventional bottom-up approach of machine learning seems to be a current-to-future trend in the area.
No matter what, I think it is more important to discuss the limitations of vectorial data representation despite its widespread use. Maybe that would be a good opportunity to explain how this is one of the core issues in pattern recognition and the role of the sub-discipline of learning representations from data (and the incredible hype that deep learning has had in this area due to its ability to learn several layers of representation from raw sensory data). The current trend is clear when enough data is available: skip manually handcrafted features and learn the representation automatically.
  • asked a question related to Semantic Web
Question
8 answers
I work on graph-based knowledge representation. I would like to know how we can apply Deep Learning to Resource Description Framework (RDF) data and what we can infer this way. Thanks in advance for your help!
Relevant answer
Answer
I always wonder why we need deep learning on RDF or OWL-based data. Given the nature of the semantic web, graph data with explicit relations already presents the outcome of learning. Thus, the query language or rule language should make it possible to answer questions directly.
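Both views can be sketched in a few lines; the file name, property and URIs below are hypothetical placeholders. The explicit relations can be queried directly with SPARQL, while for deep learning one typically flattens the graph into (head, relation, tail) triples that knowledge-graph-embedding libraries take as training data:
from rdflib import Graph

g = Graph()
g.parse("knowledge.ttl", format="turtle")   # assumed local RDF file

# direct answer via SPARQL, as the answer above suggests
q = "SELECT ?drug WHERE { ?drug <http://example.org/treats> <http://example.org/Asthma> }"
print([str(row.drug) for row in g.query(q)])

# alternative: flatten to (h, r, t) triples usable as input for embedding models
triples = [(str(s), str(p), str(o)) for s, p, o in g]
What embeddings add over direct querying is the ability to score links that are not asserted, i.e. link prediction, which is the usual motivation for applying deep learning to RDF.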
  • asked a question related to Semantic Web
Question
2 answers
In my academic research on "Estimating an optimal waiting time of insurance claims in customer retention using queuing models", I am stuck on where I can source data. The data I am looking for has to have the dates the claims were submitted, settled and paid out, and whether a client has cancelled the insurance policy or not. This assignment is due on the 14th of June, 2019. Is there anyone who is willing to help with the data or links?
Relevant answer
Answer
Thanks a lot for the insight. I hope the provided link will help me further. Otherwise, I have found a claims dataset on Kaggle which I might start working on. For a young mathematician like me, the information in the link will open my eyes to what happens in insurance as far as policy lapsing is concerned. Thanks Dr. Gomathy.
  • asked a question related to Semantic Web
Question
3 answers
Since 2000, beginning with Mizoguchi and Bourdeau's seminal publication "Using Ontological Engineering to Overcome Common AI-ED Problems", and other researchers' publications, ontologies and the semantic web have been applied to education, mainly in virtual learning environments. Have those applications reached the expected results? What is your opinion?
Relevant answer
Answer
That is a very interesting question. In my opinion, ontologies have a wide variety of applications, not only in virtual learning. Although I have not seen a major change in the way ontologies have been used in the literature, it is a very promising research line in the sense of which other technology or techniques (e.g. from AI) you use side by side with ontologies. Take, for example, the topic of context-aware knowledge acquisition for automated planning and execution applications: in that sense, using ontologies is very promising for the direction of goal-driven autonomy (GDA) and for providing robots that are executing a plan with a higher level of autonomy (bolstered autonomy) to dynamically generate goals and extend the robot's domain model with relevant new objects of new types that the robot does not know (that were not included in its design). This has two implications: i. if the robot's task is a cognitive, non-physical task, this basically means that the robot can learn new functionalities (brand-new action schemas); ii. if the task of the robot is physical and depends heavily on its physical operating capabilities, the interest is to extend its domain with only relevant and operable new objects of brand-new types that are strictly manageable by the agent's/robot's capabilities. I would like to refer you to this paper: with respect to bolstering agents' autonomy while respecting physical capabilities, and also this paper: with respect to bolstering the autonomy of agents with a non-specialised physical task.
  • asked a question related to Semantic Web
Question
6 answers
Semantic Web Data in real time
Relevant answer
Answer
Hi,
If it's a research question, you can look at how to use RDF and NoSQL; this is an interesting work:
If it is about the most used existing tools, you can see: https://www.xenonstack.com/blog/tags/semantic-analysis/
  • asked a question related to Semantic Web
Question
3 answers
Hey all,
I am trying to fine-tune DeconvNet (the learning deconvolution network for semantic segmentation, trained on the 20-class PASCAL VOC data). While fine-tuning with my own data (4 classes), the validation accuracy achieved is around 99 percent. However, when inferencing/testing the trained model, the output image comes out all black. The data is equally distributed across all available classes. I am doing this in Caffe. Has anyone faced this issue, and can you help?
Relevant answer
Answer
Please verify your threshold.
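As a hedged illustration of one common cause of all-black outputs (this is an assumption about the bug, not a diagnosis of this particular model): the per-class score map should be reduced with an argmax over the class axis and the class indices mapped to a colour palette, rather than thresholded like a single probability map. A numpy sketch:
import numpy as np

scores = np.random.rand(4, 256, 256)          # stand-in for the net's 4-class output (C, H, W)
label_map = np.argmax(scores, axis=0)         # per-pixel class index, 0..3

palette = np.array([[0, 0, 0],                # background
                    [255, 0, 0],
                    [0, 255, 0],
                    [0, 0, 255]], dtype=np.uint8)
color_image = palette[label_map]              # (H, W, 3) visualisation; no longer all black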
  • asked a question related to Semantic Web
Question
10 answers
I'm developing an ontology for the OPC UA standard and I would like to know if there are already existing versions of such an ontology.
Relevant answer
Answer
From a production automation point of view:
As already explained in the article published by the OPC Foundation, the mere vocabulary of OPC UA alone is not sufficient to automate the orchestration of the different methods offered by the various OPC UA servers of manufacturing assets on the factory shop floor. The orchestration engine needs to take care of the pre- and post-conditions of method execution.
Apart from ensuring data consistency, the reasoning engine is also required to operate on the data in real time to understand the context of the manufacturing scenario. We need semantic technologies here to model the axioms and custom rule sets that need to hold in order to determine the next course of action.
I would be willing to collaborate/help/share if this work benefits both of us.
Cheers, Badari
  • asked a question related to Semantic Web
Question
2 answers
I am working on the semantic web: I generate a triple dataset in Protégé and load it into Eclipse, using Jena inference rules to query the data. Everything goes well, but I am not able to get results at the interface; when I enter a query in the search box I get the error
WARN [AWT-EventQueue-0] (Log.java:80) - setResultVars(): no query pattern
I investigated the code carefully and did not find any bug.
When searching, the query is converted and matched in the back end, but the output is not shown.
Relevant answer
Answer
I get this error for 2 reasons:
1. The getResultVars function is missing in the code.
2. The dataset consists of triples, so when a query is passed in it is converted into triples, but when matching against the dataset it does not find the words (like to, the, is, etc.) which are omitted during the conversion to triples.
I put all those words into a txt file which is also added to the source code, but I don't understand why it is not working...
If you understand my comment and know something about this, please guide me.
  • asked a question related to Semantic Web
Question
3 answers
I want to use the semantic web to make a website about breast cancer.
Relevant answer
Answer
Thank you a lot, Gilles-Antoine Nys.
But these materials aren't useful for me, as they are concerned with breast cancer itself,
while I want to make a website about breast cancer that gets its information from other websites that support semantics.
Can you help me?
  • asked a question related to Semantic Web
Question
4 answers
I need any new statistics showing the growth of semantic data and RDF triple stores, like the ones shown in this link: https://www.quora.com/How-fast-is-semantic-web-and-or-linked-data-growing-per-year
Relevant answer
Answer
Apart from the LOD Cloud, which primarily consists of datasets published in Linked Data format by individual or organisational contributors to the Linking Open Data community project, there exist other sources/use cases of Linked Data which are private.
Many industries, in scenarios including finance, blockchain, automation and wireless sensor networks, are adopting semantic technologies for smart applications.
  • asked a question related to Semantic Web
Question
9 answers
The semantic web employs RDF and ontologies for storing structured data, in contrast to HTML, which does not have any structure. The data on the web is largely in HTML format. Further, there is no standardization. How can HTML data be converted into a standardized RDF format or an ontology for integrating data from different existing resources?
Relevant answer
Answer
@Ivan: There is more on this earth (or better said web) than just Google ;-)
Of course, Google makes visual use of its knowledge graph, but I wouldn't call it "semantic web", since it does not follow the principles of the semantic web, i.e. using (de facto) standards and URIs. You can easily inspect the box you mentioned, and what you see there appears to be proprietary. Thus, in order to make this content machine-processable you have to understand it and hack your own interpreter.
However, sites which make use of the semantic web, often do it under the hood without any visual hint for human users. Inspect for example the source code of the pages:
Hence, the semantic web is actually imperceptible, and that's probably why it has difficulty being adopted more broadly: its use is not directly visible. That is why I found the statistics about the use of RDF and microdata, which I mentioned above, very helpful.
Of course you are right, transforming HTML into RDF/microformats isn't the problem; the problem is - as you pointed out - understanding the content. But I am pretty sure that sites which use the semantic web under the hood today do not interpret their content. With high probability they have already integrated the semantic web markup into their CMS or HTML generation algorithms.
I would be happy to receive any confirmation about my last assumption.
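For readers who want to see what such "under the hood" markup looks like, here is a minimal sketch (not a full HTML-to-RDF converter; the URL is a hypothetical placeholder, and requests, BeautifulSoup and rdflib 6+ are assumed) that harvests embedded JSON-LD blocks from a page and loads them into an RDF graph:
import requests
from bs4 import BeautifulSoup
from rdflib import Graph

html = requests.get("https://example.org/some-product-page").text   # hypothetical URL
soup = BeautifulSoup(html, "html.parser")

g = Graph()
for script in soup.find_all("script", type="application/ld+json"):
    if script.string:
        g.parse(data=script.string, format="json-ld")   # rdflib >= 6 ships a JSON-LD parser

print(g.serialize(format="turtle"))
RDFa and microdata need a dedicated extractor, but the JSON-LD case already shows that the semantic markup is there even though nothing on the rendered page reveals it.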
  • asked a question related to Semantic Web
Question
7 answers
I want to extract data from the internet with specific medical terms. Which available tools offer the best accuracy or output?
Relevant answer
Answer
The term data harvesting, or web scraping, has always been a concern for website operators and data publishers. Data harvesting is a process where a small script, also known as a malicious bot, is used to automatically extract large amounts of data from websites and use it for other purposes. As a cheap and easy way to collect online data, the technique is often used without permission to steal website information such as text, photos, email addresses, and contact lists.
One method of data harvesting targets databases in particular. The script finds a way to cycle through the records of a database and then download each and every record in the database.
Aside from obvious consequence of data loss, data harvesting can also be detrimental to businesses in other ways:
  • Poor SEO Ranking: If your website content is scraped, reproduced and used on other sites, this will significantly affect the SEO ranking and performance for your website on search engines.
  • Decreased Website Speed: When used repeatedly, scraping attacks can lower the performance of your websites and affect the user experience.
  • Lost Market Advantages: Your competitors may use data harvesting to scrape valuable information such as customer lists to gather intelligence about your business.
  • I think if you are doing this for research, then concentrate on techniques rather than tools.
There are also a lot of other data mining techniques, but these seven are considered the most frequently used.
  • Statistics
  • Clustering
  • Visualization
  • Decision Tree
  • Association Rules
  • Neural Networks
  • Classification
  • asked a question related to Semantic Web
Question
15 answers
What do you think about it?
In my opinion, this is possible thanks to the semantic web, but to get this to work you must first insert data so that you can process it, and usually these data are inserted by people. But what would happen if the computer itself wrote its own questions and managed to answer them correctly? It would be an evolutionary step of great magnitude.
Relevant answer
Answer
Data never sleep. Data are everywhere. You ask about AI-based engines. Just take a look at Twitter. Data can help to advance humanity by solving the world's hardest problems. So, data are power. By harnessing technology to address core human needs, data aim to drive a bottom-up redistribution of power, capital and opportunity. Answers made by AI-based engines can help to create, store, modify, exchange, and exploit information. Of course, that increases the capacity to process data. But it may also lead to data disruptions.
  • asked a question related to Semantic Web
Question
1 answer
Hi, 
SIREn, the Semantic Information Retrieval Search Engine, worked on my university computer from September to November 2013, but after that it stopped working due to security and privacy issues. I am wondering how I can resolve this privacy issue, because the maintenance team could not do it at that time.
As all of you know from me on Facebook in 2013 and today, and here on ResearchGate, SIREn is a Semantic Information Retrieval Search Engine using Lucene and Solr.
Thanks & Best Wishes
Osman
Relevant answer
Answer
Note: just change the Lucene analyser variables in the SIREn source code to target any language (Lucene contains stemmers and tokenizers for English, Arabic and other languages).
  • asked a question related to Semantic Web
Question
1 answer
I am unable to run HiTeX with the UMLS database using the GATE GUI; any suggestions or external resources I could refer to would be helpful.
Thanks in Advance...
Relevant answer
Answer
Hi, maybe this link will help you
  • asked a question related to Semantic Web
Question
8 answers
I am using Protégé 3.4. I built my ontology and extended it with SWRL rules using SWRL and SQWRL built-ins. The rules classify instances of a class of the ontology; the RHS is a class assertion for an instance. It works successfully, but when I change the values of the clauses in the LHS, no reclassification is done and the instance remains asserted to the same class.
ex: SWRL rules:
 Message(?m) hasInterest(?m,?i) hasCategory(?m,?c) sqwrl:makeset(?s1,?i) sqwrl:makeset(?s2,?c) sqwrl:intersection(?s3,?s1,?s2) sqwrl:size(?n,?s3) swrlb:greaterThan(?n,0) -> Ham(?m)
Message(?m) hasInterest(?m,?i) hasCategory(?m,?c) sqwrl:makeset(?s1,?i) sqwrl:makeset(?s2,?c) sqwrl:difference(?s3,?s1,?s2) sqwrl:size(?n,?s3) swrlb:greaterThan(?n,0) -> Spam(?m)
So once the message instance (m1) is classified as Ham, for example with i = sports and c = sports, then whenever I change the value to i = movies (interests) for the message instance (m1), it will always stay Ham. I understood that this is because the class type is asserted. So my question is: why does this happen? How can I reclassify instances, as I need a dynamic way to classify messages?
Relevant answer
Answer
Yes, I start the process by mapping the asserted facts (classes and instances) and the rules (so the fact that m1 is Ham is already mapped to the engine as the type of the individual), then I run the Jess engine and map the results back to the ontology. OK, I will share it with you. When I searched, I found that this is because OWL and SWRL do not support non-monotonic reasoning and that the SWRL rule facts are asserted, not inferred, so I posted the question to check and find an alternative such that the asserted facts are inferred.
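For completeness, here is a hedged owlready2 sketch of the general workaround (the file name and entity names are assumptions, and owlready2 does not support the SQWRL built-ins used in the original rules, so this only illustrates the pattern): retract the previously asserted classification before re-running the reasoner, so that the rules can re-classify the individual from the current facts.
from owlready2 import get_ontology, sync_reasoner_pellet

onto = get_ontology("file://messages.owl").load()

m1 = onto.m1                                   # the message individual
if onto.Ham in m1.is_a:                        # drop the stale classification
    m1.is_a.remove(onto.Ham)

with onto:
    sync_reasoner_pellet(infer_property_values=True)   # SWRL rules re-fire here
print(m1.is_a)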
  • asked a question related to Semantic Web
Question
3 answers
I am trying to get all 'subclass-of' axioms of an ontology. I tried using the following statement.
MyOntology.getAxioms(AxiomType.SUBCLASS_OF));
Effectively, it returns the ontology 'subclass-of' axioms, except for the first 'subclass-of' axiom which links OWL:Thing with my first ontology class.
I cannot understand why this link isn't taken into account in this case.
Please, is there any way to get all 'subclass-of' axioms including those linking OWL:Thing with other classes ?
Relevant answer
Answer
Hi,
There is another issue here: the code that you are using retrieves only the explicitly asserted subClassOf relations (that's why when you use Protégé to assert them, you can retrieve them). If you want to retrieve the inferred taxonomy, you should use a reasoner and work with the OWLReasoner instead of (or along with) the OWLOntology. The DL reasoner infers the actual hierarchy that the ontology is modeling and allows you to navigate through it.
  • asked a question related to Semantic Web
Question
8 answers
Are there any NLP libraries that are capable of extracting RDF-style triples {subject-predicate-object} from text?
Relevant answer
Answer
There are several services that can analyze text and get RDF from content for concepts, entities, keywords, categories, sentiment, emotion, relations, and semantic roles, based on natural language understanding. Some of those services are free, and others are commercial but can be used for free for a small number of calls.
  1. DBpedia Spotlight, an annotation service based on DBpedia. http://demo.dbpedia-spotlight.org/
  2. Open Calais from Thomson Reuters. http://www.opencalais.com/. There is a demo site so you can test it: http://www.opencalais.com/opencalais-demo/
  3. The Natural Language Understanding service from IBM Bluemix Watson. This service is the continuation of the Alchemy API and is one of the best services. https://www.ibm.com/watson/developercloud/natural-language-understanding.html
  4. FRED: Machine Reading for the Semantic Web, a service that can parse natural language text in 48 different languages and transform it into linked data. http://wit.istc.cnr.it/stlab-tools/fred. You can read the paper about FRED here: http://semantic-web-journal.org/system/files/swj1379.pdf
  5. You can also look at the project presented in the paper "Extracting knowledge from text using SHELDON, a Semantic Holistic framEwork for LinkeD ONtology data". http://speroni.web.cs.unibo.it/publications/reforgiato-recupero-2015-extracting-knowledge-from.pdf
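If a lightweight, do-it-yourself baseline is enough, here is a rough spaCy-based sketch (the English model en_core_web_sm is assumed to be installed; the heuristics are deliberately simple) that extracts subject-verb-object triples, which can then be mapped to RDF; the dedicated services above are far more robust:
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Tim Berners-Lee invented the World Wide Web.")

triples = []
for token in doc:
    if token.pos_ == "VERB":
        subjects = [w for w in token.lefts if w.dep_ in ("nsubj", "nsubjpass")]
        objects = [w for w in token.rights if w.dep_ in ("dobj", "attr")]
        for s in subjects:
            for o in objects:
                triples.append((s.text, token.lemma_, o.text))

print(triples)   # heuristic output, e.g. a (subject, 'invent', object) tuple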
  • asked a question related to Semantic Web
Question
1 answer
Hello, if you have a user with a list of friends, a list of favorite resources, and a list of evaluations of these resources, what data mining technique allows me to answer these questions: 1) Do two friends have the same favorite resource list? 2) Do these two friends evaluate the same resources? Does the Apriori algorithm allow me to answer this?
If not, what technique allows me to do this?
Thank you in advance for your answers
Relevant answer
Answer
Hi Nassima,
I don't think the Apriori algorithm is useful for your task. You probably need to work with a clustering technique.
HTH.
Samer
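As a side note, the two questions as literally stated do not even need a mining algorithm; a minimal pure-Python sketch (with made-up user data) answers them with set operations, and the resulting overlap/Jaccard scores are exactly the kind of features a clustering technique would work on, whereas Apriori would instead mine frequent itemsets across many users:
favorites = {
    "alice": {"res1", "res2", "res3"},
    "bob":   {"res2", "res3", "res4"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

same_list = favorites["alice"] == favorites["bob"]      # question 1: identical favorite lists?
shared    = favorites["alice"] & favorites["bob"]       # question 2: resources both evaluated
print(same_list, shared, jaccard(favorites["alice"], favorites["bob"]))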
  • asked a question related to Semantic Web
Question
3 answers
I want to modify an existing ontology in the Protégé tool and then add individuals to it. After developing the framework in Protégé, I want to add a large number of individuals (more than a thousand).
So, for adding individuals I am planning to use the Jena RDF API.
Is it possible to extend the already built ontology via the Jena API?
Please provide an example.
Relevant answer
Answer
Hello, 
typically, you need to 
1) Load an existing ontology into the memory
2) Add some classes, relations or individuals (assuring uniqueness of URIs)
3) Save back the model into a file
Multiple examples are in Jena API documentation. You may also take a look at 
Best regards
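The same three steps can also be sketched in Python with rdflib, as an analogue of the Jena workflow described above (this is not Jena itself; the namespace, class names and file names are assumptions for illustration):
from rdflib import Graph, Namespace, RDF, Literal

EX = Namespace("http://example.org/onto#")
g = Graph()
g.parse("ontology.owl", format="xml")                 # 1) load the existing ontology

for i in range(1000):                                 # 2) add many individuals
    ind = EX[f"individual_{i}"]                       #    unique URI per individual
    g.add((ind, RDF.type, EX.MyClass))
    g.add((ind, EX.hasLabel, Literal(f"item {i}")))

g.serialize(destination="ontology_extended.owl", format="xml")   # 3) save the model back
In Jena the code looks different, but the load/add/save structure and the need for unique URIs are identical.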
  • asked a question related to Semantic Web
Question
2 answers
Hi ,
I know the GATE library has some support for non-English ontologies such as Arabic. I am wondering if there is another library or package for Arabic ontologies?
Arabic plugin
How do I create RDF with GATE? documentation
Thanks
Relevant answer
  • asked a question related to Semantic Web
Question
6 answers
How can we build a framework, check its performance, and ensure the framework is better than previous ones? Are there specific criteria? My topic is related to the semantic web, specifically semantic annotation.
Relevant answer
Answer
So, since the problem is with regard to searching Arabic text, one way to determine whether your framework improves on the current situation is to examine whether, by using semantic annotations, the results are more accurate or formulating a search query is easier.
  • asked a question related to Semantic Web
Question
6 answers
I am looking for metadata structures suitable to describe historical tattoos. Are you aware of any projects which developed a metadata schema for such a purpose? Many thanks for your help.
Relevant answer
Answer
I believe this would depend on why and how the tattoos are being stored as archived objects. Is it meant to be stored as it is on the skin, or as a surrogate photograph (or something else)? Is the body viewed as a custodian or as a canvas? Is the tattooist viewed as an artist or a technician? Is the tattoo viewed as a cultural, historical or art object, or as a replication/representation of another object? There are many questions to be answered before the core question can be answered.
I am sure you have already seen this:
There is also this Tattoo schema:
Maybe you will find this project interesting: 
  • asked a question related to Semantic Web
Question
8 answers
Does anyone know about some existing stop-word vocabularies ?
I am interested in doing some keyword text mining work and I was wondering if there are some existing stop-word vocabularies that I can use to reduce the noise from the data.
Relevant answer
Answer
Hi,
Have a look at this link:
HTH.
Samer
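As one concrete option, NLTK ships bundled stop-word lists (scikit-learn and spaCy provide their own as well); a quick sketch of using them to filter keywords:
import nltk
nltk.download("stopwords", quiet=True)
from nltk.corpus import stopwords

stop = set(stopwords.words("english"))
tokens = "the semantic web is an extension of the current web".split()
keywords = [t for t in tokens if t not in stop]
print(keywords)   # e.g. ['semantic', 'web', 'extension', 'current', 'web']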
  • asked a question related to Semantic Web
Question
4 answers
I want to test the efficiency of a web-based system. Some tasks would determine the result of the evaluation in a post-test design.
I would also like to evaluate the system using TAM, but not by survey (in a post-test-only design).
The web-based system which I want to evaluate is attached below:
Relevant answer
Answer
I believe it can. Some of my works are related to TAM. You can try searching for my name or my sister's: Penjira Kanthawongs, Ph.D.
  • asked a question related to Semantic Web
Question
10 answers
I'm building a system that handles millions of requests from other vendors. A request comes in from a vendor, the system processes it, calls other services and then responds. The problem I have is that our business rules are complex, so our system goes down when it gets too many requests. Currently ours can handle 20~25 requests per second.
We are using Java Vert.x, RabbitMQ, Memcached, a MySQL database and jOOQ, following a microservice architecture.
Can you give some ideas to increase the request throughput?
Relevant answer
Answer
The problem can be handled by building a cluster of computers and using it (queries are independent so parallel processing is useful). Such a cluster can be viewed as a private cloud. It also may be easier to use a public cloud services, get consulting how to do it and move your software to the cloud (Amazon, Google, Microsoft provide such services, for example).
  • asked a question related to Semantic Web
Question
7 answers
I'm building an ontology and I need to create the same semantic relation (the name of the relation is the same as well as the meaning in the domain) between different classes of elements. For example:
o:ClassA o:hasSemanticRelation xsd:string
o:ClassB o:hasSemanticRelation xsd:string
o:ClassC o:hasSemanticRelation xsd:string
My first approach was to create multiple domains for the property but this actually means the intersection of the concepts which is not correct in the domain. My second approach was to have a super property
owl:Thing o:hasSemanticRelation xsd:string
o:hasSemanticRelationA owl:subPropertyOf o:hasSemanticRelation
o:ClassA o:hasSemanticRelationA xsd:string
Because of the meaning of hasSemanticRelation, I want every use of it to be linked to the same property, i.e., o:hasSemanticRelation.
Could anyone give ideas on how I can best represent this situation?
Relevant answer
Answer
Have you considered owl:unionOf to define a class extension that contains all individuals of the class descriptions in the list? For example:
o:hasSemanticRelation rdfs:domain [
  a owl:Class ;
  owl:unionOf ( o:ClassA o:ClassB o:ClassC )
] .
Note that owl:unionOf is not part of OWL Lite.
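For completeness, the same union domain can be built programmatically; here is a minimal rdflib sketch (the o: namespace URI is an assumption) that produces the Turtle shown above:
from rdflib import Graph, Namespace, BNode, RDF, RDFS, OWL
from rdflib.collection import Collection

O = Namespace("http://example.org/o#")
g = Graph()

union_class = BNode()
members = BNode()
Collection(g, members, [O.ClassA, O.ClassB, O.ClassC])   # the RDF list of member classes

g.add((union_class, RDF.type, OWL.Class))
g.add((union_class, OWL.unionOf, members))
g.add((O.hasSemanticRelation, RDFS.domain, union_class))

print(g.serialize(format="turtle"))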
  • asked a question related to Semantic Web
Question
1 answer
I am looking for any ontologies/RDF schema that describe the capabilities/dynamic nature of QR codes.
Relevant answer
Answer
Not that I know of... worth double-checking though.
  • asked a question related to Semantic Web
Question
5 answers
Folksonomy, ontology
Relevant answer
Answer
Hello, 
my answer would be the Linked Open Data project, started in 2008 by the developer of the WWW. Here you can find details of all..
  • asked a question related to Semantic Web
Question
4 answers
I'm looking for any surveys related to Semantic relatedness/similarity algorithms/measures or methods for RDF graphs.
Thanks in advance,
Relevant answer
Answer
Here is the PDF; maybe it is useful to you.
  • asked a question related to Semantic Web
Question
4 answers
I am in the very initial stages of learning the semantic web. I would like to create a standard eCommerce website via WooCommerce and then apply the semantic web to it. I was looking for a pre-built ontology for any product published online, for example a shoe with its variations.
Relevant answer
Answer
@Goran Pavlov, 
The idea is not only to create such a site; my research requires this as an initial set-up. Traffic has nothing to do with it.
  • asked a question related to Semantic Web
Question
1 answer
Hi, I am trying to prepare a program for PPSWR sampling with distinct units using the cumulative total method for my Ph.D. work. Please help me to write this type of program.
Relevant answer
Answer
You can find the function in the pps package (see ftp://cran.r-project.org/pub/R/web/packages/pps/pps.pdf and http://www.inside-r.org/packages/cran/pps/docs/ppswr). I guess this is what you need.
  • asked a question related to Semantic Web
Question
3 answers
Ontologies are recognized as a means of knowledge modeling and representation in the Semantic Web. Moreover, they have been the subject of several works on the visualization of their content. My question concerns the way in which one could evaluate a data visualization system in general, and an ontology visualization system in particular.
Relevant answer
Answer
  • asked a question related to Semantic Web
Question
1 answer
In the Semantic Web context, Big Data is recognized as a really big challenge. So, we have a pressing need to store and manage huge amounts of data in the form of trillions of triples. It is more flexible to use triple stores than classic databases, since the data are connected to each other in a graph-like form. In this context, the notion of a Semantic Data Lake has emerged. My question is about a concrete definition of a Semantic Data Lake.
Relevant answer
Answer
Dear Marouen,
As you said, Semantic Data Lake is a new notion. It has zero h-index in Google Scholar. FYI, Semantic Networks has h-index > 50 and Data Lake has h-index=39.
In my opinion, the definition of Semantic Data Lake equals the definition of Semantic Data, because the definition of Semantic Data is not concerned with the size of the data.
Best regards,
  • asked a question related to Semantic Web
Question
8 answers
I am developing an ontology for semantic searching of products and services for the financial sector. I need to know how to integrate the ontology with the search interface to verify the search results; more precisely, how to apply the ontology in web pages for real-time queries.
Relevant answer
Answer
Nazmul,
You can use an ontology tool like Protege to build a custom finance ontology, or find one online (example: http://www.fadyart.com/ontologies/data/Finance.owl). You can then download the ontology in RDF or OWL. These will be XML files with tags you can parse to extract the terms you want.
You can then add ontology terms to the query to expand it or narrow your results, or to validate the search terms prior to submitting the query. Since the ontology is typically a hierarchy, you can move up or down the hierarchy tree to expand or narrow the results respectively, as necessary.
You can also scan the returned pages for financial terms using the ontology terms by comparing the text in the search engine results pages with the ontology to validate your search results.
The simplest way to start would be to just extract and match the terms.
E.g. from the Finance ontology:
<CFIGuarantee rdf:about="#T-GovernmentTreasury">
<rdfs:comment rdf:datatype="&xsd;string"
>Secured by the governement treasury.
A government treasury is the office issuing Treasury bills, notes and bonds for account of a government.</rdfs:comment>
</CFIGuarantee>
You can then extract and add the rules (axioms).
Eg. from the Finance ontology:
<!--
///////////////////////////////////////////////////////////////////////////////////////
//
// General axioms
//
///////////////////////////////////////////////////////////////////////////////////////
-->
<rdf:Description>
<rdf:type rdf:resource="&owl;AllDifferent"/>
<owl:distinctMembers rdf:parseType="Collection">
<rdf:Description rdf:about="#J-Junior"/>
<rdf:Description rdf:about="#V-GovernmentAgency"/>
<rdf:Description rdf:about="#B-Subordinated"/>
<rdf:Description rdf:about="#T-GovernmentTreasury"/>
<rdf:Description rdf:about="#E-Senior"/>
<rdf:Description rdf:about="#S-Secured"/>
<rdf:Description rdf:about="#G-Guaranteed"/>
</owl:distinctMembers>
</rdf:Description>
Look at the URL file example and at some resources explaining RDF and OWL to figure out the things you want to extract from the files.
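To make the term-extraction and query-expansion steps described above more tangible, here is a minimal rdflib sketch (the local file name "Finance.owl" and the naive keyword match are assumptions; a production system would use labels and a reasoner):
from rdflib import Graph, RDF, RDFS, OWL

g = Graph()
g.parse("Finance.owl", format="xml")          # the downloaded finance ontology

# collect class names and their comments as a simple term dictionary
terms = {}
for cls in g.subjects(RDF.type, OWL.Class):
    name = str(cls).split("#")[-1]
    comment = g.value(cls, RDFS.comment)
    terms[name] = str(comment) if comment else ""

# naive query expansion: add direct subclasses of any matching class to the query
def expand(query_term):
    matches = [c for c in g.subjects(RDF.type, OWL.Class)
               if query_term.lower() in str(c).lower()]
    extra = [str(s).split("#")[-1] for m in matches for s in g.subjects(RDFS.subClassOf, m)]
    return [query_term] + extra

print(expand("Guarantee"))
Moving up the hierarchy (rdfs:subClassOf in the other direction) broadens the query in the same way; matching the collected comments against result pages implements the validation step.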
  • asked a question related to Semantic Web
Question
4 answers
I have an RDF graph stored in Fuseki. This graph has resources describing metadata fields from research articles such as title, authors, abstract, etc. Also, I have the corresponding PDF file for each resource in the RDF graph. So, I need to build a query over the RDF graph and retrieve the corresponding PDF files, or vice versa. Is there any way to do that?
PD: The RDF graph is stored in Fuseki, and the PDF files are indexed in Apache Solr.
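A minimal sketch of one way to bridge the two stores, assuming Jena 3.x, a Fuseki dataset named "articles" at the default port, a Solr core named "pdfs", and a shared dcterms:identifier that is also used as the Solr document id (all of these are assumptions for illustration, not part of the question):
import org.apache.jena.query.*;
public class FusekiToSolr {
    public static void main(String[] args) {
        String endpoint = "http://localhost:3030/articles/sparql";
        String sparql =
            "PREFIX dcterms: <http://purl.org/dc/terms/> " +
            "SELECT ?id WHERE { ?article dcterms:title \"Some article title\" ; dcterms:identifier ?id }";
        try (QueryExecution qexec = QueryExecutionFactory.sparqlService(endpoint, sparql)) {
            ResultSet results = qexec.execSelect();
            while (results.hasNext()) {
                String id = results.next().getLiteral("id").getString();
                // The shared identifier is then used to look up the indexed PDF in Solr,
                // e.g. via Solr's HTTP API (any HTTP client, or SolrJ, would do):
                System.out.println("http://localhost:8983/solr/pdfs/select?q=id:" + id);
            }
        }
    }
}
Going the other way (from a PDF hit back to the RDF resource) is symmetric: take the id field of the Solr result and plug it into a SPARQL query on the dcterms:identifier property.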
Relevant answer
Answer
Thanks Nazmul, I will check this suite. Best regards.
  • asked a question related to Semantic Web
Question
5 answers
Semantic web for e-learning systems.
Ontology for e-learning systems.
Relevant answer
Answer
Hi there, Semantic Web technologies have many applications in e-learning. Feel free to take a look at the following papers of ours:
Feel free to contact me if you need further details.
Best,
Stratos
  • asked a question related to Semantic Web
Question
10 answers
Is there any tool which can convert knowledge accumulated from a domain expert into a machine-readable format (RDF, OWL)?
Thanks for sharing your valuable information.
Relevant answer
Answer
Hi there,
what you are looking for is called "ontology learning" and it's the process of (semi)automatically building an ontology from textual resources. There are several tools that analyse natural language text and try to extract an ontology (plus instances) from that, although this is still an active research field and there are still many open research issues left.
  • asked a question related to Semantic Web
Question
6 answers
I'm working with Linked Data (DBpedia in my case) and want to know whether this is the best one compared to others such as edge counting, information content, or hybrid measures.
Relevant answer
Answer
If you are working with DBpedia or Wikipedia, you can use word2vec algorithm (https://code.google.com/p/word2vec/). It gives good results for finding semantic similarity between words (concepts).
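As a minimal illustration of what such embedding-based measures compute, here is a plain-Java cosine similarity between two word vectors; the toy vectors below are made up, and in practice they would come from a word2vec model trained on, e.g., Wikipedia or DBpedia abstracts:
public class CosineSimilarity {
    // Cosine similarity between two equal-length word vectors
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }
    public static void main(String[] args) {
        // Toy 3-dimensional vectors standing in for real word2vec embeddings (typically 100-300 dimensions)
        double[] car        = {0.8, 0.1, 0.3};
        double[] automobile = {0.75, 0.15, 0.35};
        System.out.println(cosine(car, automobile)); // close to 1.0 for semantically similar words
    }
}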
  • asked a question related to Semantic Web
Question
2 answers
I need:
1- A set of natural language (NL) rules or a legal text made of NL sentences
2- The formal requirements extracted from these NL rules (or from the legal document)
3- Related resources (e.g. terminologies, ontologies) required for the transformation NL -> formal expression
Relevant answer
Answer
The Object Management Group maintains datasets that formalise the "Semantics Of Business Vocabulary And Rules" (SBVR). You can find them here: http://www.omg.org/spec/SBVR/
  • asked a question related to Semantic Web
Question
6 answers
I am working on a problem where Semantic Web Services could possibly help, and was surprised to see how little has changed in this area since 2008. SWS used to be a relatively hot topic back then and now, it seems, it mostly fizzled out. I don't see a single paper at the upcoming ISWC conference related to SWS.
What is the maturity of existing tools for SWS (OWL-S in particular) composition/matchmaking? Is there any active user group/mailing list devoted to SWS?
Thanks,
Jakub
Relevant answer
Answer
Hello Jakub,
I have the same problem with OWL-S. I found only a single API for OWL-S; it is old, relies on deprecated Java APIs, and has no native WADL or WSDL 2.0 grounding. One solution I found is iServe, a repository of semantic service descriptions that implements matchmaking for several semantic service description languages, including OWL-S: http://iserve.kmi.open.ac.uk/.
I am a beginner with iServe, but it looks promising to use, and the publications its authors have written about SWS are worth reading; for example, "An Integrated Semantic Web Service Discovery and Composition Framework".
If you find something interesting about composition, please share it with me.
Best regards
  • asked a question related to Semantic Web
Question
4 answers
Hello!
Does anyone know a free-of-charge PaaS that can be used to develop semantic applications using CNL (Controlled Natural Language), OWL and RDF? Once developed, the application will be hosted in the Cloud, similar to what Ontorion Fluent Editor provides, but for the Cloud. Basically my requirements are:
- users should be able to easily define IF/THEN rules from a web browser
- an API should be available so that I can load about 100,000 instances into the Knowledge Base through a program
- the existing ontology described in OWL or RDF can be imported
- an inference engine should exist and allow users to easily run queries (SWRL, SPARQL, etc.) from the same web browser
- a web interface for users can be developed and customized
- all the functionalities described above are available in the Cloud and can be easily configured in terms of CPUs, RAM, etc.
Thank you
Regards,
Sorin
Relevant answer
Answer
Hi João,
For a custom semantic application developed and deployed in the Sofia 2 Cloud, do you know whether there is any way for the user to set or configure the number of nodes allocated to that application? For example, if I want to test how my app performs on a 5-node cluster vs. a 20-node cluster, is this possible? How is resource allocation done on Sofia 2?
Thanks
Sorin
  • asked a question related to Semantic Web
Question
4 answers
Big Data is a big challenge in the Semantic Web: we need to store trillions of triples/quads. This is more than a data warehouse, as the data are connected to each other to form a graph, which is why the term semantic data lake was coined. Querying and inferencing at that scale is then a real challenge. Hadoop is a good choice for achieving scalability on commodity hardware, but my question focuses on the concrete technique for implementing a semantic data lake using Hadoop.
Relevant answer
Answer
Hi Haneef, 
Hadoop can definitely help you out here.
Please go through this link; the article talks about a possible solution.
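One building block that is often described for such setups (a hedged sketch; not necessarily what the linked article proposes) is vertical partitioning: splitting an N-Triples dump by predicate so that each predicate's triples land in their own file or HDFS directory, which MapReduce jobs or SPARQL-on-Hadoop engines can then scan selectively. A minimal plain-Java sketch of that partitioning step:
import java.io.*;
import java.util.*;
public class PredicatePartitioner {
    public static void main(String[] args) throws IOException {
        // Assumption: "dump.nt" is an N-Triples file; one output file is created per predicate
        Map<String, PrintWriter> writers = new HashMap<>();
        try (BufferedReader in = new BufferedReader(new FileReader("dump.nt"))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] parts = line.split(" ", 3);   // subject, predicate, rest of the triple
                if (parts.length < 3) continue;        // skip malformed lines
                String fileName = parts[1].replaceAll("[^A-Za-z0-9]", "_") + ".nt";
                writers.computeIfAbsent(fileName, f -> {
                    try { return new PrintWriter(new FileWriter(f, true)); }
                    catch (IOException e) { throw new UncheckedIOException(e); }
                }).println(line);
            }
        }
        writers.values().forEach(PrintWriter::close);
    }
}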
  • asked a question related to Semantic Web
Question
11 answers
Dear all,
I am working on a research project related to semantic personalization.
In this context, I am wondering about the main differences between the two knowledge bases in terms of technical strengths and weaknesses.
In other words, which knowledge base is more mature, and which one has more APIs available to be used and explored?
Thank you in advance !
Relevant answer
  • asked a question related to Semantic Web
Question
3 answers
I am doing a project on composite web services, for which I need to measure, monitor and report on QoS in order to improve the QoS of the web services.
Relevant answer
Answer
ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $>, Apache-2.0
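ApacheBench reports latency and throughput from the client side; if you also want to collect response times from inside your own composition code, a minimal plain-Java probe (hypothetical endpoint URL, not tied to any particular framework) could look like this:
import java.net.HttpURLConnection;
import java.net.URL;
public class LatencyProbe {
    public static void main(String[] args) throws Exception {
        String endpoint = "http://example.org/service";   // hypothetical web service endpoint
        int runs = 10;
        long totalMs = 0;
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            HttpURLConnection con = (HttpURLConnection) new URL(endpoint).openConnection();
            con.setRequestMethod("GET");
            int status = con.getResponseCode();            // forces the request to be sent
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            totalMs += elapsedMs;
            System.out.println("run " + i + ": HTTP " + status + " in " + elapsedMs + " ms");
            con.disconnect();
        }
        System.out.println("average response time: " + (totalMs / runs) + " ms");
    }
}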
  • asked a question related to Semantic Web
Question
5 answers
Keeping semantic heterogeneity in mind, what are all the possible approaches?
Relevant answer
Answer
Yes, more information would be required in order to answer your question effectively. However, my intuition tells me that you're probably looking for the Simple Knowledge Organisation System (SKOS):
http://www.w3.org/2004/02/skos/
SKOS allows you to define things like broader and narrower relations between various concepts.
However, the WordNet RDF/OWL representation (as referenced by Samir above) is a more linguistic approach.
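A minimal Jena sketch of the kind of broader/narrower structure SKOS gives you (the example namespace and concepts below are made up for illustration):
import org.apache.jena.rdf.model.*;
import org.apache.jena.vocabulary.RDF;
public class SkosSketch {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        String skos = "http://www.w3.org/2004/02/skos/core#";
        String ex   = "http://example.org/concepts#";      // hypothetical namespace
        Resource concept  = model.createResource(skos + "Concept");
        Property broader  = model.createProperty(skos, "broader");
        Property narrower = model.createProperty(skos, "narrower");
        Resource vehicle = model.createResource(ex + "Vehicle").addProperty(RDF.type, concept);
        Resource car     = model.createResource(ex + "Car").addProperty(RDF.type, concept);
        // "Car" is a narrower concept than "Vehicle"
        car.addProperty(broader, vehicle);
        vehicle.addProperty(narrower, car);
        model.write(System.out, "TURTLE");
    }
}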
Happy to keep the dialogue going! Let me know if that helps!
Daniel Lewis
  • asked a question related to Semantic Web
Question
2 answers
I would like to know alternative algorithms of semantic spreading activation that work in the same field of information retrieval.
Relevant answer
Answer
Random walk. Variants are possible. For example this one: start at a certain vertex multiple times and choose a random neighbor at each step, but when a neighbor is chosen that has already been visited in the same run, start again. The logarithm of the count of visits can be used to measure the semantic distance.
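A minimal plain-Java sketch of this restart-on-revisit walk over a toy graph (the nodes below are made up; the attached example is not reproduced here):
import java.util.*;
public class RestartRandomWalk {
    public static void main(String[] args) {
        // Tiny hypothetical semantic net as an adjacency list
        Map<String, List<String>> graph = new HashMap<>();
        graph.put("cancer",    Arrays.asList("diagnosis", "tumor", "therapy"));
        graph.put("diagnosis", Arrays.asList("cancer", "biopsy"));
        graph.put("tumor",     Arrays.asList("cancer", "biopsy"));
        graph.put("therapy",   Arrays.asList("cancer"));
        graph.put("biopsy",    Arrays.asList("diagnosis", "tumor"));
        Map<String, Integer> visits = new HashMap<>();
        Random rnd = new Random(42);
        int runs = 10000;
        for (int i = 0; i < runs; i++) {
            Set<String> seen = new HashSet<>();
            String current = "cancer";                  // fixed start vertex
            seen.add(current);
            while (true) {
                List<String> neighbors = graph.get(current);
                String next = neighbors.get(rnd.nextInt(neighbors.size()));
                if (!seen.add(next)) break;             // already visited in this run -> restart
                visits.merge(next, 1, Integer::sum);
                current = next;
            }
        }
        // A higher log(visit count) indicates a node semantically closer to the start vertex
        visits.forEach((node, count) ->
            System.out.printf("%s: log(visits) = %.2f%n", node, Math.log(count)));
    }
}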
Attached is a sample random walk on cancer-diagnosis from the semantic net.
Regards,
Joachim
  • asked a question related to Semantic Web
Question
10 answers
As OWL is based on the open world assumption it will rather classify entities than validate them in the classic way, since it assumes a non complete knowledge base.
This characteristic had caused me great problems when considering to use OWL for MDE (Model Driven Engineering) to generate a form based knowledge management system.
With forms the user WANTS validation, which must be rather strict and in real-time. The domain and range conditions won't help:
"The fact that domain and range conditions do not behave as constraints and the fact that they can cause ‘unexpected’ classification results can lead problems and unexpected side effects." (A Practical Guide To Building OWL Ontologies Using Protégé 4 and CO-ODE Tools, p36)
I've found Pellet Integrity Constraints (http://clarkparsia.com/pellet/icv/) which adds stricter constraint features to OWL.
This is where the Semantic Web has lost me. Do I really need to extend an already complex system in order to achieve something as simple and common as classic validation?
Have I missed something obvious, or is OWL really unsuitable (or at least very painful) to model closed world systems?
I can see why OWL and the OWA work the way they do and that knowledge bases that aggregate information from different sources can benefit from this concept. However, most systems are (for good reasons) closed world.
Relevant answer
Answer
Hello Simon,
Although ontologies are in essence open-world, they are described as a set of restrictions, so IMO the concrete language (OWL) can perfectly well describe a closed world; the question is one of interpretation. Reasoners, however, are based on the OWA, and because that is the only correct way for them to work with ontologies, getting CWA behaviour is a hard task.
In my OWL2XS tool I had to do a lot of logical querying based on negation to get CWA behaviour. The simplest example: monkeys have a tail and they are mammals; do humans have a tail? Under the OWA they "might". Any OWA reasoner will return an existential quantification hasTail for both mammals and humans unless you specifically negate it. To check this under the CWA, one would have to try to negate it explicitly for the class, which is computationally very intensive.
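A minimal Jena/SPARQL sketch of this querying-based-on-negation idea (hypothetical URIs; this is not the OWL2XS code itself): under a closed-world reading, "does human1 lack a tail?" becomes an ASK query with FILTER NOT EXISTS over the asserted triples:
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.*;
import org.apache.jena.vocabulary.RDF;
public class ClosedWorldCheck {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        String ns = "http://example.org/zoo#";          // hypothetical namespace
        Property hasTail = model.createProperty(ns, "hasTail");
        Resource monkey = model.createResource(ns + "monkey1");
        monkey.addProperty(RDF.type, model.createResource(ns + "Monkey"));
        monkey.addProperty(hasTail, model.createResource(ns + "tail1"));
        Resource human = model.createResource(ns + "human1");
        human.addProperty(RDF.type, model.createResource(ns + "Human"));
        // Note: nothing is asserted about human1 and tails
        String ask =
            "PREFIX zoo: <" + ns + "> " +
            "ASK { zoo:human1 ?p ?o . FILTER NOT EXISTS { zoo:human1 zoo:hasTail ?t } }";
        try (QueryExecution qe = QueryExecutionFactory.create(ask, model)) {
            // Prints true: under the closed-world reading, human1 has no tail,
            // because no hasTail triple is asserted for it
            System.out.println("human1 has no tail (closed world): " + qe.execAsk());
        }
    }
}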
Probably more interesting would be to create a CWA reasoner - it should be very close to an OWL parser!
So my answer is: yes, you can. Nothing prevents you from viewing your ontology as a closed world.
The question is in which scenarios we would need it. In my case I was interested in Web service generation (XML).
Kind regards,
Dmitry
  • asked a question related to Semantic Web
Question
2 answers
How can I generate a key from a web resource for linking it with a semantic approach? And how can I convert normal web content such as HTML or XML into RDF/OWL? I need help developing this semantic linking of web content using automatic key generation, and I would also like to know how to implement the KD2R paper.
Relevant answer
Answer
  • asked a question related to Semantic Web
Question
1 answer
my code:
String sparqlQueryString8="prefix sumo:<http://www.ontologyportal.org/SUMO.owl#>" +
"select ?c where {?c rdf:type sumo:City }";
OntModel modelcity=ModelFactory.createOntologyModel();
modelcity.read("SUMO.owl");
Query querycity = QueryFactory.create(sparqlQueryString8);
QueryExecution qexeccity = QueryExecutionFactory.create(sparqlQueryString8, modelcity);
ResultSet resultcity = qexeccity.execSelect() ;
for (; resultcity.hasNext();) {
QuerySolution sltcity=resultcity.next();
System.out.println(sltcity.toString());
}
please help me.
Relevant answer
Answer
First of all, there is a small syntax error in your SPARQL, remove the ";" at the end of the first line. Second, SUMO does not specify a proper URL as ontology identifier (look at SUMO.owl, it contains the following: <owl:Ontology rdf:about="SUMO">). In Jena, the proper prefix then is the file URL, e.g.:
prefix sumo:<file:///some/path/SUMO.owl#>
If that's not what you want, you could work around this using Jena's prefix mapping capabilities (use something like ParameterizedSparqlString).
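A hedged sketch of that workaround (assuming Jena 3.x and a local copy of SUMO.owl): attach both the rdf: and the sumo: prefixes with ParameterizedSparqlString instead of relying on the undeclared rdf: prefix; writing the PREFIX lines directly into the query string works just as well:
import org.apache.jena.ontology.OntModel;
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.ModelFactory;
public class SumoCityQuery {
    public static void main(String[] args) {
        OntModel model = ModelFactory.createOntologyModel();
        model.read("SUMO.owl");                          // assumption: local copy of SUMO.owl
        ParameterizedSparqlString pss = new ParameterizedSparqlString();
        pss.setNsPrefix("rdf", "http://www.w3.org/1999/02/22-rdf-syntax-ns#");
        pss.setNsPrefix("sumo", "file:///some/path/SUMO.owl#");   // adjust to the actual base, as explained above
        pss.setCommandText("SELECT ?c WHERE { ?c rdf:type sumo:City }");
        try (QueryExecution qexec = QueryExecutionFactory.create(pss.asQuery(), model)) {
            ResultSet results = qexec.execSelect();
            while (results.hasNext()) {
                System.out.println(results.next().get("c"));
            }
        }
    }
}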
  • asked a question related to Semantic Web
Question
6 answers
Actually, I'm getting lost with domain and range semantics when a subsumption exists, in addition to restriction inheritance between class taxonomy members. Please see the following cases.
Let's consider
(1) hasCar Domain driver
(2) driver subClassOf human
Then, can we infer that
hasCar Domain human
Let's have hasCar (x, y) whatever x is
from (1): driver(x) & from (2): human(x)
then: whatever x is, if hasCar(x, y) => driver (x) =>
(3) hasCar Domain human
First Question: Is this conclusion correct? Why is it not inferred by Protege 5 with Hermit, nor by Pellet, nor even by Jena with some reasoner?
------------------------------------------------------------------------------------------------
Let's consider
(1) hasAudiCar Range AudiCar
(2) AudiCar subClassOf Car
In a similar fashion, we can infer that
(3) hasAudiCar Range Car
Second Question: Is this conclusion correct? Why is it not inferred by Protege 5 with Hermit, nor by Pellet, nor even by Jena with some reasoner?
-------------------------------------------------------------------------------------------------
Let's consider
(1) hasAudiCar Domain driver
(2) hasAudiCar Range audiCar
(3) driver hasAudiCar min 1 audiCar
(4) audiCar subClassOf car
Then, we can infer that
driver hasAudiCar min 1 car
Third Question: Is this conclusion correct? Why is it not inferred by Protege 5 with Hermit, nor by Pellet, nor even by Jena with some reasoner?
Surprisingly, using Jena with the specification OntModelSpec.OWL_DL_MEM_RULE_INF gives the results I expected!
Relevant answer
Answer
Median,
First of all, Protege does not show by default all the inferences that you may be interested in seeing, but you can easily activate different inference types through the Reasoner->Configure... menu (in your case you want to check the "Domains" and "Ranges" checkboxes under the "Displayed Object Property Inferences" section).
Second, although I am not a logics expert, I think that the answer to your first and second questions is in fact "No"; that is why I believe Protege does not show them, even if you activate the domain and range inferences in the configuration. Here is my reasoning for your first question:
Although you correctly infer that 
whatever x is, if hasCar(x, y) => driver (x) => human(x),
x will still belong to the subset of those members of the human class that are also members of the driver class (i.e. it will never be a member of the complement of the driver class, such as the non-driver humans - which, BTW, I think is inappropriate modelling in your example, since in reality non-driver persons can own a car), and as such the domain of the hasCar property will remain the driver class and cannot be extended to the entire human class. If you said, for the sake of argument, that there is no human who cannot drive (i.e. the human class is equivalent to the driver class), then Protege would show that the domain of the hasCar property is not just driver but also human.
Same reasoning goes for the range in your second question.
The answer to your third question is "True", but possibly not for the reasons you would think of. The key statements that produce that inference are the range/domain statements, not the subclass relationship between audiCar and car.
You may wonder why this inferable statement is not shown in Protege, even if you check all the possible reasoner configuration options. Simply because an infinite number of statements can be entailed from an ontology, and Protege shows only a certain subset of those, namely the ones commonly of interest to people building or using an ontology; this particular type of inference does not fall into that category.
However, if you write this expression into the DL Query plugin and check the "Equivalent classes" checkbox, you will see that the driver class is returned as an equivalent class of the "driver hasAudiCar min 1 car" expression, and you can even get a nice explanation of why this is the case if you click on the question mark to the right of the driver class in the query results.
I hope this helps.
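For completeness, a minimal Jena sketch (hypothetical URIs) that reproduces the asker's observation that the rule-based OWL reasoner does derive the weaker domain statement of the first question:
import org.apache.jena.ontology.*;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.vocabulary.RDFS;
public class DomainInferenceCheck {
    public static void main(String[] args) {
        // Rule-based OWL reasoning, as mentioned at the end of the question
        OntModel m = ModelFactory.createOntologyModel(OntModelSpec.OWL_DL_MEM_RULE_INF);
        String ns = "http://example.org/cars#";          // hypothetical namespace
        OntClass human  = m.createClass(ns + "Human");
        OntClass driver = m.createClass(ns + "Driver");
        driver.addSuperClass(human);
        ObjectProperty hasCar = m.createObjectProperty(ns + "hasCar");
        hasCar.addDomain(driver);
        // According to the asker's observation this prints true with the rule reasoner,
        // i.e. hasCar rdfs:domain Human is derived as a (weaker) consequence
        System.out.println(m.contains(hasCar, RDFS.domain, human));
    }
}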
  • asked a question related to Semantic Web
Question
3 answers
Hi,
I have a set of ontologies related to the Cultural Heritage domain, created by technical experts, and a textual corpus written by archaeological experts. My problem is that the ontologies need to be filled with archaeological knowledge (about which I do not know much), so I will use the archaeological texts to try to extract the information needed.
I need your recommendations about methods of information extraction.      
And for the ontologies, are there any heuristics to populate an ontology automatically? (I have the T-Box and I have to generate the A-Box.)
Thank you for your interest,
Best regards.
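On the A-Box side, once entities have been extracted from the texts, the purely mechanical step of adding them to the ontology can be as simple as creating individuals of the corresponding T-Box classes. A minimal Jena sketch (hypothetical file, class and individual names, not specific to any Cultural Heritage ontology):
import org.apache.jena.ontology.*;
import org.apache.jena.rdf.model.ModelFactory;
public class AboxPopulator {
    public static void main(String[] args) {
        OntModel model = ModelFactory.createOntologyModel();
        model.read("heritage-tbox.owl");                 // assumption: the existing T-Box file
        String ns = "http://example.org/heritage#";      // hypothetical namespace
        OntClass artifactClass = model.getOntClass(ns + "Artifact");
        // Pretend these strings came out of the information-extraction step
        String[] extracted = { "Amphora_042", "Mosaic_017" };
        for (String name : extracted) {
            Individual ind = artifactClass.createIndividual(ns + name);
            ind.addLabel(name.replace('_', ' '), "en");
        }
        model.write(System.out, "RDF/XML");
    }
}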
Relevant answer
Answer
  • asked a question related to Semantic Web
Question
3 answers
How can I say that a particular tweet is a rumor? I do not want to use any supervised knowledge to identify rumors.
Relevant answer
Answer
Dear Abhishek,
you can start by trying to understand the factors that give rise to rumors.
Some works that may interest you are
Best Regards,
Andreas
  • asked a question related to Semantic Web
Question
1 answer
This resource is SUMO!?
Relevant answer
Answer
Dear Kazem, please check this link below, it might be helpful
Good luck.
  • asked a question related to Semantic Web
Question
2 answers
SUMO OWL = (T-Box) + (A-Box),
but I want only the T-Box.
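One simple heuristic for stripping the A-Box with Jena (a hedged sketch, not an official SUMO tool): keep only the statements whose subject is a declared class or property and write them to a new file. Statements hanging off anonymous restriction nodes are not followed here, so treat it as a starting point:
import org.apache.jena.ontology.*;
import org.apache.jena.rdf.model.*;
import java.io.FileOutputStream;
import java.util.Iterator;
public class TBoxExtractor {
    public static void main(String[] args) throws Exception {
        OntModel full = ModelFactory.createOntologyModel();
        full.read("SUMO.owl");                           // assumption: local copy of SUMO.owl
        Model tbox = ModelFactory.createDefaultModel();
        tbox.setNsPrefixes(full.getNsPrefixMap());
        // Keep every statement whose subject is a named class ...
        for (Iterator<OntClass> it = full.listClasses(); it.hasNext(); ) {
            OntClass c = it.next();
            if (c.isURIResource()) tbox.add(c.listProperties());
        }
        // ... or a property (object, datatype and annotation properties alike)
        for (Iterator<OntProperty> it = full.listAllOntProperties(); it.hasNext(); ) {
            tbox.add(it.next().listProperties());
        }
        tbox.write(new FileOutputStream("SUMO-tbox.owl"), "RDF/XML");
    }
}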
Relevant answer
Answer
  • asked a question related to Semantic Web