Semantics - Science topic
Explore the latest questions and answers in Semantics, and find Semantics experts.
Questions related to Semantics
Real-World Applications and Research Trends of Computer Vision
In an era where visual data is growing exponentially, computer vision is transforming industries, from healthcare and retail to autonomous driving and security. Check out my latest video to dive into how computer vision is reshaping our world with cutting-edge applications and inspiring research advancements.
🔍 Topics Covered:
Practical applications: healthcare, surveillance, autonomous vehicles, and beyond
Emerging trends: deep learning innovations, object detection, semantic segmentation, and more
Future possibilities and research directions in computer vision.
👉 Watch here: https://www.youtube.com/watch?v=bkjqDcprRVs
Let’s unravel the potential of visual intelligence and see how it’s revolutionizing our interaction with technology! #ComputerVision #MachineLearning #AI #DeepLearning #Innovation #TechTrends #Research
How can semantic anomalies be detected in the ESL context, and what are the reasons behind those anomalies at the undergraduate level?
Please check my recent progress with DIKWP Artificial Consciousness and send me your valuable feedback. Here is our work, via this ChatGPT link: https://chatgpt.com/share/e7bf95de-3027-4b3a-8b42-0ec8331873c4
1. a. Who_j knows who_k heard what stories about himself_k?
b. John does (= John knows who_k heard what stories about himself_k).
2. a. Who_j knows what stories about himself_j who_k heard?
b. John does (= John knows what stories about himself_j who_k heard
/ John knows who_k heard what stories about his_j own).
Examples (1a) and (2a) ask questions about the matrix subject 'who', with 'John' italicized in (1b) and (2b) corresponding to the wh-constituents being answered. I am curious about the binding relations in these examples, particularly in (2). Can example (2a) be construed as a question targeting the matrix subject 'who', with 'himself' bound by the matrix subject?
I need to adapt a research questionnaire for my study, but I really cannot find anything yet. My study is qualitative.
What are the mechanisms and the syntactic, lexical, and semantic tests for the fixedness of adverbial locutions?
How does one ethically deal with typos? Why? I would first follow tradition (traditional meanings), secondly risk analysis (risks of interpretation), and thirdly skin in the game (the right to opine depends on the price the person pays for incorrectness). On a side note, those ethics lead me to negative utilitarianism for an open society. The virtues depend on enlightenment instead of goodness; thus they may be empathy, common sense, and symmetry. Stimuli:
Politics:
General Ethics:
Metaphysics:
Linguistics:
This is what they say on etymonline.com:
"late 14c., auctorisen, autorisen, "give formal approval or sanction to," also "confirm as authentic or true; regard (a book) as correct or trustworthy," from Old French autoriser, auctoriser "authorize, give authority to" (12c.) and directly from Medieval Latin auctorizare, from auctor (see author (n.))."
Are there studies that analyze the most frequent errors of ChatGPT in generating outputs in the Spanish language (such as grammatical, syntactic, semantic errors, etc.)?
CASE GRAMMAR: A MERGER OF SYNTAX AND SEMANTICS
Charles Fillmore’s Deep Cases are determined not by syntax, but rather by semantics. Rather than having Subject, Indirect Object and Direct Object, Fillmore uses such terms as Agent, Experiencer, Instrument, and Patient.
The semantic features often occur in contrasting pairs, like Animate vs. Inanimate, and Cause vs. Effect. Thus:
Agent: Animate Cause
Experiencer: Animate Effect
Instrument: Inanimate Cause
Patient: Inanimate Effect
In an Active Sentence the most active Deep Case is eligible to become the Subject and the least active is eligible to become the Direct Object.
In a Passive Sentence the least active Deep Case is eligible to become the Subject and the most active case becomes an Object of the Preposition “by.”
Normally, the most active deep case is selected as the subject of the sentence:
The Actor if there is one
If not, the Instrument if there is one
If there is no Actor or Instrument, the Object becomes eligible. Therefore we have the following:
The boy opened the door with the key.
The key opened the door.
The door opened.
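The selection hierarchy behind these three examples can be sketched in a few lines of code (an illustrative toy of my own, not Fillmore's formalism):

```python
# Illustrative sketch of Fillmore-style subject selection in an active sentence:
# walk the deep-case hierarchy Agent > Instrument > Patient and pick the most
# active case that is present. Names and structure here are my own.
HIERARCHY = ["Agent", "Instrument", "Patient"]

def select_subject(cases):
    """Return the most active deep case present; it surfaces as the subject."""
    for case in HIERARCHY:
        if case in cases:
            return case
    return None

# "The boy opened the door with the key." -> Agent becomes subject
print(select_subject({"Agent": "the boy", "Instrument": "the key", "Patient": "the door"}))  # Agent
# "The key opened the door." -> no Agent, so Instrument
print(select_subject({"Instrument": "the key", "Patient": "the door"}))  # Instrument
# "The door opened." -> only the Patient remains
print(select_subject({"Patient": "the door"}))  # Patient
```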
Is Case Grammar an effective method for showing the interrelationships between syntax and semantics?
What are rhetorical stylistic considerations for speakers or writers to negotiate "distance...with regards to a question or a problem"? I'm grateful to Nick Turnbull at The University of Manchester, for neatly describing this perspective of rhetoric offered by Michel Meyer. [In Turnbull, Nick (2006) "Problematology and Contingency in the Social Sciences; (2017) "Political Rhetoric and its Relationship to Context: a new theory of the rhetorical situation, the rhetorical and the political"]
One example is mentioned in my book chapter, "Reform Advocacy of Michael Kirby." Link at:
Chapter Reform Advocacy of Michael Kirby
Associate Justice Scalia of the United States Supreme Court was politely but firmly invited to probe a broader view of originalism as long ago as 2010 when he visited Australia – by The Hon Michael Kirby AC, CMG, international human rights jurist and former justice of the High Court of Australia (1996-2009).
It appears the current propagators of originalism must rely on some willful blindness to conveniently overlook the recorded suggestions from the Founders of the United States that the Constitution would need to be interpreted, adjusted, or changed to accommodate unforeseen or unforeseeable circumstances.
This is just one of the ways that Kirby imaginatively uses language to invite openness to new understandings.
Other thoughts?
I can find one of my papers titled 'Mission Statement Analysis of Selected Public Sector and Private Sector Banks in India' cited in three works as per semantic scholar, but google scholar does not highlight the same. Don't know why. Could anyone help?
On the same topic: I am confused when reading papers and writing my own paper. I hope someone could point the way.
The word 'representation' has been defined by Derrida as the 'reproduction of presentations', taking the prefix 're-' to mean 'again'.
When representation can only reproduce presences again, how can it reach into the future? Is that not problematic, as the future is not present and can therefore not be reproduced again?
Why is this relevant? The German word 'Vorstellung' (as e.g. in Schopenhauer's 'Die Welt als Wille und Vorstellung') is made up of 'vor' and 'stellung': for, in front of (or 'fore' as in 'foreground') and to place, to put. Whereas 'Vorstellung' directs someone's orientation forward, representation casts someone back into the past. Always. Inescapably. But a Vorstellung will always be in front of you, wherever you turn.
It is easy to relate Vorstellung to the future, as that will be any account, image, or presentation of the future to be placed in front of anyone. (This is also why it is different from 'imagination', as you imagine something by yourself, in your head as it were. A Vorstellung can be imaginary, but also, similar to a play or a performance (also 'Vorstellung'), physically outside of yourself, in front of you. Imagination cannot.)
So semantically Vorstellung can cover times ahead. But can representation?
And what does that mean for communication theory, when its dominant defining concept cannot address the future?
#representation #communication theory #cultural studies #Stuart Hall #linguistic relativity #vorstellung
If any researcher is interested in this topic and would like to participate as a guest editor in a special issue to be published next year, please let me know.
Introduction
Change is one of the laws of life throughout human history. The conditions of an individual's life, his customs, traditions, and values are constantly changing; consequently, the referents of many words in a language, and the contexts in which they are used, are subject to change over time. Semantic change is an inevitable process which many people find most interesting. The interest stems from its connection with the life, literature, and culture of communities.
I’m looking for studies that investigate the semantic change that has occurred in the languages of Mesopotamia and the Levant by following the changes from the ancient eras of Sumer and Akkad to the contemporary period. To do that, the researcher needs to compile a lexicon of at least 100 words in the Sumerian and Akkadian languages, or in other ancient languages such as Aramaic. The semantic change in these words should be linguistically and socioculturally examined and evaluated, both quantitatively and qualitatively.
Historical Background
While the two ethnic groups of Sumer and Akkad had been interacting religiously, culturally, and linguistically to create the history of ancient Iraq, the Arameans had been developing their historical status in the northwest (modern Syria), in what some historians call the “land of Aram”. Apart from the tense relations and continuous wars between the new Mesopotamian citizens of the post-Akkadian era in Assur (2012-605 B.C.) and Babylon (1670-320 B.C.) on the eastern side of the Euphrates and the Arameans on the western side, the Aramaic language became involved in the linguistic society of Mesopotamia.
It is important to note that relations between the Arameans and Arabs appear for the first time in some Assyrian inscriptions of around 880 B.C., in which there is a reference to a rebellion of an Aramaic city-state (Bait Zemani) against the Assyrian king Assurbanipal. The Arabs of Hijaz supported the Arameans due to several linguistic and religious commonalities. Thus, the Syriac-Aramaic language was the most popular language in the Fertile Crescent during the first years of Islam in the 7th century A.D. (Thuwainy, 2013:162-63).
Example: Sumerian [Da.Ab.E (n.)]- Akkadian [Adapu (n.)]- Arabic[Adeb (n.)]- Iraqi Arabic [adeb (n.) & te’deb].
The original Sumerian meaning related to speech was transferred to the Akkadian language to refer to music attached with the meaning of wisdom. In classical Arabic, the meaning is associated with education and righteous behavior, whereas it is more associated with the different genres of literature in standard Arabic. In the Iraqi dialect, the meaning is usually associated with giving orders or threats to an inferior person to behave himself/herself. Possible causes of change: Cultural causes, degree of formality (the Iraqi-dialect usage is bound to informal contexts).
For more details have a look at my paper:
How do semantic segmentation methods (UNet) and instance segmentation methods (Mask R-CNN) rely on convolutional operations to learn spatial contextual information?
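As a toy illustration of the underlying mechanism (not UNet or Mask R-CNN themselves): each convolutional output value is a weighted sum over a local spatial neighborhood, and stacking such layers is what lets these networks accumulate contextual information over larger receptive fields.

```python
# Minimal 2D "valid" convolution in pure Python: each output cell is a weighted
# sum over a k x k spatial neighborhood, which is how conv layers aggregate
# local context. Deep stacks of such layers grow the effective receptive field.
def conv2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
avg = [[0.25, 0.25],
       [0.25, 0.25]]  # 2x2 averaging kernel
print(conv2d(img, avg))  # [[3.0, 4.0], [6.0, 7.0]]
```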
Do you like papers on semantics, framing, argumentation and rhetoric?
What is the difference between semantic consistency and logical consistency?
Decolonial Computing is an innovative concept for highlighting the benefits of artificial intelligence in providing economic diversity and inclusion. For this to happen, the world of data analytics is constantly evolving, and the tools and techniques that data analysts use to extract insights from data are continually changing. I am looking for Semantic Cluster Analysis or something similar.
I am currently working on a project, part of which is for presentation at JK30 this year in March hosted at SFU, and I have been extensively searching for a part of speech (POS) segmenter/tagger capable of handling Korean text.
The one I currently have access to and could make execute is relatively outdated and requires many modifications to execute runs on the data.
I do not have a strong background in Python and have zero background in Java and my operating system is Windows.
I wonder if anyone may be able to recommend the best way to go about segmenting Korean text data so that I can examine collocates with the aim of determining semantic prosody, and/or point me in the direction of a suitable program/software.
Stoic logic, and in particular the work of Chrysippus (c. 279 – c. 206 BC), has only come down to us in fragments. To my knowledge the most accessible account is given in Sextus' Outlines of Pyrrhonism. Stoic logic certainly contained an axiomatic-deductive presentation of what we call today the 'propositional calculus'. The deductive system was based on both axioms and rules and appears to have been similar to Gentzen's sequent calculus. Certain accounts (by Cicero, if I am not mistaken) suggest that it included the analog of the 'cut rule'. There are tons of remaining questions. Was this propositional calculus classical or intuitionistic? What type of negation did it employ? Was it closer to relevance logics and many-valued logics or even to linear logic? How did the Stoics treat modality? What about the liar paradox? How did they deal with quantification? Was it in combinatory logic style or algebra of relations style?
I noticed that there is a structural similarity between the syntactic operations of Bealer's logic (see my paper "Bealer's Intensional Logic" that I uploaded to Researchgate for my interpretation of these operations) and the notion of non-symmetric operad. However for the correspondence to be complete I need a diagonalisation operation.
Consider an operad P with P(n) the set of functions from the cartesian product X^n to X.
Then I need operations D_ij : P(n) -> P(n-1) which identify the variables x_i and x_j.
Has this been considered in the literature?
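For concreteness, here is a minimal sketch of such a diagonalisation on the endomorphism operad of a set (0-based indexing; the function names are my own, not from Bealer's system or the operad literature):

```python
# Sketch of a diagonalisation operation on the endomorphism operad of a set:
# P(n) = functions X^n -> X. D_ij identifies the i-th and j-th arguments,
# yielding a function of n-1 variables. Indexing is 0-based and illustrative.
def diagonalise(f, n, i, j):
    """Return g in P(n-1) with g(x_0,...,x_{n-2}) = f(..., x_i at slot j, ...)."""
    assert 0 <= i < j < n
    def g(*args):
        assert len(args) == n - 1
        # re-insert args[i] at position j so f sees n arguments, slots i and j equal
        full = list(args[:j]) + [args[i]] + list(args[j:])
        return f(*full)
    return g

# Example: f(x, y, z) = x + 10*y + 100*z; identify variables 0 and 2
f = lambda x, y, z: x + 10 * y + 100 * z
g = diagonalise(f, 3, 0, 2)   # g(x, y) = f(x, y, x)
print(g(1, 2))                # 1 + 20 + 100 = 121
```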
When we use bounding boxes, they contain irrelevant information. I want to use semantic labeling in YOLOv7. Is this possible?
Some articles or books are not available, you need to send a request. I'm interested in 'Political Correctness: A History of Semantics and Culture' by Geoffrey Hughes.
What will happen when I send a request? Will I get access to the book? If so, is it free?
Thank you and best,
Weronika
Do you prefer Google Scholar, Semantic Scholar, Internet Archive Scholar or others for finding information?
How can we use semantic segmentation for handwritten character recognition?
I am struggling to find an appropriate method to investigate frames in resignation speeches for my bachelor's thesis. It is supposed to be in the field of Semantics (if possible, suitable for Fillmore's definition of frames).
Any idea is highly appreciated.
Hello,
In an fMRI study, I will use a categorization test where the competition between prime and target is important. What do you think the prime and target durations should be?
Best regards
Hello friends,
I encountered a problem regarding the evaluation of semantic segmentation. Qualitatively, the ground truth and prediction are quite similar, but the Dice score is small, about 0.56. How is this possible? If anyone has encountered the same problem, I would greatly appreciate it if you could share your experience and recommendations with me.
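For what it's worth, a minimal reference implementation of the binary Dice coefficient can help rule out pipeline issues; in my experience, comparing an unthresholded probability map against a binary mask, or a foreground/background label swap, are common causes of a deceptively low score.

```python
# Minimal binary Dice coefficient to sanity-check a pipeline's metric.
# A frequent cause of "looks right but Dice is low" is comparing a soft
# probability map against a {0,1} mask without thresholding, or a
# foreground/background label swap.
def dice(pred, truth, thresh=0.5):
    p = [1 if v >= thresh else 0 for v in pred]
    t = [1 if v >= thresh else 0 for v in truth]
    inter = sum(a * b for a, b in zip(p, t))
    total = sum(p) + sum(t)
    return 1.0 if total == 0 else 2.0 * inter / total

truth = [0, 1, 1, 1, 0, 0]
print(dice([0, 1, 1, 0, 0, 0], truth))   # 0.8
print(dice([1, 0, 0, 0, 1, 1], truth))   # 0.0  (label swap collapses the score)
```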
For example, with "when mom dressed the baby spit on the bed": does the misunderstanding of this garden-path sentence result from semantics?
When we think about semantic software, descriptive RDF files of HTML pages and mapping of relational databases to RDF immediately come to mind, however, does semantic software development only include those aspects?
Dear colleagues,
On behalf of the research team I head, I ask you to act as an expert in the framework of our international expert study "Possibilities and features of the formation of a worldview in the digital environment". Your expert opinions are extremely important to us when conducting this study.
The main goal of the project is to study the fundamental structural and substantive features of the formation of a modern worldview in the digital environment in the context of global technological transformations. The study is aimed at determining the potential of the influence of modern digital technologies on the value and semantic foundations of the traditional worldview, as well as at studying the value and semantic neutrality of digital actors, technologies, algorithms, and the digital space itself.
Based on the results of the work, our research team will organize a dissemination seminar for the experts who took part in the study in November 2022, within the framework of which the results of the study will be presented. We hope to see you among the participants of the seminar.
In advance, I express my deep gratitude on behalf of our research team for the time you have spent!
To participate in the survey, you can follow the link:
Due to the presence of open questions in the questionnaire, we recommend that you use a desktop computer or laptop in your work, as filling out answers to open questions from a mobile phone can cause you inconvenience.
The desirable deadline for filling out the questionnaire is September 30, 2022.
If you have any questions, you can always contact us by e-mail: sergey@volodenkov.ru
Sincerely,
Sergey Volodenkov
Head of the scientific project,
Doctor of Political Sciences,
Professor of the Public Policy Department,
Faculty of Political Science,
Lomonosov Moscow State University
Dear Researchers,
Could you please give your ideas and share resources about how document verification may be achieved using semantic analysis? Is there any tool or technique? Suggestions including simple and easy techniques would be great. Thanks.
In my research, I have used the mini-IPIP questionnaire. Now that I want to clean and analyze the data, I see that the number of respondents who have answered negatively-worded and positively-worded items (semantic antonyms) in the same direction, or have responded to positively-worded items of one variable in opposite directions, is too high (more than 30 percent).
The mini-IPIP scoring key sums up all the items of each variable, does it mean that I don’t have to consider these as careless responses?
I also have used Roberts, 1996 “Perceived Consumer Effectiveness” items and have the same problem. Although two items are negatively worded, Roberts has not mentioned the reverse question in the main article.
Recently, a new term, 'goal-oriented communications', has appeared, especially in research on semantic communication for wireless networks. Is this basically another term for end-to-end communication, or is there more to it?
What is the difference between node-level attention and semantic-level attention methods based on meta-paths in graphs?
Which technique is more applicable for real-time measurement of semantic similarity and semantic clustering? For example, classifying students' answers during an online session into different clusters based on their similarities. The sentences could be from any domain.
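As a point of comparison, even a plain bag-of-words cosine similarity is cheap enough for real-time use (embedding-based methods are usually more accurate semantically, but cost more per call). A minimal sketch with made-up example sentences:

```python
# Illustrative bag-of-words cosine similarity in pure Python: a cheap,
# real-time-friendly baseline for grouping short student answers by surface
# overlap. Embedding models capture semantics better but are slower per call.
import math
from collections import Counter

def cosine(a, b):
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return 0.0 if na == 0 or nb == 0 else dot / (na * nb)

print(round(cosine("water boils at 100 degrees", "water boils at 100 degrees"), 2))  # 1.0
print(round(cosine("water boils at 100 degrees", "ice melts at 0 degrees"), 2))      # 0.4
```

Answers whose pairwise similarity exceeds a chosen threshold can then be placed in the same cluster.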
Suppose there are two sentences that I want to split apart in order to understand or find their semantics, but there is no delimiter between them. How can the system identify and split the sentences without using a delimiter?
If anyone has an idea or has gone through this issue, please suggest a solution or tell me whether any tool is available.
I would like to discuss semantic density and semantic gravity in relation to physics concepts.
Referential and model-theoretic semantics has wide applications in linguistics, cognitive science, philosophy and many other areas. These formal systems incorporate the notion - first introduced by the father of analytic philosophy Gottlob Frege more than a century ago - that words correspond to things. The term ‘2’ denotes or refers to the number two. The name ‘Peter’ refers to Peter, the general term ‘water’ refers to H2O and so on. This simple idea later enabled Alfred Tarski to reintroduce the notion of ‘Truth’ into formal logic in a precise way, after it had been driven out by the logical positivists. Willard Van Orman Quine, one of the most important analytic philosophers of the last century, devoted most of his career to understanding this notion. Reference is central to the work of people such as Saul Kripke, David Lewis, Hilary Putnam, and many others.
Furthermore, the idea of a correspondence between whole expressions (sentences or propositions) and states of the world or facts drives the recent developments in philosophy of language and metaphysics under the labels of ‘Grounding’ and ‘Truthmaking’, where a state of the world or a fact is taken to “make true” a sentence or a proposition. For example, the sentence “Snow is white.” is made true by (or grounded in) the fact that snow is white. [1]
Given that this humble notion is of such importance to contemporary analytic philosophy, one may wonder why the father of modern linguistics, and a driving force in the field ever since the (second) cognitive revolution in the nineteen-fifties, has argued for decades that natural language has no reference. Sure, we use words to refer to things, but usage is an action. Actions involve things like intentions, beliefs, desires, etc. And thus, actions are vastly more complicated than the semantic notion of reference suggests. On Chomsky’s view, then, natural language (might) not have semantics, but only syntax and pragmatics.
On Chomsky’s account, syntax is a formal representation of physically realized processes in the mind-brain of an organism, which allows him to explain why semantics yields such robust results (a fact that he now acknowledges). What we call ‘semantics’ is in fact a formal representation of physically realized processes in the mind-brain of an organism – us. [2]
Chomsky has argued for this for a very long time and, according to him, to no avail. In fact, I only found discussion of this by philosophers long after I learned about his work. No one in a department that leans heavily toward philosophy of language, metaphysics, and logic ever mentioned Chomsky’s views on this core notion to us students. To be fair, some in the field seem to be starting to pay attention. For instance, Kit Fine, one of the leading figures in contemporary metaphysics, addresses Chomsky’s view in a recent article (and rejects it). [3]
The main reason why I open this thread is that I recently came across an article that provides strong independent support for Chomsky’s position. In their article “Fitness Beats Truth in the Evolution of Perception”, Chetan Prakash et al. use evolutionary game theory to show that the likelihood that higher organisms have evolved to see the world as it is (to have veridical perception) is exceedingly small. [4]
Evolutionary game theory takes the formalism originally developed by John von Neumann to analyze economic behavior and applies it in the context of natural selection. Thus, an evolutionary game is a game where at least two types of organisms compete over the same resources. By comparing different possible strategies, one can compute the likelihood of a stable equilibrium. [5]
Prakash et al. apply this concept to the evolution of perception. Simplifying a bit, we can take a veridical perception to be a perceptual state x of an organism such that x corresponds to some world state w. Suppose there are two strategies: one where the organism estimates the world state that is most likely to be the true state of the world, and another where the organism estimates which perceptual state yields the highest fitness. Then, the first strategy is consistently driven into extinction.
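The extinction dynamic can be illustrated with a toy discrete replicator model (the payoff numbers below are invented for illustration only and are not taken from Prakash et al.):

```python
# Toy discrete replicator dynamics (payoff numbers are invented for
# illustration, not from Prakash et al.): a "Fitness" strategy earning
# slightly more than a "Truth" strategy drives the latter's population
# share toward zero, even from a 99% starting majority.
def replicator(p_truth, payoff_truth, payoff_fitness, steps):
    for _ in range(steps):
        avg = p_truth * payoff_truth + (1 - p_truth) * payoff_fitness
        p_truth *= payoff_truth / avg   # share grows/shrinks with relative payoff
    return p_truth

share = replicator(p_truth=0.99, payoff_truth=1.0, payoff_fitness=1.1, steps=500)
print(share < 0.01)  # True: "Truth" is driven (numerically) to extinction
```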
Now, compare this with reference: Some word (here taken to be a mental state) refers to a thing or a state of the world such that there is a one-to-one correspondence between the word and the world. It seems that this is an analogous situation. And thus, it should be equally unlikely that we have evolved to have reference in natural language. Any such claim needs empirical evidence and this is what Chomsky provides.
Chomsky’s main evidence comes from a test. I frame the test in terms of truthmaking. Consider the basic idea again:
- The sentence A is made true by (or grounded in) the fact that A obtains.
Now, if this is true, then one would expect that the meaning of A changes as the world changes. We take a fact to be something that our best scientific theories can identify. In other words, we take objective reality to be whatever science tells us it is. Then we systematically vary physically identifiable aspects of the world and see how the meaning of a term that is supposed to pick out these aspects changes. The hypothesis is that if there is reference or correspondence, then changes on one side should be correlated with changes on the other. If this is not the case, then there is no one-to-one correspondence between words and things, and thus natural language is not related to the physical world.
I give three examples, often discussed by Chomsky, to illustrate how this works: Consider the term ‘water’, embedded in the sentence “The water flows in the river.” Then, what flows in the river should be H2O. Suppose there is a chemical plant upstream and suppose there is an accident. There may be very few H2O molecules left, but it is still a river, it’s still water. So, we have enormous change in the world, but no change in meaning.
Or suppose you put a teabag into a cup of water. The chemical change may be undetectably small, but if you order tea and you get water, you wouldn’t be amused. So, virtually no change in the physical world and a clear change in meaning.
Last, consider a standard plot of a fairy tale. The evil witch turns the handsome prince into a frog; the story continues, and at the end the beautiful princess kisses the frog and turns him back into the prince. Any child knows that the frog was the prince all along. All physical properties have changed, but no child has any difficulty tracking the prince. What this suggests is that object permanence does not depend on the physical world, but on our mind-internal processes.
This test has been carried out for a large number of simple concepts; in all cases, there is no correlation between physically identifiable aspects of the world and words. Notice that the test utilizes a dynamic approach: only if we look at changes do we see what is going on.
So, counterintuitive as this may seem, the evidence from the test supports the argument from evolutionary biology that developing concepts that correspond to the world is no advantage at all. And so, we shouldn’t be surprised that this is what we find, once we look closely.
On the other hand, does this conclusively prove that there is no relation between our concepts and the physical world? Not really; after all, the logical structure of language is there. But it suggests that we should look to the mind for a connection between words and the world, if we want to show that language has reference in the technical sense.
Sven Beecken
[1] https://www.researchgate.net/publication/338557376_Ground_and_Truthmaker_Semantics
[2] Chomsky, Noam (2016). What Kind of Creatures Are We? Columbia Themes in Philosophy. Columbia University Press.
[3] https://www.researchgate.net/publication/338549555_The_Identity_of_Social_Groups
[4] http://cogsci.uci.edu/~ddhoff/FitnessBeatsTruth_apa_PBR
[5] https://plato.stanford.edu/entries/game-evolutionary/
I'm trying to analyze cognitive information that was measured in a questionnaire by a 5-item semantic scale and a 2-item Likert scale. Can I combine both into one variable? How can I transform them onto a common scale?
I've been trying to research all over and can't find the answer :(
My hypothesis is comparing emotional bond and cognitive information and stating that emotional bond has a stronger impact on purchase intention.
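One common way to put two differently formatted scales on a common footing (assuming both item sets measure the same construct, which should be checked, e.g. via reliability) is to standardize each to z-scores before averaging. A minimal sketch with hypothetical data:

```python
# One common approach (assuming both item sets tap the same construct):
# convert each scale to z-scores, then combine across scales per respondent.
import statistics

def zscores(values):
    mu, sd = statistics.mean(values), statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

semantic = [1, 2, 3, 4, 5]   # hypothetical scores on a 5-point semantic scale
likert   = [2, 4, 6, 8, 10]  # hypothetical scores on a different range
z_sem, z_lik = zscores(semantic), zscores(likert)
combined = [(a + b) / 2 for a, b in zip(z_sem, z_lik)]  # per-respondent mean
print([round(c, 2) for c in combined])  # [-1.41, -0.71, 0.0, 0.71, 1.41]
```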
In a new project I want to capture emotions in texts written by students during their studies.
I assume that the majority of these texts are factual and contain few emotions.
- Am I wrong, do student texts contain emotions from a semantic or psycholinguistic point of view?
- Is there any literature on semantic, psycholinguistic text analyses or sentiment analyses of student texts written during their studies?
My essay is an attempt to answer the following: "Is the data economy, then, destined to benefit only a few elite firms?" Apparently that has been the issue till now. What tools are available to avoid this false target? In my essay on Stochastic Models, in particular the section "Handling the human socio-technical dimension, in particular the man-system interface, including positioning technology at man's service", you may find guidelines to produce these tools and make BIG DATA exploitable by the large majority of users:
1. The engine should trace "player" behaviour, evaluate its capabilities, and quickly meet its needs.
2. Immersion generated by simulation enables training and experimentation with behaviour strategies, in particular learning "by doing".
3. The engine should use the following resources:
3.1. Tools to be customized by trainers.
3.2. Applied standards.
3.3. Discovery of new learning approaches through obtained results, whether these approaches are positive or negative, in the sense of improving the technology performance of assembled prototypes.
4. How may SPDF (Standard Process Description Format) produce a universal engine to run the stochastic model?
4.1. SPDF consists of two parts:
4.1.1. A message structured-data part (including semantics) and
4.1.2. A process description part (with a higher level of semantics).
4.2. Two key outputs of the SPDF research will be a process description specification and a framework for the extraction of semantics from legacy systems.
4.2.3. Note that: a) The more semantic rules we have, the better unpredictable events are controlled. b) The knowledge needed to elaborate semantic rules for unpredictable events requires many runs of the stochastic model. c) Convergence shall not be reached until more qualitative semantic rules are obtained. d) Dynamically performing a given scenario is the goal of the proposed messaging system.
Dear Researchers,
We are trying to implement semantic Geospatial data infrastructure and want to use OWL files with Geonetwork.
Any hint on how to link ontology files with Geonetwork will be greatly appreciated.
Thank you very much for your time.
Regards
Ali Madad
Hi, I'm looking for the normative data of the semantic task "clothes" ("ropa") from the Spanish Verbal Fluency Assessment in a sample of young adults (20-49 years old). If someone has it, please, tell me, it would be extremely helpful for my current research. Thank you so much.
Is there any open-source algorithm/software/tool available for manually labelling fashion images for semantic segmentation in an end-to-end manner?
I'm looking for a method, a function, or an API which checks whether a character string has semantics or not (i.e., whether it represents a word that has a meaning, or is a random letter string).
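A minimal version of such a check is simple lexicon membership; the word list below is a tiny illustrative stand-in, and in practice one would load a real dictionary file (e.g. a system word list or a corpus-derived vocabulary):

```python
# Sketch: check whether a string is a known word by membership in a lexicon.
# LEXICON here is a tiny illustrative stand-in; load a real word list
# (e.g. a system dictionary file or corpus vocabulary) for actual use.
LEXICON = {"semantics", "meaning", "word", "language"}

def is_word(s, lexicon=LEXICON):
    return s.lower() in lexicon

print(is_word("Semantics"))  # True
print(is_word("xqzrtv"))     # False
```

This catches exact matches only; handling inflected forms would additionally require a lemmatizer or a larger inflected word list.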
I know what contrastive learning is, and I know what other traditional segmentation losses are. What I understand is that the goal of contrastive loss is basically to pull similar things together and push dissimilar things apart. But I want to know how this can guide a segmentation pipeline (e.g. semantic segmentation)? My question is pretty basic. Blogs/video links are more welcome than research paper links.
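To make the "pull together, push apart" idea concrete, here is a minimal pure-Python InfoNCE-style loss. In contrastive segmentation setups, the anchor and positive would be embeddings of pixels from the same class, the negatives embeddings of pixels from other classes, and this loss is typically added alongside the usual cross-entropy to shape the feature space:

```python
# Minimal InfoNCE-style contrastive loss in pure Python. In contrastive
# segmentation, "anchor" and "positive" would be embeddings of pixels of the
# same class, and "negatives" embeddings of pixels from other classes.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def info_nce(anchor, positive, negatives, temperature=0.5):
    pos = math.exp(dot(anchor, positive) / temperature)
    neg = sum(math.exp(dot(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))  # low when anchor ~ positive

anchor    = [1.0, 0.0]
positive  = [1.0, 0.0]       # same-class pixel: high similarity -> low loss
negatives = [[0.0, 1.0]]     # other-class pixel
loss_good = info_nce(anchor, positive, negatives)
loss_bad  = info_nce(anchor, [0.0, 1.0], [[1.0, 0.0]])  # mismatched pair
print(loss_good < loss_bad)  # True: loss falls as same-class pixels cluster
```

Minimizing this over many pixel pairs pulls same-class embeddings together and pushes different-class embeddings apart, which in turn sharpens the decision boundaries the segmentation head can draw.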
I have built a semantic segmentation network using the SegNet layers (in MATLAB) to identify circular and pseudo-circular objects in a series of grayscale images.
I have trained the model with the training dataset stored as an imageDatastore (imds), and would now like to test it with the testdata stored as an imds as well.
Could anyone tell me how do I do that?
I got inspired by reading a short paper written by Jonathan Tennant, entitled “Web of Science and Scopus are not global databases of knowledge” (2020). There I heard about some databases for the first time, like the Garuda portal or African Journals Online. It got me wondering: what else is out there that I do not know about, because my point of view is limited to the languages I speak and the place I live in?
So, my idea here is that we can share academic databases that we are familiar with and perhaps are not very well known in other countries or continents. Where do you do your research?
Here is a list of interesting links that I have collected, without strict criteria, reflecting my point of view as a Brazilian researcher in the field of Psychology.
Portal de Periódicos CAPES
Biblioteca Digital Brasileira de Teses e Dissertações (BDTD)
Portal brasileiro de publicações científicas em acesso aberto
Emerging Research Information (Preprints)
Sumários de revistas brasileiras
Scientific Electronic Library Online (SciELO) internacional
Scientific Electronic Library Online (SciELO) Brasil
SciELO Livros
SciELO Preprints
Periódicos Eletrônicos de Psicologia (PePSIC)
Biblioteca Virtual de Psicologia
Base de datos de Psicología (PSICODOC)
Biblioteca Virtual em Saúde (BVS)
Literatura Latino-americana e do Caribe em Ciências da Saúde (LILACS)
The Directory of Open Access Journals (DOAJ)
Red de Revistas Científicas de América Latina y el Caribe, España y Portugal (Redalyc)
Consejo Latinoamericano de Ciencias Sociales (CLACSO)
Directory of Open Access Books
OpenEdition
Open Book Publishers
JURN
Bielefeld Academic Search Engine (BASE)
Science Open
Lens
Dimensions
Semantic Scholar
Repositórios científicos de acesso aberto de Portugal
J-Stage
African Journals Online
Directory for Arabian Journals
Iraqi Scientific Journals
Garuda
CAIRN
Érudit
Persée
HAL Archives ouvertes
Perspectivia
In the 1980s Bealer wrote Quality and Concept, which presented a type-free first-order approach
to intensional logic to compete with other higher-order, type-theoretic, and modal approaches.
The presentation (both in the book and in a published article) is very sketchy (some non-trivial lemmas are merely stated) and not easy to follow.
I was so impressed and intrigued by Bealer's philosophical arguments based on his system that I took it upon myself to clarify the presentation of his intensional logic and to furnish detailed proofs of the soundness and completeness results, which I hope might interest a larger audience. I wrote a paper containing this material which gives a general philosophical motivation and points out some open problems. I was interested in being sure of the correctness of these results before advancing to purely philosophical discussions on the advantage of this approach.
What would be a good journal to submit this paper to?
Hello ResearchGate Community,
I would like to know the different techniques used to maintain the semantic aspect of knowledge representation (apart from ontologies).
Best regards,
In her English Verb Classes and Alternations: A Preliminary Investigation, Levin (1993) proposes classifications for English verbs. Is this classification syntactic in nature, semantic, or both?
Hi everyone. Recently I designed a customized semantic segmentation network with 31 layers and SGDM optimization to segment plant leaf regions from complicated backgrounds. Can anyone help me explain this with mathematical expressions from image processing? Thank you.
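One standard way to describe such a network mathematically (assuming a softmax output layer, pixel-wise cross-entropy loss, and the usual SGDM update; adapt the symbols to your actual architecture):

```latex
% per-pixel class probabilities from the network's score maps z
\hat{p}_{i,c} = \frac{\exp(z_{i,c})}{\sum_{k=1}^{C} \exp(z_{i,k})}

% pixel-wise cross-entropy loss over N pixels and C classes
\mathcal{L}(\theta) = -\frac{1}{N} \sum_{i=1}^{N} \sum_{c=1}^{C} y_{i,c} \, \log \hat{p}_{i,c}

% SGDM (stochastic gradient descent with momentum) parameter update
v_{t+1} = \gamma\, v_t + \eta\, \nabla_{\theta} \mathcal{L}(\theta_t), \qquad
\theta_{t+1} = \theta_t - v_{t+1}
```

Here $z_{i,c}$ is the score the 31-layer network assigns to class $c$ at pixel $i$, $y_{i,c}$ is the one-hot ground-truth label (leaf vs. background), $\gamma$ is the momentum coefficient, and $\eta$ the learning rate.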
I could not find how to add semantics to my questions on Al-Quran.
My MA thesis is going to be conducted on the syntax and semantics of food/fruit idioms in English.
Any sources and previous studies will be highly appreciated.
Thanks in advance.
I'm beginning to think that this distinction is not as clear-cut as it has traditionally been taken for granted. Consider the following example: "She may like this one" (uttered by a friend who is helping you find a dress for your girlfriend). Many would say that this is a case of epistemic modality (no speaker's commitment to the truth of the modalized proposition). However, in this context, the utterance of "She may like this one" counts as a suggestion, this notion falling, in my view, within the domain of deontic modality.
Hello,
I believe "sentence processing" is a topic discussed in psycholinguistics (I am not a linguist, so please bear with me).
In psycholinguistics, what are the general steps by which a sentence is processed by humans?
For example, from what I gather from a Google search, the general procedure in human sentence processing seems to follow this order:
1. Syntactic analysis of a sentence
2. Shallow semantic processing of the sentence
3. Deep (?) semantic processing of the sentence
....
Is there any paper that talks about such procedures?
Thank you,
I am looking at second language development in children through play activities. I can see a lot of second language use in the child's monologue with herself while playing, but I need to find research on the subject.
While there are multiple Java implementations for managing semantic knowledge bases (HermiT, Pellet, ...), there seem to be almost none in pure JavaScript.
I would prefer to use JS rather than Java in my project, since I find JS much cleaner, more practical, and easier to maintain. Unfortunately, there seems to be almost nothing that handles RDF data together with rule inference in JavaScript. Although there is some work handling RDF alone (https://rdf.js.org/), the status of these efforts with regard to the W3C specifications is unclear.
Please help me with the code to solve the following problem.
Problem: "Semantic segmentation of humans and vehicles in images".
The following information is given for solving this problem:
Experimental study:
Using a machine learning model: SVM, KNN, or another model
Using a deep learning model:
either semi-DL: ResNet, VGG, Inception (GoogLeNet), or others
or full DL: YOLO, U-Net, the CNN family (CNN, R-CNN, Faster R-CNN), or others
Evaluation of the two models in the learning phase
Evaluation of both models with test data
Exploration, description, and analysis of the results obtained (confusion matrix, specificity, accuracy, FNR)
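The evaluation metrics listed above can all be derived from the confusion matrix. A minimal sketch for the binary case (the function name and label convention are my own):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Confusion-matrix-based metrics for a binary task.
    Convention: 1 = positive class (e.g. 'person/vehicle pixel'), 0 = negative."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "confusion":   np.array([[tn, fp], [fn, tp]]),
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "specificity": tn / (tn + fp),   # true-negative rate
        "fnr":         fn / (fn + tp),   # false-negative (miss) rate
    }
```

For a multi-class segmentation task, the same quantities are computed per class (one-vs-rest) and then averaged; frameworks such as scikit-learn provide equivalent functions.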
In both Japanese and Korean, the verb meaning 'hear/listen' is homophonous with the verb meaning 'be effective/work', as in 'The medicine works' or (having castigated someone) 'That worked'. Because this same situation obtains in two (not closely related) languages, I assume that there must be some semantic linkage between these two notions, and I would therefore expect to see them represented by the same word in some other languages. Do you know of any other languages where the word for 'hear/listen' also means 'be effective'?
Hello,
It is about creating a dataset for semantic segmentation with three classes. The problem is that one class dominates with >90% of the pixels, while another accounts for <2%.
- Is there a criterion for minimum class-label participation? If so, how can it be satisfied?
- What algorithms and benchmarks exist for validating labeling quality?
I would appreciate if someone could share their experience and expertise.
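Independent of the labeling question, a common way to cope with a >90% / <2% imbalance during training is to weight the loss per class, for example with median-frequency balancing. A small sketch (the function name is my own):

```python
import numpy as np

def class_weights(label_map, num_classes):
    """Median-frequency balancing: weight_c = median_freq / freq_c.
    Rare classes get weights > 1, dominant classes get weights < 1."""
    counts = np.bincount(np.ravel(label_map), minlength=num_classes).astype(float)
    freqs = counts / counts.sum()
    return np.median(freqs) / freqs
```

The resulting vector is passed to a weighted cross-entropy loss, so errors on the <2% class cost proportionally more than errors on the dominant class.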
Natural language understanding
I have two models trained on the same data with the same validation split, and I want to know which one is better.
Model 1: validation Dice score close to 0.67, validation IoU close to 0.31
Model 2: validation Dice score close to 0.60, validation IoU close to 0.35
Which one is better, and why?
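One thing worth checking first: for a single binary mask pair, Dice and IoU are monotonic transforms of each other, with IoU = Dice / (2 − Dice), so a Dice of 0.67 would normally imply an IoU near 0.50, not 0.31. If your numbers come from the same predictions, the two metrics are probably being aggregated differently (e.g. per-image vs. per-class averaging). A quick sketch to verify on your own masks:

```python
import numpy as np

def dice_and_iou(pred, target):
    """Dice and IoU for one pair of binary masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum())
    iou = inter / np.logical_or(pred, target).sum()
    return dice, iou
```

Because of the monotonic relation, the two metrics can only rank two models differently when they are aggregated differently, so the first step is to fix one aggregation scheme and recompute both before deciding which model is better.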
We are developing a test for ad-hoc implicatures and scalar implicatures (SI) and are showing 3 images (of a similar nature) to the participants: a plain image, an image with 1 item, and an image with 2 items.
E.g. a plate with pasta, a plate with pasta and sauce, and a plate with pasta, sauce, and meatballs.
A question for an ad-hoc implicature is: My pasta has meatballs; which is my pasta?
A question for an SI is: My pasta has sauce or meatballs; which is my pasta? (The pasta with sauce is the target item, since we are testing pragmatic implicatures, where 'or' means 'not both'.)
The item that causes many difficulties in making up questions is the image without any items, i.e. the plate with plain pasta. How do we phrase the question so that it elicits this image as the target response, without using overly complex syntax?
Negation ("My plate has no sauce or meatballs", "My plate has only pasta, no sauce and no meatballs") seems like a complex structure to introduce as a counterbalance to the other types of items.
Has anyone tested something similar, without negation? We would be grateful for any kind of tips and hints.
In this question, we assume we have a health dataset with many triplets of dummy variables. The dataset looks like this:
(existence_of_symptomA (1/0), symptomA_chronic (1/0), symptomA_persistent (1/0), existence_of_symptomB (1/0), symptomB_chronic (1/0), symptomB_persistent (1/0).......)
Each line represents a patient, and, the data are dummy because multiple symptoms may coexist per patient.
The outcome of interest is a dummy variable "hospital death" (1/0).
If you take a look at the data structure, you will notice that semantically the "existence_of_symptom" variables are the main ones, while the "symptom_chronic" and "symptom_persistent" describe characteristics of the "main" dummy variable.
If one wants to study the odds for death solely based on the existence of symptoms (just the existence_of_symptom variables) this would be a multiple binary logistic regression problem. This would create a model with the odds for death, for each symptom.
Here is the question: What would be the best approach to study the predictive contribution of the two extra "symptom_chronic" and "symptom_persistent" dummy variables per symptom? Would you simply add everything together into the list of IVs to run the logistic regression?
Wouldn't this approach be incorrect?
To begin with, everyone without a symptom will always have values of 0 for the chronic and persistent variables as well! Also, how will the model recognize and "account for" the fact that the data should be seen as triplets?
Any insights?
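One way to see why simply adding all the dummies is not automatically wrong: if the modifier dummies are coded 0 whenever the symptom is absent (as the description implies), they are nested within the existence dummy and behave exactly like existence-by-modifier interaction terms. A small illustration with hypothetical data:

```python
import numpy as np

# Hypothetical mini design matrix for one symptom triplet:
# columns = (existence_of_symptomA, symptomA_chronic, symptomA_persistent)
X = np.array([
    [1, 1, 0],   # symptom A present, chronic form
    [1, 0, 1],   # symptom A present, persistent form
    [1, 0, 0],   # symptom A present, neither modifier
    [0, 0, 0],   # symptom A absent -> modifiers are structurally zero
])

# Nested coding: "chronic" is always 0 when the symptom is absent, so
# symptomA_chronic == existence_of_symptomA * chronic. Entered directly
# into a logistic regression, the modifier coefficients therefore act as
# interaction terms: beta_existence is the log-odds shift for a plain
# (non-chronic, non-persistent) symptom, while beta_chronic and
# beta_persistent are the EXTRA shifts on top of the main effect.
assert np.all(X[X[:, 0] == 0, 1:] == 0)
```

Under this reading, including all three dummies per symptom is a legitimate model; the caveat is interpretive (the modifier coefficients are conditional on symptom presence) and practical (rare chronic/persistent combinations can make estimates unstable).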
The following are well-known evaluation methods for computational ontologies:
1. Evaluation by humans
2. Evaluation using an ontology-based application
3. Data-driven evaluation
4. The Gold Standard Evaluation
We designed and developed a domain ontology and implemented it in the OWL semantic language. How should we evaluate it?
I have article citations that exist in Semantic Scholar but not in ResearchGate or Google Scholar. I also have article citations that exist in ResearchGate but not in Semantic Scholar or Google Scholar. So, I want to link these sites together.
I have modeled processes in my ontology, such as sale, purchase, etc. I want to implement these modeling constructs in OWL, RDF, or another semantic language. Can anybody suggest a suitable language for properly implementing the aforementioned processes?
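For reference, a process such as a sale can be sketched directly in OWL using Turtle syntax; all names below are hypothetical, and treating Sale as a subclass of a generic BusinessProcess is just one modeling option:

```turtle
@prefix :     <http://example.org/business#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

:BusinessProcess a owl:Class .
:Sale     a owl:Class ; rdfs:subClassOf :BusinessProcess .
:Purchase a owl:Class ; rdfs:subClassOf :BusinessProcess .

:Agent a owl:Class .

:hasBuyer  a owl:ObjectProperty ;   rdfs:domain :Sale ; rdfs:range :Agent .
:hasSeller a owl:ObjectProperty ;   rdfs:domain :Sale ; rdfs:range :Agent .
:amount    a owl:DatatypeProperty ; rdfs:domain :Sale ; rdfs:range xsd:decimal .
```

Plain OWL/RDF covers this basic class-and-property structure well; for richer process semantics (ordering of steps, preconditions, effects), OWL alone is limited and is often combined with rule languages such as SWRL or with dedicated process vocabularies.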