Questions related to Semantics
On behalf of the research team I head, I invite you to take part as an expert in our international expert study "Possibilities and Features of the Formation of a Worldview in the Digital Environment". Your expert opinion is extremely important to us in conducting this study.
The main goal of the project is to study the fundamental structural and substantive features of the formation of a modern worldview in the digital environment in the context of global technological transformations. The study is aimed at determining the potential of the influence of modern digital technologies on the value and semantic foundations of the traditional worldview, as well as at studying the value and semantic neutrality of digital actors, technologies, algorithms, and the digital space itself.
Based on the results of the work, our research team will organize a dissemination seminar for the experts who took part in the study in November 2022, within the framework of which the results of the study will be presented. We hope to see you among the participants of the seminar.
In advance, on behalf of our research team, I express my deep gratitude for the time you will spend on it!
To participate in the survey, you can follow the link:
Because the questionnaire contains open questions, we recommend that you use a desktop or laptop computer, as filling in answers to open questions from a mobile phone can be inconvenient.
The desired deadline for completing the questionnaire is September 30, 2022.
If you have any questions, you can always contact us by e-mail: sergey@volodenkov.ru
Head of the scientific project,
Doctor of Political Sciences,
Professor of the Public Policy Department,
Faculty of Political Science,
Lomonosov Moscow State University
Could you please share your ideas and resources on how document verification may be achieved using semantic analysis? Are there any tools or techniques? Suggestions for simple, easy techniques would be great. Thanks.
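As a simple starting point, before reaching for a dedicated tool, a document can be checked against a reference text with a similarity score. The sketch below (plain Python, with hypothetical sample texts) uses bag-of-words cosine similarity as a cheap baseline; a genuinely semantic version would swap the term-frequency vectors for sentence embeddings (e.g., from a sentence-transformer model) while keeping the same comparison logic.

```python
import math
import re
from collections import Counter

def tf_vector(text):
    """Bag-of-words term-frequency vector over lowercased word tokens."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical reference document and two candidates to verify against it
reference = "The contract is signed by both parties."
paraphrase = "Both parties signed the contract."
unrelated = "Photosynthesis converts light into chemical energy."

sim_match = cosine_similarity(tf_vector(reference), tf_vector(paraphrase))
sim_other = cosine_similarity(tf_vector(reference), tf_vector(unrelated))
```

A verification rule could then flag candidate documents whose similarity to the reference falls below a tuned threshold.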
With the influx of text data driven by the growth of the Internet and related technology, which classification approach works better: statistical methods based on term-occurrence information, or semantic approaches?
In my research, I used the mini-IPIP questionnaire. Now that I want to clean and analyze the data, I see that the number of respondents who answered negatively worded and positively worded items (semantic antonyms) in the same direction, or who responded to positively worded items of one variable in opposite directions, is too high (more than 30 percent).
The mini-IPIP scoring key sums up all the items of each variable; does that mean I don't have to treat these as careless responses?
I also used Roberts's (1996) "Perceived Consumer Effectiveness" items and have the same problem. Although two items are negatively worded, Roberts does not mention reverse-scoring in the main article.
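As far as I know, standard scoring keys for scales with negatively worded items reverse-key those items before summing, so same-direction answers on semantic antonyms still look like careless responding after recoding. A small sketch (plain Python, hypothetical 1–5 responses) of how one might reverse-code items and flag inconsistent respondents:

```python
SCALE_MIN, SCALE_MAX = 1, 5  # 5-point Likert response format

def reverse_code(score):
    """Reverse-key a negatively worded item (1 -> 5, 2 -> 4, ...)."""
    return SCALE_MIN + SCALE_MAX - score

def inconsistency(pos_item, neg_item):
    """Disagreement between a positively worded item and the reverse-coded
    value of its negatively worded antonym; 0 means perfectly consistent."""
    return abs(pos_item - reverse_code(neg_item))

# Hypothetical respondents: (positive item, negative item) for one trait.
# The last two answer both antonyms in the same direction.
respondents = [(5, 1), (4, 2), (5, 5), (1, 1)]
flagged = [inconsistency(p, n) >= 3 for p, n in respondents]
```

The `>= 3` cutoff here is an arbitrary illustration; dedicated careless-responding indices (e.g., psychometric synonym/antonym correlations, long-string analysis) are more principled choices.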
Which technique is more applicable for real-time measurement of semantic similarity and semantic clustering? For example, classifying students' answers during an online session into different clusters based on their similarity. The sentences could be from any domain.
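For real-time use, a single-pass ("streaming") clustering over sentence vectors is a common pattern: embed each incoming answer, attach it to the most similar existing cluster, or open a new one. The sketch below is plain Python with bag-of-words vectors and made-up student answers; in practice one would replace `vec` with a sentence-embedding model (e.g., Sentence-BERT) so the similarity is genuinely semantic and domain-independent, while the one-pass logic stays the same.

```python
import math
from collections import Counter

def vec(sentence):
    """Stand-in sentence vector: bag of lowercased words."""
    return Counter(sentence.lower().split())

def cos(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def stream_cluster(sentences, threshold=0.4):
    """One-pass clustering: attach each incoming sentence to the most
    similar existing cluster, or open a new cluster if nothing reaches
    the threshold. Suits real-time use because each sentence is
    processed once, on arrival."""
    reps, clusters = [], []  # reps[i] is the first sentence's vector
    for s in sentences:
        v = vec(s)
        sims = [cos(v, r) for r in reps]
        if sims and max(sims) >= threshold:
            clusters[sims.index(max(sims))].append(s)
        else:
            reps.append(v)
            clusters.append([s])
    return clusters

answers = [
    "gravity pulls objects down",
    "objects fall because gravity pulls them",
    "plants need sunlight to grow",
]
groups = stream_cluster(answers)
```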
Suppose there are two sentences, and I want to split them in order to understand them or find their semantics, but there is no delimiter between the two sentences. The issue is how the system will identify each sentence and split the sentences without using a delimiter.
If anyone has an idea or has encountered this issue, please suggest a solution or tell me whether any tool is available.
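When the text has no delimiters, the classic technique is dynamic-programming segmentation ("word break"): try every split point and keep the ones that yield known units. The same idea extends from words to sentence boundaries when candidate splits are scored by a language model instead of a fixed vocabulary. A minimal sketch with a made-up vocabulary:

```python
from functools import lru_cache

# Hypothetical vocabulary; in practice this would come from a lexicon,
# and candidate splits would be scored by a language model.
VOCAB = {"the", "dog", "barks", "cats", "sleep", "all", "day"}

def segment(text, vocab=VOCAB):
    """Recover word boundaries from delimiter-free text via dynamic
    programming (the classic 'word break' technique). Returns the list
    of words, or None if the text cannot be fully segmented."""
    @lru_cache(maxsize=None)
    def solve(i):
        if i == len(text):
            return []
        for j in range(i + 1, len(text) + 1):
            if text[i:j] in vocab:
                rest = solve(j)
                if rest is not None:
                    return [text[i:j]] + rest
        return None
    return solve(0)

words = segment("thedogbarkscatssleepallday")
```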
I would like to discuss semantic density and semantic gravity in relation to physics concepts.
I am looking for languages in the Industry 4.0 field of predictive analytics, maintenance, and smart services: a language that serves as a semantic method for communication between machines and objects. The semantic language approach should ease the exchange of analytical and maintenance duties.
What languages are out there? What are the standards? Can you give me some basic publications for a starter to gain an understanding of the topic? I did find the so-called "I4.0-Sprache" (I4.0 language) and "SemAnz40 – Semantische Allianz für Industrie 4.0" (the Semantic Alliance for Industry 4.0), but I am not sure I am on the right track.
I lack the background since this is a very new field for me.
Your efforts and endeavors are highly appreciated.
Thank you very much.
Referential and model-theoretic semantics has wide applications in linguistics, cognitive science, philosophy, and many other areas. These formal systems incorporate the notion - first introduced by the father of analytic philosophy, Gottlob Frege, more than a century ago - that words correspond to things. The term '2' denotes or refers to the number two. The name 'Peter' refers to Peter, the general term 'water' refers to H2O, and so on. This simple idea later enabled Alfred Tarski to reintroduce the notion of 'Truth' into formal logic in a precise way, after it had been driven out by the logical positivists. Willard Van Orman Quine, one of the most important analytic philosophers of the last century, devoted most of his career to understanding this notion. Reference is central to the work of people such as Saul Kripke, David Lewis, Hilary Putnam, and many others.
Furthermore, the idea of a correspondence between whole expressions - sentences or propositions - and states of the world or facts drives the recent developments in philosophy of language and metaphysics under the labels of 'Grounding' and 'Truthmaking', where a state of the world or a fact is taken to "make true" a sentence or a proposition. For example, the sentence "Snow is white." is made true by (or is grounded in) the fact that snow is white.
Given that this humble notion is of such importance to contemporary analytic philosophy, one may wonder why the father of modern linguistics - and a driving force in the field ever since the (second) cognitive revolution in the nineteen fifties - has argued for decades that natural language has no reference. Sure, we use words to refer to things, but usage is an action. Actions involve things like intentions, beliefs, desires, etc. And thus, actions are vastly more complicated than the semantic notion of reference suggests. On Chomsky's view, then, natural language (might) not have semantics, but only syntax and pragmatics.
On Chomsky's account, syntax is a formal representation of physically realized processes in the mind-brain of an organism. This allows him to explain why semantics yields such robust results (a fact that he now acknowledges): what we call 'semantics' is in fact a formal representation of physically realized processes in the mind-brain of an organism - us.
Chomsky has argued for this for a very long time and, according to him, to no avail. In fact, I only found discussions of this by philosophers long after I learned about his work. No one in a department that leans heavily toward philosophy of language, metaphysics, and logic ever mentioned Chomsky's views on this core notion to us students. To be fair, some in the field seem to be beginning to pay attention. For instance, Kit Fine, one of the leading figures in contemporary metaphysics, addresses Chomsky's view in a recent article (and rejects it).
The main reason why I open this thread is that I recently came across an article that provides strong independent support for Chomsky's position. In their article Fitness Beats Truth in the Evolution of Perception, Chetan Prakash et al. use evolutionary game theory to show that the likelihood that higher organisms have evolved to see the world as it is (to have veridical perception) is exceedingly small.
Evolutionary game theory takes the formalism originally developed by John von Neumann to analyze economic behavior and applies it in the context of natural selection. Thus, an evolutionary game is a game where at least two types of organisms compete over the same resources. By comparing different possible strategies, one can compute the likelihood of a stable equilibrium.
Prakash et al. apply this concept to the evolution of perception. Simplifying a bit, we can take a veridical perception to be a perceptual state x of an organism such that x corresponds to some world state w. Suppose there are two strategies: one where the organism estimates the world state that is most likely to be the true state of the world, and another where the organism estimates which perceptual state yields the highest fitness. Then the first strategy is consistently driven into extinction.
Now, compare this with reference: Some word (here taken to be a mental state) refers to a thing or a state of the world such that there is a one-to-one correspondence between the word and the world. It seems that this is an analogous situation. And thus, it should be equally unlikely that we have evolved to have reference in natural language. Any such claim needs empirical evidence and this is what Chomsky provides.
Chomsky’s main evidence comes from a test. I frame the test in terms of truthmaking. Consider the basic idea again:
- The sentence A is made true by (or grounded in) the fact that A obtains.
Now, if this is true, then one would expect that the meaning of A changes when the world changes. We take a fact to be something that our best scientific theories can identify. In other words, we take objective reality to be whatever science tells us it is. Then we systematically vary physically identifiable aspects of the world and see how the meaning of a term that is supposed to pick out these aspects changes. The hypothesis is that if there is reference or correspondence, then the changes on one side should be correlated with changes on the other side. If this is not the case, then there is no one-to-one correspondence between words and things, and thus natural language is not related to the physical world.
I give three examples, often discussed by Chomsky, to illustrate how this works. Consider the term 'water', embedded in the sentence "The water flows in the river." Then, what flows in the river should be H2O. Suppose there is a chemical plant upstream, and suppose there is an accident. There may be very few H2O molecules left, but it is still a river, and it is still water. So, we have an enormous change in the world, but no change in meaning.
Or suppose you put a teabag into a cup of water. The chemical change may be undetectably small, but if you order tea and you get water, you wouldn't be amused. So, virtually no change in the physical world, and a clear change in meaning.
Last, consider a standard plot of a fairy tale. The evil witch turns the handsome prince into a frog, the story continues, and at the end the beautiful princess kisses the frog and turns him back into the prince. Any child knows that the frog was the prince all along. All physical properties have changed, but no child has any difficulty tracking the prince. What this suggests is that object permanence does not depend on the physical world, but on our mind-internal processes.
This test has been carried out for a large number of simple concepts; in all cases, there is no correlation between physically identifiable aspects of the world and words. Notice that the test utilizes a dynamic approach: only if we look at changes do we see what is going on.
So, counterintuitive as this may seem, the evidence from the test supports the argument from evolutionary biology that developing concepts that correspond to the world is no advantage at all. And so, we shouldn't be surprised that this is what we find once we look closely.
On the other hand, does this conclusively prove that there is no relation between our concepts and the physical world? Not really; after all, the logical structure of language is still there. But it suggests that we should look to the mind for a connection between words and the world, if we want to show that language has reference in the technical sense.
- Chomsky, Noam (2016). What Kind of Creatures are We? Columbia Themes in Philosophy. Columbia University Press.
I'm trying to analyze cognitive information that was measured on a questionnaire by a 5-item semantic scale and a 2-item Likert scale. Can I combine both into one variable? How can I transform them onto a common scale?
I've been trying to research this all over and can't find the answer :(
My hypothesis is comparing emotional bond and cognitive information and stating that emotional bond has a stronger impact on purchase intention.
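One common approach (worth checking against your methods literature) is to standardize each scale separately and then average the z-scores, so items measured on different response formats contribute on a common metric; an alternative is POMP scoring (percent of maximum possible). A minimal sketch with hypothetical respondent scores:

```python
import statistics

def zscores(values):
    """Standardize scores to mean 0, SD 1 (sample SD)."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

# Hypothetical scale scores for five respondents, measured on
# different response formats (so their raw ranges are not comparable).
semantic_scale = [2.0, 3.5, 4.0, 5.0, 1.5]
likert_scale = [3.0, 5.5, 6.0, 7.0, 2.5]

combined = [(a + b) / 2
            for a, b in zip(zscores(semantic_scale), zscores(likert_scale))]
```

Before combining, it is worth confirming that the items actually measure one construct (e.g., via Cronbach's alpha or a factor analysis).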
In a new project I want to capture emotions in texts written by students during their studies.
I assume that the majority of these texts are factual and contain few emotions.
- Am I wrong, do student texts contain emotions from a semantic or psycholinguistic point of view?
- Is there any literature on semantic, psycholinguistic text analyses or sentiment analyses of student texts written during their studies?
My essay is an attempt to answer the following question: "Is the data economy, then, destined to benefit only a few elite firms?" Apparently that has been the issue until now. What tools are available to avoid this false target? In my essay on stochastic models, in particular the section "Handling the human socio-technical dimension, in particular the man-system interface, including positioning technology at man's service", you may find guidelines for producing these tools and making BIG DATA exploitable by the large majority of users:
1. The engine should trace "player" behaviour, evaluate its capabilities, and quickly meet its needs.
2. The immersion generated by simulation enables training and experimentation with behaviour strategies, in particular learning "by doing".
3. The engine should use the following resources:
3.1. Tools to be customized by trainers.
3.2. Applied standards.
3.3. Discovery of new learning approaches through the obtained results, whether these approaches are positive or negative, in the sense of improving the technological performance of assembled prototypes.
4. How may SPDF (Standard Process Description Format) produce a universal engine to run the stochastic model?
4.1. SPDF consists of two parts:
4.1.1. a message structured-data part (including semantics), and
4.1.2. a process description part (with a higher level of semantics).
4.2. Two key outputs of the SPDF research will be a process description specification and a framework for the extraction of semantics from legacy systems.
4.3. Note that:
a) The more semantic rules we have, the more unpredictable events are controlled.
b) The knowledge needed to elaborate semantic rules for unpredictable events requires many runs of the stochastic model.
c) Convergence shall not be reached until more qualitative semantic rules are obtained.
d) Dynamically performing a given scenario is the goal of the proposed messaging system.
We are trying to implement semantic Geospatial data infrastructure and want to use OWL files with Geonetwork.
Any hint on how to link ontology files with Geonetwork will be greatly appreciated.
Thank you very much for your time.
Hi, I'm looking for the normative data of the semantic task "clothes" ("ropa") from the Spanish Verbal Fluency Assessment in a sample of young adults (20-49 years old). If someone has it, please, tell me, it would be extremely helpful for my current research. Thank you so much.
I know what contrastive learning is, and I know what other traditional segmentation losses are. What I understand is that the goal of contrastive loss is basically to pull similar things together and push dissimilar things apart. But I want to know how this can guide a segmentation pipeline (e.g. semantic segmentation)? My question is pretty basic. Blogs/video links are more welcome than research paper links.
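In contrastive segmentation the usual recipe is an auxiliary loss on pixel (or region) embeddings, added to the ordinary cross-entropy: embeddings of pixels sharing a ground-truth class are the positives, pixels of other classes the negatives. A toy InfoNCE computation in plain Python (made-up 2-D embeddings) shows why minimizing it pulls same-class pixels together and pushes other classes apart:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def info_nce(anchor, positive, negatives, temperature=0.5):
    """InfoNCE contrastive loss for one anchor embedding: low when the
    anchor is close to its positive (a pixel of the same ground-truth
    class) and far from the negatives (pixels of other classes)."""
    pos = math.exp(dot(anchor, positive) / temperature)
    neg = sum(math.exp(dot(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))

# Toy unit-length 2-D pixel embeddings: two "road" pixels and one "car" pixel.
road_a, road_b = [1.0, 0.0], [0.96, 0.28]
car = [0.0, 1.0]

loss_separated = info_nce(road_a, road_b, [car])  # classes already apart
loss_mixed = info_nce(road_a, car, [road_b])      # classes entangled
```

Because the loss shapes the feature space so that classes are linearly separable, the per-pixel classifier on top has an easier job; that is how the contrastive term "guides" the segmentation pipeline.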
I have built a semantic segmentation network using the SegNet layers (in MATLAB) to identify circular and pseudo-circular objects in a series of grayscale images.
I have trained the model with the training dataset stored as an imageDatastore (imds), and would now like to test it with the testdata stored as an imds as well.
Could anyone tell me how to do that?
I got inspired by the reading of a short paper written by Jonathan Tennant, entitled “Web of Science and Scopus are not global databases of knowledge” (2020). There I heard about some databases for the first time, like the Garuda portal or African Journals Online. It got me wondering: what else is out there and I do not know because my point of view is limited to the languages I speak and the place I live in?
So, my idea here is that we can share academic databases that we are familiar with and perhaps are not very well known in other countries or continents. Where do you do your research?
Here is a list of interesting links that I have collected, without strict criteria, reflecting my point of view as a Brazilian researcher in the field of Psychology.
Portal de Periódicos CAPES
Biblioteca Digital Brasileira de Teses e Dissertações (BDTD)
Portal brasileiro de publicações científicas em acesso aberto
Emerging Research Information (Preprints)
Sumários de revistas brasileiras
Scientific Electronic Library Online (SciELO) internacional
Scientific Electronic Library Online (SciELO) Brasil
Periódicos Eletrônicos de Psicologia (PePSIC)
Biblioteca Virtual de Psicologia
Base de datos de Psicología (PSICODOC)
Biblioteca Virtual em Saúde (BVS)
Literatura Latino-americana e do Caribe em Ciências da Saúde (LILACS)
The Directory of Open Access Journals (DOAJ)
Red de Revistas Científicas de América Latina y el Caribe, España y Portugal (Redalyc)
Consejo Latinoamericano de Ciencias Sociales (CLACSO)
Directory of Open Access Books
Open Book Publishers
Bielefeld Academic Search Engine (BASE)
Repositórios científicos de acesso aberto de Portugal
African Journals Online
Directory for Arabian Journals
Iraqi Scientific Journals
HAL Archives ouvertes
In the 1980s Bealer wrote Quality and Concept which presented a type-free first-order approach
to intensional logic to compete with other higher-order, type-theoretic and modal approaches.
The presentation (both in the book and in a published article) is very sketchy (some non-trivial lemmas are merely stated) and is not easy to follow.
I was so impressed and intrigued by Bealer's philosophical arguments based on his system that I took it upon myself to clarify the presentation of his intensional logic and to furnish detailed proofs of the soundness and completeness results, which I hope might interest a larger audience. I wrote a paper containing this material which gives a general philosophical motivation and points out some open problems. I was interested in being sure of the correctness of these results before advancing to purely philosophical discussions on the advantage of this approach.
What would be a good journal to submit this paper to?
Hello ResearchGate Community,
I would like to know the different techniques for maintaining the semantic aspect of knowledge representation (apart from ontologies).
In her English Verb Classes and Alternations: A Preliminary Investigation, Levin (1993) proposes classifications for verbs in English. Is this classification of a syntactic nature, a semantic one, or both?
Hi everyone. Recently I designed a customized semantic segmentation network with 31 layers and SGDM optimization to segment plant leaf regions from complicated backgrounds. Can anyone help me explain this with mathematical expressions from image processing? Thank you.
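As a generic starting point (not specific to any particular 31-layer design), a semantic segmentation network is usually described by a per-pixel softmax followed by a cross-entropy loss, optimized with SGDM:

```latex
p_{i,c} = \frac{e^{z_{i,c}}}{\sum_{k=1}^{C} e^{z_{i,k}}},
\qquad
\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log p_{i,c},
\qquad
v_{t+1} = \mu\, v_t - \eta\, \nabla_{w}\mathcal{L},
\quad
w_{t+1} = w_t + v_{t+1}
```

Here \(z_{i,c}\) is the network's score for pixel \(i\) and class \(c\) (e.g., leaf vs. background), \(y_{i,c}\) the one-hot ground-truth label, \(N\) the number of pixels, and the last two equations are the SGDM update with momentum \(\mu\) and learning rate \(\eta\).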
My MA thesis will be conducted on the syntax and semantics of food/fruit idioms in English.
Any sources and previous studies will be highly appreciated.
Thanks in advance.
I'm beginning to think that this distinction is not as clear-cut as it has traditionally been taken to be. Consider the following example: "She may like this one" (uttered by a friend who is helping you find a dress for your girlfriend). Many would say that this is a case of epistemic modality (no speaker's commitment to the truth of the modalized proposition). However, in this context, the utterance of "She may like this one" counts as a suggestion, a notion that falls, in my view, within the domain of deontic modality.
I believe "sentence processing" is a topic discussed in psycholinguistics (I am not a linguist, so please bear with me).
In psycholinguistics, what are the general steps by which a sentence is processed by humans?
For example, from what I gathered from a Google search, the general procedure in human sentence processing seems to be in the following order:
1. Syntactic analysis of a sentence
2. Shallow semantic processing of the sentence
3. Deep (?) semantic processing of the sentence
Is there any paper that talks about such procedures?
I am looking at second language development in children through play activities. I can see a lot of second language use in the child's monologue with herself while playing, but I need to find research on the subject.
Please help me with code to solve the following problem.
Problem: "Semantic segmentation of humans and vehicles in images".
The following information is given for solving this problem:
using a machine learning model: SVM, KNN, or another model
using a deep learning model:
either semi-DL: ResNet, VGG, Inception (GoogLeNet), or others
or full DL: YOLO, U-Net, the CNN family (CNN, R-CNN, Faster R-CNN), or others
Evaluation of the two models in the learning phase
Evaluation of both models with test data
Exploration & descriptions & analysis of the results obtained (confusion matrix, specificity, accuracy, FNR)
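For the evaluation items above, the listed measures all fall out of the confusion matrix. A minimal sketch in plain Python (hypothetical binary pixel labels, e.g. human vs. background) showing accuracy, specificity, and FNR:

```python
def binary_metrics(y_true, y_pred):
    """Confusion-matrix counts and the evaluation measures listed above,
    for a binary labeling (e.g. human-pixel vs. background)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(pairs),
        "specificity": tn / (tn + fp),  # true-negative rate
        "fnr": fn / (fn + tp),          # miss rate, complement of recall
    }

# Hypothetical ground-truth and predicted labels for eight pixels
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
metrics = binary_metrics(y_true, y_pred)
```

The same computation applied per class (one-vs-rest) gives the multi-class confusion matrix needed when humans and vehicles are separate classes.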
In both Japanese and Korean, the verb meaning 'hear/listen' is homophonous with the verb meaning 'be effective/work', as in 'The medicine works' or (having castigated someone) 'That worked'. Because this same situation obtains in two languages that are not closely related, I assume that there must be some semantic linkage between these two notions, and I would therefore expect to see the two notions represented by the same word in some other languages. Do you know of any other languages where the word for 'hear/listen' also means 'be effective'?
This is about creating a dataset for semantic segmentation with three classes. The problem is that one class dominates with >90% of the labels while another accounts for <2%.
- Is there a criterion for minimum class-label participation? If so, how can it be satisfied?
- What are the algorithms and benchmarks for validating labeling quality?
I would appreciate it if someone could share their experience and expertise.
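There is no hard minimum-participation criterion that I know of, but a standard way to keep a >90%/<2% split trainable is to weight the loss per class, e.g. median-frequency balancing (Eigen & Fergus, 2015). A sketch with made-up pixel counts mirroring the imbalance described above:

```python
import statistics

def median_frequency_weights(pixel_counts):
    """Per-class loss weights via median-frequency balancing:
    weight_c = median_freq / freq_c, so rare classes get large weights
    and the dominant class is down-weighted."""
    total = sum(pixel_counts.values())
    freqs = {c: n / total for c, n in pixel_counts.items()}
    med = statistics.median(freqs.values())
    return {c: med / f for c, f in freqs.items()}

# Made-up pixel counts mirroring the >90% / <2% imbalance described above
counts = {"background": 92_000, "class_a": 6_500, "class_b": 1_500}
weights = median_frequency_weights(counts)
```

The resulting weights multiply each class's term in the cross-entropy loss, so errors on the rare class are not drowned out by the dominant one.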
I have two models trained on the same data and the same validation split, and I want to know which one is better:
model 1: validation Dice score close to 0.67, validation IoU close to 0.31
model 2: validation Dice score close to 0.60, validation IoU close to 0.35
Which one is better, and why?
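One sanity check worth doing first: for a single prediction/ground-truth mask pair, Dice and IoU are deterministically linked by Dice = 2·IoU/(1+IoU), so Dice can never be below IoU. The reported pairs (0.67/0.31 and 0.60/0.35) can therefore only arise from averaging the two metrics differently (per image, per class, or over the whole dataset), which means the comparison hinges on which averaging matches your actual use case:

```python
def dice_from_iou(iou):
    """For a single mask pair, Dice = 2*IoU / (1 + IoU) >= IoU."""
    return 2 * iou / (1 + iou)

# What Dice the reported IoU values would imply for a single mask pair
dice_implied_m1 = dice_from_iou(0.31)
dice_implied_m2 = dice_from_iou(0.35)
```

Given the discrepancy, the safer answer is to recompute both metrics under one agreed protocol; as reported, model 1 wins on mean Dice and model 2 on mean IoU, so neither dominates outright.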
We are developing a test for ad-hoc implicatures and scalar implicatures (SI) and are showing 3 images (of a similar nature) to the participants: an image, the image with 1 item, and the image with 2 items.
Eg. Plate with pasta, a plate with pasta and sauce, a plate with pasta, sauce and meatballs.
A question for an ad-hoc is: My pasta has meatballs, which is my pasta?
A question for an SI is: My pasta has sauce or meatballs, which is my pasta? (Pasta with sauce is the target item, since we are testing pragmatic implicatures, where 'or' means 'not both'.)
The item that causes many difficulties in making up questions is the image without any items, i.e. the plate with only pasta. How do we phrase the question so that it elicits this image as the target response, without using overly complex syntax?
Negation ("My plate has no sauce or meatballs", "My plate has only pasta, no sauce and no meatballs") seems like a complex structure to introduce as a counterbalance to the other types of items.
Has anyone tested something similar, without negation? We would be grateful for any kind of tips and hints.
In this question, we assume we have a health dataset with many triplets of dummy variables. The dataset looks like this:
(existence_of_symptomA (1/0), symptomA_chronic (1/0), symptomA_persistent (1/0), existence_of_symptomB (1/0), symptomB_chronic (1/0), symptomB_persistent (1/0).......)
Each line represents a patient, and, the data are dummy because multiple symptoms may coexist per patient.
The outcome of interest is a dummy variable "hospital death" (1/0).
If you take a look at the data structure, you will notice that semantically the "existence_of_symptom" variables are the main ones, while the "symptom_chronic" and "symptom_persistent" describe characteristics of the "main" dummy variable.
If one wants to study the odds of death based solely on the existence of symptoms (just the existence_of_symptom variables), this would be a multiple binary logistic regression problem. This would yield a model with the odds of death for each symptom.
Here is the question: what would be the best approach to study the predictive contribution of the two extra "symptom_chronic" and "symptom_persistent" dummy variables per symptom? Would you simply add everything together into the list of IVs and run the logistic regression?
Wouldn't this approach be incorrect?
To begin with, everyone without a symptom will always have values of 0 for the chronic and persistent variables as well! Also, how will the model recognize and "account for" the fact that the data should be seen as triplets?
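One way to make the nesting explicit (a sketch under the assumption of the triplet layout above, with made-up patients) is to enter the characteristic dummies as interactions with the existence dummy. Numerically the interaction equals the raw dummy here, since both are 0 when the symptom is absent, but specifying it this way clarifies the interpretation: the existence coefficient is the baseline effect of having the symptom, and the chronic/persistent coefficients are the additional log-odds given that the symptom is present.

```python
def expand_row(symptom_triplets):
    """Build one design-matrix row from per-symptom triplets
    (existence, chronic, persistent). The characteristic dummies enter
    as interactions with the existence dummy; numerically this equals
    the raw dummy, since both are 0 when the symptom is absent, but it
    makes the nesting explicit in the model specification."""
    row = []
    for exists, chronic, persistent in symptom_triplets:
        row += [exists, exists * chronic, exists * persistent]
    return row

# Hypothetical patients, two symptoms each: (existence, chronic, persistent)
patients = [
    [(1, 1, 0), (0, 0, 0)],
    [(1, 0, 1), (1, 1, 1)],
    [(0, 0, 0), (1, 0, 0)],
]
X = [expand_row(p) for p in patients]
```

Separately, strongly correlated triplets can destabilize the estimates, so checking multicollinearity (e.g., VIFs) before interpreting the coefficients seems prudent.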
There are the following well-known evaluation methods for computational ontologies:
1. Evaluation by Human
2. Evaluation using ontology-based Application
3. Data-driven evaluation
4. The Gold Standard Evaluation
We designed and developed a domain ontology and implemented it in the OWL semantic language. How should we evaluate it?
I have article citations that exist in Semantic Scholar but not in ResearchGate or Google Scholar. I also have article citations that exist in ResearchGate but not in Semantic Scholar or Google Scholar. So, I want to link these sites together.
I have modeled processes in my ontology, such as sale, purchase, etc. I want to implement these modeling constructs in OWL, RDF, or another semantic language. Can anybody suggest a language for the proper implementation of the aforementioned processes?
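OWL itself (commonly written in Turtle syntax) is sufficient for this: processes become classes, and their participants become object properties. A minimal hypothetical sketch, with an example namespace standing in for a real one:

```turtle
@prefix :     <http://example.org/commerce#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

:BusinessProcess a owl:Class .
:Sale     a owl:Class ; rdfs:subClassOf :BusinessProcess .
:Purchase a owl:Class ; rdfs:subClassOf :BusinessProcess .

:hasBuyer  a owl:ObjectProperty ; rdfs:domain :Sale .
:hasSeller a owl:ObjectProperty ; rdfs:domain :Sale .

# A hypothetical individual sale
:sale42 a :Sale ; :hasBuyer :alice ; :hasSeller :bob .
```

For richer process semantics (inputs, outputs, ordering of steps), vocabularies built on top of OWL, such as OWL-S for services, may be worth a look.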
I have a question on a framing-effect-like issue. Every one of us has the immediate feeling that there's a huge difference between saying, for instance, "you should respect the environment" and "we should respect the environment", or also "the environment should be respected".
The difference might lie in how such sentences are interpreted by our minds and of course it affects the compliance to the described behavior (i.e., "respect the environment").
I'm convinced that I'm no genius and there must be a huge literature behind such an effect; but I'm not skilled in these themes, so I'm calling for help. Any clues?
P.S.: I know that nudge units and behavioral interventions teams in general promote the "make it personal" magic recipe to increase compliance, but I wonder where such strategies come from. I'm particularly interested in understanding the differences between "you should /we should", that is how grammatical phrasing (i.e., switching the person in the phrase) affects the interpretation and the relative compliance.
thanks in advance for any help
all the best,
I have a text with the names of 27 football players. I want to create an ontology from this text automatically, with concepts such as player, team, captain, and goalkeeper. Is there any semantic tool that creates such an ontology from text automatically?
What is a dialogue? What are its aesthetic and semantic approaches in the drawings of Pablo Picasso?
"OMSI (Optically Masked Size Increase): Einstein’s Fourth 1907 Miracle exposed"
appeared in Biomedical:
The size increase does not stand alone. It goes hand in hand with a proportional reduction of rest mass and charge.
An overlooked miracle indeed.
Oct. 4, 2020
Saint Francis Day
I'm working on semantic universals as my course project and find them really interesting.
I would be happy if anyone could guide me toward more interesting and challenging ideas.
Thank you :)
I am researching methods for extracting semantic information from a picture (using computer vision techniques).
The main idea is to build a social machine that can process that information and act on socially based information.
Dear RG members
To increase the visibility of their work, researchers aim to add it to more than one scientific media platform. One of these is Semantic Scholar. I tried to add my recent articles to this platform by contacting the website, but the articles are not being added. Is there any way to do that?
I synthesized the Mn3O4 nanocubes (~ 6 nm) according to the literature. I called the nanocubes as zero-dimensional (0D) nanoparticles. However, the reviewer has a question on semantics. He said: "Can the Mn3O4 nanocubes really be called 0D? As cubes, they have width, length, and height (3D). 0D would only be the case if the electronic bandstructure of the Mn3O4 follows 0D behavior. Is that the case here?"
So, how should 0D be defined, and does anyone have suggestions for answering this question? Thanks a lot.
We have always thought of broader research, joining hands in the research and analysis of electronics and computer science. As a computer science expert, you may be a seasoned programmer and a thinker. I would like to introduce the different domains where electronics and computer science can merge and play a role.
Applied Electromagnetics & RF Circuits: Applied electromagnetics (EM) plays an essential role in areas such as wireless technologies, the environment, life sciences, transportation, and more. Faculty and students perform research in all aspects of applied EM, including Microwave and Millimeter-Wave Circuits, MEMS Circuits, Antennas, Wave Propagation Studies for Wireless Applications, Scattering, Computational Electromagnetics, Active and Passive Microwave Remote Sensing, Plasma Electrodynamics, and EM Metamaterials.
Computer Vision: Research goals include: i) the semantic understanding of materials, objects, and actions within a scene; ii) modeling the spatial organization and layout of the scene and its behavior in time. The algorithms developed in this area of research enable the design of machines that can perform real-world visual tasks such as autonomous navigation, visual surveillance, or content-based image and video indexing.
Control Systems: The development of sophisticated computer aided design software has enabled analysis and controller design for complex multivariable systems. The needs of society for improved transportation safety and a cleaner environment have posed challenges that can only be solved with feedback control.
Embedded Systems: Designing embedded systems is a huge challenge because they have so many requirements: they often need to be tiny, high-performance, inexpensive, reliable, and last a long time on poor power sources, all while sensing and influencing their surroundings. Faculty and students are applying their skills to the entire “stack,” from transistors and circuits to operating systems and applications.
ECE Education Research: ECE Education Research is a rigorous, interdisciplinary field in which scholars focus on and apply research methods from education, learning sciences, and social-behavioral sciences to address a variety of issues pertaining to: teaching and learning; college access and persistence; workforce development; and other issues critical to the success of the field of engineering. Scholars in the subfield of ECE Education Research focus on issues pertinent to the discipline of electrical and computer engineering.
Integrated Circuits and VLSI: Research in Very-large-scale integration (VLSI) digital circuits includes microprocessor and mixed signal (microcontroller) circuits, with emphasis on low-power and high-performance; computer-aided design, including logic synthesis, physical design, and design verification; testing and design for testability; advanced logic families and packaging; integrated circuit micro-architectures; and system integration.
MEMS and Microsystems: Devices such as micromachined neural probes for implantable prostheses, ultra-miniature low-power pressure sensors for catheters, tactile sensors arrays for fingerprint analysis, infra-red imagers for manufacturing process control, and micro gas chromatography systems for environmental monitoring are some of the past contributions of this program.
Network, Communication & Information Systems (NCIS): Communication networks are collections of receiving and transmitting stations that may relay information from one station to another by means of other stations acting as relays. There are many components in the process of transmitting information in a communication system. One component is information representation in minimal form, that is, data compression. A second aspect of communication is modulation: the process whereby information is mapped into waveforms suitable for propagation. A third aspect is error control coding: the method by which errors made in receiving information can be corrected. The performance of a communication system is usually measured in terms of the probability of incorrectly decoding the information, or the distortion between the original information-bearing signal and the reconstruction, and the energy used.
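As a concrete illustration of the error control coding mentioned above, here is a minimal sketch of the classic Hamming(7,4) code in Python, which can correct any single flipped bit in a 7-bit codeword (the function names are my own):

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3     # syndrome: 0 means no error
    c = list(c)
    if pos:
        c[pos - 1] ^= 1            # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]
```

For example, encoding `[1, 0, 1, 1]`, flipping any single bit of the codeword, and decoding recovers the original four data bits.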
Optics & Photonics: Specific areas presently under investigation include nonlinear optics, optical MEMS (coupling optical fields to mechanical motion), ultrafast optics, semiconductor quantum optoelectronics, Terahertz generation and applications, fiber and integrated photonics and lasers, high-power fiber lasers, x-ray and EUV generation, quantum optics and quantum computing, optical microcavities, nanophotonics, spectroscopy of single quantum dots, biophotonics, and biophysical studies of biomolecular structure.
Plasma Science & Engineering: PSE has an incredibly broad and strategic impact on national and economic security, and provides substantial societal benefit. Modern microelectronic devices could not be fabricated in the absence of plasma etching, deposition and cleaning processes. Thin film solar cell technologies depend upon plasma deposition to be economically viable. Fabrication of biotechnology devices depends on plasma processes to harden artificial joints and prepare biocompatible surfaces on tissue scaffolding. Interplanetary probes are powered by plasma thrusters.
Power and Energy: Faculty are investigating energy conversion systems where enhanced performance of electrical machines and power electronics is being exploited to develop a variety of novel applications, from automotive propulsion systems to wind generators. Power systems research is seeking new tools and techniques for improving grid efficiency and robustness.
Quantum Science & Technology: Quantum mechanics has played an important role in many areas of engineering for decades now, fueling an increasing number of fundamental breakthroughs as available devices become smaller and individual particles can be precisely controlled in the lab. Newly observed phenomena are often best explained using quantum theory, facilitating new technologies and applications. In particular, accounting for quantized energy levels and the Fermi nature of electrons in semiconductors has led to more accurate modeling and optimization of CMOS transistors, as well as new results on capacitively-coupled quantum dots.
Robotics & Autonomous Systems: We also use artificial intelligence techniques for dealing with planning and uncertainty, localization and mapping, sensor processing and classification, and continuous learning.
Signal & Image Processing and Machine Learning: Signal processing is a broad engineering discipline that is concerned with extracting, manipulating, and storing information embedded in complex signals and images. Methods of signal processing include: data compression; analog-to-digital conversion; signal and image reconstruction/restoration; adaptive filtering; distributed sensing and processing; and automated pattern analysis.
Solid-State Devices and Nanotechnology: Research in organic and molecular electronics includes organic field-effect transistors, integrated circuits and light-emitting devices on glass and plastic substrates, hydrogenated amorphous silicon thin-film transistors and active-matrix arrays on glass and plastic substrates for flat panel displays and sensors, and active-matrix organic light-emitting display technology.
What are the other areas where there can be similar synergy? Let us discuss further.
Wikipedia describes physics (lit. 'knowledge of nature') as the natural science that studies matter, its motion and behavior through space and time, and the related entities of energy and force.
But isn’t this definition a redundancy? Any visible object is made of matter and its motion is a consequence of energy applied. We might as well say, study of stuff that happens. But then, what does study entail?
Fundamentally, ‘physics’ is a category word, and category words have inherent problems. How broad or inclusive is the category word, and is the ordinary use of the category word too restrictive?
Is biophysics a subcategory of biology? Is econophysics a subcategory of economics? If, for example, biophysics combines elements of physics and biology, does one predominate in the categorization? If, as in biophysics, econophysics, and astrophysics, there are overlapping disciplines, does the category word 'physics' give us insight into what physics studies, or obscure it?
Is defining what physics does more a problem of semantics (ascribing meaning to a category word) than of science?
Might another way of looking at it be this? Physics generally involves detecting patterns common to different phenomena, whether natural, emergent, or engineered; where possible, identifying fundamental principles and laws that model those patterns, and expressing them in mathematical notation; and, where possible, devising and implementing experiments to test whether hypothesized or observed patterns provide evidence for, or clues to, those fundamental principles and laws.
Maybe physics more generally just involves problem solving and the collection of inferences about things that happen.
The modeling language has its corresponding abstract syntax and semantics. But the semantics are only expressed in the form of a textual description. If you want to use a standard mathematical or logical language to define its semantics, what are the alternatives?
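One standard alternative to a textual description is a denotational semantics: a function mapping each abstract-syntax form to a mathematical object. As a minimal sketch (the toy expression language and all names here are my own, purely for illustration), such a semantics can itself be made executable:

```python
# Toy abstract syntax, encoded as tuples:
#   ("num", n) | ("var", name) | ("add", e1, e2) | ("mul", e1, e2)

def denote(expr, env):
    """Denotational semantics: map each syntactic form to an integer value.
    `env` is an environment mapping variable names to integers."""
    tag = expr[0]
    if tag == "num":
        return expr[1]
    if tag == "var":
        return env[expr[1]]
    if tag == "add":
        return denote(expr[1], env) + denote(expr[2], env)
    if tag == "mul":
        return denote(expr[1], env) * denote(expr[2], env)
    raise ValueError(f"unknown syntactic form: {tag}")
```

Other common options include operational semantics (small-step or big-step inference rules) and axiomatic semantics (Hoare logic); which one fits depends on what you want to prove about the language.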
I'm planning to use a fluency task, but I am hesitating over which one to use. We are investigating the mental flexibility of bilingual subjects.
So, I am reading a paper and I seem to be confused. In this paper they did a ChIP assay to observe the promoter region of p21 to determine if there was an interaction with p53. They claim their results prove a dependency of p21 expression on p53. While this is not unheard of and I don't doubt the accuracy of this statement, I am a bit perplexed at some of the language used to describe this interaction and the explanation of the experiment used. From my understanding of ChIP, you use specific primers to target the promoter region of interest; then, to determine the interaction that region may have with other genes or transcription factors, you use a specific antibody. The output data should tell you the level of enrichment for that particular antibody's target, correct?
So they looked at the p21 promoter using specified primers then probed with the p53 antibody. In their description of results for the ChIP assay, they say they found increases in p21 expression as a result of p53. In the previous section they do report increases in p21 mRNA expression which makes me think it's a matter of semantics or I am understanding the experiment wrong. If the antibody was specific to p53, would they not have found increases in p53 expression at the promoter of p21 thereby showing a relationship with the two and possibly a dependency?
They do show increases in p53 protein expression alongside p21, which supports their line of thinking, and I think it makes sense to associate elevated levels of p21 with elevated levels of p53 (wt), since p53 is interacting with p21 at its promoter. Elevated levels of p53 at the p21 promoter would make sense and would not change the conclusions of this particular study.
I'm simply wanting to ensure I understand how the experiment works.
What are lexico semantics features of Nigerian English and how do they affect the spoken English of secondary school students?
I'm looking for a method, a function, or an API which checks whether a character string has semantics or not (i.e., whether it represents a word with a meaning or is a random letter string).
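One simple technique is a lexicon lookup: treat a string as meaningful if it appears in a word list. The sketch below uses a tiny hypothetical demo lexicon just for illustration; in practice you would load a full dictionary file, or query a lexical database such as WordNet (e.g. via NLTK's `wordnet.synsets`) or a spell-checking library:

```python
# Hypothetical demo lexicon; replace with a real word list in practice.
DEMO_LEXICON = {"semantics", "word", "meaning", "string", "random"}

def is_meaningful_word(token, lexicon=DEMO_LEXICON):
    """Return True if the (case-normalised) token appears in the lexicon."""
    return token.lower().strip() in lexicon
```

This catches random letter strings like "xqzvtr" but will also reject real words missing from the lexicon, so coverage of the word list matters more than the code itself; character n-gram language models are a common fallback for out-of-vocabulary strings.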
In the Romance languages, many syntactic contexts, especially in subordinate clauses, require a subjunctive mood on the verb, not leaving any room for mood choice. In other contexts, however, there is a choice between subjunctive and indicative depending on the semantics a speaker may want to convey. I'm interested in research on the statistics of these two types. What is the percentage of syntactically triggered vs. modal subjunctive use in Spanish (or other Romance languages)? What are the percentages in the written language as opposed to spontaneous spoken discourse? Is there published research on this? On what criteria could such a statistical analysis be based, given that there is an important intermediate group of cases, where many speakers might already consider the subjunctive obligatory while others might still see some room for variation?
I have the following situation: I have a paper X about topic Y. For paper X I did a forward search with Web of Science (checking all new papers which cite paper X). Then I downloaded all articles I identified via the forward search (approx. 1,000 papers). Now I would like to sort these papers according to the frequency of specific keywords used.
For example: I have found paper Z via forward search (so paper Z cites paper X, which is about topic Y). Now I want to check if paper Z is also concerned with topic Y or if it just refers to it in passing. For that I search for specific keywords which correspond to topic Y. According to the frequency of the specific keywords mentioned in paper Z, I want to classify it into the category "relevant" or "not relevant". Now, how can I determine the threshold for the keywords? That is, if paper Z only uses the specific keyword once, it is most probably not relevant to topic Y. But if it mentions the specific keyword 20 times, it is probably relevant to topic Y.
Is there a recognized methodology to determine or approximate a threshold for the keyword frequency which allows to distinguish if a paper is relevant to topic Y or not?
With this approach I hope to reduce the 1'000 papers to those which are about topic Y.
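A minimal sketch of one possible heuristic, assuming plain-text versions of the papers: normalise keyword counts by document length (so long papers are not automatically "relevant"), then derive a threshold from the distribution of rates across the whole downloaded corpus, e.g. mean plus one standard deviation. The function names and the threshold rule are my own suggestions, not an established standard:

```python
import re
import statistics
from collections import Counter

def keyword_rate(text, keywords):
    """Keyword hits per 1,000 tokens, so document length is factored out."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    hits = sum(counts[k] for k in keywords)
    return 1000 * hits / max(len(tokens), 1)

def relevance_threshold(rates):
    """One heuristic: mean + 1 population standard deviation of the
    rates observed across the corpus; papers above it count as relevant."""
    return statistics.mean(rates) + statistics.pstdev(rates)
```

Any fixed cutoff will misclassify borderline papers, so it is worth manually checking a sample on both sides of the threshold; more principled alternatives include TF-IDF weighting or training a simple classifier on a hand-labelled subset.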
Actually, I want to know what are the factors to notice in determining semantic features of different parts of speech.
I recognize some of them like +/- animate, +/- human, +/- male, and +/- young.
But are there any other features? For example, how can I identify different semantic features for words such as book and notebook?
Actually, for every text to be translated there is a kind of translation that best suits it. For example, scientific texts should be semantically translated, because the accuracy of meaning in such a case is of top priority. Literary texts, on the other hand, can be communicatively managed. In other words, the meaning in this case is not of top priority but goes hand in hand with the form, which is very important too.
It could be added that political discourse is characterized by a great deal of playing on words' meanings. In other words, politicians, in most situations, try to use certain words and expressions with opposite meanings. Such being the case, the pragmatic approach should be relied on when it comes to rendering political discourse.
The levels of processing effect is a robust phenomenon whereby 'deep' processing of studied material (e.g. making a decision about the meaning of a word) results in better performance on subsequent tests of recall, relative to 'shallow' processing (e.g. counting the number of letters in a word).
My question is this: has any work to date compared deep and/or shallow processing to mere exposure (i.e. simply viewing a to-be-remembered word but not making any decisions about its orthographic/semantic content)?
Please, I have a question:
How can I build the Bayesian network and conditional probability tables for the following correlation: x1 → x5 ∨ x6 ∨ x7?
As in paper "A. Malki, D. Benslimane, S.-M. Benslimane, M. Barhamgi, M. Malki, P. Ghodous, and K. Drira, “Data Services with uncertain and correlated semantics,” World Wide Web, vol. 19, no. 1, pp. 157–175, 2016.".
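Without access to the cited paper's exact formalism, one common reading of x1 → x5 ∨ x6 ∨ x7 is a network in which x1 is a parent of each of x5, x6, and x7, and the disjunction is evaluated from the children's conditional probability tables. A minimal sketch, with made-up placeholder probabilities purely for illustration:

```python
from itertools import product

# Prior for the parent node x1 (placeholder numbers).
p_x1 = {1: 0.3, 0: 0.7}

# CPTs: P(child = 1 | x1) for each child node (placeholder numbers).
p_child_given_x1 = {
    "x5": {1: 0.8, 0: 0.1},
    "x6": {1: 0.6, 0: 0.2},
    "x7": {1: 0.5, 0: 0.1},
}

def joint(x1, x5, x6, x7):
    """Joint probability under the factorisation
    P(x1) * P(x5|x1) * P(x6|x1) * P(x7|x1)."""
    p = p_x1[x1]
    for name, val in (("x5", x5), ("x6", x6), ("x7", x7)):
        q = p_child_given_x1[name][x1]
        p *= q if val else 1 - q
    return p

# P(x5 v x6 v x7 | x1 = 1), computed by enumerating the child assignments.
num = sum(joint(1, a, b, c)
          for a, b, c in product((0, 1), repeat=3) if a or b or c)
p_or_given_x1 = num / p_x1[1]
```

With these placeholder CPTs the disjunction holds with probability 1 − (0.2 · 0.4 · 0.5) = 0.96 given x1 = 1. For larger networks, a library such as pgmpy would let you declare the same structure and CPTs declaratively instead of enumerating by hand.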
The given data (satellite images) are in TIFF format and consist of 4 bands. How can I prepare a labelled training dataset for deep-learning remote sensing semantic segmentation?
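A common preparation step, once each scene has a per-pixel label mask (drawn in a GIS tool or derived from ground-truth vectors), is to cut the large 4-band image and its mask into small aligned patches. A minimal sketch, assuming the TIFF has already been read into a NumPy array (with a real file you would load it via rasterio or tifffile; the function name is my own):

```python
import numpy as np

def make_patches(image, mask, size=128, stride=128):
    """Cut a (bands, H, W) image and its (H, W) label mask into
    aligned square patches for training a segmentation network."""
    _, h, w = image.shape
    pairs = []
    for top in range(0, h - size + 1, stride):
        for left in range(0, w - size + 1, stride):
            img_patch = image[:, top:top + size, left:left + size]
            mask_patch = mask[top:top + size, left:left + size]
            pairs.append((img_patch, mask_patch))
    return pairs

# Synthetic 4-band "scene" standing in for a loaded TIFF.
image = np.zeros((4, 256, 256), dtype=np.float32)
mask = np.zeros((256, 256), dtype=np.uint8)
patches = make_patches(image, mask)
```

Keeping all 4 bands (rather than dropping to RGB) usually helps, since the extra band often carries vegetation or water information; a stride smaller than the patch size gives overlapping patches and more training samples.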
I want to build a multi-resolution CNN for semantic segmentation, and I would like to know if there is a particular architecture that would be better to use.
We are working on Hindi, which does not have rich lexical resources. There are 20 broad semantic relations which we have identified for domain-independent nominal compounds.