
Syntax - Science topic

In linguistics, syntax is "the study of the principles and processes by which sentences are constructed in particular languages".
Questions related to Syntax
  • asked a question related to Syntax
Question
3 answers
Need details on this please!
Relevant answer
Answer
You can "turn on" syntax and it will generate the written versions of the commands that you activate through the drop-down menus etc. That way, you can see the equivalent syntax for what you are already doing.
  • asked a question related to Syntax
Question
1 answer
GROMACS version: 2022.4. Hi, I am working with a GPCR system embedded in a lipid bilayer and water. I want to select every water molecule (group 31 in the index file) situated within 4 Å of a particular residue of the protein (group 43), so I used the command as I have seen it used in earlier queries: gmx select -f input.xtc -s input.tpr -n input.ndx -select ‘group 31 and within 0.4 of group 43’ -on output.ndx
But I get multiple syntax errors:
Error in user input: Invalid command-line options
  In command-line option -select
    Invalid selection ‘and’ — Near ‘and’ syntax error
  In command-line option -select
    Invalid selection ‘within’ — While parsing ‘within’: ‘within’ should be followed by a value/expression; ‘of’ is missing
  In command-line option -select
    Invalid selection ‘0.4’ — Near ‘0.4’ syntax error
  In command-line option -select
    Invalid selection ‘group’ — Near ‘group’ syntax error
  In command-line option -select
    Near ‘�’ syntax error
Where am I going wrong? I have tried a few other combinations too, but those were also followed by similar syntax errors. Thanks in advance
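A likely culprit here is not the selection logic but the quoting: the error trail (stray ‘and’, ‘within’, and a final ‘�’) is the typical signature of typographic "smart" quotes pasted from a document. The shell does not treat ‘…’ as quoting characters, so the selection string is split into separate tokens. Retyping the command with straight ASCII quotes usually resolves it (and note that gmx select distances are in nm, so 0.4 does correspond to the intended 4 Å). A small Python sketch of the sanitation step:

```python
# A likely culprit: typographic "smart" quotes pasted from a word processor.
# The shell does not treat the curly quotes as quoting, so the selection is
# split into separate tokens and gmx select sees 'and', 'within', ... as
# stray words, plus the multi-byte quote character itself near the end.
SMART_QUOTES = {"\u2018": "'", "\u2019": "'", "\u201c": '"', "\u201d": '"'}

def fix_quotes(cmd: str) -> str:
    """Replace curly quotes with straight ASCII quotes."""
    return cmd.translate(str.maketrans(SMART_QUOTES))

cmd = "gmx select -select \u2018group 31 and within 0.4 of group 43\u2019"
print(fix_quotes(cmd))  # the selection is now one properly quoted string
```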
  • asked a question related to Syntax
Question
3 answers
Dear researchers, I am trying to assess specific indirect effects in my model with three mediators. However, AMOS always gives a syntax error and my estimand will not run. When I try it in RStudio (with the lavaan and psych packages), I cannot assign parameters to calculate the specific indirect effects. Could you please help me identify the problems and possible solutions?
Below is the code in R studio:
library(psych)
library(lavaan)
# I already input my CSV data so now I just describe it
describe(my.data)
A =~ A2+ A3 + A4 + A5 + A7 + A8
MS =~ MS1 + MS2 + MS3 + MS4 + MS6 + MS7+ MS8
M =~ M1 + M2 + M4 + MA8
IM =~ IM1 + IM2 + IM3 + IM4
FLA =~ Listen + Speak + Read + Write
# Regression paths from IV to mediators
M ~ a1*IM
A ~ a2*IM
MS ~ a3*IM
# Regression paths from mediators to DV (FLA)
FLA ~ b1*M + b2*A + b3*MS + c1*IM
#From this moment, I tried to assign parameters to calculate specific indirect effects. However, none of the below functions works!
direct : c1
Error: object 'direct' not found
direct:= c1
Error in `:=`(direct, c1) : could not find function ":="
direct<-c1
Error: object 'c1' not found
direct=c1
Error: object 'c1' not found
Relevant answer
Answer
As far as I recall, AMOS by default does not report indirect effects along individual paths when there is more than one indirect path between two factors/variables (e.g., in a parallel mediation model). Did you use user-defined estimands to get the estimates of the three indirect effects? If yes, maybe the syntax error is in the code defining these estimands, not in the model.
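On the R side specifically: lavaan's `:=` operator is only parsed inside the model string passed to `sem()`/`cfa()`. Typed at the console, `:=` is not an R function, which is exactly the `could not find function ":="` error shown in the question. A hedged sketch of the model specification (factor and path labels taken from the question; the defined-parameter names such as `ind_M` are my own illustrative choices):

```r
library(lavaan)

model <- '
  # measurement model (as in the question)
  A   =~ A2 + A3 + A4 + A5 + A7 + A8
  MS  =~ MS1 + MS2 + MS3 + MS4 + MS6 + MS7 + MS8
  M   =~ M1 + M2 + M4 + MA8
  IM  =~ IM1 + IM2 + IM3 + IM4
  FLA =~ Listen + Speak + Read + Write

  # structural paths
  M  ~ a1*IM
  A  ~ a2*IM
  MS ~ a3*IM
  FLA ~ b1*M + b2*A + b3*MS + c1*IM

  # defined parameters must live INSIDE this model string
  ind_M  := a1*b1
  ind_A  := a2*b2
  ind_MS := a3*b3
  direct := c1
  total  := c1 + a1*b1 + a2*b2 + a3*b3
'
fit <- sem(model, data = my.data)
summary(fit, standardized = TRUE)
```

With the defined parameters inside the model string, lavaan reports each specific indirect effect (with standard errors) in the "Defined Parameters" section of the summary.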
  • asked a question related to Syntax
Question
1 answer
“Machine learning tells us nothing!!!!  It [human language] is too rich. … It [language] works as well with impossible languages as with regular languages. … [AI] is as useful as a bulldozer is useful. …  It tells you nothing about mind and thoughts. …  Computational complexity is what accounts for language. …  [AI] will never provide [an] explanation to language. …  Syntax gives language meaning…and syntax is totally independent of the external world [and is controlled by the brain].” (Chomsky, N., 2022, On Theories of Linguistics (Part 2), Dec. 30, 42.32 minutes, Youtube).
If Noam Chomsky is correct that his Universal Grammar, as controlled by the brain, is independent of the external world, then ChatGPT—which is based on vacuuming up large amounts of information from the external world—will never be a suitable metaphor for human language. Furthermore, AI was not designed to provide meaning to its output; it is merely a transmitter of an output that underwent a previous reconfiguration (i.e., a factor analysis acting on its input). Therefore, the creativity attributed to AI—with its high energy cost compared to that of the biological brain—will, at best, remain a tool in the hands of humans. This is of course good news, since humans need to be held responsible for all actions (good and bad) generated by AI (Harari 2024).
Relevant answer
Answer
The statement critiques AI’s limitations in understanding human language, arguing that machine learning merely processes data without understanding the mind or meaning. Noam Chomsky’s view suggests that AI, relying on external world data, cannot replicate human language’s universality and independence from external stimuli. AI is framed as a tool lacking true creativity or comprehension, reinforcing that humans remain responsible for AI’s outcomes.
  • asked a question related to Syntax
Question
5 answers
I want to adopt an embedded research design for my PhD thesis and proposal. Please guide me and help me develop an analysis plan for my research design, along with a suggested target population and sampling techniques, so that through artificial intelligence I can significantly help teachers speed up learning for ASL and ESL language learners and combat common teaching barriers such as unfriendly and unengaging teacher behavior. The respondents can be recruited both online (i.e., from schools, colleges, universities, or elsewhere) and manually. What could the five chapters of my thesis and proposal look like if my title is "From Syntax to Semantics: Exploring AI-Enhanced Teaching Tools for English and Arabic Language Learners"? Please also help develop the research questions and questionnaire items in both English and Arabic.
Relevant answer
Answer
Title: From Syntax to Semantics: Exploring AI-Enhanced Teaching Tools for English and Arabic Language Learners
Introduction:
In an increasingly globalized world, language learning has become an essential skill, not only for communication but also for personal and professional growth. English, as a global lingua franca, and Arabic, with its rich cultural and historical significance, are two of the most widely taught and learned languages. However, both languages present unique challenges to learners due to their complex syntax, semantics, and cultural contexts. Traditional methods of language teaching, though effective, can be time-consuming and may not always meet the needs of every learner.
Artificial Intelligence (AI) is revolutionizing education, offering promising tools that can enhance language learning experiences. AI-powered tools, from natural language processing (NLP) applications to intelligent tutoring systems, provide personalized, adaptive, and context-sensitive support. This paper explores the potential of AI-enhanced teaching tools in improving the learning process for both English and Arabic learners, addressing the challenges and opportunities they present.
1. Understanding Syntax and Semantics in Language Learning
Syntax refers to the rules that govern sentence structure, while semantics deals with meaning in language. Both elements are crucial in language acquisition, especially for English and Arabic, which have fundamentally different grammatical and syntactical structures.
English Syntax: English follows a Subject-Verb-Object (SVO) structure, and its syntax tends to be more linear, with fewer inflections compared to languages like Arabic.
Arabic Syntax: Arabic is a more flexible language in terms of word order (it can be Verb-Subject-Object or Subject-Verb-Object), and it uses complex morphological structures to convey meaning. It also has a rich system of diacritics and verb conjugations that change based on tense, mood, and aspect.
AI can be particularly useful in addressing these syntactical and semantic differences by providing real-time feedback and personalized learning experiences for students.
2. The Role of AI in Enhancing Syntax Learning for English and Arabic Learners
AI-driven tools can help learners of both languages grasp syntax by:
Automatic Grammar Checking: Tools like Grammarly for English and Arabic-language counterparts use AI algorithms to detect errors in sentence structure and suggest improvements. These tools can give learners real-time corrections and explanations about their syntax mistakes.
Syntax Tree Parsing: AI can help break down complex sentences into syntactical trees, offering a visual representation of how words function within a sentence. This is particularly helpful for Arabic, where word order can change without altering the meaning.
Language Models for Sentence Generation: Advanced AI models like OpenAI’s GPT can generate sentences and offer practice exercises based on the learner's level, helping them understand sentence construction in both languages.
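The syntax-tree idea above can be made concrete with a toy example. The following is a minimal sketch, not a production parser: a hand-rolled recursive-descent parser for a tiny English SVO fragment. The lexicon and grammar rules are invented purely for illustration.

```python
# Toy lexicon (assumption: a few English words tagged by part of speech).
LEX = {"the": "Det", "cat": "N", "mouse": "N", "chased": "V"}

def np(tokens, i):
    """NP -> Det N | N. Returns (subtree, next_index) or (None, i)."""
    if i < len(tokens) and LEX.get(tokens[i]) == "Det":
        if i + 1 < len(tokens) and LEX.get(tokens[i + 1]) == "N":
            return ("NP", tokens[i], tokens[i + 1]), i + 2
    if i < len(tokens) and LEX.get(tokens[i]) == "N":
        return ("NP", tokens[i]), i + 1
    return None, i

def sentence(text):
    """S -> NP V NP (the rigid SVO pattern of the English fragment)."""
    toks = text.lower().split()
    subj, i = np(toks, 0)
    if subj and i < len(toks) and LEX.get(toks[i]) == "V":
        obj, j = np(toks, i + 1)
        if obj and j == len(toks):
            return ("S", subj, ("V", toks[i]), obj)
    return None  # not a well-formed SVO sentence in this fragment

print(sentence("the cat chased the mouse"))
```

A real system would of course use a broad-coverage grammar or a statistical parser, but even this sketch shows how a tree makes the subject/verb/object roles explicit, which is the pedagogical point of tree visualization tools.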
3. AI’s Impact on Semantics and Contextual Understanding
Semantics is often the most challenging part of learning a new language because it requires not only understanding individual words but also grasping their meanings within various contexts.
AI-Powered Translation Tools: Translation tools, such as Google Translate and DeepL, utilize advanced AI models to provide accurate translations while considering both syntax and semantics. For Arabic learners, these tools help with understanding idiomatic expressions, cultural nuances, and metaphorical language that can be difficult to translate directly.
Context-Aware Learning: AI can leverage machine learning to recognize the context in which a word is used and adapt its meaning accordingly. This is especially useful in Arabic, where a single word may have multiple meanings depending on its use.
Sentiment Analysis: AI can analyze the tone, mood, or sentiment of a sentence, which is crucial for both English and Arabic learners in understanding subtleties in communication, such as sarcasm, politeness, or emphasis.
4. Personalized Learning Experiences
One of the major benefits of AI in language learning is its ability to offer personalized learning experiences:
Adaptive Learning Platforms: AI-based platforms like Duolingo, Rosetta Stone, and Babbel use algorithms to adapt the content and difficulty based on the learner's progress, ensuring that both English and Arabic learners receive the right level of challenge at each stage.
Chatbots and Virtual Tutors: AI-powered chatbots, such as Google Assistant or language-specific tutors like Replika, provide opportunities for learners to engage in real-time conversations, simulate language use in a variety of contexts, and practice their skills in a risk-free environment.
5. Addressing Challenges in AI-Based Language Learning
While AI tools offer numerous benefits, there are still challenges to consider:
Cultural and Contextual Sensitivity: Language is deeply embedded in culture, and AI systems may not always account for cultural nuances or regional dialects. For example, in Arabic, the variety of dialects (e.g., Egyptian Arabic vs. Levantine Arabic) may be underrepresented in AI systems, making it harder for learners to understand everyday communication.
Data Bias: AI models are only as good as the data they are trained on. If training data is skewed or lacks sufficient diversity, it can lead to errors, particularly in languages with complex morphology like Arabic.
Overreliance on Technology: While AI tools are powerful, overreliance on them may limit learners’ ability to engage in natural, human-led conversations. Balancing AI use with human interaction is crucial for true language fluency.
6. The Future of AI in Language Learning
As AI continues to evolve, its potential to enhance language learning will only increase. Future AI tools may incorporate:
Augmented Reality (AR) and Virtual Reality (VR): These technologies, combined with AI, could create immersive learning environments where learners practice language in real-life settings, interacting with virtual objects or characters that require them to use English or Arabic.
Voice Recognition and Pronunciation Improvement: AI systems will likely improve in their ability to assess pronunciation in both languages and offer more precise feedback.
Cross-Language Comparison: AI can be used to create tools that allow learners to directly compare English and Arabic in terms of syntax, grammar, and usage, helping them see the similarities and differences more clearly.
Conclusion
AI-enhanced teaching tools have the potential to revolutionize the way English and Arabic languages are learned. By addressing the challenges of syntax, semantics, and contextual understanding, these tools offer learners a more personalized, effective, and engaging way to master both languages. However, for these tools to reach their full potential, they must be continuously improved to account for cultural nuances, regional variations, and the complex dynamics of human communication. As technology advances, AI will likely play an even more central role in bridging language barriers, helping learners from diverse backgrounds access the full richness of both English and Arabic languages.
  • asked a question related to Syntax
Question
13 answers
I am carrying out research on fracture rates in patients with sarcopenia, using SF-12 version 2 as the QoL tool.
I was wondering if anyone is using the same questionnaire and calculating the scores using SPSS syntax? Thank you very much!
Relevant answer
Answer
Ahmed Ibrahim Morshedy Thank you so much! Do you know if this syntax also works for the acute version of the SF12v2 (with a 7 days recall period)? I am not sure if the norm values are the same. I would appreciate your help very much!
  • asked a question related to Syntax
Question
1 answer
In one of my PhD papers, I have a cross-sectional dependence problem among the panels. I thus need to test for the presence of unit roots. From the literature, I note that there are two panel unit root tests that take cross-sectional dependence into account: the Pesaran (CIPS) test and the Bai and Ng test. I know the code for the former in Stata, but I am not aware of the code for the latter. Kindly assist.
Kind regards
Relevant answer
Answer
Hi, did you find the Stata code? Eviews can do it though.
  • asked a question related to Syntax
Question
3 answers
Hello everyone, I use IBM SPSS Statistics 29.0.2.0 and I have a list of the following dichotomous variables:
  • Number of people who are self-employed Recoded,
  • Number of people with multiple jobs,
  • Number of household members who work full time,
  • Number of household members who work part-time,
  • Number of unemployed household members,
  • Household members who are retired,
  • Number of household members who are disabled,
  • Members who are not working for some reason.
This is an example of how they are coded: {1.00, No household members who are self-employed} and {2.00, Household members who are self-employed}. Of course, each one has its corresponding values. Now, I want to compute all of these variables into one comprehensive dichotomous variable called: Household economic activity status with the following values: 1.00, Households with at least one Economically Active Member and 2.00, Households with at least one Economically Inactive member.
I tried running different codes on the syntax, but none worked. I would greatly appreciate any help on how to do this.
Thanks and best wishes,
Amina
Relevant answer
Answer
Years ago I used the book SPSS analysis without anguish.
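The recoding rule itself is simple: a household is coded "economically active" (1) if any member-level indicator shows activity, else "inactive" (2). In SPSS this is typically one COMPUTE over the recoded variables (the ANY function is the usual tool, but verify the variable list in your file). The rule can be sketched language-independently in Python; the column names below are hypothetical stand-ins for the variables listed in the question:

```python
# Each variable is coded 1 = "no household members ..." and 2 = "household
# members ...", mirroring the value labels quoted in the question.
# The names here are hypothetical; substitute your actual SPSS variable names.
ACTIVE_VARS = ["self_employed", "multiple_jobs", "full_time", "part_time"]

def household_status(row: dict) -> int:
    """Return 1 (at least one economically active member) or 2 (none)."""
    return 1 if any(row[v] == 2 for v in ACTIVE_VARS) else 2

print(household_status({"self_employed": 1, "multiple_jobs": 1,
                        "full_time": 2, "part_time": 1}))  # one active member
```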
  • asked a question related to Syntax
Question
5 answers
Hi there,
I was looking for a scoring guide or SPSS/Stata/R syntax for scoring SF 12 version-2. Can anyone help me in this regard? My email address is m.alimam@cqu.edu.au
Thanks in advance.
Relevant answer
Answer
SPSS Syntax
*/SF12 V2 Scoring
RECODE SF12HF_1 (1=5) (2=4) (3=3) (4=2) (5=1) INTO SF12HF_1_r.
EXECUTE.
RECODE SF12HF_8 (1=5) (2=4) (3=3) (4=2) (5=1) INTO SF12HF_8_r.
EXECUTE.
RECODE SF12HF_9 SF12HF_10 (1=6) (2=5) (3=4) (4=3) (5=2) (6=1) INTO SF12HF_9_r SF12HF_10_r.
EXECUTE.
RECODE SF12HF_1_r SF12HF_2 SF12HF_3 SF12HF_4 SF12HF_5 SF12HF_6 SF12HF_7 SF12HF_8_r SF12HF_9_r
SF12HF_10_r SF12HF_11 SF12HF_12 (ELSE=Copy) INTO Item1 Item2A Item2B Item3A Item3B Item4A Item4B
Item5 Item6A Item6B Item6C Item7.
EXECUTE.
COMPUTE PF_1=Item2A+Item2B.
EXECUTE.
COMPUTE RP_1=Item3A+Item3B.
EXECUTE.
COMPUTE BP_1=Item5.
EXECUTE.
COMPUTE GH_1=Item1.
EXECUTE.
COMPUTE VT_1=Item6B.
EXECUTE.
COMPUTE SF_1=Item7.
EXECUTE.
COMPUTE RE_1=Item4A+Item4B.
EXECUTE.
COMPUTE MH_1=Item6A+Item6C.
EXECUTE.
COMPUTE PF_2=100*(PF_1 - 2)/4.
EXECUTE.
COMPUTE RP_2=100*(RP_1 - 2)/8.
EXECUTE.
COMPUTE BP_2=100*(BP_1 - 1)/4.
EXECUTE.
COMPUTE GH_2=100*(GH_1 - 1)/4.
EXECUTE.
COMPUTE VT_2=100*(VT_1 - 1)/4.
EXECUTE.
COMPUTE SF_2=100*(SF_1 - 1)/4.
EXECUTE.
COMPUTE RE_2=100*(RE_1 - 2)/8.
EXECUTE.
COMPUTE MH_2=100*(MH_1 - 2)/8.
EXECUTE.
*/TRANSFORM SCORES TO Z-SCORES;
COMPUTE PF_Z = (PF_2 - 81.18122) / 29.10588 .
EXECUTE.
COMPUTE RP_Z = (RP_2 - 80.52856) / 27.13526 .
EXECUTE.
COMPUTE BP_Z = (BP_2 - 81.74015) / 24.53019.
EXECUTE.
COMPUTE GH_Z = (GH_2 - 72.19795) / 23.19041.
EXECUTE.
COMPUTE VT_Z = (VT_2 - 55.59090) / 24.84380 .
EXECUTE.
COMPUTE SF_Z = (SF_2 - 83.73973) / 24.75775 .
EXECUTE.
COMPUTE RE_Z = (RE_2 - 86.41051) / 22.35543 .
EXECUTE.
COMPUTE MH_Z = (MH_2 - 70.18217) / 20.50597 .
EXECUTE.
*/CREATE PHYSICAL AND MENTAL HEALTH COMPOSITE SCORES:
COMPUTE AGG_PHYS = (PF_Z * 0.42402) +
(RP_Z * 0.35119) +
(BP_Z * 0.31754) +
(GH_Z * 0.24954) +
(VT_Z * 0.02877) +
(SF_Z * -.00753) +
(RE_Z * -.19206) +
(MH_Z * -.22069).
EXECUTE.
COMPUTE AGG_MENT = (PF_Z * -.22999) +
(RP_Z * -.12329) +
(BP_Z * -.09731) +
(GH_Z * -.01571) +
(VT_Z * 0.23534) +
(SF_Z * 0.26876) +
(RE_Z * 0.43407) +
(MH_Z * 0.48581) .
EXECUTE.
*/TRANSFORM COMPOSITE AND SCALE SCORES TO T-SCORES
COMPUTE AGG_PHYS_T= 50 + (AGG_PHYS * 10).
EXECUTE.
COMPUTE AGG_MENT_T = 50 + (AGG_MENT * 10).
EXECUTE.
COMPUTE PF_T = 50 + (PF_Z * 10) .
EXECUTE.
COMPUTE RP_T = 50 + (RP_Z * 10) .
EXECUTE.
COMPUTE BP_T = 50 + (BP_Z * 10) .
EXECUTE.
COMPUTE GH_T = 50 + (GH_Z * 10) .
EXECUTE.
COMPUTE VT_T = 50 + (VT_Z * 10) .
EXECUTE.
COMPUTE RE_T = 50 + (RE_Z * 10) .
EXECUTE.
COMPUTE SF_T = 50 + (SF_Z * 10) .
EXECUTE.
COMPUTE MH_T = 50 + (MH_Z * 10) .
EXECUTE.
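For anyone working outside SPSS, the arithmetic above is straightforward to mirror. Here is the Physical Functioning (PF) chain only, with the norm mean/SD constants copied verbatim from the syntax — a sanity-check sketch of one scale, not a replacement for the full scoring algorithm:

```python
# Constants copied from the SPSS syntax above (PF scale only).
PF_MEAN, PF_SD = 81.18122, 29.10588

def pf_t_score(item2a: int, item2b: int) -> float:
    """Item2A/Item2B are the two PF items, each scored 1-3."""
    pf_1 = item2a + item2b            # raw sum, range 2..6
    pf_2 = 100 * (pf_1 - 2) / 4       # rescale to 0-100
    pf_z = (pf_2 - PF_MEAN) / PF_SD   # z-score against the norm values
    return 50 + 10 * pf_z             # T-score (mean 50, SD 10)

print(round(pf_t_score(3, 3), 2))     # best possible PF
```

The other seven scales follow the same pattern with their own item sums, ranges, and norm constants, and the two composites are the weighted sums of the eight z-scores shown in the AGG_PHYS/AGG_MENT blocks.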
  • asked a question related to Syntax
Question
1 answer
I'm trying to use this novel material (LaZrO2) as a gate oxide in SOI using the 'user.material' syntax, but I am getting an error.
Relevant answer
I'm sorry, but this isn't my field.
Good research CCG
  • asked a question related to Syntax
Question
4 answers
1. a. Who_j knows who_k heard what stories about himself_k?
b. John does (= John knows who_k heard what stories about himself_k).
2. a. Who_j knows what stories about himself_j who_k heard?
b. John does (= John knows what stories about himself_j who_k heard
/ John knows who_k heard what stories about his_j own)
The examples in (1a) and (2a) ask questions about the matrix subject 'who', with 'John' in (1b) and (2b) corresponding to the wh-constituents being answered. I am curious about the binding relations in these examples, particularly in (2). Can example (2a) be construed as a question targeting the matrix subject 'who', with 'himself' bound by the matrix subject?
Relevant answer
Answer
I don't think the English language is set up to nest separate questions this way, at least not grammatically. It is logical that if someone heard a story about themselves, the question could always follow as to what that story was, so the two questions can be logically nested.
But I think you're trying to ask "what were the stories, if the person heard stories about themselves?" You can't do that by just using "what stories", since it becomes grammatically incorrect; to be correct you would need "which stories". But that creates a logical problem, because "which stories" implies the selection of stories has already been determined and only a choice needs to be made as to which one, which isn't the case here.
  • asked a question related to Syntax
Question
2 answers
Recently it was suggested that I run a parallel analysis and compare against EV > 1 to determine the number of factors for my scale. However, the scale I developed uses principal axis factoring (PAF), not PCA. If I use the SPSS syntax for common factor analysis given by O'Connor, all the mean EVs are less than 1. If I use PCA, I get some factors with mean values greater than 1. I am confused about how to proceed, as the constructs in my scale are correlated and thus not theoretically suitable for PCA. Should I still proceed with parallel analysis?
Relevant answer
Answer
O'Connor's syntax for parallel analysis does support principal axis factoring (PAF), as stated on his webpage.
I haven't used it for a while. As far as I recall, it prints the results for both principal component analysis and PAF by default. If it does not, it should be easy to request them.
By the way, there may be a misunderstanding of how parallel analysis works. It was developed to address a problem with the "EV (eigenvalue) > 1" rule. Therefore, if we use parallel analysis, we do *not* check the number of EVs > 1. We check the number of observed EVs greater than the corresponding EVs from random data, although different cutoff values have been proposed (e.g., means, percentiles), and there are different ways to generate the random data (e.g., multivariate normal distribution or random permutation).
You can find more about parallel analysis from O'Connor's paper, or the following two papers:
Glorfeld, L. W. (1995). An improvement on Horn’s parallel analysis methodology for selecting the correct number of factors to retain. Educational and Psychological Measurement, 55(3), 377–393. https://doi.org/10.1177/0013164495055003002
Horn, J. L. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30(2), 179–185.
Hope this helps
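The comparison logic described above is compact enough to sketch directly. This is a minimal implementation of Horn's parallel analysis using PCA eigenvalues and a percentile criterion (Glorfeld's variant); for the PAF version one would instead use the reduced correlation matrix with communality estimates on the diagonal, as O'Connor's programs do. The simulated one-factor data set at the end is purely illustrative.

```python
import numpy as np

def parallel_analysis(data, n_iter=200, percentile=95, seed=0):
    """Horn's parallel analysis: retain factors whose observed eigenvalue
    exceeds the percentile of eigenvalues from random normal data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Observed eigenvalues of the correlation matrix, largest first.
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        r = rng.standard_normal((n, p))
        rand[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    thresh = np.percentile(rand, percentile, axis=0)
    # Count down the ordered eigenvalues until one fails the comparison.
    k = 0
    for o, t in zip(obs, thresh):
        if o <= t:
            break
        k += 1
    return k

# Demo: 500 cases, 6 indicators all driven by ONE common factor.
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 1))
data = f + 0.8 * rng.standard_normal((500, 6))
print(parallel_analysis(data))  # retains a single factor here
```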
  • asked a question related to Syntax
Question
1 answer
Usually we use .fchk files (Gaussian) for fcclasses to generate .fcc files for running fcclasses. But, we have optimised the geometries using ORCA which generated .out, .hess, .grad files. Though Fcclasses help is suggesting that .hess file can be used instead of .fchk, we're facing errors. Giving suggestions on syntax will help us a lot.
Relevant answer
Answer
Unlike Gaussian, which generates .chk/.fchk files, ORCA requires a different syntax:
---> gen_fcc_state -i molecule.hess -fts orca4 {the Hessian file is the ORCA output}
---> gen_fcc_state -i molecule.engrad -ih molecule.hess {for the latest version of FCclasses}
The above lines can be used to generate the required inputs for FCclasses from ORCA outputs.
Thanks to the developers of FCclasses for answering my query by mail.
  • asked a question related to Syntax
Question
3 answers
Hi everyone,
I conducted a multigroup SEM in JASP and obtained the measurement invariance test results I expected. Now I would like to know how to write syntax that constrains a path to be equal across the two groups. I could not find any explanation of this. Thank you for your answer.
Relevant answer
I didn't understand this JASP tutorial very well, because the deltas in the model fit are comparisons of the first model (configural invariance) with the second and third models (metric and scalar), but the tutorial doesn't use the model without multigroup SEM as a comparison. One possible way to check this difference is to use the chi-square difference test in this Stats Tools Package (https://view.officeapps.live.com/op/view.aspx?src=https%3A%2F%2Fwww.gaskination.com%2FStats%2520Tools%2520Package.xlsm&wdOrigin=BROWSELINK).
  • asked a question related to Syntax
Question
2 answers
Hi everyone,
I tested a traditional 2-1-1 MLM based on the Mplus syntax example from Preacher.
I requested standardized output in the syntax, which I received for the direct effects, path coefficients, etc. The problem is that I did not get standardized estimates for the calculated indirect effects, only unstandardized coefficients. I looked for an explanation of why this is, but I can only find papers that likewise report unstandardized estimates for indirect effects without explaining why.
So in short, why are standardized estimates for indirect effects of multilevel mediation models in Mplus not included in the output even though I requested it?
Cheers,
Maria
Relevant answer
Answer
Thank you for answering this question! It has been very helpful in addressing reviewers' comments on this particular issue. I was wondering, though, whether you could provide a source or reference for citing?
  • asked a question related to Syntax
Question
3 answers
I have tried "a_nonSchmid_110" and "a_nS" in the material.yaml file, but neither of these works. Help is highly appreciated.
Relevant answer
Answer
Thanks!
We know that higher values of the non-Schmid coefficients deteriorate convergence, but so far we have not been able to investigate this systematically.
  • asked a question related to Syntax
Question
2 answers
Could somebody give me syntax for SAS software to determine the lethal concentration via probit analysis?
Relevant answer
Answer
I have both SAS and R, for PC.
  • asked a question related to Syntax
Question
1 answer
I am trying to conduct an a priori Monte Carlo power analysis for a multi-group path analysis. Does anyone have sample syntax for Mplus, or a reference? I have collected effect sizes for each of my paths, but I am not sure how to calculate the parameter estimates.
Relevant answer
Answer
I offer a free on-demand workshop on that topic that you can find here:
  • asked a question related to Syntax
Question
2 answers
Is anyone using MFT to calculate J? If so, which code are you using? Is it possible with DFT? If yes, what would the syntax be? Could you explain with a simple example?
Relevant answer
Answer
Without knowing any details of the system, it is hard to help regarding MFT in particular.
But for DFT, people usually apply a broken-symmetry approach directly to systems with two coupled spins. If the system has more than two magnetic atoms, you need to solve a particular set of equations obtained by considering the electronic system, as described in the literature.
This kind of modelling is possible using Gaussian, ORCA, or indeed any of the major DFT packages.
  • asked a question related to Syntax
Question
2 answers
Hello, I'm quite new here.
I'm just following a tutorial based on the folding Trp Cage Amber tutorial.
When I try to do the minimization with the following code:
pmemd -O -i 1min.in -o 1min.out -p TC5b.prmtop -c TC5b.inpcrd -r 1min.ncrst -inf 1min.mdinfo
it returns me:
"Error: Error from the parser: syntax error.
Check for typos, misspellings, etc. Try help on the command name and desc on the command arguments"
It seems that there is a syntax error, but I can't find out what it is.
Could someone help me? Thank you very much
Relevant answer
Answer
Hello,
It turns out I had just run that command inside tleap, before typing quit. That was the error for sure! I solved it a few minutes ago, and now it works.
Thank you for your kind response
  • asked a question related to Syntax
Question
4 answers
Words get thrown around. Terminology changes. Therefore, syntax is the center of linguistics.
Relevant answer
Answer
The sentence is the center of linguistics: starting from the phonological level, moving to the morphological level, then to the grammatical one (according to de Saussure's structuralism).
  • asked a question related to Syntax
Question
2 answers
I am trying to make a GUI for my models, but a syntax error has emerged.
How can I solve this error?
The GUI code is attached...
Relevant answer
Answer
Have you found a solution to it?
  • asked a question related to Syntax
Question
3 answers
Transitivity analysis can identify the development of human language, according to the analysis done by Halliday and Ruqaiya Hasan on William Golding's novel "The Inheritors".
Relevant answer
Answer
I share Maalej's point of view: systemic linguistics is not known to have shed light on the human mind. It is cognitive linguistics that did. Moreover, which version of so-called 'systemic linguistics' are we speaking of?
  • asked a question related to Syntax
Question
10 answers
Right now I am wondering what I might possibly be doing wrong with my SPSS syntax. I have two already-dichotomized variables and need to combine them to make a third one.
If var1 and var2 = 0, then it should be 0; if var1 or var2 = 1 (or both of them), then it should be 1.
I tried this syntax (and many more :D), it is just not working.
if(roboSV_19_more = 0 and roboSV_19_less = 0) roboSV_19_together = 0.
if(roboSV_19_more = 1 or roboSV_19_less = 1) roboSV_19_together = 1.
I would be very grateful for any help, tips, and tricks. Thank you!
Relevant answer
Answer
Maksim Sokolovskii, I am surprised that you prefer to use a spreadsheet for a task like this. Using code in a statistical software package has the great benefit of documenting exactly what was done and making it reproducible.
  • asked a question related to Syntax
Question
4 answers
I want to run FMOLS and DOLS in Stata. Can someone share the syntax or guide me through the process?
Relevant answer
Answer
Check whether it is installed; otherwise, try: ssc install xtcointreg
  • asked a question related to Syntax
Question
5 answers
What is the relationship between syntax and stylistics?
Relevant answer
Answer
A very short answer: stylistics concerns speech acts, the Saussurean 'parole' of speaking individuals. Syntax concerns the language rules (of a single language, as well as the general rules that can be found in all languages of the world; i.e., the Saussurean 'langue').
  • asked a question related to Syntax
Question
6 answers
This is what they say on etymonline.com:
"late 14c., auctorisen, autorisen, "give formal approval or sanction to," also "confirm as authentic or true; regard (a book) as correct or trustworthy," from Old French autoriser, auctoriser "authorize, give authority to" (12c.) and directly from Medieval Latin auctorizare, from auctor (see author (n.))."
Relevant answer
Answer
That is the way. The first step is the addition of the suffix -ize, which is used to create verbs, to the root 'author' (in its adjectival meaning), creating the verb 'authorize' with the meaning 'make someone an author / give authority to'. After this, we add the negative prefix un-, which means 'the opposite or contrary action of V', creating 'unauthorize'. The same evolution follows the chain: digital > digitalize > undigitalize.
  • asked a question related to Syntax
Question
26 answers
What are some (esoteric) programming languages whose syntax comes closest to natural language? An example usually given is Perl, which was developed by a linguist.
Personally I find that functional programming is closer to natural language than procedural programming.
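The claim that functional style reads closer to natural language can be illustrated with a trivial example (Python here, since it supports both styles side by side):

```python
# Imperative: spell out the machine's steps one by one.
total = 0
for n in range(10):
    if n % 2 == 0:
        total += n * n

# Functional/declarative: reads almost like the English description
# "the sum of n squared for every even n below ten".
total_fn = sum(n * n for n in range(10) if n % 2 == 0)

print(total, total_fn)
```

Both compute the same value; the functional form states *what* is wanted rather than *how* to accumulate it, which is the sense in which it tracks natural-language phrasing more closely.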
Relevant answer
Answer
In the early eighties I wrote a Basic program on a Wang computer. While I was waiting for the result, I estimated how long the outer loop would take - one year, so I stopped this run. What is easily forgotten in automatic programming systems is complexity theory. You cannot simply throw in a bunch of requirements, rules, axioms, and always expect an automatic programmer to produce code that delivers results within the lifetime of the universe.
Regards,
Joachim
  • asked a question related to Syntax
Question
2 answers
JEWISH HUMOR
In 1978, psychologist Samuel Janus conducted a study which found that although Jews constituted only 3 percent of the U.S. population, 80 percent of the nation's professional comedians were Jewish. The percentage is lower today not because there are fewer Jewish comedians, but because, in response to ethnic and gender identity movements, many new comedians have come from groups that were previously under-represented. Belle Barth, Danny Kaye, and other Jewish comedians substituted Yiddish for English when they wanted to slip risqué jokes past English-speaking censors.
In Mel Brooks’ The Producers there is a play within the play called “Springtime for Hitler.” Dozens of dancers, singers, actors and pantomimists of every race and shape audition for the role of Hitler. The show’s opening production number culminates in the formation of a slowly turning swastika. The pillars at the back of the set are being lowered to a horizontal position and transformed into cannons. After seeing a bizarre interview on TV, Reiner turned to Brooks and said, “I understand you were actually at the scene of the Crucifixion.”
Brooks responded, “Ooooooh, boy!” and then continued in character saying that yes, he had known Christ. “He was a thin lad, always wore sandals. Came into the store but never bought anything.”
Henry Spalding says that much Jewish humor is in the form of honey-coated barbs at the people and things Jews love the most. Jews verbally attack their loved ones and their religion, but with the grandest sense of affection. Their jokes are “a kiss with salt on the lips, but a kiss nevertheless.” Dolf Zillman says that Jewish humor exhibits two antithetical statures: disparagement and superiority. This antithesis can be seen in the following joke:
The Israeli Knesset is lamenting all of the challenges that Israel faces.
One member of the Knesset suggests that Israel go to war against the United States. Other members say, “What?” “Such a war wouldn’t last 10 minutes.” “I know. I know. But then we would be a conquered country and the Americans would send us aid. They would build roads and hospitals and send food and agricultural experts.” “But,” said another member of the Knesset, “What if we win?”
Jewish stereotypes include the shrewd businessman, the overbearing mother, the Jewish American Princess, and the persecuted Jew. Arthur Naiman illustrates the stereotype of the overbearing Jewish mother with a story about a psychiatrist who tells a Jewish mother that her son has an Oedipus complex. The mother responds, “Oedipus, schmoedipus, just so long as he loves his mother.”
Yiddish is the language of sarcasm and irony. It is also the language of Jewish culture. Richard Fein’s experiences were typical:
“Yiddish was in my bones, but hidden from my tongue. I did not know Yiddish as a language, but I felt reared in its resonance, pitch, and tone. I recognized a few words uttered in isolation, grasped nothing of its structure, but felt washed in its rhythms. Although I could not speak Yiddish, it was not a foreign language. I never possessed it, but sensed it possessing me.”
Here is a sampling of Yiddish words and expressions:
Bobehla: “little grandmother” term of endearment
Chutzpah: gall or incredible nerve
Ganeff: a thief or mischievous prankster
Kibitz: kidding around
Mishmash: flagrant disorder or confusion
Nebish: a loser or sad sack
Nosh: a snack
Schmaltz: “chicken fat” sentimentality
Schmear: bribing or greasing the palm
Schmooz: a heartfelt visit
Shlemiel: clumsy or inept person
Shlep: carrying things (including oneself) in an undignified way
Shlimazl: fall guy or luckless oaf
Shnorrer: a beggar
In The Joys of Yiddish, Leo Rosten says that Yiddish syntax also enters the English Language:
Fancy-schmancy
kvetch
maven
mazel tov
tanz
Oy Vey!
Get lost.
You should live so long!
Who needs it?
He should excuse the expression.
It shouldn’t happen to a dog.
On him it looks good.
Other Yiddish patterns include virus schmirus, and a real no-goodnik.
Relevant answer
Answer
Marta: Very insightful response. Jewish humor also includes Schpritzing, kvetching, schtick, and oy vey jokes. Thanks for your insights.
  • asked a question related to Syntax
Question
15 answers
CASE GRAMMAR: A MERGER OF SYNTAX AND SEMANTICS
Charles Fillmore’s Deep Cases are determined not by syntax, but rather by semantics. Rather than having Subject, Indirect Object and Direct Object, Fillmore uses such terms as Agent, Experiencer, Instrument, and Patient.
The semantic features often occur in contrasting pairs, like Animate vs. Inanimate, and Cause vs. Effect. Thus:
Agent: Animate Cause
Experiencer: Animate Effect
Instrument: Inanimate Cause
Patient: Inanimate Effect
In an Active Sentence the most active Deep Case is eligible to become the Subject and the least active is eligible to become the Direct Object.
In a Passive Sentence the least active Deep Case is eligible to become the Subject and the most active case becomes an Object of the Preposition “by.”
Normally, the most active deep case is selected as the subject of the sentence:
The Actor if there is one
If not, the Instrument if there is one
If there is no Actor or Instrument, the Object becomes eligible. Therefore we have the following:
The boy opened the door with the key.
The key opened the door.
The door opened.
Is Case Grammar an effective method for showing the interrelationships between syntax and semantics?
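The selection rules above are simple enough to state as a tiny program. Here is a hypothetical Python sketch (the function and data names are mine, not Fillmore's) that promotes the most active deep case present to subject position:

```python
# Hypothetical sketch of Fillmore's subject-selection hierarchy for
# active sentences: Agent first, then Instrument, then Patient.
CASE_HIERARCHY = ["Agent", "Instrument", "Patient"]

def select_subject(cases):
    """Return the filler of the most active deep case present."""
    for case in CASE_HIERARCHY:
        if case in cases:
            return cases[case]
    raise ValueError("no eligible deep case")

# The three example sentences:
frame = {"Agent": "the boy", "Instrument": "the key", "Patient": "the door"}
subject_1 = select_subject(frame)                     # "the boy"
subject_2 = select_subject({"Instrument": "the key",
                            "Patient": "the door"})   # "the key"
subject_3 = select_subject({"Patient": "the door"})   # "the door"
```

Passive selection would simply walk the same hierarchy in reverse.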
Relevant answer
Answer
Anton: Excellent response. It's OK if the levels remain separated as long as there is eventually an interface between the two. This is the tricky part.
  • asked a question related to Syntax
Question
5 answers
Hi,
I have just installed and used spss 29. I was using spss 27.
I am analyzing data with a crossed random effects mixed model.
I am using syntax for this type of analysis. With the exact same syntax and data base I obtain different results with spss 29 and spss 27!
Specifically, the same model (that I call model 3) run with spss 27 was not giving me a warning whereas with spss 29 I get a warning (The final Hessian matrix is not positive definite although all convergence criteria are satisfied. The MIXED procedure continues despite this warning. Validity of subsequent results cannot be ascertained.).
Another case: with a slightly simpler model that I call model 2, I have no warnings but the results with spss 27 and spss 29 are not identical (e.g. BIC is different).
Is anyone experiencing the same or similar ?
Relevant answer
Answer
Thanks. I am running with SPSS29 a syntax written with SPSS27.
  • asked a question related to Syntax
Question
1 answer
Hi all,
I previously ran Little's MCAR test in mplus (all variables are categorical), but I am unable to find the syntax/steps to run it now. Can anyone point me in the correct direction in where to find this syntax/code/steps?
Thank you all so much!
Relevant answer
Answer
You can get it by specifying a mixture model with a single latent class. Below is an example syntax for a 1-factor CFA model with ordinal indicators:
TITLE: Ordinal CFA Example
DATA: FILE = data.txt;
VARIABLE: NAMES = y1-y4;
CATEGORICAL = y1-y4;
MISSING = *;
CLASSES = c(1);
ANALYSIS: TYPE = MIXTURE;
MODEL: %OVERALL%
F by y1-y4;
  • asked a question related to Syntax
Question
1 answer
I am currently starting a Calphad optimization for my experimental phase-diagram construction, and I would like a complete list of the items (keywords, operators, etc.) of the .pop file.
Although I managed to obtain explanatory documentation from Thermo-Calc and CompuTherm (Pandat), it reads more like a case study than a complete explanation.
Relevant answer
Answer
Hello
You will find some examples in this video :
Regards
  • asked a question related to Syntax
Question
6 answers
We are recoding our data and transforming the data values into the values we want, according to the questionnaires we have used and their scoring.
The first couple of recodes went fine, and we can see the values in the Data view after we ran the syntax. But then, after we ran the recode for the next variable in our syntax, the value did not show in the Data view; instead of the value, the column just has a dot (.).
We don't have any missing values, and the original data we are recoding has values. So what are we doing wrong? Can anybody help us?
We have been checking the syntax over and over again, to see if we are doing something different from the first couple of recodes, where nothing was wrong, but we can't find any differences.
Relevant answer
Answer
I tried to recode a string to a numeric variable twice, once by typing in the syntax and then by using the drop-down menu. Neither one worked. The hand-entered syntax kept generating different errors even though it's identical to the drop-down version. The drop-down version ran correctly (no error messages), but the string variable didn't get recoded.
  • asked a question related to Syntax
Question
3 answers
Dear all,
I installed the latest Vina version on my computer and, as everyone knows, it does not have the log-file generation argument, so we have to do it manually. Previously, Dr. Muniba of the Bioinformatics Review website was helpful in providing scripts to sort the log files.
Now the only option is the output pdbqt file. The script provided by Dr. Trott gives errors, and people have reported this previously, but the issue is not resolved. The syntax error is in the lines with the "print" keyword, as Python 3 recognises the "print()" syntax.
So I changed these lines and now no error is generated, but there is no output file to see any result.
Can someone help me with the matter? I have a small library of 1000 compounds but, without sorting them, I am stuck.
Thank you everyone in advance.
Best regards
Ayesha
Relevant answer
Answer
One more thing to add: verify that there are .pdbqt files present in the current directory and its subdirectories.
  • asked a question related to Syntax
Question
10 answers
Hi all!
I would like to test the effectiveness of an intervention that I conducted in a randomized controlled trial design (pre-test and post-test, intervention and control group). I read that fitting linear mixed models (LMM) would do the job, and I was wondering whether anyone knows of syntax available for doing this in Mplus.
Thank you very much for helping me out.
Lara
Relevant answer
Answer
Hi Lara. I had to go back to check your earlier post. You said that schools, not students, were randomly allocated to the two treatments. Searching for resources on analysis of cluster-randomized controlled trials turned up this 2018 article, which you may find helpful:
I've only scanned it quickly, but it discusses a few different approaches, and concludes with these recommendations.
The analysis of a cluster randomised trial with a baseline assessment of outcome is not as straightforward as it might seem, but the advice is similar for cohort and for cross sectional designs. ANCOVA should adjust for the baseline cluster mean, even in a cohort design where individual level adjustment at baseline is also possible. A good, all round alternative to ANCOVA is a constrained baseline analysis with a suitably flexible model for the correlation between individuals from the same cluster. We do not recommend a difference of differences analysis for a cluster randomised trial. Any analysis using mixed regression or generalised estimating equations has an increased risk of a false positive finding when there are relatively few clusters, so analysts should apply a correction in this case if one is available, or consider aggregating results at the cluster level.
HTH.
  • asked a question related to Syntax
Question
1 answer
How can I get the table tab in Amos v23?
Relevant answer
Answer
To add a published paper to your ResearchGate profile, you need to:
  1. Go to the Research tab on your profile.
  2. On the left, select Preprints and locate your publication.
  3. Click Add published version under the preprint title.
  4. Select the published work you want to link to if it’s already on ResearchGate, or create a new publication if it’s not.
  5. Click Add published version.
Alternatively, you can add a publication page to your profile by clicking the Add new button at the top right-hand corner of any ResearchGate page. I hope that helps!
  • asked a question related to Syntax
Question
2 answers
Dear scholars!
I have collected data on the met need for EMOC (Having expected obstetric complications, treated obstetric complications, and the met need in percentage) of certain countries. Can I do a descriptive meta-analysis?
What commands in Stata can I use?
I wanted to use CMA, but it's not freely accessible.
thanks.
Melese
Relevant answer
Answer
The metan command in Stata is used to perform a descriptive meta-analysis. To do this, the estimates must be converted to proportions by dividing by 100. The "metan" command will generate a forest plot showing the summary estimate and confidence intervals for the met need for EMOC in each country. The results can be interpreted based on the summary estimates and uncertainty around them. By using the metan command in Stata, users can perform a descriptive meta-analysis without relying on external software like CMA.
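For readers without Stata, the core computation behind such a pooling can be sketched in a few lines of Python. This is a simple fixed-effect inverse-variance sketch under my own assumptions (binomial variances, normal-approximation 95% interval), not a substitute for metan's output:

```python
import numpy as np

def pool_proportions(events, totals):
    """Fixed-effect inverse-variance pooling of proportions,
    with a normal-approximation 95% confidence interval."""
    p = events / totals
    var = p * (1.0 - p) / totals   # binomial variance of each proportion
    w = 1.0 / var                  # inverse-variance weights
    pooled = np.sum(w * p) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Two hypothetical countries: 50/100 and 60/100 met need.
pooled, lo, hi = pool_proportions(np.array([50.0, 60.0]),
                                  np.array([100.0, 100.0]))
```

For met-need proportions near 0 or 1, a transformation (logit or Freeman-Tukey) is usually preferred, as plain inverse-variance weights break down there.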
  • asked a question related to Syntax
Question
8 answers
I have a dataset of patients with ESRD and want to estimate GFR using the 2021 CKD-EPI formula.
Relevant answer
Answer
Hello Dineo
Here attached the code to calculate eGFR according to the CKD -EPI 2021
gen eGFR01 = .
replace eGFR01 = 142 * (PreopCreatinine/0.9)^(-1.2) * 0.9938^Ageatdx if SexM1==1 & PreopCreatinine > 0.9
replace eGFR01 = 142 * (PreopCreatinine/0.9)^(-0.302) * 0.9938^Ageatdx if SexM1==1 & PreopCreatinine <= 0.9
replace eGFR01 = 142 * (PreopCreatinine/0.7)^(-1.2) * 0.9938^Ageatdx * 1.012 if SexM1==0 & PreopCreatinine > 0.7
replace eGFR01 = 142 * (PreopCreatinine/0.7)^(-0.241) * 0.9938^Ageatdx * 1.012 if SexM1==0 & PreopCreatinine <= 0.7
best regards.
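For anyone wanting to spot-check individual values outside Stata, the same 2021 race-free CKD-EPI equation can be sketched in Python (a hypothetical translation of the code above; the function and variable names are mine, and creatinine is assumed to be in mg/dL):

```python
def ckd_epi_2021(creatinine_mg_dl, age_years, male):
    """2021 race-free CKD-EPI creatinine equation, mirroring the
    four-branch Stata code above in a single expression."""
    kappa = 0.9 if male else 0.7
    alpha = -0.302 if male else -0.241
    ratio = creatinine_mg_dl / kappa
    egfr = (142.0
            * min(ratio, 1.0) ** alpha      # low-creatinine branch
            * max(ratio, 1.0) ** -1.200     # high-creatinine branch
            * 0.9938 ** age_years)
    return egfr * (1.0 if male else 1.012)  # female multiplier
```

The min/max trick collapses the `if PreopCreatinine > ...` / `<= ...` branches: only one of the two power terms differs from 1 for any given creatinine value.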
  • asked a question related to Syntax
Question
1 answer
I would like to improve the accuracy of the frequencies obtained from the Gaussian or ORCA software. Which syntax should I use, please?
Relevant answer
Answer
To calculate precision on a theoretical frequency, you need to have a set of predicted values and corresponding actual values. Precision is a metric used in classification tasks to measure the accuracy of positive predictions.
The formula to calculate precision is:
Precision = True Positives / (True Positives + False Positives)
Here, True Positives refer to the number of correctly predicted positive values, and False Positives refer to the number of incorrectly predicted positive values.
If you have a set of predicted values predictions and corresponding actual values you can calculate precision in Python using the scikit-learn library as follows:
from sklearn.metrics import precision_score
precision = precision_score(actual_values, predictions)
Make sure that the predicted values and actual values are in the correct format. For example, if you have binary classification (0 and 1), both predictions and actual_values should be arrays/lists of 0s and 1s.
If you are working with a different programming language or framework, the syntax may vary, but the underlying concept of calculating precision remains the same.
  • asked a question related to Syntax
Question
1 answer
Syntax is studied in both linguistics and computer science; merging research between the two communities is likely to expose further horizons of challenges which can only be solved by a joint team.
Relevant answer
This is what is done in some teams, right?
  • asked a question related to Syntax
Question
4 answers
Hi all,
I am looking for SPSS syntax to calculate the Framingham Risk Score of Cardiovascular disease. I need to calculate this for 500 people.
Thanks in advance for your help!
Relevant answer
Answer
Framingham has been churning out risk scores since the 1960s! Which one do you want? Their papers usually give a step by step calculation.
I should note that the coefficient for smoking in Framingham is surprisingly small. And also that Framingham is actually a family study, but this does not seem to have been taken into account in the models. Familial clustering of smoking and risk may account for the strange odds ratio.
  • asked a question related to Syntax
Question
7 answers
To be more specific, my experimental design is based on 3-way (4*2*2) ANOVA. Three independent variables: factor 1=type of microorganism (4 levels), factor 2=time (2 levels), and factor 3=moisture level (2 levels).
The interaction effect (microorganism*time*moisture level) was significant, but I do not know which interaction is significant.
So, in your opinion, which way is the best to use SPSS syntax in my case?
/EMMEANS=TABLES(microorganism*time*moisture_level) COMPARE(time) COMPARE(moisture_level)?
or
/EMMEANS=TABLES(microorganism*time*moisture_level) COMPARE(time)?
or
/EMMEANS=TABLES(microorganism*time*moisture_level) COMPARE(moisture_level)?
Your help is highly appreciated.
Thank you so much
Relevant answer
Answer
Hello A. M. A. Al-Khdri. If I follow, you want to carry out interaction contrasts. E.g., you want to test one of the two-way interactions at each level of the 3rd variable. Is that right? If so, I fear that it is not terribly straightforward in SPSS (IMO). The 2008 article by Howell & Lacroix shows some examples, but the method they use requires a fairly advanced understanding of how to generate contrast codes. I think it would be much easier to get the desired results using the MANOVA command that they show in some of the supplementary files. You can see their article and get the supplementary materials (i.e., the Appendix link) here:
Meanwhile, do you have access to any other stats packages? I know that the contrasts (I think) you want to look at are much easier to get using Stata. And I imagine the same is true of R or SAS.
HTH.
  • asked a question related to Syntax
Question
3 answers
Hi there,
does anyone know, how to implement more than one group in the following syntax:
calc.relimp(swiss,
type = c("lmg", "last", "first"), rela = FALSE,
groups = c("Education","Examination"), weights = abs(-23:23) )
Let's say we have "age" and "sex" as IVs and want to run this analysis with group 1 ("Education" and "Examination") and group 2 ("age" and "sex"). Is there a way to do this?
Thanks in advance,
Michael
Relevant answer
Answer
..thank you for your detailed answer!
Greetings,
Michael
  • asked a question related to Syntax
Question
1 answer
I just need to know how to specify a material in case of defining doping profile in SIlvaco.
Relevant answer
In Silvaco Atlas TCAD, you can dope a region with carbon by defining a carbon impurity profile in the input file. The syntax for this would depend on the specific file format you are using (e.g., deck, MSDX, etc.). Here is an example of the syntax for doping a region with carbon in a deck file:
*Region definition
REGION
...
CARBON 1.0e20
...
END
In this example, the CARBON keyword is used to specify the doping concentration of carbon in the region, which is set to 1.0e20 cm^-3. The REGION and END keywords are used to define the beginning and end of the region definition, respectively. The dots (...) represent other parameters or regions that may be present in your input file.
Note that the doping concentration and other parameters may need to be adjusted based on your specific device requirements and simulation setup. Additionally, you may need to add additional keywords or parameters to specify the distribution and type of doping. Please refer to the Silvaco Atlas TCAD user manual for more information on the syntax and usage of these keywords.
  • asked a question related to Syntax
Question
8 answers
I need to know what's going on inside a Fortran code which is not originally mine. I have a Linux system and I tried the gdb debugger with some breakpoints, but it didn't work (the code had not been compiled with -g).
I also tried using print for my variables in order to use them for plotting, but after adding some lines of "print*," syntax, the compiler again showed some errors and didn't show me any results.
I would be grateful if anyone can help me on this problem.
Relevant answer
Answer
Robert Schaefer
Thanks a lot. I'll try it and see if it works.
  • asked a question related to Syntax
Question
1 answer
Respected Nick Papior,
Sir, I am using SIESTA v4.0.2.
I want to fix the electrode positions in the scattering-region calculation. In the manual of SIESTA v4.0.2 it is given as
%block GeometryConstraints
position from -1 to -8 # to fix atoms
%endblock GeometryConstraints
but when I tried this, the atoms are not fixed at their respective positions.
I also tried many syntaxes to fix the positions, but nothing worked.
Any help will be highly appreciated.
Thanks & Regards
Shanmuk
  • asked a question related to Syntax
Question
3 answers
Hello, I currently have a set of categorical variables, coded as Variable A,B,C,etc... (Yes = 1, No = 0). I would like to create a new variable called severity. To create severity, I know I'll need to create a coding scheme like so:
if Variable A = 1 and all other variables = 0, then severity = 1.
if Variable B = 1 and all other variables = 0, then severity = 2.
So on, and so forth, until I have five categories for severity.
How would you suggest I write a syntax in SPSS for something like this?
Relevant answer
Answer
* Create a toy dataset to illustrate.
NEW FILE.
DATASET CLOSE ALL.
DATA LIST LIST / A B C D E (5F1).
BEGIN DATA
1 0 0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 1 0
0 0 0 0 1
1 1 0 0 0
0 1 1 0 0
0 0 1 1 0
0 0 0 1 1
1 0 2 0 0
END DATA.
IF A EQ 1 and MIN(B,C,D,E) EQ 0 AND MAX(B,C,D,E) EQ 0 severity = 1.
IF B EQ 1 and MIN(A,C,D,E) EQ 0 AND MAX(A,C,D,E) EQ 0 severity = 2.
IF C EQ 1 and MIN(B,A,D,E) EQ 0 AND MAX(B,A,D,E) EQ 0 severity = 3.
IF D EQ 1 and MIN(B,C,A,E) EQ 0 AND MAX(B,C,A,E) EQ 0 severity = 4.
IF E EQ 1 and MIN(B,C,D,A) EQ 0 AND MAX(B,C,D,A) EQ 0 severity = 5.
FORMATS severity (F1).
LIST.
* End of code.
Q. Is it possible for any of the variables A to E to be missing? If so, what do you want to do in that case?
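For readers doing the same recode outside SPSS, the identical logic can be sketched in plain Python (a hypothetical translation: severity k only when variable k equals 1 and every other variable equals 0, otherwise missing):

```python
# Same toy rows as the SPSS example, including the stray "2" case.
ROWS = [
    [1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1],
    [1, 0, 2, 0, 0],   # C = 2 violates "all others 0", so no severity
]

def severity(row):
    """k if the k-th variable equals 1 and every other variable equals 0."""
    for k, value in enumerate(row, start=1):
        others_zero = all(v == 0 for i, v in enumerate(row, start=1) if i != k)
        if value == 1 and others_zero:
            return k
    return None  # analogous to SPSS system-missing

codes = [severity(r) for r in ROWS]   # [1, 2, 3, 4, 5, None]
```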
  • asked a question related to Syntax
Question
3 answers
Hello, I currently have a set of categorical variables, coded as Variable A,B,C,etc... (Yes = 1, No = 0). I would like to create a new variable called severity. To create severity, I know I'll need to create a coding scheme like so:
if Variable A = 1 and all other variables = 0, then severity = 1.
if Variable B = 1 and all other variables = 0, then severity = 2.
So on, and so forth, until I have five categories for severity.
How would you suggest I write a syntax in SPSS for something like this? Thank you in advance!
Relevant answer
Answer
Ange, I think the easiest way for you to find an answer to your question would be to google something such as "SPSS recode variables YouTube". You'll probably find several sites that demonstrate what you want to do.
All the best with your research.
  • asked a question related to Syntax
Question
3 answers
I'm trying to translate the following line of Stata code into its equivalent SPSS syntax:
mixed outcome i.time c.age i.gender || id:, cov(unstr)
My understanding is that the equivalent SPSS syntax would be:
mixed outcome by time gender with age
/fixed time gender age
/random intercept | subject(id) COVTYPE(UN)
/print = solution
...and indeed, the two commands produce identical parameter estimates for the fixed effects. However, the parameter estimate for the _cons/Intercept term, standard errors and p-values they calculate are completely different. I would expect some variation in how these are calculated between programs, but this is to an unusual extent. Any thoughts?
Relevant answer
Answer
Hello Alice Wickersham. A couple of possibilities come to mind. IIRC, Stata's -mixed- command uses ML by default, whereas SPSS's MIXED command uses REML. So you'll have to decide which one of those you want to use, and modify one of the commands to use that one.
Also, IIRC, Stata's mixed command does large sample estimation by default--i.e., it reports z-tests and Chi2 tests. If you want it to report t-tests and F-tests instead, you have to use the dfmethod() option.
HTH.
  • asked a question related to Syntax
Question
4 answers
Hello,
I want to create the tertiles in SAS to organize my NRF variable into categories. I used the below syntax to do so but the problem is that the number of observations in each category is not similar. I am wondering if there is a potential error that I missed here.
PROC UNIVARIATE DATA=master2.NRF noprint;
VAR NRF;
WEIGHT WTS_M;
OUTPUT OUT=master2.NRFTertile PCTLPTS= 33 67 PCTLPRE=NRF_P;
RUN;
DATA master2.NRF;
SET master2.NRF;
IF NRF le ..... THEN NRFTertile=1;
ELSE IF NRF gt ..... AND NRF lt ..... THEN NRFTertile=2;
ELSE IF NRF ge ..... THEN NRFTertile=3;
RUN;
Thanks,
Elsa
Relevant answer
Answer
David Eugene Booth Thank you so much for the attachment and your reply.
  • asked a question related to Syntax
Question
7 answers
I think that when examining dyslexia in children, the emphasis should be on decoding: when the child reads, it performs both alphabetic and semantic decoding of the text, together with other stages at the morphosyntactic language levels.
Relevant answer
Very interesting question. I am following it.
  • asked a question related to Syntax
Question
4 answers
Can anyone tell me the syntax in Mathematica or MATLAB for finding the Lyapunov exponents for five-dimensional and six-dimensional systems?
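Neither tool, to my knowledge, ships a single built-in command for the full spectrum of an arbitrary ODE system; the usual route in both is the Benettin/QR algorithm, which is short enough to write by hand. Below is a hypothetical Python sketch (the same structure translates almost line by line into MATLAB); it works in any dimension, including 5-D and 6-D systems, given the vector field f and its Jacobian:

```python
import numpy as np

def lyapunov_spectrum(f, jac, x0, dt=0.01, steps=20000, discard=1000):
    """Benettin/QR estimate of the full Lyapunov spectrum of x' = f(x).
    f(x) is the vector field, jac(x) its Jacobian; any dimension."""
    n = len(x0)
    x = np.array(x0, dtype=float)
    Q = np.eye(n)                      # orthonormal tangent vectors
    sums = np.zeros(n)
    for step in range(steps):
        x = x + dt * f(x)              # Euler step for the trajectory
        Q = Q + dt * jac(x) @ Q        # evolve tangent vectors
        Q, R = np.linalg.qr(Q)         # re-orthonormalize
        if step >= discard:            # skip the transient
            sums += np.log(np.abs(np.diag(R)))
    return sums / ((steps - discard) * dt)

# Sanity check on a 5-D linear system x' = A x, whose Lyapunov
# exponents are exactly the diagonal entries of A.
A = np.diag([0.1, -0.2, -0.5, -1.0, -1.5])
exps = lyapunov_spectrum(lambda x: A @ x, lambda x: A, np.ones(5))
```

The Euler steps keep the sketch short; for a real chaotic system, substitute an RK4 integrator, a smaller dt, and a much longer run.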
  • asked a question related to Syntax
Question
7 answers
Hello! My question concerns linguistics: I am looking for a method to measure recursion at the syntax level of natural languages. Are there any computerized instruments for this purpose? Which languages are they applicable to? I would appreciate any hint or publication on this issue!
Relevant answer
Answer
First, any definition of the concept should follow a linguistic approach or theory in whose terms you would define what recursion is. Then, an identification of the level at which you want to measure recursion is required.
As natural as it seems, recursion seems incalculable by observation alone. But the data may be limited to a certain number of languages in a certain type of texts. This may be a good beginning.
Regards
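As one concrete, computerized starting point: given bracketed constituency parses (which parsers such as Stanza or the Berkeley Neural Parser can produce for many languages), a crude operational proxy for recursion is the maximum embedding depth of the parse. A hypothetical Python sketch:

```python
def max_embedding_depth(parse):
    """Maximum bracket-nesting depth of a bracketed parse string,
    a rough proxy for the degree of syntactic embedding."""
    depth = deepest = 0
    for ch in parse:
        if ch == "(":
            depth += 1
            deepest = max(deepest, depth)
        elif ch == ")":
            depth -= 1
    return deepest

# A noun phrase embedded inside a noun phrase inside the subject:
tree = "(S (NP (NP the dog) (PP of (NP my friend))) (VP barked))"
```

Which depth counts (all brackets, or only self-embedding of like categories) depends on the theory adopted, as noted above.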
  • asked a question related to Syntax
Question
5 answers
I need to learn the syntax as a beginner.
Relevant answer
Answer
Initially I am practising on an Arty board, which has an internal XADC. I am able to feed in an analog signal and display it (debugging). But I have a problem finding the address of the stored samples. Basically, I want to monitor those samples.
  • asked a question related to Syntax
Question
2 answers
Dear Professors, I have a question about my EMG data (the sampling frequency was 1500 Hz). The movement phase differed from one subject to another, so there is a different number of EMG samples for each person. Now, how can I reduce the number of data points to 100 in each phase without changing the data pattern, for all subjects' muscles (normalization to 100 points)? Can I use the "spline" syntax for this goal or not? If yes, how should I use it? Please explain. Best wishes, Somayeh
Relevant answer
Answer
Somayeh,
EMG generally results from a bi-polar signal measurement of muscle activity with both positive and negative components as the activation occurs. As such, the signal is usually measured at a high frequency to pick up the signal's relative positive and negative directions. So merely downsampling or using a spline to decrease the number of samples will remove important parts of the signal.
That being said, the positive-negative parts of that signal are usually physiologically challenging, if not impossible, to interpret and what is often of more interest is the timing of increasing or decreasing muscle activation, which is correlated to the net amplitude of the signal as time progresses. Increased activation yields an increased EMG signal as more motor units are recruited via a more significant electrical signal.
Therefore, you should first use a moving root mean squared (RMS) window that takes the RMS of a certain number of samples (n) at a time and progresses to the end of the waveform. The RMS window process will then remove the positive-negative component of the signal and leave only information related to the modulating amplitude of the signal. In the more general signal processing realm, this is often referred to as "demodulation." After you have RMS windowed/demodulated your signal, you can simply use a spline filter or a standard downsampling/algorithm to get your data and your desired number of samples for your 100 data points for the complete movement phase for the individual.
Thor
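The pipeline Thor describes (moving-RMS demodulation, then time-normalization of each phase to 100 points) can be sketched with NumPy as follows. The 150-sample window (100 ms at 1500 Hz) and the linear interpolation are my own assumptions; a cubic spline (e.g. scipy.interpolate.CubicSpline) can replace np.interp if smoother resampling is preferred:

```python
import numpy as np

def rms_envelope(emg, window=150):
    """Moving RMS over `window` samples (100 ms at 1500 Hz):
    demodulates the bipolar EMG into an amplitude envelope."""
    squared = np.asarray(emg, dtype=float) ** 2
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(squared, kernel, mode="valid"))

def time_normalize(signal, n_points=100):
    """Resample one movement phase of any length onto n_points,
    i.e. 0-100 % of the phase."""
    old_axis = np.linspace(0.0, 1.0, len(signal))
    new_axis = np.linspace(0.0, 1.0, n_points)
    return np.interp(new_axis, old_axis, signal)

# One subject's phase: raw signal -> envelope -> 100 normalized points.
phase = np.sin(np.linspace(0, 40 * np.pi, 6000))   # stand-in for real EMG
normalized = time_normalize(rms_envelope(phase))
```

Because every subject's phase is mapped onto the same 0-100 % axis, the envelopes can then be averaged or compared across subjects point by point.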
  • asked a question related to Syntax
Question
2 answers
We are using the SF12 in our master thesis.
We would like to know what the exact difference is between SF12 v1 and v2?
We already found a syntax for v1, but we have used the second version and couldn't find a syntax for v2.
Is it possible that anyone can send us this syntax for the second version for SPSS (/excel)?
Thanks in advance!!
Relevant answer
Answer
Dear Heleen, did you find an answer to the question, what are the differences between the two versions?
Also would it be possible to receive the syntax for version 1?
Thank you so much in advance!
Kind regards, Johann
  • asked a question related to Syntax
Question
5 answers
Hi all,
I'm having trouble converting one particular variable in my dataset from string to numeric. I've tried manually transforming/recoding into a different variable and automatic recoding. I've also tried writing syntax (see below). The same syntax has worked for every other variable I needed to convert but this one. For all methods (manual recode, automatic recode, and writing a syntax), I end up with missing data.
recode variablename ('Occurred 0 times' = 0) ('Occurred 1 time' = 1) ('Occurred 2 times' = 2) ('Occurred 3+ times' = 3) into Nvariablename.
execute.
VALUE LABELS
Nvariablename
0 'Occurred 0 times'
1 'Occurred 1 time'
2 'Occurred 2 times'
3 'Occurred 3+ times'.
EXECUTE.
Thank you in advance for your help!
Relevant answer
Answer
Konstantinos Mastrothanasis, by introducing manual copying & pasting etc., you make reproducibility much more difficult. IMO, anything that can be done via command syntax ought to be done via command syntax. The basic code Ange H. posted will work for the particular values she showed in her post--see the example below. If it is not working, that suggests there are other values present in the dataset other than the ones she has shown us. But we are still waiting for her to upload a small file including the problematic cases.
Meanwhile, here is the aforementioned example that works.
* Read in the values Angela showed in her post.
NEW FILE.
DATASET CLOSE ALL.
DATA LIST LIST / svar(A20).
BEGIN DATA
'Occurred 0 times'
'Occurred 1 time'
'Occurred 2 times'
'Occurred 3+ times'
END DATA.
LIST.
* Recode svar to nvar.
RECODE svar
('Occurred 0 times' = 0)
('Occurred 1 time' = 1)
('Occurred 2 times' = 2)
('Occurred 3+ times' = 3) into nvar.
FORMATS nvar (F1).
VALUE LABELS nvar
0 'Occurred 0 times'
1 'Occurred 1 time'
2 'Occurred 2 times'
3 'Occurred 3+ times'
.
CROSSTABS svar BY nvar.
  • asked a question related to Syntax
Question
1 answer
How does one define *FRACTURE CRITERION, TYPE=FATIGUE in the .inp file, along with the different material constants? If anyone has an example of such syntax, I would appreciate it. Thank you in advance.
Relevant answer
There is one for low cycle fatigue with the direct cyclic approach. The inp file can be found here:
  • asked a question related to Syntax
Question
1 answer
Hi everyone,
I have a time series with weekly seasonality (365 samples) and want to fit a SARIMA model in EViews. Could anyone help me understand the estimation-equation syntax for SARIMA(2,1,2)(1,1,0) with weekly seasonality?
Also, should I use the differenced series or the actual series in the equation for the above SARIMA case?
That is, if my actual series is series1 and its first difference is series2, how can I write the equation for SARIMA(2,1,2)(1,1,0) with weekly seasonality?
Thank you
  • asked a question related to Syntax
Question
1 answer
Is there any easy to understand resource for network meta-analysis using Stata? I am looking for stata syntax for network meta-analysis.
Relevant answer
Answer
Part 6 in "Meta-Analysis in Stata: An Updated Collection from the Stata Journal" by T.M. Palmer and J.A.C. Sterne (2016) provides a good first step. The "help network" command in Stata is also useful. Cochrane also used to sponsor training at the University of Bristol for doing network meta-analysis using Stata but they may have switched to using R. That was a good course.
  • asked a question related to Syntax
Question
3 answers
I would like to report the 90% CI for each estimate instead of the p-value, but I used Mplus to generate the standardized model results, and the output does not report confidence intervals, only "Estimate / S.E. / Est./S.E. / p-value".
Relevant answer
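No answer was recorded here, so a hedged pointer (verify against the Mplus User's Guide for your version): the OUTPUT command accepts a CINTERVAL option that prints symmetric confidence intervals next to each estimate, where the 90% interval corresponds to the "Lower .05" and "Upper .05" columns. Combined with the standardized output request it might look like:

```
OUTPUT: STDYX CINTERVAL;
! With bootstrapping (ANALYSIS: BOOTSTRAP = 5000;) one can instead request
! bias-corrected bootstrap intervals via CINTERVAL(BCBOOTSTRAP).
```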
  • asked a question related to Syntax
Question
2 answers
I have tried several times to input the ma-def2-SVP basis set in the Gaussian software but received a syntax error. I tried: custom=madef2svp.
("ma" stands for minimally augmented.)
I would be very grateful if anyone could kindly describe the procedure.
Relevant answer
Answer
When your basis set is not built into G16, you need to write the "gen" keyword in your route line in place of the basis-set name. Then you write the coordinates and, finally, the basis-set information for every atom involved. Each atom's block must be terminated with four asterisks.
I will show you the format that you should follow:
%chk=water.chk
%mem=4GB
%nprocshared=4
#p opt freq b3lyp/gen
(leave a blank space here)
Title Card Required
(leave a blank space here)
0 1
H 2.75003900 -0.15335800 -0.68738400
O 2.15041200 -0.17165800 0.06153800
H 2.01471800 -1.09859400 0.27260600
(leave a blank space here)
H 0
S 5 1.00
3.387000D+01 6.068000D-03
5.095000D+00 4.530800D-02
1.159000D+00 2.028220D-01
3.258000D-01 5.039030D-01
1.027000D-01 3.834210D-01
S 1 1.00
3.258000D-01 1.000000D+00
S 1 1.00
1.027000D-01 1.000000D+00
S 1 1.00
0.0252600 1.0000000
P 1 1.00
1.407000D+00 1.000000D+00
P 1 1.00
3.880000D-01 1.000000D+00
P 1 1.00
0.1020000 1.0000000
D 1 1.00
1.057000D+00 1.0000000
D 1 1.00
0.2470000 1.0000000
****
O 0
S 10 1.00
1.533000D+04 5.080000D-04
2.299000D+03 3.929000D-03
5.224000D+02 2.024300D-02
1.473000D+02 7.918100D-02
4.755000D+01 2.306870D-01
1.676000D+01 4.331180D-01
6.207000D+00 3.502600D-01
1.752000D+00 4.272800D-02
6.882000D-01 -8.154000D-03
2.384000D-01 2.381000D-03
S 10 1.00
1.533000D+04 -1.150000D-04
2.299000D+03 -8.950000D-04
5.224000D+02 -4.636000D-03
1.473000D+02 -1.872400D-02
4.755000D+01 -5.846300D-02
1.676000D+01 -1.364630D-01
6.207000D+00 -1.757400D-01
1.752000D+00 1.609340D-01
6.882000D-01 6.034180D-01
2.384000D-01 3.787650D-01
S 1 1.00
1.752000D+00 1.000000D+00
S 1 1.00
2.384000D-01 1.000000D+00
S 1 1.00
0.0737600 1.0000000
P 5 1.00
3.446000D+01 1.592800D-02
7.749000D+00 9.974000D-02
2.280000D+00 3.104920D-01
7.156000D-01 4.910260D-01
2.140000D-01 3.363370D-01
P 1 1.00
7.156000D-01 1.000000D+00
P 1 1.00
2.140000D-01 1.000000D+00
P 1 1.00
0.0597400 1.0000000
D 1 1.00
2.314000D+00 1.000000D+00
D 1 1.00
6.450000D-01 1.000000D+00
D 1 1.00
0.2140000 1.0000000
F 1 1.00
1.428000D+00 1.0000000
F 1 1.00
0.5000000 1.0000000
****
(leave a blank space here)
Hope this helps!
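To tie this back to the original ma-def2-SVP question: that basis is not built into Gaussian, so the usual approach (an assumption worth verifying for your Gaussian revision) is to obtain the basis data in Gaussian format, e.g. from the Basis Set Exchange or the Truhlar group's basis-set pages, and paste each element's block into the gen template above. Schematically, with angle brackets marking placeholders:

```
#p opt freq b3lyp/gen

Title Card Required

0 1
<molecule coordinates>

<ma-def2-SVP block for element 1, ending in ****>
<ma-def2-SVP block for element 2, ending in ****>
```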
  • asked a question related to Syntax
Question
2 answers
What is MATLAB?
Relevant answer
Answer
There are several sources that define MATLAB and MATLAB code. In particular, you can read the definition of MATLAB code below.
I also found a new source about MATLAB for you, a PDF file you can look at. Siddharth Kamila
  • asked a question related to Syntax
Question
7 answers
Hello all,
I have a dataset which contains some couples and some single people. I want to keep all the singles and keep one person from each couple. Specifically, from the couples, I want to keep the person who has the highest score between the two partners on a specific variable.
I cannot figure out how to do this with syntax. Any advice would be greatly appreciated. Thank you!
Relevant answer
Answer
Hello Laurien Meijer. I think it is easier to tackle this with the data in the LONG format. E.g., suppose you have a variable called PairID that shows which pair each row belongs to. And suppose you want to keep for each pair the row with the higher age. This would do the trick:
* Save the max age for each pair as new variable age_max.
AGGREGATE
/OUTFILE=* MODE=ADDVARIABLES OVERWRITE=YES
/BREAK=PairID
/age_max=MAX(age).
* For each pair, keep the record with the higher age.
SELECT IF Age EQ age_max.
DESCRIPTIVES Age PairID.
Note that if both members of a pair have exactly the same age, both would be kept in the dataset. I don't know if that is possible for your data.
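For readers doing this outside SPSS, the same keep-the-higher-scoring-partner logic can be sketched in plain Python (the field names pair and score are illustrative assumptions, not from the original dataset):

```python
# Keep all singles; from each couple, keep the partner with the higher score.
rows = [
    {"id": 1, "pair": "A", "score": 10},
    {"id": 2, "pair": "A", "score": 25},
    {"id": 3, "pair": None, "score": 18},  # single person
    {"id": 4, "pair": "B", "score": 30},
    {"id": 5, "pair": "B", "score": 30},  # tie: only the first partner is kept
]

kept = []
best = {}  # pair id -> highest-scoring row seen so far
for row in rows:
    if row["pair"] is None:
        kept.append(row)          # singles always pass through
    elif row["pair"] not in best or row["score"] > best[row["pair"]]["score"]:
        best[row["pair"]] = row   # strictly higher score wins
kept.extend(best.values())
print(sorted(r["id"] for r in kept))  # [2, 3, 4]
```

Note one deliberate difference from the SELECT IF approach above: on an exact tie this keeps only the first partner, whereas SELECT IF on the pair maximum would keep both.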
  • asked a question related to Syntax
Question
2 answers
I have ecological momentary assessment (EMA) data consisting of 3 daily assessments for 14 days. The 3 daily assessments are three individual 'items', for which taking a mean is appropriate.
At the row-level I can take the mean across the items, giving me 3 time-point specific means each day for each individual. If an individual is missing an item at any specific time point I can account for that using the "mean." syntax.
I want to aggregate all three within-day timepoints up to a daily mean using 'aggregate' in SPSS, but SPSS uses all the within-day timepoints to do so, even if one is missing. Does anybody know how to overcome this issue? I've tried the MISSING = COLUMNWISE command but that omits the entire day for anyone missing anything within day.
For example: (where TP = timepoint, and '.' = missing)
TP   Item1  Item2  Item3   TP_MEAN  TP_TOTAL  AGGREGATED_VAR
T1   1      2      3       2        6         3.80
T2   3      2      4       3        9         3.80
T3   .      .      .       .        .         3.80
So for the daily mean I want (1+2+3+3+2+4)/6 (= 2.5)
but using AGGREGATE (MEAN of TP_TOTAL), SPSS is giving me 3.80, AND filling it in at the row level for T3, where there was no timepoint data.
Does anybody know how to fix this?
Thanks!
Relevant answer
Answer
Hi there, David Morse
Thanks for your response, it is much appreciated! I think I realized from your reply that I need to restructure the data: it is in wide format in terms of the items but not in terms of the timepoints/days. After banging my head against the wall for a while on this, I am going to use R instead ;)
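Whatever tool ends up being used, the target computation from the example above is "mean over all non-missing item responses within the day". A minimal Python sketch, with None marking a missing item:

```python
# Three timepoints per day, three items per timepoint; None = missing.
day = [
    [1, 2, 3],           # T1
    [3, 2, 4],           # T2
    [None, None, None],  # T3 entirely missing
]

# Pool every observed item response across the day, skipping missing ones.
values = [v for timepoint in day for v in timepoint if v is not None]
daily_mean = sum(values) / len(values) if values else None
print(daily_mean)  # 2.5, i.e. (1+2+3+3+2+4)/6
```

This reproduces the 2.5 the question asks for, and a fully missing day yields None rather than a filled-in value.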
  • asked a question related to Syntax
Question
2 answers
I was trying to optimize an organoaluminium compound at the PBE0-D3(BJ)/def2-SVP level of theory. I have not used this level of theory before in my work, but after reading the Gaussian documentation and searching online, I came up with the input below:
PBE1PBE/Def2SVP EmpiricalDispersion=GD3BJ FOpt Freq=NoRaman
But Gaussian is throwing a syntax error in the basis-set input. Can anyone help me with this issue?
The Gaussian output message is attached below.
Relevant answer
Answer
Dear Aritra Roy,
I think GD3BJ empirical dispersion was only included in later revisions of Gaussian 09, so you should move either to a more recent revision of G09 or directly to G16.
Hope you find this helpful.