Science topics: Tables
Science topic
Tables - Science topic
Tables are presentations of nonstatistical data in tabular form.
Questions related to Tables
I am making a circular map with BRIG where I use multi-FASTA files as a reference and 5 draft genomes as queries. The problem is that I don't get rings for my specific genomes, even though BRIG doesn't give any error message. I can also see that BLAST tables are created for all the genomes. If anyone has had the same problem and could suggest a solution, that would be very helpful! Thanks
The article assessed trace metals in pond water.
I stated that certified standard reference materials from the National Institute of Standards and Technology (NIST), USA, were used in this study, and that quality was ensured during laboratory work through spike recoveries for each metal, blanks, independent standards, and duplicate checks. I also inserted a table:
Table 1: Summary of AAS protocols and LoD, LoQ, and recovery (%) of the study
But the reviewer is not satisfied. What are the probable solutions? Thanks in advance.
Can Iran’s economy be rebuilt? Have thinkers in economics, the humanities, and political science in Iran concluded that the Iranian economy can be made dynamic, advanced, and developed again, or not?
There are many shared historical experiences and similarities between Iran and China. Both are legacies of long-lasting empires and civilisations in West and East Asia, respectively. Like other great Asian empires, Iran and China were confronted with the expansion of the European imperial powers in the early nineteenth century, which ultimately led to the dislocation of these ancient empires.1 Both countries resisted pressures towards peripheralisation in the global economy through nationalist popular revolutions and by building modern nation states and identities in the first half of the twentieth century. Despite different political systems, cultures, and external relations, both Iran and China have been trying to escape external pressures and internal socio-economic backwardness by modernising their states, societies, and economies via a state-led catch-up development strategy. This article studies the impacts of the European-dominated global system on Imperial China and Iran in the 19th century. The expansion of European imperial powers through trade domination and (semi-)colonisation exposed these two empires to the pressures of marginalisation, peripheralisation, internal strife, and loss of territory, ultimately provoking the responses of social revolution, nation-state building, and state-driven industrialisation. These efforts led to the rise of China in the late 20th century and the emergence of post-Islamic-revolutionary Iran (1978/79) as a “contender state”2 against the hegemony of the United States (US) in West Asia. When the Trump Administration3 pulled out of the Iran nuclear deal, Iran’s long-awaited economic rebound stalled through the continuation of sanctions.
Trump’s Administration also announced many new critical sanctions on Iran’s strategic institutions, economic sectors, and key elements of the ruling elites. After more than 40 years of isolation, embargoes, and threats of war, Iran is far from being recognised as a regional power. It has become accustomed to isolation because it aims to challenge US hegemony and to forge “a geopolitical order” in the Middle East before completing a successful catch-up drive. China, by contrast, generally refrained from offensive external relations, and after a century and a half of struggle against external pressures, in the early 21st century the People’s Republic of China (PRC) became the world’s second-largest economy and a modern industrialised power, while Iran is still seeking regional power status in West Asia. Having become the second-largest economy, China has changed strategies to pursue more assertive external relations. This development raises two key questions: why did China succeed in rising as an industrialised regional and global power, and why has Iran’s development strategy failed so far? I argue that the main reason for post-revolution Iran’s failure to become the regional hegemon lies in two interconnected issues: (i) the failure of its economic development strategy, which was mainly caused by (ii) “offensive” external involvement in its own region before a successful catch-up process. Iran’s catch-up development strategy, the main material basis for the country’s rise, was hampered after the revolution by its “offensive, revolutionary and military-oriented foreign policy”. This strategy blocked Iran’s access to the capital, information, and technology concentrated in the core area of the global economy dominated by the US. Unlike Iran, China’s successful catch-up industrialisation was driven, in part, by rapprochement and consensus between Chinese leaders and the US and its allies in the 1970s.
This strategy led China to distance itself from Mao’s revolutionary, offensive foreign relations and replace them with “defensive” and peaceful foreign relations in the era of its catch-up industrialisation (1980–2000s). The change and reorientation of China’s external relations paved the way for China to access the capital, information, and technology necessary for its successful economic development and, eventually, its rise.
Theory and practice of state, market, and development
The forms of, and relations between, state, society, and the market in both China and Iran differ from those of liberal, pluralistic countries. This raises several questions: (1) What is the form of political authority and market regulation in China and Iran? (2) How can we conceptualise the configuration of China’s and Iran’s state-society and market forms, compared with the liberal state-society and market model? (3) What are the forces behind China’s and Iran’s socio-economic policies and development?
Unlike the (neo-)realist perspective of a fixed state and state function, there is no fixed form of the state but, rather, a structure through which social forces and interest groups operate. At the global level, state-society and market complexes constitute the basic entities of international relations.4 Forms of political authority vary with the degree of autonomy in relation to both internal and external environments, including the inter-state system and the global political economy.5 In advanced liberal societies, the state builds consensus between capital and labour in the development of socio-economic policy. In authoritarian and/or centralised societies, a framework of collaboration and domination between state and society, and capital and labour, is imposed in an authoritarian manner, reflecting the relative autonomy of the state from society.6 Generally, we can distinguish two ideal types of state-society and market complexes in international relations: the “liberal state-society, market complex” (LSMC) and the “authoritarian” or “centralised state-society, market complex” (CSMC).7 The liberal state-society complex is characterised by a relative distinction between a governing or political class and the ruling class, the latter being mainly the capitalist class, whose interests are predominantly represented by the governing class. One of the conditions for the creation of an LSMC is the existence of a strong civil society and market with relative autonomy of classes and interest groups, such as the capitalist, middle, and working classes. The emergence of a class-divided civil society and civil-society organisations is the product of capitalist industrial development. In the LSMC, civil society is relatively “self-regulating” because state intervention is less important in ensuring its proper functioning.8 On the other hand, in the CSMC (e.g.
China and Iran), the distinction between ruling and governing classes is negligible. The “state class” derives its power from control of the state apparatus and intervenes in society and the market.9 In this configuration, autonomous social forces, chiefly a strong capitalist class, are either underdeveloped or dependent on the state; neither can assert its interests independently of state power. Thus, in the CSMC, a framework of collaboration between capital and labour is imposed in an authoritarian manner, reflecting both the state’s autonomy from society and the market, and its control over domestic and external relations. Together with the centralisation of state power, the promotion of a state-led development strategy (i.e. long-term socio-economic, political, and cultural modernisation) is one of the driving forces of the state class. China’s successful capitalist industrial development, accompanied by the ambitions of its leaders, created the propensity to gain a larger share of the world’s economy and resources,10 embodied in the Going Out Strategy and the Belt and Road Initiative (BRI).11 Despite the geopolitical challenges of realising this, China’s industrial development, including military industrialisation and the formation of multilateral institutions like the Asian Infrastructure Investment Bank, the Shanghai Cooperation Organization (SCO), and BRICS (Brazil, Russia, India, China, and South Africa), has facilitated its rise in the global wealth-power hierarchy. Whilst China left the global economy’s periphery, its success and integration into the global political economy’s core come at the cost of domestic control. The Iranian experience of state-led industrialisation (mainly in the 1960s and 1970s) was a success story amongst Asian developmental states.
However, Iran’s successful development strategy was discontinued by the post-revolutionary “offensive, revolutionary, and military-oriented regional-external relations”, which, as stated, blocked access to the capital, information, and technology concentrated mainly in the US-dominated global economy. In contrast, China’s successful catch-up industrialisation was driven, in part, by rapprochement and consensus between Chinese leaders and the US (and its allies) in the 1970s, when China reoriented Chairman Mao’s offensive and revolutionary external relations towards defensive and peaceful relations, thereby facilitating access to the capital, information, and technology needed for its successful economic development and eventual rise.
The global wave of state-led industrialisation
The post-imperial Chinese and Iranian political economy of state-led industrialisation is neither unique nor exceptional. Considering the rise and expansion of industrial capitalism from Europe over 250 years, the CSMC has emerged at different times and in different places as a response to two pressures: external colonisation and domestic backwardness in political and socio-economic structures. The dialectic of these two factors led a limited number of leaders of peripheral states to resist peripheralisation in the emerging global political economy by forming a centralised state and pursuing self-reliant catch-up development from above.12 After WWII, some Asian states, such as China, the Asian Tigers (i.e. Hong Kong, Singapore, South Korea, and Taiwan), Turkey, Iran, and India, tried to resist economic backwardness and their peripheral position in the Western-dominated global political economy via autonomous, state-led catch-up industrialisation strategies.
None industrialised under a liberal regime.13
European expansion, peripheralisation and resistance in China and Iran
China’s imperial disintegration and peripheralisation in the European-centred world economy began when Europeans appropriated shipping and merchant activities from indigenous traders in the early 19th century.14 From the late 19th century until 1949, the heavy price that China paid for resisting this existential threat to its survival included millions of victims, the systematic appropriation of large areas of its territory, the morass of a brutal civil war between nationalist and communist fronts, and the formal loss of Taiwan. Nevertheless, in 1949, the Chinese Communist Party (CCP) Chairman, Mao Zedong, grandly announced that his people had finally brought a decisive end to the “century of humiliation” at the hands of internal and external enemies. Hence, with the establishment of the PRC, the CCP proclaimed itself the vanguard and supreme saviour of the Chinese nation. As a result, for more than three decades, nationalist calls were completely eclipsed by the strength of the new official political system and ideology. Equally, from the mid-19th century onwards, Persia was confronted with the expansion of European imperial powers (in particular Britain and Russia), which began to have a significant military, political, and economic impact on the country’s political economy.15 The competition between Russia and Britain prompted the Persian court to engage in balancing acts between its two enemies. European expansion eventually led to the Persian Empire’s peripheralisation and the incorporation of its economic system into the global capitalist system,16 which marked the beginning of the local economy’s disintegration and subordination to the capitalist world economy, the growth of foreign trade, and specialisation in the production of raw materials.
The political economy and security strategy of post-revolution Iran (1980–2020)
After the emergence of the Islamic Republic of Iran (IRI) in 1979/80, its political economy of development and external relations changed drastically. The core of Iran’s post-revolutionary foreign policy centres on the “export of the revolution” and efforts to create a “geopolitical order” in West Asia. These new external relations led to a shift in the hierarchy of the triad of oil surplus, economic development, and security strategy in Iran. While the Shah used oil revenues mainly for economic development, the post-revolutionary ruling class emphasised the military-security apparatus, thereby subordinating the “national development strategy”. The core of external relations was gradually redesigned by the leaders of the IRI as an “offensive” military strategy (predominantly in the Middle East). In this context, the IRI’s ruling class, among other things, attempted to mobilise anti-American, revolutionary, Islamic-oriented peoples and organisations globally for the realisation of its strategic goals. Despite contradictory interests among factions of the ruling class, external relations remained unchanged. This core of external regional relations aimed at forging a geopolitical order and gaining hegemonic status in Iran’s own region. The consequence of this policy-strategy is that the US and its regional allies block and hinder Iran’s ambitions and national development strategy. A key force in this strategy is the Islamic Revolutionary Guard Corps (IRGC). Its main purpose has been to protect the revolution from within and beyond Iran’s borders, while expanding Iran’s sphere of influence.43 This key organ of the IRI gradually became more influential in Iran’s economy and politics.
The elite Quds Force, responsible for the IRGC’s foreign operations, emerged as one of the most significant Iranian armed forces, maintaining a network of paramilitary and Islamic revolutionary forces in Lebanon, Iraq, Syria, Yemen, and elsewhere. This strategy was confronted by the US, which attempted to trigger regime change using, among other means, strategic and structural sanctions on Iran’s politics, economy, and military. The key sanctions against Iran’s oil and military industry came from the United Nations Security Council, the US, and its allies. Although UNSC sanctions were lifted in 2016, sanctions by the US and its allies were reimposed after the US withdrawal from the Joint Comprehensive Plan of Action (JCPOA) as part of President Donald Trump’s “maximum pressure campaign”, which has been continued under President Joe Biden. By targeting strategic economic sectors and companies (including oil, military, finance, and automotive) and blocking Iran’s ability to earn revenues from oil exports and to import and export weaponry and military technology,44 US sanctions have hit Iran’s economy hard. Through the dollar’s position as a global reserve currency and the designation of the IRGC, among others, as a terrorist organisation, the US has also restricted companies from other countries from doing business with Iranian companies. In turn, this hostile environment reinforces the IRI’s determination to develop its domestic military capabilities and to mobilise social and material forces in the Middle East aimed at pushing the US out of the region. Thus, the experiment of Iran’s rapid industrialisation after the revolution was hindered. The causes of this problem may be traced back to the external relations mentioned above, which have also influenced the policy of the political economy of development.
As the Iranian economy remains heavily based on fossil fuels, GDP growth is largely driven by the export of oil and gas45 and less by the productivity of a modern (non-oil) sectoral economy. Although many modern economic sectors exist in Iran’s economy, their growth and development occur at a very slow rate, as sanctions have prevented Iran from accessing capital, technology, and information (see also Figure 2). The external relations based on conflict, and the political economy of development policies that mainly emphasise the security-military sectors, are a permanent factor in Iran’s development crisis. Below, we present selected economic data which indicate the structural impasse of Iran’s economy after the revolution. Oil production and export remain key to Iran’s economy, despite production remaining below pre-revolutionary levels (Figure 1). As shown in Table 2 and Figure 2, Iran’s manufacturing growth rates were high compared to those of other developmental states, even outperforming India, Indonesia, and Turkey, but the post-revolutionary change in domestic policy priorities, which allocated oil revenues to the development of the security apparatus, impeded Iran’s success. This left Iran, at US$64bn, behind many of its peers and even the city-state of Singapore (US$65bn). Another major post-revolutionary problem is high inflation and currency depreciation (Figure 3), which, coupled with low oil production, prevented high economic growth and the development of trade relations despite the temporary lifting of sanctions after the signing and implementation of the JCPOA in the mid-2010s (see Figure 4). These impediments to Iran’s industrialisation are reflected in its GDP, which grew by only 52% between 1976 and 2018 and which, since 1991, has depended on oil for 22% of its value. For the average Iranian, this means that pre-revolutionary incomes were higher (see Figure 5). To sum up, the Islamic Revolution severely distorted Iran’s industrialisation.
The Shah’s use of oil revenues and the security apparatus in the service of rapid state-led industrialisation, with de-escalation of tensions in external relations, was crucial to Iran’s socio-economic development strategy. The pivot towards offensive external relations, in which oil revenues are used to develop military-security capacities, led to sanctions, the subordination of economic development within the triangular strategy, and a lack of capital, information, and technology. To create the conditions for the lifting of sanctions and realise its long-awaited catch-up development strategy, this article contends that Iran needs to change its external relations back to “defensive, peaceful” ones. Unlike Iran, China’s successful catch-up industrialisation was driven, in part, by rapprochement and consensus between Chinese leaders and the US and its allies in the 1970s. This strategy led China to distance itself from Mao’s revolutionary, offensive foreign relations and replace them with “defensive” and peaceful foreign relations in the era of its catch-up industrialisation (1980–2020). The change and reorientation of China’s external relations paved the way for China to access the capital, information, and technology necessary for its successful state-led development and, eventually, its rise.
Question about SPSS PROCESS Model 4, which tests mediation
This table is used to find fh/U for a given log g when calculating f0 by the formula method. However, what are the values at the top of the table (0.0, 0.01, 0.02, ...), and how are they used?

for constructing a 2D structure using a composite genetic code table
Dr Pethuru Raj, PhD, SMIEEE, Dr Sundaravadivazhagan, PhD, SMIEEE, and I are pleased to invite you to contribute a chapter to our forthcoming Elsevier book, Advances in Computers: Cloud-Native Architecture (CNA) and Artificial Intelligence (AI) for the Future of Software Engineering. This publication seeks industry perspectives, practical insights, and cutting-edge research on CNA and AI to shape the next generation of software engineering.
We are specifically looking for chapters on (but not limited to) the following themes:
Foundations of Cloud-Native Architecture
(Microservices, containers, orchestration platforms, serverless computing)
AI and Machine Learning in Software Engineering
(Automated code generation, predictive analysis, intelligent testing)
DevOps, CI/CD Pipelines, and Automation
(Best practices in cloud-native development, AI-driven CI/CD)
Scalability and Performance Optimization
(Resilience engineering, performance monitoring, observability in distributed systems)
Security, Privacy, and Compliance in Cloud-Native Environments
(Secure development practices, threat modeling, regulatory requirements)
Edge Computing and Hybrid Cloud Solutions
(Decentralized processing, IoT integration, data management at the edge)
Data Engineering and Big Data Analytics
(Data pipelines, real-time analytics, AI-driven data processing)
Industry 4.0, Emerging Trends, and Future Directions
(Innovations that leverage CNA and AI, potential disruptions in software engineering)
We warmly encourage submissions that showcase industry know-how, case studies, and real-world implementations.
Key Dates & Deliverables
Final Table of Contents & Author List
(including email addresses):
Due by Friday, 25th March
Chapter Submission Deadline
1st July
Final Material (Ready for Production)
1st October
We ask that you confirm your proposed chapter title, list of authors, and contact details by Friday, 25th March to help us finalize the Table of Contents. Please feel free to reach out if you have any questions or require additional guidance on aligning your submission with the book’s objectives.
You can share your proposal, abstract, and contact details with us directly by replying to this email or by sending them to:
We appreciate your prompt response and look forward to including your valuable perspectives in this publication. Thank you for your time and collaboration.

I am a materials engineer.
I want to know what the voltage would be between one of the battery terminals and the ground, and also between a battery terminal and any metal object (say, an aluminium can on a table).
How exactly would the circuit look in that case?
How much will the readings change between when the battery is charged and when it is discharged?
We are excited to announce the return of the ITTF Sports Science Congress, set to take place on 15-16 May 2025 at Aspetar, Doha, Qatar—a world-leading specialized orthopaedic and sports medicine hospital.
After a six-year hiatus since the last Congress in 2019, we are bringing back this key event to foster collaboration among physicians, allied healthcare practitioners, sports scientists, coaches, and sports managers.
The Congress will cover cutting-edge research in sports science and medicine, and will feature a diverse range of topics, including prevention of common injuries in table tennis players, travel sports medicine, and aspects related to sleep, biomechanics, physiology, nutrition, fitness testing, training, perceptuo-motor skills, match analysis, para table tennis, youth development, table tennis as a health sport, anti-doping, mental and psychological aspects, gender equality, diversity and inclusion, coaching, governance, integrity, equipment, esports, and sustainability.
More information about registration, full agenda, and call for papers:

I would like to assess the performance of a non-survey regionalisation method in order to produce an Inter-Regional Input-Output Table (IRIOT) for France. Therefore, I wish to replicate the Monte Carlo method used by Bonfiglio and Chelli (2008), who used 1000 randomly generated IO tables to compare the performance of several regionalisation methods.
However, their IO tables were 20 regions x 20 sectors, i.e. 160,000 cells per table, repeated 1000 times per method, and they tested 22 methods. My computer can't manage that many calculations.
I was wondering whether using 10,000 randomly generated smaller IRIOTs (3 regions x 3 sectors), which are lighter to compute, would work, and by extension, whether sectoral disaggregation has an effect on the performance of non-survey regionalisation methods.
The goal is to determine whether my regionalisation method is statistically good enough, in order to apply it to build a 22-region x 38-sector table for France (eventually 22 regions x 64 sectors).
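As a sanity check before committing to the full-size experiment, the small-table Monte Carlo comparison described above can be sketched in a few lines of Python. Everything here is illustrative: `random_io_table` is a naive stand-in for Bonfiglio and Chelli's table generator, and `noisy_estimate` is a placeholder for a real non-survey regionalisation method.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_io_table(n_regions=3, n_sectors=3):
    # Naive stand-in for a randomly generated interregional IO
    # flow matrix: one row/column per region-sector pair.
    n = n_regions * n_sectors
    return rng.uniform(0.0, 100.0, size=(n, n))

def wape(true, est):
    # Weighted absolute percentage error: one common statistic for
    # scoring an estimated IO table against the "true" one.
    return np.abs(true - est).sum() / true.sum()

def monte_carlo_score(method, n_draws=1000):
    # Average error of `method` over many random true tables.
    scores = []
    for _ in range(n_draws):
        true = random_io_table()
        scores.append(wape(true, method(true)))
    return float(np.mean(scores))

def noisy_estimate(true):
    # Placeholder "regionalisation method": multiplicative noise
    # standing in for an estimate built from national coefficients.
    return true * rng.uniform(0.8, 1.2, size=true.shape)
```

Swapping a real method in for `noisy_estimate`, and varying `n_regions`/`n_sectors`, would let you test directly whether table size changes the ranking of methods.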
Thanks
Thermodynamics
==============
Zero-point energy = 3.369180 eV
------------------------------------------------------------------------------
T(K) E(eV) F(eV) S(J/mol/K) Cv(J/mol/K)
------------------------------------------------------------------------------
298.0 3.541961 3.298459 78.840 170.326
------------------------------------------------------------------------------
================================================================
The paper below may be the first guideline for the Adaptive Regional Input-Output (ARIO) model from Stéphane Hallegatte:
I do not clearly understand the guidance in Appendix B about making a local IO table.
If you understand it or have some experience with it, please discuss it with me!
Many thanks and best regards!
I am trying to run BioStudio and GeneDesign to design a chromosome. When running any of the BioStudio scripts (for example, BS_PCRTagger), I encounter the following error:
DBD::SQLite::db prepare_cached failed: no such table: locationlist at /usr/local/share/perl/5.34.0/Bio/DB/SeqFeature/Store/DBI/mysql.pm line 1807.
-------------------- EXCEPTION --------------------
MSG: no such table: locationlist
STACK Bio::DB::SeqFeature::Store::DBI::mysql::_prepare /usr/local/share/perl/5.34.0/Bio/DB/SeqFeature/Store/DBI/mysql.pm:1807
STACK Bio::DB::SeqFeature::Store::DBI::SQLite::_offset_boundary /usr/local/share/perl/5.34.0/Bio/DB/SeqFeature/Store/DBI/SQLite.pm:606
STACK Bio::DB::SeqFeature::Store::DBI::SQLite::_fetch_sequence /usr/local/share/perl/5.34.0/Bio/DB/SeqFeature/Store/DBI/SQLite.pm:562
STACK Bio::DB::SeqFeature::Store::seq /usr/local/share/perl/5.34.0/Bio/DB/SeqFeature/Store.pm:2054
STACK Bio::DB::SeqFeature::Store::fetch_sequence /usr/local/share/perl/5.34.0/Bio/DB/SeqFeature/Store.pm:1289
STACK Bio::BioStudio::Chromosome::sequence /usr/local/share/perl/5.34.0/Bio/BioStudio/Chromosome.pm:390
STACK toplevel /usr/local/bin/BS_PCRTagger.pl:88
I would appreciate any help to resolve this issue.
How to prepare a Shukalev classification chart/table to define the groundwater types?
The importance of researchers thinking about creating focused teaching materials that include meaningful and illustrative images, as well as tables, lies in several key aspects. Firstly, such materials enhance understanding and comprehension by presenting complex concepts visually, making them easier to grasp. Images and tables also improve memory retention by organizing information in a way that is easier for students to recall during review or exams. Additionally, these materials simplify content and reduce complexity, helping students focus on the key points rather than being overwhelmed by details. Visual elements can also increase student engagement and interest, making the learning process more stimulating. Furthermore, incorporating images and tables caters to diverse learning styles, as some students learn better through reading, while others benefit from visual aids. Therefore, considering the creation of focused, visually-rich teaching materials is crucial for improving learning effectiveness and increasing students' academic performance.
A database error has occurred: SQLSTATE[HY000] [1045] Access denied for user 'ojs'@'localhost' (using password: YES) (SQL: create table `announcement_types` (`type_id` bigint not null auto_increment primary key, `assoc_type` smallint not null, `assoc_id` bigint not null) default character set utf8 collate 'utf8_general_ci')
This question is intended to find processing software for input-output table analysis.
Hi all,
I have a total of 279 participants for my measurement invariance analysis. The majority of articles in my research area cited Chen (2007).
Chen, F. F. (2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Structural Equation Modeling: A Multidisciplinary Journal, 14(3), 464–504. DOI: 10.1080/10705510701301834
However, I found it confusing as these authors cited the same article but used different ΔCFI and ΔRMSEA values as indicators.
My understanding of Chen (2007) is that, with a small sample (N < 300) and unequal sample sizes, ΔCFI > -.005 and ΔRMSEA < .010 indicate measurement invariance (see p. 501 and Tables 4 to 6).
But even studies with small samples that refer to Chen (2007) use different values. It is really confusing. Can someone please help me out? TIA
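For what it's worth, the decision rule as I read it from Chen (2007, p. 501) can be written out explicitly; the AND-combination of the two criteria is my interpretation of the "supplemented by" wording, so verify it against Tables 4 to 6 of the paper:

```python
def suggests_noninvariance(delta_cfi, delta_rmsea):
    """My reading of Chen's (2007) small/unequal-sample thresholds:
    a CFI drop of .005 or more, supplemented by an RMSEA increase of
    .010 or more, signals noninvariance (verify against the paper)."""
    return delta_cfi <= -0.005 and delta_rmsea >= 0.010

# Example: a model whose CFI drops by only .003 across nested models
# would not flag noninvariance under this reading.
flag = suggests_noninvariance(-0.003, 0.004)
```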

Currently I am working on a study that (among other questions) compares effects between groups, while the effects within each group are expressed as odds ratios (ORs). We expressed the difference between groups in terms of statistical significance (p-value) but also wanted to add a measure of practical significance: effect size (ES). In the attached study by Sullivan et al. (2012), it is described that this can simply be done by dividing the ORs of the two groups. For instance: OR group A = 3.0, OR group B (reference group) = 1.5, so the effect size of the difference between these groups is (3.0/1.5 =) 2.0, which, according to Table 1 in Sullivan's paper, is a medium effect size. In this case, the equation says: the ES comparing the two ORs is calculated by dividing the OR of the intervention group (numerator) by the OR of the reference group (denominator).
However, what should be done when the OR of group B (the reference group) is larger than that of group A? In that case, the effect size would be smaller than 1, but Table 1 in Sullivan et al. suggests that an ES of 1 is the lowest possible ES. If that is true, this would favour an equation that says: the ES comparing the two ORs is calculated by dividing the largest OR (by definition the numerator) by the smallest OR (by definition the denominator).
Can anyone help me out with this question: should the denominator be the OR of the reference group, or the smallest OR?
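Not an answer from Sullivan et al., but one way to see the relationship between the two conventions: on the log scale the ratio of ORs has the same magnitude whichever group is the denominator, so dividing the larger by the smaller reports the size of the difference while discarding its direction. A minimal Python illustration, using the numbers from the question:

```python
import math

def or_ratio(or_a, or_b):
    # Effect size as the ratio of two odds ratios
    # (the approach described in Sullivan et al. 2012).
    return or_a / or_b

# The example from the question: OR_A = 3.0 vs reference OR_B = 1.5.
es = or_ratio(3.0, 1.5)  # 2.0

# If the reference OR is the larger one, the ratio falls below 1,
# but on the log scale the magnitude is identical either way:
mag_ab = abs(math.log(or_ratio(3.0, 1.5)))
mag_ba = abs(math.log(or_ratio(1.5, 3.0)))
```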
Dear colleagues in the research community,
As we know, there are two approaches to hypothesis testing of cross-tables: testing for independence and testing for correlation between variables. In both cases, for exact probabilities, we ask the same question: what is the probability of getting "this table" and the "more extreme tables". For independence tests, the traditional exact test is the (dominant) Fisher-Freeman-Halton (FFH) statistic, and for correlation tests, the Mehta-Patel (MP) algorithm is a widely used solution. In some cases, especially when the table is sparse and ordinal, these algorithms give conflicting, if not opposite, inferences. I recently faced a table where the exact probability by FFH could be p = 1, while MP was p < 0.001 because of high correlation. In the attached note, I ponder this issue and compare their strategies. It seems that FFH's result is confusingly wrong, and the reason is the way the FFH algorithm treats tables with the same probability as the one of interest. This claim is strong, and it calls for a larger discussion within the research community about FFH: Should we change the logic of FFH to avoid confusing results? If we should, why? If we should not, why not?
I am conducting experiments using a 1-g shake table and have collected data from both accelerometers and strain gauges. However, I am uncertain about the appropriate filtering method for the data. Should I apply a low-pass filter or a band-pass filter for optimal results? The shake table has a maximum frequency of 50 Hz, while the excitation frequency is 2 Hz.
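As an illustration (not a recommendation for your specific rig), here is a zero-phase Butterworth low-pass in SciPy. The 200 Hz sampling rate and 10 Hz cutoff are assumptions you would replace with your own values; `sosfiltfilt` runs the filter forward and backward so accelerometer and strain-gauge channels stay phase-aligned:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 200.0      # sampling rate in Hz (assumption -- use your DAQ's rate)
cutoff = 10.0   # low-pass cutoff in Hz; well above the 2 Hz excitation

# 4th-order zero-phase Butterworth low-pass
sos = butter(4, cutoff, btype="low", fs=fs, output="sos")

# Synthetic signal: 2 Hz excitation plus 60 Hz contamination
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.sin(2 * np.pi * 60.0 * t)
filtered = sosfiltfilt(sos, signal)  # high-frequency content is removed
```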
I'm extracting some data from an older paper and I've run into some units that are, to me, a little obscure. At first I thought I had it figured out. It's nutrient data reported in γ/mL (with the γ being a 'gamma' symbol). After looking into it, I found some information suggesting that a gamma is equivalent to a microgram. Data reported in micrograms per milliliter made sense for what I was looking at, and I moved on.
I have gotten to a new table in the paper with measurements reported in mγ/mL. If my previous assumption is correct, then that means this measurement is somehow milli-micrograms per milliliter? I'm a little perplexed because I don't see that making sense.
Is anyone here familiar with these units?
Dear All,
I am working on human gut microbial metagenome analysis.
I wonder if the 'canonical correspondence analysis' technique, which is widely used in ecological studies, could be used to explore the effects of environmental variables on microbial pathway abundances. Which means, sites = sample ID, species = pathways and their abundances in each sample, environmental variables = various anthropometric data such as BMI, age, protein intake....
I assume if a pathway-abundance table and an environmental-variables table are provided, CCA would not care if it is a species abundance table or a pathway abundance table.
I look forward to your suggestions.
Does anyone have a single table of all colorimetric agents for UV-VIS spectroscopy?
The mistake in the article that was already published (a few days ago) occurred under the responsibility of the journal's production team and not under my responsibility. I don't understand why I have to pay for open access and also tolerate the publisher's mistakes.
They swapped the column headings in one table so that each heading belongs to the neighboring column (it's a two-column table).
Suggestions/thoughts?
At the moment I'm struggling with the documentation of my lab's cell culture work, and I would be happy to hear some suggestions.
I've set up an Excel table that I thought was easy to use and straightforward, and that calculates some parameters automatically (population doubling time (PDT), cumulative population doubling). The intention was that each passage is one row in the table. This would be ideal if we had a defined optimal seeding cell density range for the cell lines and, with every passage, only prepared one (or multiple) flasks with the same seeding density.
However, when technicians prepare multiple flasks with various seeding cell densities within one passage step, the whole table is no longer useful: there is only one next row in the table (as the automatic calculations use the seeded and harvested cell numbers), but multiple new cultures that need to be followed separately, since, depending on how the cells behave, these flasks may have different outcomes (viability, VCD, PDT, next passage date).
I have used various paper-based cell culture documentation systems earlier; none of them could properly track such branching when multiple flasks were prepared with different parameters (seeding cell density or different volume) at the same time.
Has anyone experienced the same? Any idea what the best solution would be (besides, obviously, optimizing the cell culture conditions and then sticking to them)?
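One way to handle the branching is one record per flask rather than per passage, with a parent id linking each flask to the culture it was split from. A minimal Python sketch (the field names are made up; the PDT formula t·ln2 / ln(N_harvest / N_seed) is the standard one):

```python
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class Culture:
    """One row per flask, not per passage -- branching is handled by parent_id."""
    culture_id: str
    parent_id: Optional[str]   # which flask this one was seeded from
    passage: int
    seeded_cells: float
    harvested_cells: Optional[float] = None
    hours_in_culture: Optional[float] = None

    def pdt(self) -> Optional[float]:
        """Population doubling time: t * ln(2) / ln(N_harvest / N_seed)."""
        if self.harvested_cells is None or self.hours_in_culture is None:
            return None   # flask still running
        return self.hours_in_culture * math.log(2) / math.log(
            self.harvested_cells / self.seeded_cells)

# One passage step that splits into two flasks with different seeding densities:
parent = Culture("F1", None, 3, 2e5, 1.6e6, 72)   # 8-fold growth in 72 h
childA = Culture("F1.A", "F1", 4, 1e5)
childB = Culture("F1.B", "F1", 4, 3e5)
```

In a spreadsheet the same idea is a "parent flask" column, so each flask keeps its own PDT and next-passage date even when siblings diverge.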
As I understand it, Vbi is the difference between the conduction band energy levels at the absorber/ETL interface and the absorber/HTL interface (EC_Abs/ETL - EC_Abs/HTL), divided by the elementary charge (q).
But I am confused about identifying the built-in potential (Vbi) value in the table.
Could anyone please explain this to me in full detail?


I have been reading here and there, trying to understand the mechanism that brings buildings down in an earthquake.
Civil engineers, professors of earthquake engineering, and regulations all try to make buildings invulnerable to earthquakes.
And yet, despite all the science, when a big earthquake strikes nearby, the structures are destroyed and we are flattened.
To me, things are simple. Too simple. But they don't want to listen. I challenge any engineer or professor who is willing to a dialogue about what I say below. I'll put it simply, so that even someone who is not an expert can understand.
Let's take 30 CDs placed on top of each other on a table.
If we move the table abruptly the 30 CDs will slide one on top of the other and the pile of CDs will be shattered.
If we give these 30 CDs toothpicks for legs, the column of 30 CDs becomes much taller, and with even a slight shake of the table it will collapse more easily than before.
If we now replace the CDs with the building plates and the toothpicks with the columns, we will have a 30-storey building.
We all now understand the simple mechanism that brings down the stack of 30 CDs and the 30-story apartment building.
Action - reaction, or acceleration - inertia.
This is the problem.
What is the solution?
The solution that would save us from the earthquake is so simple, and yet all the officials pretend not to understand when I tell them.
As for why they don't listen, don't answer, or pretend not to understand: ask them yourself.
They do not answer me.
The solution is this.
If, through the hole of the 30-CD stack, we drive a 45 nail into the table, then move the table as much as you want: the 30 CDs will stay on top of each other. That is the solution to the earthquake.
The nail driven into the table stopped the inertia of the 30 CDs.
I did the same thing.
I bolted the lift shaft of the structure to the ground, instead of nailing the nail to the table, and checked the inelastic deformation of the structure under the rocking of the earthquake.
What is the difference from today's constructions? In the method I propose, the soil participates by taking up the inertia of the structure and dissipating it into the soil before the beams break.
Hello, which test can be used to calculate the p values in this table? If it is the chi-square goodness of fit test, how will we enter the expected values in SPSS? Thank you very much for your attention.
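If it is a goodness-of-fit test, the arithmetic SPSS performs can be sketched like this (the counts here are hypothetical; the expected counts must sum to the same total as the observed ones):

```python
from scipy.stats import chisquare

# Observed category counts (hypothetical numbers)
observed = [45, 35, 20]
# Expected counts under the null hypothesis -- same total as observed
expected = [50, 30, 20]

# Chi-square goodness of fit: sum of (O - E)^2 / E over the categories
stat, p = chisquare(f_obs=observed, f_exp=expected)
```

In SPSS the same expected values go into Analyze > Nonparametric Tests > Legacy Dialogs > Chi-square, under "Expected Values".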

Hi everyone,
I ran a Generalised Linear Mixed Model to see if an intervention condition (video 1, video 2, control) had any impact on an outcome measure across time (baseline, immediate post-test and follow-up). I am having trouble interpreting the Fixed Coefficients table. Can anyone help?
Also, why are the last four lines empty?
Thanks in advance!

How to change the displayed full article text to its corrected version? In the file on the page of the journal where I published the article, there was an error in the text, the table is incorrectly displayed. The journal has already corrected the content, but on ResearchGate there is still the old version, with the mistake. What should I do so that only the corrected version, which is already on the journal's website, would be displayed?
Hello, when calculating the p value for the alleles in the table, how do we place the values in the fourfold (2x2) table for the chi-square test? Thank you very much for your attention.

Chi-square is a statistical test commonly used to compare observed data with the data we would expect to obtain under a specific hypothesis. If we have two categorical variables, each with 4 levels, and 68% of cells have an expected count less than 5, the result of the chi-squared test will not be accurate. What is the alternative test? May I use the likelihood ratio test instead of chi-square?
Secondly, is Fisher's exact test only used for 2x2 tables with cell counts less than 5? What about when cell counts are more than 5, like 12 or 15, in a 4x5 table? Which test should be used?
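For reference, the likelihood-ratio (G) test mentioned above is available in SciPy through the same function as Pearson's chi-square; the 4x5 counts below are made up for illustration:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 4x5 contingency table with some small cells
table = np.array([
    [12, 15,  3,  8,  2],
    [ 9,  4, 11,  6,  5],
    [ 2,  7,  9, 14,  3],
    [ 6,  3,  5,  2, 10],
])

# Pearson chi-square
chi2, p_chi2, dof, expected = chi2_contingency(table)

# Likelihood-ratio (G) test -- same function, different divergence statistic
g, p_g, _, _ = chi2_contingency(table, lambda_="log-likelihood")

# Fraction of cells violating the expected-count rule of thumb
frac_small = (expected < 5).mean()
```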
Hi all,
I am attempting supervised and object-based image classification using ArcGIS Pro (v3.2). Here are the steps I have followed:
I acquired Sentinel-2 imagery and combined bands 2, 3, 4, and 8 (10m resolution).
I performed image segmentation.
I created a classification schema.
I generated training samples.
However, when I attempt to classify using a support vector machine classifier, I encounter the following error:
ERROR 003436: No training samples found for these classes: Soil, Water, Impervious, Grass, Tress.
The table was not found. [VAT_Segmented_202407110934456475089_interIndex]
The table was not found. [VAT_Segmented_202407110934456475089_interIndex]
Failed to execute (TrainSupportVectorMachineClassifier).
Failed at Thursday, July 11, 2024 9:44:52 AM (Elapsed Time: 0.22 seconds)
(I have attached a screenshot of the error.)
I have tried several times but haven't been able to identify the cause of this error. Do any of you know what might be causing it?

In the qualitative compound report obtained from HR-LCMS analysis of crude plant extracts, what is meant by Hits (DB)? Should we consider all the predicted compounds from the list for further studies?
Could you provide the formula for determining sample size, given a study population? Sample size tables by different scholars would also be of value to me.
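One widely cited option (not the only one) is Yamane's (1967) simplified formula, n = N / (1 + N·e²), where N is the population size and e the margin of error. A quick sketch:

```python
import math

def yamane_sample_size(population: int, margin_of_error: float = 0.05) -> int:
    """Yamane's (1967) simplified formula: n = N / (1 + N * e^2)."""
    n = population / (1 + population * margin_of_error ** 2)
    return math.ceil(n)   # round up so the target precision is met

# e.g. a study population of 1,000 at a 5% margin of error -> 286
n = yamane_sample_size(1000)
```

Note Yamane assumes roughly 95% confidence and maximum variability (p = 0.5); for other settings, Cochran's formula or published tables (e.g. Krejcie & Morgan) are the usual alternatives.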
I am planning to optimize my adsorption data using Box-Behnken Design (BBD) using Design-Expert Software (DoE).
Please note I have already completed my experimental work BEFORE using the DoE software, and some data points are MISSING from the design generated by the software!
How can I complete the responses in the Design-Expert table?
Please find attached images of (1) my original data and (2) the BBD in DoE with the empty cells (to be completed).
Kindly guide me on completing the design and optimizing my data.
I greatly appreciate your replies :) !


I am having a problem identifying the exact SNP position.

The other two questions are:
1) Are the grit-to-micron conversion tables found on the internet reliable?
2) Where does the equation to convert grit to microns come from?
In SPSS 25, I have a categorical variable which I would like to display in a frequency table. However, one of my categories was not selected by any of my respondents. I can generate a frequency distribution for this variable, but the unselected category is not included. How can I generate a frequency distribution table to show a zero count for this category?
In the descriptive table, how would you interpret the p-values of the descriptives of your sample?
For instance, if there was p < .001 across three levels of poverty and the outcome was hypertension, how would this be interpreted: that there were significant differences in hypertension among the three levels of poverty?
I would greatly appreciate a better example so I can understand the idea of the p-values in Table 1.
I have been digging on the internet, and I could only find a one-way ANOVA APA-format table example.
Hi All,
I use AMOS. Firstly, my study has AVE values less than 0.5 for 3 constructs, but this issue can be addressed using the justification from Fornell & Larcker (1981), so I carried on with the analysis. Now I have discriminant validity issues, where the MSV values are higher than the AVE values for 3 out of 6 constructs. What should I do about it? Please see the table attached. Any insight is appreciated.

Dear Professionals and eminent Professors
Recently, I participated in an international conference (webinar) to present my paper on positive psychology research. I saw other participants showcase their statistical knowledge in the form of tables, which made me ashamed and also curious to learn these. So far, the books I have read only covered descriptive statistics, t-tests, ANOVA, and non-parametric tests; at most, quasi-experimental designs.
But in those presentations I saw normality testing, time-series design tables, and bootstrapping.
I want a book that elaborately explains how to conduct research and what kind of statistics to employ, with specific criteria.
So far I have collected and studied many research methodology books but could not find the above.
I recently participated in a factor analysis webinar, which was a disappointment. I need a clear book to learn all of this. Through self-study of a book I learnt the manual way of computing factor loadings.
Some research papers have confidence interval tables. When do we use confidence intervals in tables, and why do we use SE and beta values in tables? If a particular textbook or series of books explains all this, kindly suggest it. I don't know professors at my end who could help me with this; I am only searching the net with no goal. Hence, as a last resort, I seek your help.
Finally I accept my infancy state of knowledge in Research and seek your pardon for this lengthy message.
Regards
Deepthi
I faced a problem with non-symmetric (non k x k) contingency tables in SPSS. I have categorical variables for a certain pathology (theoretically scored 0-3), and I want to compare the scores of region A with those of region B in a set of subjects. The samples are related (though it is not a repeated test) because the A and B regions are present and scored for each subject. In theory, a related-samples test for non-dichotomous categorical variables (i.e. contingency tables larger than 2x2, but still k x k) can be done with the McNemar-Bowker test in SPSS (an extension of the McNemar test for 2x2 tables), which is fine. However, if e.g. a pathology is so frequently severe in region A that no 0 score (or even no 1) is given, while region B has the complete spectrum of scores (0-3), then we face an asymmetric contingency table (e.g. 3x4 or 2x4), for which the McNemar-Bowker test fails and gives no result. Does anyone have a suggestion for which test is appropriate in such a scenario? Many thanks!
My research was an intervention study with two groups (experimental and control). The DV has five subscales (one subscale's value should decrease, the others' should increase) and three levels (pre, post, follow-up), so I ran a 2x3 mixed factorial design. As my supervisor guided me, I went to General Linear Model > Repeated Measures, entered pre/post/follow-up for the first scale, defined it, selected plots and EM means under Options, and ran it; likewise for the remaining 4 scales. For the results, I drew two tables: one univariate table reporting mean, SD, F, significance, and partial eta squared, and a second table for the first subscale and subscale x group, reporting Wilks' lambda, F, significance, and partial eta squared. Please guide me.
How should I write credit lines for a figure or table that was created by the author themselves?
Hi, I'm Yusuke Mikami, a master's student doing LLM for embodied control
I'm personally making a list of LLM-related papers here
[Notion table] https://potent-twister-29f.notion.site/b0fc32542854456cbde923e0adb48845?v=e2d14d2ef0c848f5a1d5b71f9977d7c5
However, I am a very new person in this field, so I want to have help from you.
Please post interesting papers and keywords at
I am looking to create a table that includes both descriptive statistics and correlation coefficients for a mix of binary and continuous variables. Given the complexity of the data and the need for clarity in presentation, I am seeking examples or advice on how to best structure this table.
Could you provide guidance or share a sample table that includes:
- Descriptive statistics (mean, standard deviation, etc.) for continuous variables,
- Frequencies and percentages for binary variables, and
- Correlation coefficients between these variables,
all formatted according to APA 7 standards? Any tips on best practices for organizing this information in a clear, concise, and APA-compliant manner would be greatly appreciated.
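A rough sketch of how such a table could be assembled in Python/pandas (the variable names and data are invented; with 0/1-coded binaries, Pearson's r is the point-biserial correlation, so one correlation matrix covers the binary/continuous mix). The APA-7 polishing of the final table would still happen in your word processor:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "age":    rng.normal(35, 10, 200),   # continuous
    "score":  rng.normal(50, 8, 200),    # continuous
    "female": rng.integers(0, 2, 200),   # binary, coded 0/1
})

# Descriptives: M (SD) for continuous, n (%) for binary
rows = []
for col in df.columns:
    if df[col].nunique() == 2:
        desc = f"{int(df[col].sum())} ({100 * df[col].mean():.1f}%)"
    else:
        desc = f"{df[col].mean():.2f} ({df[col].std():.2f})"
    rows.append({"Variable": col, "M (SD) / n (%)": desc})
summary = pd.DataFrame(rows)

# Pearson correlations; with 0/1 variables this equals the point-biserial r
corr = df.corr().round(2)
```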
I am trying to include the maximum number of articles in my study.
Two surgeries are being compared in this SR and MA; the outcome is a quantitative variable reported as mean and SD.
Some studies report the outcome of only one approach, and some compare the two approaches.
Can I use both kinds of studies in my analysis? If not, is it OK to report the single-approach articles in Table 1 of my results?
Who has a copy of this article? If it's okay, could anyone share it with me? Thank you very much in advance :)
Jeffreys, H. and Bullen, K.E. (1940). Seismological Tables. London: British Association for the Advancement of Science, Burlington House.
Dear RG community,
I have been studying the impact of political, economic and financial risk indices on foreign direct investment flows, so I need data from the ICRG Database: political, financial, and economic risk ratings from 1984 to 2023 for all countries.
Table 3B: Political Risk Points by Component, 1984-2023
Table 4B: Financial Risk Points by Component, 1984-2023
Table 5B: Economic Risk Points by Component, 1984-2023
Unfortunately, neither I nor my organization has access to the ICRG database, so I would greatly appreciate your help in obtaining this data, if you can.
Just in case you need it, my e-mail is elmehdiajjig@gmail.com
Thank you in advance.
Best regards,
This is from the Oncomine Comprehensive Assay v3 protocol. Does "gDNA (10 ng, ≥0.67 ng/μL)" mean a gDNA input of 10 ng at a concentration of at least 0.67 ng/μL? And if I already have 10 ng/μL, do I set up the reaction without nuclease-free water?

How do I analyze and interpret this table of Non-overlap of All Pairs (NAP) with respect to its significance?
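In case it helps to see the arithmetic behind the table: NAP is the proportion of all (phase A, phase B) data-point pairs in which the B-phase point shows improvement, with ties counted as half (Parker & Vannest, 2009). A value of 0.5 is chance level; 1.0 is complete non-overlap. A minimal sketch:

```python
import itertools

def nap(phase_a, phase_b):
    """Non-overlap of All Pairs: share of (A, B) pairs where B improves on A,
    ties counted as half (Parker & Vannest, 2009)."""
    pairs = list(itertools.product(phase_a, phase_b))
    better = sum(1 for a, b in pairs if b > a)
    ties = sum(1 for a, b in pairs if b == a)
    return (better + 0.5 * ties) / len(pairs)

# Complete non-overlap -> NAP = 1.0; identical phases -> 0.5 (chance)
print(nap([1, 2, 3], [4, 5, 6]))
```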

Hi!
I am evaluating the performance of different models for a binary outcome. These models can be either single parameter or multiparametric but they give a yes/no result. That is, I can easily depict them in a 2x2 matrix from which I can draw sensitivity, specificity or the c-statistic.
To test model performance I am evaluating %outcome vs %predicted outcome, c-statistic, correctly classified but I would like to add a goodness of fit measure. Does it make sense at all in this context? What would be the best way to test their goodness of fit?
All I get from Hosmer Lemeshow is (Table collapsed on quantiles of estimated probabilities) (There are only 2 distinct quantiles because of ties)
Thanks!
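The "only 2 distinct quantiles" message usually means the model produces very few distinct predicted probabilities, so the Hosmer-Lemeshow binning collapses and the test becomes uninformative. For intuition, here is a bare-bones sketch of what HL computes (simulated, well-calibrated data; a sketch, not a substitute for your software's implementation):

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow chi-square over 'groups' bins of predicted probability."""
    order = np.argsort(p)
    y, p = np.asarray(y)[order], np.asarray(p)[order]
    bins = np.array_split(np.arange(len(y)), groups)
    stat = 0.0
    for idx in bins:
        obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
        pbar = exp / n
        stat += (obs - exp) ** 2 / (n * pbar * (1 - pbar))
    return stat, chi2.sf(stat, groups - 2)

rng = np.random.default_rng(1)
p = rng.uniform(0.05, 0.95, 500)
y = rng.binomial(1, p)   # outcomes are well-calibrated by construction
stat, pval = hosmer_lemeshow(y, p)
```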
Is there a table that indicates the intervals at which soil is classified as low, moderate, or high compaction?
Why in some articles when I calculate the Soil Quality Index, when I add up the Si*Wi I don't get the SQI result that is displayed in the table. For example, in the article entitled "Effects of land use types on soil quality dynamics in a tropical sub-humid
ecosystem, western Ethiopia", the sum of si*wi does not give the SQI value shown in table 5 of the article.
I am considering switching from R to STATA for several reasons. However a big plus of STATA is the table1 function which can in one line of code generate a baseline table of several groups and run the necessary tests according the type of variables. Is there something like that also available in R?
Hello Reseachgate community.
I have perused several recent sources to either find data or power tables missing and there I cannot seem to find the best source for an appropriate minimum sample size for a conditional process (moderated mediation) analysis.
With 4 variables (3 predictors, 1 outcome) and assuming power .80 with alpha .05 and small to medium effect sizes between all (i.e. 0.30) could anyone point me in the right direction please?
When conducting a logistic regression analysis in SPSS, a default threshold of 0.5 is used for the classification table. Consequently, individuals with a predicted probability < 0.5 are assigned to Group "0", while those with a predicted probability > 0.5 are assigned to Group "1". However, this threshold may not be the one that maximizes sensitivity and specificity. In other words, adjusting the threshold could potentially increase the overall accuracy of the model.
To explore this, I generated a ROC curve, which provides both the curve itself and the coordinates. I can choose a specific point on this curve.
My question now is, how do I translate from this ROC curve or its coordinates to the probability that I need to specify as the classification cutoff in SPSS (default: 0.50)? The value must naturally fall between 0 and 1.
- Do I simply need to select an X-value from the coordinate table where I have the best sensitivity/specificity and plug it into the formula for P(Y=1)?
- What do I do when I have more than one predictor (X) variable? Choose the best point/coordinate for both predictors separately and plug in the values into the equation for P(Y=1) and calculate the new cutoff value?
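One way to see why a separate cutoff per predictor is not needed: if the ROC curve is built on the model's predicted probabilities (not on the X values themselves), the thresholds it returns are already probabilities, so the Youden-optimal threshold can go straight into SPSS as the classification cutoff regardless of how many predictors the model has. A sketch with simulated data and scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))   # two predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
prob = model.predict_proba(X)[:, 1]   # P(Y=1), already combines all predictors

# ROC on the predicted probabilities, not on X:
fpr, tpr, thresholds = roc_curve(y, prob)

# Youden's J maximizes sensitivity + specificity - 1
j = tpr - fpr
cutoff = thresholds[np.argmax(j)]     # a probability in (0, 1) -> SPSS cutoff
```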
When I run a regression analysis, the R-square in the Model Summary table is very weak, like 0.001 or 0.052, and the sig. value in the ANOVA table is greater than 0.05. How can I fix this?
How do I calculate the RMSE value (especially the testing and training values) of an artificial neural network using SPSS? In the output, under the Parameter Estimates heading, does the output value act as the testing value, and does the predicted value under the input layer act as the training value? I am attaching my Parameter Estimates table output for a clearer understanding.
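Independent of where SPSS places the values, RMSE itself is just the square root of the mean squared difference between observed and predicted values. One common workaround is to save the predicted values from the MLP procedure, export them with the observed outcome and the partition variable, and compute RMSE per partition. A sketch (the numbers are placeholders):

```python
import numpy as np

def rmse(observed, predicted):
    """Root mean squared error between observed and model-predicted values."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return float(np.sqrt(np.mean((observed - predicted) ** 2)))

# Apply separately to the rows SPSS marks as Training and as Testing:
rmse_train = rmse([3.1, 4.0, 5.2], [3.0, 4.1, 5.0])
rmse_test  = rmse([2.9, 4.3],      [3.2, 4.0])
```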
Good afternoon,
I am thinking about how I can present the data cleaning stage of my research project. I am hesitating between generating a summary table (could be complex because it includes 16 different datasets) with a paragraph presenting the overall steps, for example, n rows were removed due to missing values, duplicated data, spelling was modified for n rows, etc. or generating a list of items (could appear redundant for the reader) for each encountered issues or checking, for example, days of the week were investigated to check that all collected data were recorded during a school day. In the article format, this stage was usually not really developed except in the data supplementary appendix due to the size of the format. I wanted to develop this section in my thesis report but I am not sure about the most appropriate format to make it clear, concise, simple to understand, and interesting for the readers. Please, could you tell me if as a reader you prefer to have a table or descriptive paragraphs or more visual elements like charts to understand how the research team cleaned the data?
Thanks in advance for all your feedback on the data cleaning presentation.
I converted the raster to points in the GIS software and transferred the table attributes to Origin Pro through an Excel file to draw a diagram. But there is a problem with it, and I don't know where it comes from. I also encounter many errors in Origin, and only one graph is drawn. Could you please guide me regarding this problem by classifying the points? I have attached some documents for a better understanding of the information.

Hi! Can anybody help me with how the p value has been calculated, and what is meant by the p value here on the right side of the table shown? Any formula or tutorial video from YouTube would be great. Thanks!

I am reading a research article (The association between vegetarian diet and varicose veins might be more prominent in men than in women). In the results section of this article, different tables are given in which the odds ratio, confidence interval and p value are calculated.
Mindfulness-based interventions in neurodevelopmental disorders, with data sets and tables.
I am using TracePro software for optical simulation of my solar concentrator system. The software gives the results as an irradiance flux map and also an incident ray table. Now, for the thermal simulation in ANSYS Fluent, should the data from the irradiance flux map or from the incident ray table with x, y, z coordinates be used? Also, how do I import these irradiance data into ANSYS Fluent?
Hello everyone,
I have performed a Survival Analysis in R. I have 13 patients with 5 events.
If I calculate my survival rate manually, I got 8/13 = 0.615
In my output in R (Screenshot) this value is different (0.598) and I can't get my head around why. Do you have any suggestions?
Thank you.
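A likely explanation is censoring: the Kaplan-Meier product-limit estimate equals the naive fraction 8/13 only when every censored time comes after the last event; if someone is censored before a later event, they drop out of the risk set and the estimate falls below 8/13. A small sketch of the product-limit calculation (times and censoring patterns are invented):

```python
import numpy as np

def km_survival(times, events):
    """Product-limit (Kaplan-Meier) survival estimate at the last event time."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    s = 1.0
    for t in np.sort(np.unique(times[events == 1])):
        at_risk = np.sum(times >= t)   # censored subjects have left the risk set
        deaths = np.sum((times == t) & (events == 1))
        s *= 1 - deaths / at_risk
    return s

# 13 patients, 5 events; all censoring AFTER the last event -> exactly 8/13
no_censor = km_survival(list(range(1, 14)), [1] * 5 + [0] * 8)

# Censoring interleaved BEFORE later events -> estimate drops below 8/13
times = [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
events = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0]
with_censor = km_survival(times, events)
```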

I need a table with the standard limits of heavy metals in agricultural soil
Hi, I'm submitting a systematic review and meta-analysis and I'd like to incorporate the forest plots generated in R into a table containing all the numerical data. I'm curious if the publisher will accept them in SVG format and if they can be positioned alongside the table in the Word document of the manuscript?
Why does the Retail & Wholesale industry appear in the intermediate inputs of the education industry in Input-Output Tables? Can you provide an example? Thanks
I have a compound (C23N3OH27) with a molecular weight of 361.48, for which I need to repeat some results. The problem is that the results are not coming out the same. I am evaluating cell viability (K562 and KG1) with resazurin (24 hours after plating 20,000 cells/100 µL, 24 hours of treatment in 100 µL, 4 hours of resazurin in 20 µL), and the results lead us to believe that it does not induce death in any case at the concentrations tested (30 µM, 20 µM, 10 µM, 5 µM, 1 µM). I have already evaluated cellular metabolism, resazurin, and the interaction of the compound with resazurin, and none of these explains why the results do not repeat. I suspect it could be my dilution; I used a table from a colleague that performs the calculation automatically. Could someone help me do the dilution directly, just so I can check whether it is correct? I have 5 g powder of the compound, which was diluted in 2305.34 µL of 100% DMSO, which according to the table gave me a 6,000 µM solution; I don't know if that is correct.
obs: my controls (+/-) are responding well so I don't believe it's the resazurin or the plating
Thanks for all contributions!
I have attached the dilution table below.
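A direct check of the arithmetic, independent of the colleague's spreadsheet: 2,305.34 µL only yields ~6,000 µM if the dissolved mass is 5 mg; 5 g in that volume would be about 6 mol/L, so the "5 g" above is presumably a typo for 5 mg:

```python
MW = 361.48   # g/mol, from the question

def concentration_uM(mass_g: float, volume_uL: float, mw: float = MW) -> float:
    """Concentration in micromolar from mass (g) and volume (uL)."""
    moles = mass_g / mw
    litres = volume_uL * 1e-6
    return moles / litres * 1e6   # mol/L -> uM

c_5mg = concentration_uM(0.005, 2305.34)   # ~6000 uM, matching the table
c_5g  = concentration_uM(5.0, 2305.34)     # ~6,000,000 uM, i.e. ~6 M
```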
I need a refrigerant r600a property table for both saturated and superheated conditions.
In the trend analysis, the ANOVA table and the R-squared and adjusted R-squared tables were included. For the keyword analysis, keywords from the Scopus database were analyzed with the help of factor analysis, and the descriptive statistics are shown in the paper.
The question is related to a medication list, which I have embedded into a table. Current medications are asked at multiple follow-up points; however, I want participants to have the table pre-filled from their last response so they don't have to re-enter everything if there are any medication changes. Is this possible?
I am currently learning a new data analysis program and have found RStudio to be a user-friendly and efficient software. As I work, I am curious if there is a package or code available that can generate a comprehensive descriptive table encompassing frequencies and percentages of multiple categorical variables in a well-organized manner instead of manual table writing.
Error [65]: Error, extent of vector too large or attribute table error." I uninstalled and reinstalled QGIS 3.26.2 but I still get the error. Any idea what is causing this?
Hello, professor. I want to know whether every essay should have tables, forms, and sheets. My major belongs to the humanities; how should I make the tables and forms in an essay? Thank you very much.
I run a multinomial logistic regression. In the SPSS output, under the table "Parameter Estimates", there is a message "Floating point overflow occurred while computing this statistic. Its value is therefore set to system missing." How should I deal with this problem? Thank you.
1. For a given soil sample the following data were measured. During sample collection, water table was observed at a depth of 40 cm below the soil surface. Assume that the reference is placed at the water table. Based on this information and the one in the table, fill-in the missing values of component potentials and the total hydraulic head: 7 points
Depth (cm) | Gravitational head (cm) | Matric head (cm) | Pressure head (cm) | Hydraulic head (H) (cm)
0          |                         | -105             |                    |
10         |                         | -50              |                    |
20         |                         | -36              |                    |
30         |                         | -22              |                    |
40         |                         | 0                |                    |
50         |                         | 0                |                    |
60         |                         | 0                |                    |
70         |                         | 0                |                    |
After a literature review, I have come across two almost identical formulas for calculating the fluorescent/radiative decay rate, as given in the attached files. But my calculated values, according to the formulas given in the articles, differ from those given in the table. Can anyone tell me what I am doing wrong? Thanks.



Hello everyone
I have written a systematic review, but the plagiarism checker keeps flagging the keywords. For example, the phrase "TITLE-ABS-KEY", which is a search term for Scopus, is getting flagged as plagiarism.
Can I change it to a figure, so it won't cause a problem?
I ran EFA on a 50-item scale; EFA supported a 35-item structure with 5 factors (eigenvalues > 1, 68% variance explained; the scree plot and PCA also supported 5 factors). Correlations among the factors were quite low, and some were negative.
When running CFA, the items reduced to 33 with the same model structure, and this is supported only when I run each factor independently in CFA.
Later I ran a second-order model, where all five factors showed good model fit indices.
I am struggling with presenting my findings in a paper/thesis. Do I need to present tables/figures for each factor separately, as I did in CFA? Or are there other options?
Any similar article/reference is appreciated.
Hello,
I am trying to design a table (rectangular or circular, it doesn't really matter) that will hold an object at its center. The table will be connected to a shaft, which will be driven by a belt connected to an engine.
My goal is to rotate the table at a constant angular speed, let's say ω.
The calculation that I did is basically:
F = mω²r
T = r·F = mω²r²
where m is the total mass (object + table), r is the radius of the table, ω is the angular speed that I want, and T is the torque needed.
Now, when I get the needed torque I can choose the engine that can provide enough torque.
Of course I will choose bearings that can hold the weight and let's say the gear will be 1:1.
Is this calculation right? I added a picture of what I am trying to do; B is a bearing, so everything from the bearing up is dynamic, and everything below is static.
My question is: is this a valid way to estimate the engine type?
Thanks.
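A sketch comparing the calculation above with the rigid-body estimate commonly used for motor sizing: at constant ω, the steady torque only has to overcome bearing/belt friction; the demanding case is usually spinning the table up, T = I·α. All numbers below are invented, and the disc assumption (I = ½mr²) is mine, not from the question:

```python
import math

m = 10.0          # total mass (object + table), kg -- assumed
r = 0.25          # table radius, m -- assumed
w = 2 * math.pi   # target angular speed, rad/s (1 rev/s) -- assumed
t_up = 2.0        # time allowed to reach full speed, s -- assumed

# The formula from the question (treats the whole mass as a point at radius r):
T_question = m * w**2 * r**2

# Rigid-body spin-up torque: T = I * alpha, with I = (1/2) m r^2 for a uniform disc
I = 0.5 * m * r**2
alpha = w / t_up
T_spinup = I * alpha

# At constant speed, the remaining steady torque is set by friction losses only.
```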

How can I get the table tab in Amos v23?
How should I proceed in obtaining this information? Additionally, the article states that prior to analysis, each sample was spiked with the internal standard (Rh). Was the sample manually spiked with the internal standard? Also, what was the dilution factor used for each sample, as it was not mentioned in the article?
Thank you for your assistance.

Particularly the formula for the block sum of squares.
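Assuming this refers to a randomized complete block design (RCBD): with t treatments and b blocks, the block sum of squares is SS_block = (1/t)·ΣB_j² − G²/(bt), where B_j are the block totals and G is the grand total. A small numeric sketch (the yields are made up):

```python
import numpy as np

# Rows = blocks, columns = treatments (hypothetical RCBD yields)
data = np.array([
    [12.0, 14.0, 11.0],
    [15.0, 17.0, 14.0],
    [10.0, 12.0,  9.0],
])
b, t = data.shape

# SS_block = (1/t) * sum(B_j^2) - G^2 / (b*t)
block_totals = data.sum(axis=1)
G = data.sum()
ss_block = (block_totals ** 2).sum() / t - G ** 2 / (b * t)

# Equivalent mean-based form: t * sum((block mean - grand mean)^2)
ss_block_alt = t * ((data.mean(axis=1) - data.mean()) ** 2).sum()
```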
When calculating the rates, the attribute table of transect_rates doesn't work: the calculated NSM, EPR and LRR values do not appear. Does anyone know what the solution is?
Thank you


Does anybody have a scan of the front matter, including the Table of Contents, of Volume 2 of the Proceedings of CCCT 2004, Austin, TX, USA?
very much appreciated
I'm confused about which values to use from the table below; can you help me compute the Ca/P ratio?
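Without seeing your table's column labels, I can only sketch the usual conversion: if the values are weight %, divide each by its atomic mass before taking the ratio; if they are atomic %, the ratio of the two numbers is already the molar Ca/P. The numbers below are hypothetical:

```python
CA_MASS = 40.078   # atomic mass of calcium, g/mol
P_MASS = 30.974    # atomic mass of phosphorus, g/mol

def ca_p_molar_ratio(ca_wt_pct: float, p_wt_pct: float) -> float:
    """Molar Ca/P ratio from weight-percent values (e.g. EDS/XRF output).
    If the table already reports atomic %, just divide the two numbers."""
    return (ca_wt_pct / CA_MASS) / (p_wt_pct / P_MASS)

# Hypothetical values close to stoichiometric hydroxyapatite (Ca/P ~ 1.67):
ratio = ca_p_molar_ratio(39.9, 18.5)
```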


Dear ResearchGate Community,
I am seeking guidance regarding the appropriate statistical analysis for my research study. In my study, I have two groups (Control and Experimental) and two states (Pre and Post). I conducted a Repeated Measures ANOVA with the factors of states, States*group interaction, and error(states) to analyze my data. However, I am unsure if this is the most suitable test for comparing the differences between the two groups.
Additionally, I am seeking advice on how to effectively present these findings in a result table following the guidelines of the APA (American Psychological Association) style. Should I create two separate tables, one for descriptive statistics and the other for the ANOVA table? I would appreciate any assistance in formatting the result table specifically for the factors of States, States*group interaction, and error(states).
Thank you in advance for your valuable insights and assistance
From the link https://gtexportal.org/home/datasets, under V7, I'm trying to do R/Python analyses on the Gene TPM and Transcript TPM files. But in these files (to open them I had to use Universal Viewer, since they are too large to view with an app like Notepad), I'm seeing a bunch of IDs for samples (i.e. GTEX-1117F-0226-SM-5GZZ7), followed by transcript IDs like ENSG00000223972.4, and then a bunch of numbers like 0.02865 (which take up about 99% of the large files). Can someone help me decipher what the numbers mean, please? And are the numbers supposed to be assigned to a specific sample ID? (The number of IDs far exceeds the number of samples, by the way.) I tried opening these files as tables in R, but I do not think R is categorizing the contents of the file correctly.
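As far as I know, the Gene TPM file is in GCT format: two header lines (a version string, then the matrix dimensions), followed by a table whose rows are genes (Name = Ensembl ID, Description = gene symbol), whose columns are the GTEX- sample IDs, and whose cells are TPM values, one number per (gene, sample) pair. A sketch using a miniature stand-in file; pandas should handle the real gzipped file the same way with `skiprows=2`:

```python
import io
import pandas as pd

# A miniature stand-in for a GTEx .gct file (the real ones are gzipped and huge)
gct = io.StringIO(
    "#1.2\n"
    "2\t2\n"
    "Name\tDescription\tGTEX-1117F-0226-SM-5GZZ7\tGTEX-111CU-1826-SM-5GZYN\n"
    "ENSG00000223972.4\tDDX11L1\t0.02865\t0.01040\n"
    "ENSG00000227232.4\tWASH7P\t3.21000\t2.87000\n"
)

# Skip the two GCT header lines; rows = genes, columns = samples, cells = TPM
tpm = pd.read_csv(gct, sep="\t", skiprows=2, index_col="Name")

# e.g. the TPM of one gene in one sample:
value = tpm.loc["ENSG00000223972.4", "GTEX-1117F-0226-SM-5GZZ7"]
```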
For your kind ref. I have attached some doi links:
DOI: 10.1002/adfm.202107650 table 1
Hello. I am trying to implement space vector PWM control for a permanent magnet motor. In my work, the permanent magnet motor is represented by two 3D look-up-table-based flux maps. I have generated the necessary switching signals for the Universal Bridge block, which works as an inverter. My plan is to measure the three phase voltages with a Three-Phase V-I Measurement block and then use the Park transformation to convert the abc voltages to d-q voltage values which, after some mathematical operations, will be inputs to the look-up tables. However, I am facing two issues.
(1) I can not connect the output of the three phase VI measurement to a multiplexer through which I can connect the three phase voltages to the abc to dq0 block (as highlighted in the attached image). Is there any converter block required so that they can be connected?
(2) I need to measure the phase voltages (phase to ground). However, in my model there is ground connection. Will I be able to measure the phase voltage?
