
Tables - Science topic

Tables are presentations of nonstatistical data in tabular form.
Questions related to Tables
  • asked a question related to Tables
Question
3 answers
I am making a circular map with BRIG, where I use multi-fasta files as a reference and 5 draft genomes as queries. The problem is that I don't get rings for my specific genomes, though BRIG doesn't give any error message. I can also see that BLAST tables are created for all the genomes. If anyone has had the same problem and could suggest a solution, that would be very helpful! Thanks
Relevant answer
Answer
Use gbk files rather than fasta. It worked for me.
  • asked a question related to Tables
Question
1 answer
The article assesses trace metals in pond water.
I have stated: "Certified standard reference materials from the National Institute of Standards and Technology (NIST), USA, were used to perform this study. Quality was ensured during laboratory work by spike recoveries for each metal, blanks, independent standards, and duplicate checks." I then inserted a table:
Table 1: Summary of AAS protocols and LoD, LoQ and recovery (%) of the study
But the reviewer is not satisfied. What are the probable solutions? Thanks in advance.
Relevant answer
Answer
There are many reference materials available for purchase, and NIST is only one provider of many. If the reference material you tested (e.g. solid mine tailings) is not relevant to the study (e.g. a freshwater pond), there is not much point reporting the data, as the matrices aren't similar enough to know whether your method is applicable. In the above sentence you haven't divulged any helpful information to the reader about the reference material.
To demonstrate your method is fit-for-purpose, you need to report which particular SRM was used and which metals were recovered within the specified limits on the certificate (otherwise it wasn't a "certified" reference material).
Did you spike blanks or the reference material? Your text doesn't really make it clear either way.
What were the "independent standards" - were they a second preparation made from the same stock source, or were they prepared from an entirely different stock, from a different material or vendor?
Unless your article is subject to extremely stringent word limits, you may want to expand the two sentences you have shown above with a much more expansive paragraph.
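For anyone unfamiliar with the quantities being discussed, here is a minimal sketch of the recovery arithmetic in Python; every number in it is an invented illustration, not data from the study or any SRM certificate:
```
# Minimal sketch of spike-recovery and SRM-recovery checks for AAS QC.
# All concentrations are invented placeholders (mg/L).

def spike_recovery(spiked_result, unspiked_result, amount_added):
    """Percent recovery of a known spike added to the sample matrix."""
    return 100.0 * (spiked_result - unspiked_result) / amount_added

def srm_recovery(measured, certified):
    """Percent recovery against a certified reference material value."""
    return 100.0 * measured / certified

# 0.050 mg/L Pb spiked into pond water that measured 0.012 mg/L unspiked:
print(spike_recovery(0.059, 0.012, 0.050))  # ~94%
# SRM certified at 0.085 mg/L Cd, measured at 0.078 mg/L:
print(srm_recovery(0.078, 0.085))           # ~92%
```
Reporting these percentages per metal, alongside the certificate's acceptance limits, is usually what reviewers want to see.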
  • asked a question related to Tables
Question
4 answers
Can Iran’s economy be rebuilt? Have thinkers in economics, humanities, and political science in Iran concluded that the Iranian economy can be made dynamic, advanced, and developed again? Or not?
There are many shared historical experiences and similarities between Iran and China. Both are legacies of long-lasting empires and civilisations in West and East Asia, respectively. Like other great Asian empires, Iran and China were confronted with the expansion of the European imperial powers in the early nineteenth century, which ultimately led to the dislocation of these ancient empires.[1] Both countries resisted pressures towards peripheralisation in the global economy through nationalist popular revolutions and by building modern nation states and identities in the first half of the twentieth century. Despite different political systems, cultures, and external relations, both Iran and China have been trying to escape from external pressures and internal socio-economic backwardness by modernising their states, societies, and economies via a state-led catch-up development strategy. These efforts led to the rise of China in the late 20th century and the emergence of post-Islamic-revolutionary Iran (1978/79) as a “contender state”[2] to the hegemony of the United States (US) in West Asia.
This article studies the impacts in the 19th century of the European-dominated global system on Imperial China and Iran. The expansion of European imperial powers through trade domination and (semi-)colonisation exposed these two empires to the pressures of marginalisation, peripheralisation, internal strife, and loss of territory, ultimately leading to the responses of social revolutions, nation-state building, and state-driven industrialisation. These efforts led to the rise of China in the late 20th century and the emergence of Iran as a “contender state” against the hegemony of the US in the Middle East and/or West Asia after the Iranian Islamic Revolution of 1978/79.
When the Trump Administration[3] pulled out of the Iran nuclear deal, Iran’s long-awaited economic rebound stalled through the continuation of sanctions. Trump’s Administration also announced many new critical sanctions on Iran’s strategic institutions, economic sectors, and the key elements of the ruling elites. After more than 40 years of isolation, embargoes, and threats of war, Iran is far from being recognised as a regional power. It has become accustomed to isolation because it aims to challenge US hegemony and to forge “a geopolitical order” in the Middle East before a successful catch-up drive. As for China, it generally refrained from offensive external relations, and after a century and a half of struggle against external pressures, in the early 21st century the People’s Republic of China (PRC) became the world’s second largest economy and a modern industrialised power, while Iran is still seeking regional power status in West Asia. China, having become the second largest economy, has changed strategies to pursue more assertive external relations.
This development raises two key questions: why did China succeed in rising as an industrialised regional and global power, and why has Iran’s development strategy failed so far? I argue that the main reason for post-revolution Iran’s failure to become the regional hegemon lies in two interconnected issues: (i) the failure of its economic development strategy, which was mainly caused by (ii) the “offensive” external involvement in its own region before a successful catch-up process.
Iran’s catch-up development strategy, which is the main material basis for the country’s rise, was hampered after the revolution by its “offensive, revolutionary and military-oriented foreign policy”. This strategy blocked Iran’s access to the capital, information, and technology concentrated in the core area of the global economy dominated by the US. Unlike Iran, China’s successful catch-up industrialisation was driven, in part, by rapprochement and consensus between Chinese leaders and the US and its allies in the 1970s. This strategy led China to distance itself from Mao’s revolutionary, offensive foreign relations and replace them with “defensive” and peaceful foreign relations in the era of its catch-up industrialisation (1980–2000s). The change and reorientation of China’s external relations paved the way for China to access the capital, information, and technology necessary for its successful economic development and eventually its rise.
Theory and practice of state, market, and development
The forms of, and relations between, state, society, and the market in both China and Iran differ from those of liberal, pluralistic countries. This raises several questions: (1) What is the form of political authority and market regulation in China and Iran? (2) How can we conceptualise the configuration of China’s and Iran’s state-society and market forms, compared with the liberal state-society and market model? (3) What are the forces behind China’s and Iran’s socio-economic policies and development?
Unlike the (neo-)realist perspective of a fixed state and state function, there is no fixed form of the state but, rather, a structure through which social forces and interest groups operate. At the global level, state-society and market complexes constitute the basic entities of international relations.[4] Forms of political authority vary through differences in the degree of autonomy in relation to both internal and external environments, including the inter-state system and the global political economy.[5] In advanced liberal societies, the state builds consensus between capital and labour in the development of socio-economic policy. In authoritarian and/or centralised societies, a framework of collaboration and domination between state and society, and capital and labour, is imposed in an authoritarian manner, reflecting the relative autonomy of the state from society.[6]
Generally, we can distinguish between two ideal types of state-society and market complexes in international relations: the “liberal state-society, market complex” (LSMC) and an “authoritarian” or “centralised state-society, market complex” (CSMC).[7] The liberal state-society complex is characterised by a relative distinction between a governing or political class and the ruling class – the latter being mainly the capitalist class, whose interests are predominantly represented by the governing class. One of the conditions for the creation of an LSMC is the existence of a strong civil society and market with relative autonomy of classes and interest groups – such as the capitalist, middle, and working classes. The emergence of a class-divided civil society and civil-society organisations is the product of capitalist industrial development. In the LSMC, civil society is relatively “self-regulating” because state intervention is less important in ensuring civil society’s proper functioning.[8]
On the other hand, in the CSMC (e.g. China and Iran), the distinction between ruling and governing classes is negligible. The “state class” derives its power from control of the state apparatus and intervenes in society and the market.[9] In this configuration, autonomous social forces, mainly a strong capitalist class, are either underdeveloped or dependent on the state. Neither could assert their interests independently of state power. Thus, in the CSMC, a framework of collaboration between capital and labour is imposed in an authoritarian manner, reflecting both the state’s autonomy from society and the market, and its control over domestic and external relations. Together with the centralisation of state power, the promotion of a state-led development strategy (i.e. long-term socio-economic, political, and cultural modernisation) is one of the driving forces of the state class.
China’s successful capitalist industrial development, accompanied by the ambitions of its leaders, created the propensity to gain a larger share of the world’s economy and resources,[10] embodied in the Going Out Strategy and the BRI.[11] Despite the geopolitical challenges of realising this, China’s industrial development – including military industrialisation and the formation of multilateral institutions like the Asian Infrastructure Investment Bank, the Shanghai Cooperation Organisation (SCO), and BRICS (Brazil, Russia, India, China, and South Africa) – has facilitated its rise in the global wealth-power hierarchy.
Whilst China left the global economy’s periphery, its success and integration into the global political economy’s core come at the cost of domestic control. The Iranian experience of state-led industrialisation (mainly in the 1960s and 1970s) was a success story amongst Asian developmental states. However, Iran’s successful development strategy was discontinued by the post-revolutionary “offensive, revolutionary, and military-oriented regional-external relations”, which, as stated, blocked access to the capital, information, and technology concentrated mainly in the US-dominated global economy. In contrast, China’s successful catch-up industrialisation was driven, in part, by rapprochement and consensus between Chinese leaders and the US (and its allies) in the 1970s, when China reoriented Chairman Mao’s offensive and revolutionary external relations towards defensive and peaceful relations, thereby facilitating access to capital, information, and technology for its successful economic development and eventual rise.
The global wave of state-led industrialisation
The post-imperial Chinese and Iranian political economy of state-led industrialisation is neither unique nor exceptional. Considering the rise and expansion of industrial capitalism from Europe over 250 years, the CSMC has emerged at different times and in different places as a response to two pressures: external pressure towards colonisation and domestic backwardness in political and socio-economic structures. The dialectic of these two factors led a limited number of the leaders of peripheral states to resist peripheralisation in the emerging global political economy by forming a centralised state and pursuing self-reliant catch-up development from above.[12] After WWII, some Asian states such as China, the Asian Tigers (i.e. Hong Kong, Singapore, South Korea, and Taiwan), Turkey, Iran, and India tried to resist economic backwardness and their peripheral position in the Western-dominated global political economy via autonomous, state-led catch-up industrialisation strategies. None industrialised under a liberal regime.[13]
European expansion, peripheralisation and resistance in China and Iran
China’s imperial disintegration and peripheralisation in the European-centred world economy began when Europeans appropriated shipping and merchant activities from indigenous traders in the early 19th century.[14] From the late 19th century until 1949, the heavy price that China paid for resisting such an existential threat to its survival included millions of victims, the systematic appropriation of large areas of its territory, the quagmire of a brutal civil war between nationalist and communist fronts, and the formal loss of Taiwan. Nevertheless, in 1949, the Chinese Communist Party (CCP) Chairman, Mao Zedong, grandly announced that his people had finally brought a decisive end to the “century of humiliation” at the hands of internal and external enemies. Hence, with the establishment of the PRC, the CCP proclaimed itself the vanguard and supreme saviour of the Chinese nation. As a result, for more than three decades, nationalist calls were completely eclipsed by the strength of the new official political system and ideology.
Equally, from the mid-19th century onwards, Persia was confronted with the expansion of European imperial powers (in particular Britain and Russia), which began to have a significant military, political, and economic impact on the country’s political economy.[15] The competition between Russia and Britain invited the Persian court to engage in balancing acts between its two enemies. European expansion eventually led to the Persian Empire’s peripheralisation and the incorporation of its economic system into the global capitalist system,[16] which marked the beginning of the local economy’s disintegration and subordination to the capitalist world economy, the growth of foreign trade, and specialisation in the production of raw materials.
The political economy and security strategy of post-revolution Iran (1980–2020)
After the emergence of the Islamic Republic of Iran (IRI) during 1979/80, its political economy of development and external relations changed drastically. The core of Iran’s post-revolutionary foreign policy centres on the “export of the revolution” and efforts to create a “geopolitical order” in West Asia. These new external relations led to a shift in the hierarchy of the triad of oil surplus, economic development, and security strategy in Iran. While the Shah used oil revenues mainly for economic development, the post-revolutionary ruling class emphasised the military-security apparatus, thereby subordinating the “national development strategy”. The core of external relations was gradually redesigned by the leaders of the IRI as an “offensive” military strategy (predominantly in the Middle East). In this context, the IRI’s ruling class, among others, attempted to mobilise anti-American, revolutionary, Islamic-oriented peoples and organisations globally for the realisation of its strategic goals. Despite contradictory interests among factions of the ruling class, external relations remained unchanged. This core of external regional relations was aimed at forging a geopolitical order and gaining hegemonic status in its own region. The consequence of this policy-strategy is that the US and its regional allies block and hinder Iran’s ambitions and national development strategy.
A key force in this strategy is the Islamic Revolutionary Guard Corps (IRGC). Its main purpose has been to protect the revolution from within and beyond Iran’s borders, while expanding Iran’s sphere of influence.[43] This key arm of the IRI gradually became more influential in Iran’s economy and politics. The elite Quds Force – responsible for the IRGC’s foreign operations – emerged as one of the most significant Iranian armed forces, maintaining a network of paramilitary and Islamic revolutionary forces in Lebanon, Iraq, Syria, Yemen, and elsewhere. This strategy was confronted by the US, which attempted to trigger regime change using, among other means, strategic and structural sanctions on Iran’s politics, economy, and military. The key sanctions against Iran’s oil and military industry came from the United Nations Security Council, the US, and its allies. Although UNSC sanctions were lifted in 2016, sanctions by the US and its allies were reimposed after the US withdrawal from the Joint Comprehensive Plan of Action (JCPOA) as part of President Donald Trump’s “maximum pressure campaign”, which has been continued under President Joe Biden. By targeting strategic economic sectors and companies (including oil, military, finance, and automotive) and blocking Iran’s ability to earn revenues from oil exports and to import and export weaponry and military technology,[44] US sanctions have hit Iran’s economy hard. Through the dollar’s position as a global reserve currency and the designation of the IRGC, among others, as a terrorist organisation, the US has also restricted companies from other countries from doing business with Iranian companies. In turn, this hostile environment reinforces the IRI’s determination to develop its domestic military capabilities and to mobilise social and material forces in the Middle East aimed at pushing the US out of the region. Thus, the experiment of Iran’s rapid industrialisation after the revolution was hindered.
The causes of this problem may be traced back to the external relations mentioned above, which have also influenced the policy of the political economy of development. As the Iranian economy remains heavily based on fossil fuels, GDP growth is largely driven by the export of oil and gas[45] and less by the productivity of a modern (non-oil) sectoral economy. Although many modern economic sectors exist in Iran’s economy, their growth and development occur at a very slow rate, as sanctions have prohibited Iran from accessing capital, technology, and information (see also Figure 2). The external relations based on conflict, and the political economy of development policies that mainly emphasise the security-military sectors, are a permanent factor in Iran’s development crisis. Below, we present selected economic data which indicate the structural impasse of Iran’s economy after the revolution.
Oil production and export remain key to Iran’s economy despite production remaining below pre-revolutionary levels (Figure 1). As shown in Table 2 and Figure 2, Iran’s manufacturing growth rates were high compared to other developmental states and even outperformed India, Indonesia, and Turkey, but the post-revolutionary change in domestic policy priorities, which allocated oil revenues to the development of the security apparatus, impeded Iran’s success. This left Iran, at US$64bn, behind many of its peers and even the city-state of Singapore (US$65bn). Another major post-revolutionary problem is high inflation and currency depreciation (Figure 3), which, coupled with low oil production, prevented high economic growth and the development of trade relations despite the temporary lifting of sanctions after the signing and implementation of the JCPOA in the mid-2010s (see Figure 4). These impediments to Iran’s industrialisation are reflected in its GDP, which grew by only 52% (1976–2018), 22% of which has been dependent on oil since 1991. For the average Iranian, this means that pre-revolutionary incomes were higher (see Figure 5).
To sum up, the Islamic Revolution severely distorted Iran’s industrialisation. The Shah’s use of oil revenues and the security apparatus in the service of rapid state-led industrialisation, with de-escalation of tensions in external relations, was crucial to Iran’s socio-economic development strategy. The pivot towards offensive external relations, in which oil revenues are used to develop military-security capacities, led to sanctions, the subordination of economic development within the triangular strategy, and a lack of capital, information, and technology. To create the conditions for the lifting of sanctions and to realise its long-awaited catch-up development strategy, this article contends that Iran needs to change its external relations back to “defensive, peaceful” external relations. Unlike Iran, China’s successful catch-up industrialisation was driven, in part, by rapprochement and consensus between Chinese leaders and the US and its allies in the 1970s. This strategy led China to distance itself from Mao’s revolutionary, offensive foreign relations and replace them with “defensive” and peaceful foreign relations in the era of its catch-up industrialisation (1980–2020). The change and reorientation of China’s external relations paved the way for China to access the capital, information, and technology necessary for its successful state-led development and, eventually, its rise.
Relevant answer
Answer
Some more literature, in English, that I have found interesting on the Iranian economy:
Bertelsmann Stiftung’s Transformation Index (BTI). 2022. Iran: BTI Country Report 2022. Gütersloh, Germany.
Department of Foreign Affairs and Trade (DFAT). 2023. DFAT Country Information Report: Iran. Australian Government.
Ferro, C., Rosenberg, P. & Salama, D. 2023. Islamic Republic of Iran: Political Risk Report. Centre of Global Affairs and Strategic Studies, University of Navarra, Pamplona, Spain.
World Bank. 2023. Iran, Islamic Republic. Washington, D.C.
I found the publication by Ferro et al. (2023) to be the most interesting and somewhat in line with my own (humble) research on the Iranian economy. Ferro et al. claim that the Iranian economy has, over the past five decades, shown itself to be remarkably strong and mostly resilient to adversities deriving from both internal and external sources.
  • asked a question related to Tables
Question
2 answers
How many figures and tables can we include in Elsevier articles?
Relevant answer
Answer
Every journal has an "Information for Authors" section on its website. Elsevier typically allows 3 to 5 figures, but the exact limit can vary depending on the specific journal you are submitting to within Elsevier. To find out the precise guidelines, be sure to check the "Information for Authors" for the particular journal you are targeting. If you face restrictions on the number of figures, consider combining multiple components into a single figure. For example, you can create one figure that includes several parts labeled a, b, c, and d, allowing you to display 4 figures as a single image.
  • asked a question related to Tables
Question
1 answer
Question about SPSS Process Model 4 which tests mediation
Relevant answer
Answer
Happens all the time and can be caused by several things. The most common is some sort of misspecification (maybe the mediator isn't that relevant in this particular context). It could also simply be a power issue if the mediating effect is harder to detect (small sample). Or it could be caused by suppressor effects, where another variable in your model can "diminish" the indirect effects too.
  • asked a question related to Tables
Question
1 answer
This table is used to find fh/U at a given log g for the formula-method calculation of f0. However, what are the values at the top of the table (0.0, 0.01, 0.02, ...), and how are they used?
Relevant answer
What is the source of the table?
  • asked a question related to Tables
Question
1 answer
for constructing a 2D structure using a composite genetic code table
  • asked a question related to Tables
Question
1 answer
Dr Pethuru Raj (PhD, SMIEEE), Dr Sundaravadivazhagan (PhD, SMIEEE), and I are pleased to invite you to contribute a chapter to our forthcoming Elsevier book, Advances in Computers: Cloud-Native Architecture (CNA) and Artificial Intelligence (AI) for the Future of Software Engineering. This publication seeks industry perspectives, practical insights, and cutting-edge research on CNA and AI to shape the next generation of software engineering.
We are specifically looking for chapters on (but not limited to) the following themes:
Foundations of Cloud-Native Architecture
(Microservices, containers, orchestration platforms, serverless computing)
AI and Machine Learning in Software Engineering
(Automated code generation, predictive analysis, intelligent testing)
DevOps, CI/CD Pipelines, and Automation
(Best practices in cloud-native development, AI-driven CI/CD)
Scalability and Performance Optimization
(Resilience engineering, performance monitoring, observability in distributed systems)
Security, Privacy, and Compliance in Cloud-Native Environments
(Secure development practices, threat modeling, regulatory requirements)
Edge Computing and Hybrid Cloud Solutions
(Decentralized processing, IoT integration, data management at the edge)
Data Engineering and Big Data Analytics
(Data pipelines, real-time analytics, AI-driven data processing)
Industry 4.0, Emerging Trends, and Future Directions
(Innovations that leverage CNA and AI, potential disruptions in software engineering)
We warmly encourage submissions that showcase industry know-how, case studies, and real-world implementations.
Key Dates & Deliverables
  • Final Table of Contents & Author List (including email addresses): due by Friday, 25th March
  • Chapter Submission Deadline: 1st July
  • Final Material (Ready for Production): 1st October
We ask that you confirm your proposed chapter title, list of authors, and contact details by Friday, 25th March to help us finalize the Table of Contents. Please feel free to reach out if you have any questions or require additional guidance on aligning your submission with the book’s objectives.
You can share your proposal, abstract, and contact details with us directly by replying to this email or by sending them to:
We appreciate your prompt response and look forward to including your valuable perspectives in this publication. Thank you for your time and collaboration.
Relevant answer
Answer
Hello Dr. Pushan Kumar Dutta, this looks like an interesting opportunity and I would be interested. I'll send you an email soon.
Warm Regards,
Aldo Augustine
  • asked a question related to Tables
Question
4 answers
I am a materials engineer.
I want to know: what would be the voltage between one of the battery terminals and the ground? And between a battery terminal and any metal (say, an aluminium can on a table)?
What exactly would the circuit look like in that case?
How much will the readings change between when the battery is charged and when it is discharged?
Relevant answer
Answer
Thank you for the response
  • asked a question related to Tables
Question
1 answer
Because they will be used like table fruit or salad.
Relevant answer
Answer
Dear colleague,
They are table fruit, at least in my geographical area. Was this your question, or did I misunderstand you?
  • asked a question related to Tables
Question
1 answer
We are excited to announce the return of the ITTF Sports Science Congress, set to take place on 15-16 May 2025 at Aspetar, Doha, Qatar—a world-leading specialized orthopaedic and sports medicine hospital.
After a six-year hiatus since the last Congress in 2019, we are bringing back this key event to foster collaboration among physicians, allied healthcare practitioners, sports scientists, coaches, and sports managers. The Congress will cover cutting-edge research in sports science and medicine, and will feature a diverse range of topics, including prevention of common injuries in table tennis players, travel sports medicine, and aspects related to sleep, biomechanics, physiology, nutrition, fitness testing, training, perceptuo-motor skills, match analysis, para table tennis, youth development, table tennis as a health sport, anti-doping, mental and psychological aspects, gender equality, diversity and inclusion, coaching, governance, integrity, equipment, esports, and sustainability.
More information about registration, full agenda, and call for papers:
Relevant answer
Answer
I will consider it a great privilege to be part of this congress.
  • asked a question related to Tables
Question
2 answers
I would like to assess the performance of a non-survey regionalisation method in order to produce an Inter-Regional Input-Output Table (IRIOT) for France. I therefore wish to replicate the Monte Carlo method used by Bonfiglio and Chelli (2008), who used 1000 randomly generated IO tables to compare the performance of several regionalisation methods.
However, their IO tables were 20-region × 20-sector tables, which is 160,000 cells per table, repeated 1000 times per method, and they tested 22 methods. My computer can't manage that many calculations.
I was wondering whether using 10,000 randomly generated smaller IRIOTs (3 regions × 3 sectors), which are lighter to compute, would work, and, by extension, whether the sectoral disaggregation has an effect on the performance of non-survey regionalisation methods?
The goal is to determine whether my regionalisation method is statistically good enough to apply it to building a 22-region × 38-sector table for France (22 regions × 64 sectors eventually).
Thanks
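A minimal sketch of the Monte Carlo loop being described may make the experiment concrete. The naive margin-based estimator below is only a placeholder for a real non-survey regionalisation method, and the 3-region × 3-sector size and WAPE error measure are assumptions for illustration, not choices taken from Bonfiglio and Chelli (2008):
```
# Sketch: Monte Carlo evaluation of a regionalisation method on small random
# IRIOTs (3 regions x 3 sectors). Replace naive_estimate with your method.
import numpy as np

rng = np.random.default_rng(42)
R, S, N = 3, 3, 10_000            # regions, sectors, replications
dim = R * S                       # 9 x 9 interregional flow matrix

def random_iriot(dim, rng):
    """Random non-negative flow table standing in for a 'true' IRIOT."""
    return rng.gamma(shape=2.0, scale=10.0, size=(dim, dim))

def naive_estimate(Z):
    """Placeholder estimator: rebuild the table from its margins only."""
    return np.outer(Z.sum(axis=1), Z.sum(axis=0)) / Z.sum()

def wape(true, est):
    """Weighted absolute percentage error between true and estimated tables."""
    return np.abs(true - est).sum() / true.sum()

errors = [wape(Z, naive_estimate(Z))
          for Z in (random_iriot(dim, rng) for _ in range(N))]
print(np.mean(errors), np.percentile(errors, [5, 95]))
```
Whether results on 3 × 3 tables generalise to 22 × 38 is exactly the open question; a small pilot at an intermediate size would be a useful check before committing to the full experiment.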
Relevant answer
Answer
Dear Mr Rajabi
Thank you for your answer
Sincerely
Jérémy Pantet
  • asked a question related to Tables
Question
7 answers
Thermodynamics
==============
Zero-point energy = 3.369180 eV
------------------------------------------------------------------------------
    T(K)        E(eV)        F(eV)     S(J/mol/K)    Cv(J/mol/K)
   298.0     3.541961     3.298459         78.840        170.326
------------------------------------------------------------------------------
================================================================
Relevant answer
Answer
What is the meaning of the following in the output file of the optimised cell:
Pseudo atomic calculation performed for C 2s2 2p2
Converged in 19 iterations to a total energy of -145.7159 eV
Pseudo atomic calculation performed for N 2s2 2p3
Converged in 23 iterations to a total energy of -261.3616 eV
Are these the energies of the free C and N atoms, or their energies in the cell?
  • asked a question related to Tables
Question
1 answer
The paper below may be the first guideline for the Adaptive Regional Input-Output (ARIO) model from Stéphane Hallegatte:
I do not clearly understand the guidance in Appendix B about making a Local IO Table.
If you understand it or have experience with it, please support and discuss it with me!
Many thanks and best regards!
Relevant answer
Answer
How to Create a Local Input-Output (LIO) Table from the National IO Table and GSP for the ARIO Model
Creating a Local Input-Output (LIO) Table from a National Input-Output (NIO) Table involves scaling national transactions to match the economic structure of a specific region. The ARIO model requires this step to estimate regional economic responses to disruptions. Below is a step-by-step guide without formulas.
1. Understanding the Purpose
  • The goal is to downscale a national IO table so that it accurately represents regional economic activity.
  • The GSP (Gross State Product) provides a measure of the total economic output of a region, which helps scale the national values appropriately.
  • Since regional economies differ from the national average, adjustments must be made to account for differences in sector size, trade, and dependencies.
2. Identify National Data and Regional Data
  • Obtain the National IO Table, which shows how industries interact at the national level.
  • Gather regional data, including: GSP or regional GDP; employment statistics by sector; any available trade or supply-chain data.
3. Adjust National IO Values to Reflect Regional Structure
Since regional economies do not mirror national ones, adjustments are needed. These can be done using location quotients (LQ) or similar techniques, which compare the importance of an industry in the region to its importance at the national level.
  • If an industry is larger in the region than at the national level, assume that it produces more locally and relies less on imports.
  • If an industry is smaller, assume it imports more inputs from other regions or countries.
  • For missing industries, assume that all inputs must be imported from elsewhere.
This helps estimate the regional supply chain structure.
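Step 3 can be made concrete with a small sketch. This is the simple location quotient (SLQ) variant with invented numbers; the coefficient matrix, sector shares, and the min(SLQ, 1) rule are illustrative assumptions, not something prescribed by Hallegatte's appendix:
```
# Sketch: simple location-quotient (SLQ) adjustment of national coefficients.
import numpy as np

A_nat = np.array([[0.10, 0.20],
                  [0.05, 0.15]])        # national technical coefficients
nat_share = np.array([0.60, 0.40])      # sector shares of output, nation
reg_share = np.array([0.30, 0.70])      # sector shares of output, region

slq = reg_share / nat_share             # >1 means over-represented locally
scale = np.minimum(slq, 1.0)            # never scale a coefficient upward
A_reg = A_nat * scale[:, None]          # shrink rows of under-represented suppliers
imports = A_nat - A_reg                 # the shortfall is assumed imported
print(A_reg)
print(imports)
```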
4. Scale Sector Outputs to Match GSP
  • The total output of industries in the LIO table must match the region’s GSP to ensure consistency.
  • If sector-specific GSP data is available, adjust the IO table sector by sector to maintain realistic proportions.
  • If only total GSP is known, distribute it among sectors based on employment shares or historical data.
5. Estimate Interregional Trade
  • Many regions do not produce everything they need, so some goods and services are imported from other regions.
  • If trade data is available, use it to determine import/export relationships between regions.
  • If trade data is missing, assume: larger industries supply more locally, meaning they rely less on imports; smaller industries depend on other regions, meaning they import more.
This step helps model supply chain dependencies and economic linkages between regions.
6. Ensure Internal Consistency
Once all values are adjusted for regional production, imports, and GSP, the table must be balanced so that total outputs match total inputs.
  • This can be done using adjustment techniques like RAS or entropy-based scaling.
  • The goal is to make sure that every dollar of production, consumption, and trade is accounted for correctly.
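Step 6 is the easiest to show in code. Below is a minimal RAS (biproportional fitting) sketch; the seed matrix and margin targets are invented, and the only hard requirement is that row and column targets sum to the same total:
```
# Sketch: RAS balancing - alternately scale rows and columns of a seed
# matrix until its margins match the regional targets.
import numpy as np

def ras(seed, row_targets, col_targets, tol=1e-9, max_iter=1000):
    Z = seed.astype(float).copy()
    for _ in range(max_iter):
        Z *= (row_targets / Z.sum(axis=1))[:, None]   # match row sums
        Z *= (col_targets / Z.sum(axis=0))[None, :]   # match column sums
        if np.allclose(Z.sum(axis=1), row_targets, rtol=tol):
            break
    return Z

seed = np.array([[10.0, 20.0],
                 [30.0, 40.0]])
balanced = ras(seed,
               row_targets=np.array([40.0, 60.0]),
               col_targets=np.array([50.0, 50.0]))   # both sum to 100
print(balanced)
```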
7. Incorporate into the ARIO Model
  • The LIO table now represents the regional economy and can be used in the Adaptive Regional Input-Output (ARIO) model.
  • The model then simulates economic shocks, supply chain disruptions, and recovery scenarios based on the regionalized data.
Conclusion
To create an LIO table from a national IO table and GSP, follow these key steps:
  1. Start with the National IO Table as the base.
  2. Adjust industry outputs using regional employment or production data.
  3. Scale to match regional GSP for accuracy.
  4. Estimate regional trade to account for imports and exports.
  5. Ensure balance and consistency in the final table.
  6. Use the LIO Table in the ARIO model for economic simulations.
  • asked a question related to Tables
Question
1 answer
I am trying to run BioStudio and GeneDesign to design a chromosome. When running any of the BioStudio scripts (for example, BS_PCRTagger), I encounter the following error:
DBD::SQLite::db prepare_cached failed: no such table: locationlist at /usr/local/share/perl/5.34.0/Bio/DB/SeqFeature/Store/DBI/mysql.pm line 1807.
-------------------- EXCEPTION --------------------
MSG: no such table: locationlist
STACK Bio::DB::SeqFeature::Store::DBI::mysql::_prepare /usr/local/share/perl/5.34.0/Bio/DB/SeqFeature/Store/DBI/mysql.pm:1807
STACK Bio::DB::SeqFeature::Store::DBI::SQLite::_offset_boundary /usr/local/share/perl/5.34.0/Bio/DB/SeqFeature/Store/DBI/SQLite.pm:606
STACK Bio::DB::SeqFeature::Store::DBI::SQLite::_fetch_sequence /usr/local/share/perl/5.34.0/Bio/DB/SeqFeature/Store/DBI/SQLite.pm:562
STACK Bio::DB::SeqFeature::Store::seq /usr/local/share/perl/5.34.0/Bio/DB/SeqFeature/Store.pm:2054
STACK Bio::DB::SeqFeature::Store::fetch_sequence /usr/local/share/perl/5.34.0/Bio/DB/SeqFeature/Store.pm:1289
STACK Bio::BioStudio::Chromosome::sequence /usr/local/share/perl/5.34.0/Bio/BioStudio/Chromosome.pm:390
STACK toplevel /usr/local/bin/BS_PCRTagger.pl:88
I would appreciate any help to resolve this issue.
Relevant answer
Answer
# 1. First, reset the BioStudio database
rm -f biostudio.db
BS-BuildDB.pl your_genome_file.gb
# 2. Install required prerequisites
cpan Bio::DB::SeqFeature::Store
cpan DBD::SQLite
# 3. Set correct permissions
chmod -R 755 .
chmod 666 biostudio.db # If file exists
# 4. Debug SQLite database (if needed)
sqlite3 biostudio.db
.tables # View existing tables
.schema # View database structure
# If you still encounter the "DBD::SQLite::db prepare_cached failed: no such table: locationlist" error:
# 5. Verify file paths and database initialization:
# - Check if BioStudio is properly installed
# - Ensure GenBank file is valid
# - Check write permissions in working directory
# - Make sure Bio::DB::SeqFeature::Store is properly initialized
# 6. If problem persists, try clean reinstall:
cpanm --force Bio::BioStudio # Force reinstall BioStudio
cpanm --force Bio::DB::SeqFeature::Store # Force reinstall SeqFeature Store
  • asked a question related to Tables
Question
2 answers
How to prepare a Shukalev classification chart/table to define the groundwater types?
Relevant answer
Answer
Shukalev, A. A. (1963). Hydrochemical Classification of Groundwater. Moscow: Nauka. Read about it in this reference. It will help you fantastically.
  • asked a question related to Tables
Question
1 answer
The importance of researchers thinking about creating focused teaching materials that include meaningful and illustrative images, as well as tables, lies in several key aspects. Firstly, such materials enhance understanding and comprehension by presenting complex concepts visually, making them easier to grasp. Images and tables also improve memory retention by organizing information in a way that is easier for students to recall during review or exams. Additionally, these materials simplify content and reduce complexity, helping students focus on the key points rather than being overwhelmed by details. Visual elements can also increase student engagement and interest, making the learning process more stimulating. Furthermore, incorporating images and tables caters to diverse learning styles, as some students learn better through reading, while others benefit from visual aids. Therefore, considering the creation of focused, visually-rich teaching materials is crucial for improving learning effectiveness and increasing students' academic performance.
Relevant answer
Answer
Research suggests that concise teaching materials incorporating visual aids such as images and tables can significantly enhance students’ learning and exam preparation by improving comprehension, retention, and engagement. Studies on cognitive load theory emphasize that learners can process limited amounts of information at a time, so shorter, well-organized materials reduce extraneous load, allowing students to focus on essential content (Sweller, 1988). Visual aids like images and tables further facilitate learning by leveraging dual-coding theory, which posits that information processed through both verbal and visual channels is retained more effectively (Paivio, 1991). Additionally, a study on the use of instructional visuals highlights that well-designed diagrams and charts help clarify complex concepts and foster deeper understanding (Mayer & Moreno, 2003). When such materials are aligned with exam objectives and emphasize core concepts in a clear, concise manner, students are more likely to find them manageable and effective for focused revision. However, the quality of visuals and their relevance to the topic are critical; poorly designed or overly complex visuals can hinder rather than aid learning. Furthermore, personalization—such as tailoring materials to the learners’ specific needs—can boost motivation and engagement, leading to better outcomes. In conclusion, researchers should aim to develop teaching materials that balance brevity with depth, incorporating strategically designed images and tables to promote efficient and effective learning for exams.
  • asked a question related to Tables
Question
4 answers
A database error has occurred: SQLSTATE[HY000] [1045] Access denied for user 'ojs'@'localhost' (using password: YES) (SQL: create table `announcement_types` (`type_id` bigint not null auto_increment primary key, `assoc_type` smallint not null, `assoc_id` bigint not null) default character set utf8 collate 'utf8_general_ci')
Relevant answer
Answer
Personally, I prefer using Apidog to do some quick testing to ensure that there isn't any error; the UI is much better than Postman's and makes it very easy to debug.
  • asked a question related to Tables
Question
2 answers
This question is intended to obtain processing software and guidance for input-output table analysis.
Relevant answer
Answer
Thank you very much.
  • asked a question related to Tables
Question
3 answers
Hi all,
I have a total of 279 participants for my measurement invariance analysis. The majority of articles in my research area cited Chen (2007).
Chen, F. F. (2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Structural Equation Modeling: A Multidisciplinary Journal, 14(3), 464-504. DOI: 10.1080/10705510701301834
However, I found it confusing as these authors cited the same article but used different ΔCFI and ΔRMSEA values as indicators.
As in Chen (2007), my understanding is that, with small samples (N < 300) and unequal sample sizes, ΔCFI > -.005 and ΔRMSEA < .010 indicate measurement invariance. Refer to page 501 and Tables 4 to 6.
But even studies with small samples that referred to Chen (2007) used different values. It is really confusing. Can someone please help me out? TIA
Relevant answer
Answer
I got some clues: Kline (2016) and some authors used the absolute value of the model fit change. Also, I think one author might have made a typo. If that is the case, then things make sense, and the 17 articles I read basically used similar indicators.
Kline, R. B. (2016). Principles and practice of structural equation modeling. Guilford Press.
  • asked a question related to Tables
Question
5 answers
Currently I am working on a study that (among other questions) compares effects between groups, where the effects within each group are expressed as odds ratios (ORs). We expressed the difference between groups in terms of statistical significance (p-value) but also wanted to add a measure of practical significance: effect size (ES). The attached study by Sullivan et al. (2012) describes that this can simply be done by dividing the ORs of the two groups. For instance: OR group A = 3.0, OR group B (reference group) = 1.5, so the effect size of the difference between these groups is (3.0/1.5 =) 2.0, which, according to Table 1 in Sullivan's paper, is a medium effect size. In this case, the equation says: the ES for the difference between the two ORs is calculated by dividing the OR of the intervention group (numerator) by the OR of the reference group (denominator).
However, what should be done when the OR of group B (the reference group) is larger than that of group A? In that case, the effect size would be smaller than 1, but Table 1 in Sullivan et al. suggests that an ES of 1 is the lowest possible ES. If that is true, this would favour an equation that says: the ES for the difference between the two ORs is calculated by dividing the largest OR (by definition the numerator) by the smallest OR (by definition the denominator).
Can anyone help me out with this question? Should the denominator be the OR of the reference group, or the smallest OR?
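One way to resolve the numerator/denominator puzzle is to work on the log scale, where a ratio of odds ratios (ROR) is symmetric around 1: an ROR of 0.5 is exactly as large an effect as an ROR of 2.0, just in the opposite direction. The sketch below uses the numbers from the question plus the standard Chinn (2000) conversion d = ln(OR) × √3/π as a magnitude check; treat it as an illustration of the symmetry argument, not as a ruling on Sullivan et al.'s table:
```
# Sketch: ratios of odds ratios are symmetric on the log scale.
import math

def or_ratio_effect(or_a, or_b):
    ror = or_a / or_b
    d = math.log(ror) * math.sqrt(3) / math.pi   # Chinn (2000) conversion
    return ror, d

print(or_ratio_effect(3.0, 1.5))  # (2.0,  ~0.38): A larger than B
print(or_ratio_effect(1.5, 3.0))  # (0.5, ~-0.38): same magnitude, flipped sign
```
On this reading, keeping the reference group in the denominator and interpreting an ROR below 1 as its reciprocal (with flipped direction) gives the same effect size either way.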
Relevant answer
Answer
Thank you Sharif Ahamed !
  • asked a question related to Tables
Question
7 answers
Dear colleagues in the research community,
As we know, there are two approaches to hypothesis testing of cross-tables: testing for independence and testing for correlation between variables. In both cases, for exact probabilities, we ask the same question: what is the probability of getting "this table" and the "more extreme tables". For independence tests, the traditional exact test is the (dominant) Fisher-Freeman-Halton (FFH) statistic, and for correlation tests, the Mehta-Patel (MP) algorithm is a widely used solution. In some cases, especially when the table is sparse and ordinal, these algorithms give conflicting, if not opposite, inferences. I recently faced a table where the exact probability by FFH could be p = 1, while MP was p < 0.001 because of high correlation. In the attached note, I ponder this issue and compare their strategies. It seems that FFH's result is confusingly wrong, and the reason is the way the FFH algorithm treats tables with the same probability as the one of interest. This claim is strong, and it calls for a larger discussion within the research community about FFH: Should we change the logic of FFH to avoid confusing results? If we should, why? If we should not, why not?
Relevant answer
Answer
The term "correlation" --- like all natural language words --- is "vague on the outside and muddy on the inside".
To some people "correlation" can only refer to Pearson, Spearman, or Kendall correlation. Others might add those appropriate for dichotomous variables (like phi).
I would use the term "measure of association" for things like Freeman's theta. Also for Cramér's V. And probably for things like Kendall's tau if I'm dealing with ordinal variables.
I think in naive English, "correlation" just means "association" in some vague way, like "ANOVA showed there was a correlation between height and gender."
I don't place much importance on these words. But I've seen the use of "correlation" cause confusion, if people have in mind a limited definition.
I think, ultimately, it comes down to the analyst deciding if the variables are being treated as nominal, ordinal, or interval/ratio, and that an appropriate measure of association is being used.
  • asked a question related to Tables
Question
3 answers
I am conducting experiments using a 1-g shake table and have collected data from both accelerometers and strain gauges. However, I am uncertain about the appropriate filtering method for the data. Should I apply a low-pass filter or a band-pass filter for optimal results? The shake table has a maximum frequency of 50 Hz, while the excitation frequency is 2 Hz.
Relevant answer
Answer
Good luck, Manisha Yadav
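For a 2 Hz excitation, a zero-phase low-pass filter is a common starting point. A minimal Python sketch follows; the 200 Hz sampling rate, filter order, and 20 Hz cutoff are assumptions to adapt to your logger and noise spectrum, not recommendations specific to this table:
```
# Sketch: zero-phase Butterworth low-pass for shake-table records.
import numpy as np
from scipy import signal

fs = 200.0                        # sampling rate (Hz) - an assumption
cutoff = 20.0                     # keeps 2 Hz excitation + low harmonics
b, a = signal.butter(4, cutoff / (fs / 2), btype="low")

t = np.arange(0, 5, 1 / fs)
raw = np.sin(2 * np.pi * 2 * t) + 0.2 * np.random.randn(t.size)  # 2 Hz + noise
clean = signal.filtfilt(b, a, raw)  # filtfilt runs the filter forward and
                                    # backward, avoiding phase distortion
```
A band-pass (adding a low corner around, say, 0.1 Hz) is mainly worth considering if the strain-gauge channels drift; otherwise the extra high-pass corner just complicates baseline interpretation.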
  • asked a question related to Tables
Question
2 answers
I'm extracting some data from an older paper and I've run into some units that are, to me, a little obscure. At first I thought I had it figured out: it's nutrient data reported in γ/mL (with the γ being a 'gamma' symbol). After looking into it, I found information suggesting that a gamma is equivalent to a microgram. Data reported in micrograms per millilitre made sense for what I was looking at, so I moved on.
I have now gotten to a new table in the paper with measurements reported in mγ/mL. If my previous assumption is correct, then that means this measurement is somehow milli-micrograms per millilitre? I'm a little perplexed because I don't see that making sense.
Is anyone here familiar with these units?
  • asked a question related to Tables
Question
1 answer
Dear All,
I am working on human gut microbial metagenome analysis.
I wonder if the 'canonical correspondence analysis' (CCA) technique, which is widely used in ecological studies, could be used to explore the effects of environmental variables on microbial pathway abundances. That is: sites = sample IDs, species = pathways and their abundances in each sample, environmental variables = various anthropometric data such as BMI, age, protein intake...
I assume if a pathway-abundance table and an environmental-variables table are provided, CCA would not care if it is a species abundance table or a pathway abundance table.
I look forward to your suggestions.
Relevant answer
Answer
Yes, canonical correspondence analysis (CCA) is a good method for studying how environmental variables affect the abundance of different pathways. It helps show how changes in the environment relate to changes in pathway presence and abundance. In simpler terms, CCA helps connect environmental conditions to the variety and quantity of pathways you find in your data.
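To make the data layout concrete: CCA only needs two row-aligned tables, and nothing in the algorithm inspects whether the response columns are taxa or pathways. Here is a minimal sketch of the expected shapes, with invented values; the actual CCA call would then go to whatever implementation you use (e.g. vegan::cca in R):
```
# Sketch: the two input tables a CCA expects, with placeholder values.
import pandas as pd

pathways = pd.DataFrame(                      # "sites x species" table
    {"PWY-101": [120, 85, 60],
     "PWY-205": [30, 55, 90],
     "PWY-310": [5, 12, 7]},
    index=["sample_1", "sample_2", "sample_3"])

env = pd.DataFrame(                           # "sites x environment" table
    {"BMI": [22.1, 27.4, 31.0],
     "age": [34, 51, 45],
     "protein_g": [70, 95, 60]},
    index=pathways.index)

# Row alignment is the only structural requirement CCA imposes.
assert (pathways.index == env.index).all()
```
One caveat worth checking: CCA assumes unimodal, chi-square-distance-style responses, which suits count-like abundance data; for compositional pathway abundances some authors prefer RDA on transformed data instead.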
  • asked a question related to Tables
Question
3 answers
Does anyone have a single table of all colorimetric agents for UV-VIS spectroscopy?
Relevant answer
Answer
There is no central database that keeps track of colorimetric agents, but here is a set of international standards:
  • asked a question related to Tables
Question
7 answers
The mistake in the article, which was published a few days ago, occurred under the responsibility of the journal's production team, not mine. I don't understand why I have to pay for open access and also have to tolerate the publisher's mistakes.
They swapped the column headings in one table so that each heading belongs to the neighboring column (it's a two-column table).
Suggestions/thoughts?
Relevant answer
Answer
This conversation is very useful and gives me an idea of what I should do. I will also be submitting my corrigendum.
May I please know the process to follow to submit the corrigendum? Do I have to submit it as a new article with all files attached?
  • asked a question related to Tables
Question
2 answers
At the moment I'm struggling with the documentation of my labs cell culture work, and would be happy to hear some suggestions.
I've established an Excel table that I thought was easy to use and straightforward, and that calculates some parameters automatically (population doubling time (PDT), cumulative population doubling). The intention was that each passage is one row in the table. This would be ideal if we had a defined optimal seeding-cell-density range for the cell lines and, with every passage, prepared only one (or multiple) flasks with the same seeding density.
However, when technicians prepare multiple flasks with various seeding cell densities within one passage step, the whole table is no longer useful: there is only one next row in the table (as the automatic calculations use the seeded and harvested cell numbers), but multiple new cultures that need to be followed separately, as, depending on how the cells behave, these flasks may have different outcomes (viability, VCD, PDT, next passage date).
I have used various paper-based cell culture documentation systems earlier; none of them were able to properly track such branching when multiple flasks were prepared with different parameters (seeding cell density or different volume) at the same time.
Has anyone experienced the same? Any idea what the best solution would be (besides, obviously, optimising cell culture conditions and then sticking to them later on)?
Relevant answer
Hi Daniel, I've also attempted the same thing as you, and as you've said, it's not practical when we seed at different densities or when the cells are just not behaving properly. My takeaway from doing this is that you basically can't really help it. Currently I only record cells that I've cryopreserved; there is no need for a specific row or column for different passages, I just record all of the details: passage no., PDL, etc. And instead of keeping a consistent PDL across passages, I just gave up and note down the PDL for each passage instead (yes, you need to manually calculate the PDL each time). But this way it's more practical and realistic. In the end, the important thing is for us to know the PDL and the date of cryopreservation. For record keeping, I also add a column for the thaw date and how many vials I've thawed, just to keep track of how many stocks are left. Hope this helps at least a little.
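A structural fix that can handle the branching problem is to stop making "one passage = one row" and instead make "one flask at one passage = one row", keyed by a lineage identifier. A minimal pandas sketch with invented numbers, using PDT = hours × ln 2 / ln(harvested/seeded):
```
# Sketch: "long" culture log - every flask at every passage is its own row.
import math
import pandas as pd

log = pd.DataFrame([
    # lineage, passage, flask, seeded, harvested, hours
    ("A",    12, "F1", 5.0e5, 2.1e6, 72),
    ("A",    12, "F2", 2.5e5, 1.3e6, 72),   # same passage, other density
    ("A-F2", 13, "F1", 5.0e5, 1.9e6, 96),   # daughter culture of flask F2
], columns=["lineage", "passage", "flask", "seeded", "harvested", "hours"])

log["PDT_h"] = (log["hours"] * math.log(2)
                / (log["harvested"] / log["seeded"]).apply(math.log))
print(log)
```
Because every flask carries its own seeded/harvested numbers, the automatic PDT calculation never collides with split cultures, and cumulative doublings can be summed per lineage. The same layout works just as well as extra columns in Excel.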
  • asked a question related to Tables
Question
2 answers
As I understand it, Vbi is the difference between the conduction band energy level at the absorber/ETL interface and at the absorber/HTL interface (EC_Abs/ETL - EC_Abs/HTL), divided by the elementary charge (q).
But I am confused about identifying the built-in potential (Vbi) value from the table.
Could anyone please explain this to me in full detail?
Relevant answer
Answer
The built-in potential (Vbi) is an important parameter in semiconductor devices, particularly in solar cells and other photovoltaic devices. It represents the potential difference that exists across the junction of two materials, typically the absorber layer and the electron transport layer (ETL) or the hole transport layer (HTL).
To identify the built-in potential value, you need to consider the energy band diagram of the device structure. The built-in potential can be calculated as the difference between the conduction band energy levels at the interface between the absorber and the ETL, and the interface between the absorber and the HTL, divided by the elementary charge (q).
Mathematically, the built-in potential can be expressed as:
```
Vbi = (EC_Abs/ETL - EC_Abs/HTL) / q
```
Where:
- `EC_Abs/ETL` is the conduction band energy level at the interface between the absorber and the ETL.
- `EC_Abs/HTL` is the conduction band energy level at the interface between the absorber and the HTL.
- `q` is the elementary charge (1.602 × 10^-19 C).
The key steps to identify the built-in potential value are as follows:
1. Understand the device structure: Identify the absorber layer, the ETL, and the HTL in the device.
2. Determine the energy band diagram: Construct the energy band diagram of the device, which shows the relative positions of the conduction band, valence band, and Fermi level for each layer.
3. Identify the conduction band energy levels: From the energy band diagram, locate the conduction band energy levels at the interface between the absorber and the ETL (`EC_Abs/ETL`), and the interface between the absorber and the HTL (`EC_Abs/HTL`).
4. Calculate the built-in potential: Use the formula `Vbi = (EC_Abs/ETL - EC_Abs/HTL) / q` to calculate the built-in potential value.
It's important to note that the built-in potential can be influenced by various factors, such as the materials used, doping concentrations, and interface properties. Therefore, the specific value of the built-in potential may vary depending on the device structure and the parameters of the individual layers.
Good luck; partial credit: AI.
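A numeric illustration of the formula above; the band-edge values are invented placeholders in eV, and since an energy difference in eV divided by the elementary charge comes out directly in volts, the code just subtracts:
```
# Sketch: evaluating Vbi = (EC_Abs/ETL - EC_Abs/HTL) / q with made-up levels.
EC_abs_etl = -3.9   # conduction band edge at absorber/ETL interface (eV)
EC_abs_htl = -4.9   # conduction band edge at absorber/HTL interface (eV)

Vbi = EC_abs_etl - EC_abs_htl   # eV per elementary charge == volts
print(f"Vbi = {Vbi:.2f} V")     # -> Vbi = 1.00 V
```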
  • asked a question related to Tables
Question
13 answers
I read here and there, trying to understand the mechanism that brings buildings down in an earthquake.
Civil engineers, professors of earthquake engineering, and regulations all try to make buildings invulnerable to earthquakes.
And yet, despite all the science, when there is a big nearby earthquake the structures are destroyed and we are flattened.
Things, to me, are simple. Too simple. But they don't want to listen. I challenge any engineer, any professor, to a dialogue about what I say below. I'll put it simply, so that even someone who is not an expert can understand.
Let's take 30 CDs placed on top of each other on a table.
If we move the table abruptly the 30 CDs will slide one on top of the other and the pile of CDs will be shattered.
If on these 30 CDs we place toothpicks between them as legs, the column of 30 CDs becomes much taller, and with a slight shake of the table it will collapse even more easily than before.
If we now replace the CDs with a building's floor slabs and the toothpicks with its columns, we have a 30-storey building.
We all now understand the simple mechanism that brings down the stack of 30 CDs and the 30-story apartment building.
Action - reaction, or acceleration - inertia.
This is the problem.
What is the solution?
The solution that would save us from the earthquake is so simple, and all the officials pretend not to understand when I tell them.
As for why they don't listen, don't answer, or pretend not to understand: ask them yourself.
They do not answer me.
The solution is this.
If, through the centre hole of the 30-CD stack, we drive a 45 nail into the table, then move the table as much as you want: the 30 CDs will stay on top of each other. That is the solution to the earthquake.
The nail driven into the table stopped the inertia of the 30 CDs.
I did the same thing.
I bolted the lift shaft of the structure to the ground, instead of nailing the nail to the table, and checked the inelastic deformation of the structure under the rocking of the earthquake.
What is the difference from today's constructions? In the method I propose, the soil participates by taking up the inertia of the structure and dissipating it into the ground before the beams break.
Relevant answer
Answer
Dear Doctor
[We cannot prevent natural earthquakes from occurring but we can significantly mitigate their effects by identifying hazards, building safer structures, and providing education on earthquake safety. By preparing for natural earthquakes we can also reduce the risk from human induced earthquakes.]
Dear Doctor
Go To
A Practical and Effective Solution to Earthquake (EQ) Catastrophe:
  • January 2021
  • International Journal of Geotechnical Earthquake Engineering 12(2):1-17
  • DOI:10.4018/IJGEE.2021070101
  • By Ozgur Yilmazer, Yazgan Kırkayak, Ilyas Yilmazer
[About 50-year direct observation indicated that any civil structure founded in/on rock does not get damage from earthquakes without tsunami effect. The main reason behind this is that the modulus of elasticity of saturated rocks is a million times greater than that of saturated soil units. Furthermore, all saturated soil units are susceptible to liquefaction at varying degrees. Based on the past observations, none of the structures founded in/on rocky ground have been affected from the recent destructive earthquakes studied by the authors in/and abroad. The studied earthquake cases highlighted again that the civil structures in/on rocky grounds, even adjacent to the epicenter, have not been affected from shaking of destructive earthquakes. In Turkey, the land needed for housing is one-hundredth of the country. However, 57% is proper for housing. The remaining 43% consists mainly of forest, restricted zones, rugged terrains, and soil land, which bears essentially plains and very locally landslides. Thus, earthquake disasters could be alleviated by implementing practical land use planning.]
  • asked a question related to Tables
Question
3 answers
Hello, which test can be used to calculate the p values in this table? If it is the chi-square goodness of fit test, how will we enter the expected values in SPSS? Thank you very much for your attention.
Relevant answer
Answer
I recommend you use R software.
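If you do go the R route, the goodness-of-fit form of the test takes the expected proportions directly through the p argument; a minimal sketch with invented counts:
```
observed   <- c(18, 22, 40)           # invented category counts
expected_p <- c(0.25, 0.25, 0.50)     # hypothesized proportions (must sum to 1)
chisq.test(observed, p = expected_p)  # chi-square goodness-of-fit test
```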
  • asked a question related to Tables
Question
4 answers
Hi everyone,
I ran a Generalised Linear Mixed Model to see if an intervention condition (video 1, video 2, control) had any impact on an outcome measure across time (baseline, immediate post-test and follow-up). I am having trouble interpreting the Fixed Coefficients table. Can anyone help?
Also, why are the last four lines empty?
Thanks in advance!
Relevant answer
Answer
Alexander Pabst I would add that the first thing to do is a likelihood ratio test to see whether the model with the fixed effects fits better than a model without them. I see that two of the interaction terms may be significant, but that's contingent on the overall system of variables being 'significant'. Personally I don't use Wald tests; their approximation sometimes isn't very good. I would use a stepwise LRT to determine whether a term (or system of terms) should be included in the model (although in some mixed-model situations one needs to use something like the BIC).
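As a minimal sketch of the stepwise-LRT idea (using lme4 with a Gaussian outcome for simplicity; all data and names below are invented, and ML rather than REML fits are used so the likelihoods are comparable):
```
# install.packages("lme4")
library(lme4)
set.seed(1)
d <- data.frame(
  id   = factor(rep(1:30, each = 3)),
  time = factor(rep(c("pre", "post", "fu"), times = 30)),
  cond = factor(rep(sample(c("v1", "v2", "ctrl"), 30, replace = TRUE), each = 3))
)
d$y <- rnorm(90)

m0 <- lmer(y ~ time + cond + (1 | id), data = d, REML = FALSE)  # no interaction
m1 <- lmer(y ~ time * cond + (1 | id), data = d, REML = FALSE)  # with interaction
anova(m0, m1)  # likelihood ratio test for the interaction terms
```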
  • asked a question related to Tables
Question
2 answers
How to change the displayed full article text to its corrected version? In the file on the page of the journal where I published the article, there was an error in the text, the table is incorrectly displayed. The journal has already corrected the content, but on ResearchGate there is still the old version, with the mistake. What should I do so that only the corrected version, which is already on the journal's website, would be displayed?
Relevant answer
Answer
First, delete the current file. See https://help.researchgate.net/hc/en-us/articles/14293099743121-How-to-make-content-private-or-remove-it for instructions. Then upload the new version. See the section "How do I add a full-text to my publication page?" in this help page for instructions: https://help.researchgate.net/hc/en-us/articles/14293005132305
  • asked a question related to Tables
Question
2 answers
Hello, when calculating the p value for the alleles in the table, how do we place the values in the chi-square test in the fourfold (2x2) table? Thank you very much for your attention.
Relevant answer
Answer
Well I'm confused. ;-) It looks like the numbers in parentheses are column percentages. And that suggests (to me, at least) that you have a 2x2 table, and should be using a Chi-square test of association. Using this online calculator...
I get Chi2 = 2.535, p = 0.111.
Q. How did you get the two Chi-square values you show? Are they goodness of fit tests? If so, what are the relevant expected frequencies, and how were they obtained?
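For reference, the 2x2 test of association in R takes the four cell counts directly; the counts below are invented:
```
tab <- matrix(c(30, 20,    # invented allele counts, group 1
                22, 34),   # invented allele counts, group 2
              nrow = 2, byrow = TRUE)
chisq.test(tab)                    # with Yates continuity correction (default)
chisq.test(tab, correct = FALSE)   # without, matching most textbook formulas
```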
  • asked a question related to Tables
Question
2 answers
Chi-square is a statistical test commonly used to compare observed data with the data we would expect under a specific hypothesis. If we have two categorical variables, both with 4 levels, and 68% of cells have an expected count less than 5, the result of the chi-squared test will not be accurate. What is the alternative test? May I use the likelihood ratio test instead of chi-square?
Secondly, is Fisher's exact test only used for 2x2 tables with cell counts less than 5? What if cell counts are more than 5, like 12 or 15, in a 4x5 table? Which test should be used?
Relevant answer
Answer
  • Likelihood Ratio Test: Used to compare the goodness of fit between two models, often in complex or nested models.
  • Fisher's Exact Test: Used for small sample sizes or when expected frequencies are low, to assess the association between categorical variables.
  • Chi-Square Test: Used to examine the association between categorical variables or the goodness of fit between observed and expected frequencies, suitable for larger sample sizes.
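On the second question: Fisher's exact test is not limited to 2x2 tables in principle. In R, for example, fisher.test() accepts r x c tables, and for larger tables a Monte Carlo p-value avoids the computational blow-up; a sketch with an invented 4x5 table:
```
set.seed(2)
tab <- matrix(sample(3:15, 20, replace = TRUE), nrow = 4)  # invented 4x5 counts
fisher.test(tab, simulate.p.value = TRUE, B = 1e5)  # Monte Carlo exact test
```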
  • asked a question related to Tables
Question
1 answer
Hi all,
I am attempting supervised and object-based image classification using Arch Pro (V.3.2). Here are the steps I have followed:
I acquired Sentinel-2 imagery and combined bands 2, 3, 4, and 8 (10m resolution).
I performed image segmentation.
I created a classification schema.
I generated training samples.
However, when I attempt to classify using a support vector machine classifier, I encounter the following error:
ERROR 003436: No training samples found for these classes: Soil, Water, Impervious, Grass, Tress.
The table was not found. [VAT_Segmented_202407110934456475089_interIndex]
The table was not found. [VAT_Segmented_202407110934456475089_interIndex]
Failed to execute (TrainSupportVectorMachineClassifier).
Failed at Thursday, July 11, 2024 9:44:52 AM (Elapsed Time: 0.22 seconds)
(I have attached a screenshot of the error.)
I have tried several times but haven't been able to identify the cause of this error. Do any of you know what might be causing it?
Relevant answer
Answer
I think you should share the code; it would be easier to rectify the error that way.
  • asked a question related to Tables
Question
4 answers
In the qualitative compound report obtained from HR-LCMS analysis of crude plant extracts, what is meant by Hits (DB)? Should we consider all the predicted compounds from the list for further studies?
Relevant answer
Answer
"Hits (DB)" refers to the compounds detected in the sample that match entries in a reference database (DB). These hits are essentially the compounds that the analysis software predicts to be present in the sample by comparing the acquired mass spectra with those in the database. You should not automatically take every predicted compound forward for further study; the hits are provisional identifications and should be verified before shortlisting.
  • asked a question related to Tables
Question
5 answers
Provide the formula for determining sample size, given a study population. Sampling tables by different scholars would also be of value to me.
Relevant answer
Answer
Damasco Okettayot -
There seems to be a little confusion here. Leslie Kish is known for the "deff," or "design effect" as Dr-Zaffar Ahmad Nadaf very well describes it: the efficiency obtained by using a more complex probabilistic survey design as opposed to simple random sampling with the same sample size. This is not a sample size formula, though you can see how much the variance is lowered by the more complex design, with the same sample size.
If you have good stratification, where the variance within each stratum is small and the variance between strata is large, then efficiency will be good. However, a cluster sample is generally less efficient than simple random sampling. A cluster sample might be used, however, if it is easier to accomplish and/or costs less overall.
I did not follow the formula by Dr-Zaffar Ahmad Nadaf, but I did see that he explained the deff.
Cheers - Jim Knaub
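To make the role of the design effect concrete, here is a minimal R sketch using Cochran's formula for a proportion; every number in it is an illustrative assumption, not a recommendation:
```
z    <- 1.96   # 95% confidence
p    <- 0.5    # assumed proportion (most conservative choice)
e    <- 0.05   # desired margin of error
deff <- 1.5    # assumed design effect of the complex design

n_srs <- z^2 * p * (1 - p) / e^2         # 384.16 under simple random sampling
n_cpx <- ceiling(n_srs * deff)           # 577 after inflating by the deff
c(SRS = ceiling(n_srs), complex = n_cpx)
```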
  • asked a question related to Tables
Question
9 answers
I am planning to optimize my adsorption data using a Box-Behnken Design (BBD) in Design-Expert (DoE) software.
Please note I had already completed my experimental work BEFORE using the DoE software, and some data points are MISSING from the design generated by the software!
How can I complete the responses in the Design-Expert table?
Please find attached images of: 1) my original data, and 2) the BBD in DoE with the empty cells (to be completed).
Kindly guide me to complete the design and optimize my data.
I greatly appreciate your replies :) !
Relevant answer
Answer
Shaima Alsaidi no... just no.
  • asked a question related to Tables
Question
2 answers
I am having a problem exactly identifying the SNP position.
Relevant answer
Answer
And you'll be able to get more of the details from the figure caption & methods section.
The convention is that the boxes are exons & the thick solid line is the introns. In general, folks don't bother to look for SNPs in introns unless it's a splice-site acceptor.
  • asked a question related to Tables
Question
1 answer
Two other questions:
1) Are the conversion tables (grit to microns) found on the internet reliable?
2) Where does the equation to convert grit to microns come from?
Relevant answer
Answer
Measuring the roughness of sandpaper requires using a method that can quantify the surface texture or irregularities of the abrasive material. Here are several approaches you can consider for measuring the roughness of sandpaper with precision:
1. Surface Profilometer:
  • A surface profilometer is a specialized instrument designed to measure the surface texture of materials. It typically uses a stylus or optical methods to scan the surface and record the profile of irregularities.
  • Procedure: Place the sandpaper sample under the profilometer's stylus or optical sensor. The instrument will then trace the surface, recording parameters such as Ra (average roughness), Rz (maximum peak-to-valley height), and other roughness parameters specified in standards like ISO 4287.
2. Contact Stylus Profilometer:
  • This type of profilometer uses a mechanical stylus to trace the surface roughness. It moves along the sandpaper's surface, measuring deviations and creating a profile.
  • Procedure: Calibrate the stylus profilometer, then scan multiple areas of the sandpaper to obtain an average roughness value (Ra) or other roughness parameters.
3. Non-Contact Optical Profilometer:
  • Optical profilometers use light-based technologies such as confocal microscopy or interferometry to measure surface texture without physically touching the sample.
  • Procedure: Place the sandpaper sample under the optical profiler. The instrument scans the surface with laser or white light and generates a detailed 3D profile, providing roughness parameters with high precision.
4. Atomic Force Microscopy (AFM):
  • AFM is a high-resolution imaging technique that can also measure surface roughness at the nanoscale level.
  • Procedure: Scan the sandpaper surface with the AFM tip, which interacts with the surface at the atomic level, producing a topographical map and roughness analysis.
5. Visual Comparison and Grading Standards:
  • For simpler assessments, visual comparison against standardized roughness samples or grading scales can provide a qualitative measure of sandpaper roughness.
  • Procedure: Use visual aids such as magnification and standardized roughness samples to estimate the level of roughness relative to known standards.
Considerations:
  • Sample Preparation: Ensure the sandpaper sample is flat and securely mounted to avoid movement during measurement.
  • Measurement Standards: Follow applicable standards (e.g., ISO 4287 for surface texture) to ensure consistency and comparability of measurements.
  • Data Analysis: Use software provided with the profilometer to analyze roughness parameters and generate reports.
By employing these methods, you can accurately measure the roughness of sandpaper, providing quantitative data that is crucial for quality control, product development, and ensuring consistency in abrasive performance.
  • asked a question related to Tables
Question
3 answers
In SPSS 25, I have a categorical variable which I would like to display in a frequency table. However, one of my categories was not selected by any of my respondents. I can generate a frequency distribution for this variable, but the unselected category is not included. How can I generate a frequency distribution table to show a zero count for this category?
Relevant answer
Answer
Eva Tsouparopoulou I fail to see why including the full scale in a frequency table is problematic. In fact, it could be informative if, say, the empty level is towards the very bottom or top of the scale.
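For comparison, in R the standard trick is to declare the variable as a factor with the full set of levels, so empty categories tabulate as zero; a minimal sketch with invented data:
```
likert <- c(2, 3, 3, 4, 5, 4)           # invented responses; nobody chose 1
f <- factor(likert, levels = 1:5)       # declare all intended levels
table(f)                                # level 1 now shows a count of 0
round(100 * prop.table(table(f)), 1)    # percentages, including the 0%
```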
  • asked a question related to Tables
Question
4 answers
In the descriptive table, how would you interpret the p-values for the descriptives of your sample?
For instance, if there was p < .001 across three levels of poverty and the outcome was hypertension, how would this be interpreted? That there were significant differences in hypertension among the three levels of poverty?
I would greatly appreciate if you could give a better example so I can understand the idea of p-values in table 1.
Relevant answer
Answer
A p-value of less than .001 (p < .001) indicates that the observed differences in hypertension prevalence among the three levels of poverty are unlikely to have occurred by random chance alone. In other words, the results are statistically significant.
Interpretation: Since the p-value is less than .001, it suggests that there is strong evidence to reject the null hypothesis, which typically assumes no difference between groups. In this case, it would imply that there are indeed differences in the prevalence of hypertension across the three levels of poverty.
It would be beneficial to conduct additional tests to explore the nature and magnitude of these differences further. This could involve pairwise comparisons between specific poverty levels to identify which groups differ significantly from each other in terms of hypertension prevalence.
  • asked a question related to Tables
Question
3 answers
I have been digging on the internet and could only find a one-way ANOVA APA-format table example.
Relevant answer
Answer
Thank you so much
Onipe Adabenege Yahaya
for your help! Very helpful!
  • asked a question related to Tables
Question
2 answers
Hi All,
I use AMOS. Firstly, my study has AVE values less than 0.5 for 3 constructs, but this issue can be addressed using the justification from Fornell & Larcker (1981), so I carried on with the analysis. Now I have a discriminant validity issue, where the MSV values are higher than the AVE values for 3 out of 6 constructs. What should I do about it? Please see the table attached. Any insight is appreciated.
Relevant answer
Asra Jabbar Hi Asra, I hope you're well. I am currently facing the problem you mentioned: MSV > AVE for 1 factor in my model (higher by around 0.014). Thanks so much if you can give me some advice on how to solve it.
  • asked a question related to Tables
Question
3 answers
Dear Professionals and eminent Professors
Recently, I participated in an international conference (webinar) to present my paper on positive psychology research. I could see other participants showcase their statistical knowledge in the form of tables, which made me ashamed and also curious to learn these methods. So far, the books I have read only cover descriptive statistics, t-tests, ANOVA, and non-parametric tests. At most, I have seen quasi-experimental designs.
But I could see in those presentations normality testing, time series designing tables, bootstrapping.
I want a book which could elaboratively explain how to conduct research, what kind of statistics to be employed with specific criteria.
So far I collected so many research methodology books and studied them but could not find the above.
I recently participated in a factor analysis webinar, which was a disappointment. I need a clear book to learn all of this. Through self-study of a book I learnt a manual way of computing factor loadings.
Some research papers have confidence interval tables. When to use confidence intervals in tables, and why we use SE and beta values in tables: if a particular textbook or series of books could explain all of this, kindly suggest it. I don't know professors at my end who could help me with this, and I have only been searching the net without a clear goal. Hence, as a last resort, I seek your help.
Finally I accept my infancy state of knowledge in Research and seek your pardon for this lengthy message.
Regards
Deepthi
Relevant answer
Answer
Cohen, Cohen, West, & Aiken, Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences.
  • asked a question related to Tables
Question
2 answers
I faced a problem with non-symmetric (non k x k) contingency tables in SPSS. I have categorical variables for a certain pathology (theoretically scored 0-3), and I want to compare the scores of region A with those of region B in a set of subjects. The samples are related (though it is not a repeated test) because A and B regions are present and scored for each subject. In theory, related-samples test for non-dichotomous categorical variables (i.e. larger than 2x2 contingency tables = k x k) can be done by the McNemar-Bowker test in SPSS (an extension of the McNemar test for 2x2 tables), which is fine. However, if e.g. a pathology is so frequently severe in region A that no 0 score (or even no 1) is given while region B has the complete spectrum of scores (0-3) then we face an asymmetric contingency table (e.g. 3x4 or 2x4) for which the McNemar-Bowker test fails and gives no result. Does anyone has a suggestion which test is appropriate in such scenario? Many thanks!
Relevant answer
Answer
Not yet, unfortunately
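One fallback worth considering (my suggestion, not something established in this thread): if the 0-3 scores can be treated as ordinal, a paired Wilcoxon signed-rank test compares regions A and B without requiring a square contingency table. A minimal R sketch with invented scores:
```
A <- c(3, 3, 2, 3, 2, 3)   # invented severity scores, region A (no 0s or 1s)
B <- c(0, 1, 2, 3, 1, 2)   # invented severity scores, region B
wilcox.test(A, B, paired = TRUE, exact = FALSE)  # ties force the normal approximation
```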
  • asked a question related to Tables
Question
5 answers
My research was an intervention with 2 groups (experimental and control), with one DV with five subscales (one subscale's value should decrease, the others' should increase), measured at 3 levels: pre, post, and follow-up. I ran a 2x3 mixed factorial design. As my supervisor guided me, I went to General Linear Model, then Repeated Measures, entered pre/post/follow-up of the 1st scale, defined it, selected the options, plots, and EM means, and ran it, and likewise for the remaining 4 scales. For the results I drew 2 tables: one univariate table reporting mean, SD, F, significance, and partial eta squared; the other including the 1st subscale and 1st subscale x group, reporting Wilks' lambda, F, significance, and partial eta squared. Please guide me.
Relevant answer
Answer
I agree with both of Thom Baguley's suggestions. To use the baseline score as a covariate, and estimating a model for just one DV at a time, you would need something like this:
* Model for DV1.
GLM A1 A2 BY Group WITH A0
/WSFACTOR=Time 2 Polynomial
/METHOD=SSTYPE(3)
/CRITERIA=ALPHA(.05)
/WSDESIGN=Time
/DESIGN=Group A0.
* Model for DV2.
GLM B1 B2 BY Group WITH B0
/WSFACTOR=Time 2 Polynomial
/METHOD=SSTYPE(3)
/CRITERIA=ALPHA(.05)
/WSDESIGN=Time
/DESIGN=Group B0.
Etc.
I do not know of any way to implement the adjustment for multiple testing automatically. You would have to do that after the fact, I think.
  • asked a question related to Tables
Question
12 answers
How to write credit lines for a figure or table that has been created by the author itself?
Relevant answer
Answer
Can you explain it to me, please? My article has been accepted, but they ask me to add a credit line... should I make a modification to the original manuscript?
  • asked a question related to Tables
Question
2 answers
Hi, I'm Yusuke Mikami, a master's student doing LLM for embodied control
I'm personally making a list of LLM-related papers here
However, I am a very new person in this field, so I want to have help from you.
Please post interesting papers and keywords at
Relevant answer
Answer
go to alphasignal and search for their latest addition which was a survey on LLMs
  • asked a question related to Tables
Question
2 answers
I am looking to create a table that includes both descriptive statistics and correlation coefficients for a mix of binary and continuous variables. Given the complexity of the data and the need for clarity in presentation, I am seeking examples or advice on how to best structure this table.
Could you provide guidance or share a sample table that includes:
  1. Descriptive statistics (mean, standard deviation, etc.) for continuous variables,
  2. Frequencies and percentages for binary variables, and
  3. Correlation coefficients between these variables,
all formatted according to APA 7 standards? Any tips on best practices for organizing this information in a clear, concise, and APA-compliant manner would be greatly appreciated.
Relevant answer
Answer
This link provides a useful guide for how to structure APA tables. I have also attached some descriptive and correlational tables from my first manuscript in the pics attached to this message:
I'm not sure if you are familiar with R, but there is a really neat package I just learned about called "rempsyc" which can create APA-ready tables in a regular R workflow quite conveniently:
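While waiting on package-specific solutions, the building blocks are available in base R; a minimal sketch with invented variables, whose output can then be arranged into an APA-style table:
```
set.seed(1)
df <- data.frame(
  age    = rnorm(100, 40, 10),   # continuous (invented)
  score  = rnorm(100, 50, 8),    # continuous (invented)
  female = rbinom(100, 1, 0.5)   # binary, coded 0/1 (invented)
)
# Descriptives for the continuous variables
sapply(df[c("age", "score")], function(x) c(M = mean(x), SD = sd(x)))
# Frequency and percentage for the binary variable
cbind(n = table(df$female), pct = round(100 * prop.table(table(df$female)), 1))
# Correlation matrix; Pearson on a 0/1 variable gives the point-biserial r
round(cor(df), 2)
```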
  • asked a question related to Tables
Question
3 answers
I am trying to include the maximum number of articles in my study.
Two surgeries are compared in this SR and MA. The outcome is a quantitative variable reported as mean and SD.
Some studies report the outcome of only one approach, and some compare the two approaches.
Can I use both kinds of studies in my analysis? If not, is it OK to report the single-approach articles in Table 1 of my results?
Relevant answer
Answer
Alireza Keshtkar Alireza, this helps me to understand what you're asking. And...I need to reconsider what I wrote. I will write to you again.
  • asked a question related to Tables
Question
1 answer
Who has a copy of this article? If it's okay, could anyone share it with me? Thank you very much in advance :)
Jeffreys, H. and Bullen, K.E. (1940),
Seismological Tables.,
London: British Association for the Advancement of Science, Burlington House.
Relevant answer
Answer
Before answering the question, it is necessary to find out what kind of tectonics we are talking about - faulting, geological deformations, horizontal and vertical geodynamics, or a comprehensive tectonic and geodynamic assessment of a particular watershed. Each direction requires its own methodological approaches.
  • asked a question related to Tables
Question
2 answers
Dear RG community,
I have been studying the impact of political, economic, and financial risk indices on foreign direct investment flows, so I need data from the ICRG database: political, financial, and economic risk ratings from 1984 to 2023 for all countries.
Table 3B: Political Risk Points by Component, 1984-2023
Table 4B: Financial Risk Points by Component, 1984-2023
Table 5B: Economic Risk Points by Component, 1984-2023
Unfortunately, neither I nor my organization has access to the (ICRG) database, so I would greatly appreciate your help in obtaining this data, if you can.
Just in case of you may need, my e-mail is elmehdiajjig@gmail.com
Thank you in advance.
Best regards,
  • asked a question related to Tables
Question
1 answer
This is from the oncomine comprehensive assay v3 protocol. Does "gDNA (10 ng, ≥0.67 ng/μL) " mean that the concentration range of gDNA should be from 10 ng to ≥0.67 ng/μL, and If I already have 10 ng/ul, do I set up a reaction without Nuclease-free Water?
Relevant answer
Answer
I think you must have a concentration of ≥0.67 ng/μL in the well. You can get this by adding 10 ng of gDNA to 15 μL of water (10/15 ≈ 0.67 ng/μL).
  • asked a question related to Tables
Question
3 answers
To obtain a similar table.
Relevant answer
Answer
Dear Eugenia,
I suppose that the authors estimate the causal relationships between two variables and then create a table presenting all relationships in a way similar to the correlation matrix (for the economy of space reasons).
For example, there is a Granger causality relationship running from ED to EU.
Between ED and CO2 there is a bidirectional causal relationship and so on.
Kind regards,
Apostolos
  • asked a question related to Tables
Question
2 answers
How do I analyze and interpret this table of Non-overlap of All Pairs (NAP) with respect to its significance?
Relevant answer
Answer
s. Rama Gokula Krishnan But don't we have to look at the p values and find out whether they are less than 0.10, which would then mean the result is significant?
  • asked a question related to Tables
Question
1 answer
Hi!
I am evaluating the performance of different models for a binary outcome. These models can be either single parameter or multiparametric but they give a yes/no result. That is, I can easily depict them in a 2x2 matrix from which I can draw sensitivity, specificity or the c-statistic.
To test model performance I am evaluating %outcome vs %predicted outcome, c-statistic, correctly classified but I would like to add a goodness of fit measure. Does it make sense at all in this context? What would be the best way to test their goodness of fit?
All I get from Hosmer Lemeshow is (Table collapsed on quantiles of estimated probabilities) (There are only 2 distinct quantiles because of ties)
Thanks!
Relevant answer
Answer
Dear Ceclia,
Did you ever get an answer for this question i.e. does it make sense to use HL to assess goodness of fit for a model with a single variable (for me it is a continuous variable - a score built out of multiple categorical and continuous variables)?
  • asked a question related to Tables
Question
2 answers
Is there a table that indicates the intervals at which soil is classified as low, moderate, or high compaction?
Relevant answer
Answer
Thanks a lot Emmanuel Rodriguez Rivera happy new year!
  • asked a question related to Tables
Question
1 answer
Why, in some articles, when I calculate the Soil Quality Index by adding up Si*Wi, do I not get the SQI result that is displayed in the table? For example, in the article entitled "Effects of land use types on soil quality dynamics in a tropical sub-humid ecosystem, western Ethiopia", the sum of Si*Wi does not give the SQI value shown in Table 5 of the article.
Relevant answer
Answer
Are you using PCA to build the Wi, or a framework that establishes those parameters exactly? If you don't have all the parameters and their replicates, your calculations will be wrong. You can follow my thesis, "Soil quality index for the Yauco and San Antón series".
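The arithmetic itself is only a weighted sum, so it is easy to audit; a minimal R sketch with invented scores and weights (the weights should sum to 1):
```
Si <- c(0.80, 0.55, 0.90, 0.40)   # invented indicator scores (0-1)
Wi <- c(0.35, 0.25, 0.25, 0.15)   # invented weights, e.g. PCA-derived; sum(Wi) == 1
sum(Si * Wi)                      # SQI = 0.7025
```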
  • asked a question related to Tables
Question
1 answer
I am considering switching from R to STATA for several reasons. However, a big plus of STATA is the table1 function, which can generate a baseline table for several groups in one line of code and run the necessary tests according to the type of each variable. Is something like that also available in R?
Relevant answer
Answer
Yes, you can use the table() function or the CrossTable() function (from the gmodels package), and add chisq.test() for any variable you want, just like in Stata.
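Closer to Stata's table1 is the CRAN package of the same name, which builds a grouped baseline table from a one-line formula; a sketch with invented data (table1 focuses on formatting; for automatic group tests by default, see the tableone package):
```
# install.packages("table1")
library(table1)
set.seed(42)
d <- data.frame(
  age       = rnorm(120, 55, 12),
  sex       = factor(sample(c("F", "M"), 120, replace = TRUE)),
  treatment = factor(sample(c("A", "B"), 120, replace = TRUE))
)
table1(~ age + sex | treatment, data = d)  # baseline table stratified by group
```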
  • asked a question related to Tables
Question
2 answers
Hello Reseachgate community.
I have perused several recent sources to either find data or power tables missing and there I cannot seem to find the best source for an appropriate minimum sample size for a conditional process (moderated mediation) analysis.
With 4 variables (3 predictors, 1 outcome) and assuming power .80 with alpha .05 and small to medium effect sizes between all (i.e. 0.30) could anyone point me in the right direction please?
Relevant answer
Answer
For complex mediation (path analytic) models, Monte Carlo simulation techniques are probably your best bet. See:
Thoemmes, F., MacKinnon, D. P., & Reiser, M. R. (2010). Power analysis for complex mediational designs using Monte Carlo methods. Structural Equation Modeling, 17(3), 510-534.
I offer a free mini-course on simulation of path models in Mplus that you can find here:
  • asked a question related to Tables
Question
11 answers
When conducting a logistic regression analysis in SPSS, a default threshold of 0.5 is used for the classification table. Consequently, individuals with a predicted probability < 0.5 are assigned to Group "0", while those with a predicted probability > 0.5 are assigned to Group "1". However, this threshold may not be the one that maximizes sensitivity and specificity. In other words, adjusting the threshold could potentially increase the overall accuracy of the model.
To explore this, I generated a ROC curve, which provides both the curve itself and the coordinates. I can choose a specific point on this curve.
My question now is, how do I translate from this ROC curve or its coordinates to the probability that I need to specify as the classification cutoff in SPSS (default: 0.50)? The value must naturally fall between 0 and 1.
  1. Do I simply need to select an X-value from the coordinate table where I have the best sensitivity/specificity and plug it into the formula for P(Y=1)?
  2. What do I do when I have more than one predictor (X) variable? Choose the best point/coordinate for both predictors separately and plug in the values into the equation for P(Y=1) and calculate the new cutoff value?
Relevant answer
Answer
Good! I'm glad to hear we got there in the end. ;-)
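For later readers, the key point can be shown compactly: with several predictors, the ROC curve is computed on the model's predicted probabilities, not on any single X, so the "best" coordinate is already on the probability scale and can be used directly as the classification cutoff. A minimal sketch with the pROC package and invented data:
```
# install.packages("pROC")
library(pROC)
set.seed(7)
x1 <- rnorm(200); x2 <- rnorm(200)
y  <- rbinom(200, 1, plogis(-0.5 + 1.2 * x1 + 0.8 * x2))   # invented data

fit   <- glm(y ~ x1 + x2, family = binomial)
p_hat <- fitted(fit)                 # predicted probabilities, one per case
roc_o <- roc(y, p_hat)
coords(roc_o, "best", best.method = "youden")  # the threshold is a probability
```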
  • asked a question related to Tables
Question
13 answers
When I run a regression analysis, the Model Summary table shows a very weak R-square (e.g., 0.001 or 0.052), and the sig. value in the ANOVA table is greater than 0.05. How can I fix this?
Relevant answer
Answer
Unless you have an error in your data, this may just simply be the result of the analysis (i.e., that your predictor(s) is/are only weakly related to, and do not significantly predict, the dependent variable).
  • asked a question related to Tables
Question
2 answers
How can I calculate the RMSE values (for both the testing and training sets) of an artificial neural network using SPSS? In the output there is a Parameter Estimates heading; does the output value act as the testing value, and does the predicted value under the input layer act as the training value? I am attaching my parameter estimates table output for a clearer understanding.
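Since the thread has no answer yet, note that RMSE is tool-independent once you have observed and predicted values; compute it separately on the training and the testing cases. A minimal R sketch with invented numbers:
```
obs  <- c(3.1, 2.8, 4.0, 3.6)   # invented observed values
pred <- c(3.0, 3.0, 3.7, 3.8)   # invented model predictions
sqrt(mean((obs - pred)^2))      # RMSE (0.212 here); repeat per partition
```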
  • asked a question related to Tables
Question
3 answers
Good afternoon,
I am thinking about how to present the data cleaning stage of my research project. I am hesitating between (a) a summary table (potentially complex, because it covers 16 different datasets) with a paragraph presenting the overall steps (for example, n rows were removed due to missing values or duplicated data, spelling was corrected for n rows, etc.), and (b) a list of items for each issue or check encountered (which could feel redundant to the reader), for example, days of the week were checked to confirm that all collected data were recorded during a school day. In the article format this stage is usually not developed much, except in the supplementary appendix, due to size limits. I want to develop this section in my thesis report, but I am not sure about the most appropriate format to make it clear, concise, simple to understand, and interesting. As a reader, would you prefer a table, descriptive paragraphs, or more visual elements like charts to understand how the research team cleaned the data?
Thanks in advance for all your feedback on the data cleaning presentation.
Relevant answer
Answer
  • Greetings. In my recent publication I also had a search strategy and a filtration process for checking studies against inclusion criteria. For this purpose, I created a chart composed of shapes in Microsoft Word: I typed the different stages of my search filtration process into the shapes and connected them with arrows. It was accepted and looks good to me. For a better illustration, see Fig. 1 in the figures section on this page, which shows exactly how I generated it. Hope this helps! https://doi.org/10.1007/s13132-023-01518-z
  • asked a question related to Tables
Question
6 answers
I have converted the raster to points in the GIS software and transferred the attribute table to the Origin Pro program through an Excel file to draw a diagram. But there is a problem somewhere, and I don't know which step it comes from. I also encounter many errors in the Origin program, and only one graph is drawn. Can you please guide me on this problem by classifying the points? I have attached some documents for a better understanding of the information.
Relevant answer
Answer
First, check whether Origin Pro can handle such a huge amount of data, and likewise for the ArcGIS-to-Excel step: how many rows of data can be exported from ArcGIS to Excel, and what is the maximum number of rows Excel supports?
If this is the problem (memory), you will have to work with a sample of your data.
  • asked a question related to Tables
Question
2 answers
Hi! Can anybody help me with how the p value has been calculated, and what is meant by the p value at the right side of the table shown? Any formula or tutorial video from YouTube would be great. Thanks!
Relevant answer
Answer
You need to specify the statistical test that was conducted. The table itself does not contain enough information to tell. If it is a table from a paper/manuscript, look in the methods.
  • asked a question related to Tables
Question
5 answers
I am reading a research article (The association between vegetarian diet and varicose veins might be more prominent in men than in women). In the results section of this article, different tables are given in which the odds ratio, confidence interval, and p value are reported.
Relevant answer
Answer
The tables in that article report p-values, do they not?
So I'm not sure why you're asking how to compute p-values when given an OR and CI. Nevertheless, here is the general approach.
  • Let y = ln(OR)
  • Let yub = ln(upper bound of CI for OR)
  • Let SEy = (yub-y)/zcrit (where zcrit = 1.959964 for a 95% CI)
  • Let zobs = ABS(y/SEy)
The p-value = 2*p(z > zobs) using the standard normal distribution.
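Those steps translate directly into a few lines of R; the OR and upper bound below are invented for illustration:
```
OR  <- 1.80                            # invented odds ratio
ub  <- 3.10                            # invented upper bound of the 95% CI
y   <- log(OR)
SEy <- (log(ub) - y) / qnorm(0.975)    # qnorm(0.975) = 1.959964
z   <- abs(y / SEy)
2 * pnorm(z, lower.tail = FALSE)       # two-sided p-value, ~0.034 here
```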
  • asked a question related to Tables
Question
3 answers
Mindfulness based interventions in neurodevelopment disorders
with data sets and
tables
Relevant answer
Answer
In this article (DOI: 10.1016/j.hkjot.2017.05.001) I guess you can find all you want.
  • asked a question related to Tables
Question
1 answer
I am using TracePro software for optical simulation of my solar concentrator system. The software gives the results as an irradiance flux map and also an incident ray table. Now, for the thermal simulation in ANSYS Fluent, should the data from the irradiance flux map or from the incident ray table with x, y, z coordinates be used? Also, how do I import these irradiance data into ANSYS Fluent?
Relevant answer
Answer
I know I'm a little late, but nevertheless, the initial step involves copying the data into an Excel (.csv) or text file. To begin, select the "irradiance maps" option and transfer the data to an Excel file, resulting in gridwise data. However, it's essential to note that Ansys interprets flux data based on X, Y, and Z coordinates rather than a grid. Therefore, the next step entails converting the grid data into coordinates. You can accomplish this using software such as Matlab, Python, or any other suitable tool. If you require assistance with the code for this conversion, please don't hesitate to reach out to me.
  • asked a question related to Tables
Question
3 answers
Hello everyone,
I have performed a Survival Analysis in R. I have 13 patients with 5 events.
If I calculate my survival rate manually, I got 8/13 = 0.615
In my output in R (Screenshot) this value is different (0.598) and I can't get my head around why. Do you have any suggestions?
Thank you.
Relevant answer
Answer
As the risk set drops from 11 to 9 (i.e., 1 observation has left the risk set without an event), the numbers are correct:
0.598 = (1 - 1/13) × (1 - 1/12) × (1 - 1/11) × (1 - 1/9) × (1 - 1/8)
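You can verify the product in one line of R:
```
prod(1 - 1 / c(13, 12, 11, 9, 8))   # 0.5983, matching the survfit output
```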
  • asked a question related to Tables
Question
3 answers
I need a table with the standard limits of heavy metals in agricultural soil
Relevant answer
Answer
WHO permissible limits for soil are 0.056 kg m⁻³ (Ti), 0.085 kg m⁻³ (Pb), 0.100 kg m⁻³ (Cr), and 0.036 kg m⁻³ (Cu).
  • asked a question related to Tables
Question
1 answer
Hi, I'm submitting a systematic review and meta-analysis and I'd like to incorporate the forest plots generated in R into a table containing all the numerical data. I'm curious if the publisher will accept them in SVG format and if they can be positioned alongside the table in the Word document of the manuscript?
Relevant answer
Answer
You can ask the publisher or look at the requirements online :)
  • asked a question related to Tables
Question
1 answer
Why does the Retail & Wholesale industry appear in the intermediate inputs of the education industry in Input-Output Tables? Can you provide an example? Thanks
Relevant answer
Answer
The presence of the Retail & Wholesale industry in the intermediate inputs of the Education industry reflects the flow of goods and services required by the Education sector to operate efficiently. This occurs because educational institutions often need various supplies, equipment, and services to support their activities.
For eg : Consider a university:
  • Textbooks and Supplies: Universities purchase textbooks, stationery, laboratory equipment, and other supplies from retail stores and wholesalers to support teaching and research.
  • Cafeteria Services: Many educational institutions provide dining facilities for students and staff, which involve food products and services supplied by the Retail & Wholesale industry.
  • Maintenance and Cleaning: Universities require cleaning services and maintenance supplies, such as cleaning products and equipment, which are sourced from this sector.
  • Infrastructure: Construction materials and services for building and maintaining campus infrastructure may also come from the Retail & Wholesale sector.
  • Technology and Electronics: Computers, software, and electronic equipment used for teaching and administrative purposes are often purchased through retail and wholesale channels.
All these goods and services are considered intermediate inputs for the Education industry because they are essential for its operations but are not produced within the Education industry itself.
Hope this helps Ziyi Chi
  • asked a question related to Tables
Question
1 answer
I have a compound (C23N3OH27, molecular weight 361.48) with which I need to repeat some results, but the results are not coming out the same. I am evaluating cell viability (K562 and KG1) with resazurin (24 hours of plating at 20,000 cells/100 μL, 24 hours of treatment in 100 μL, 4 hours of resazurin in 20 μL), and the results lead us to believe that the compound does not induce death at any of the concentrations tested (30 μM, 20 μM, 10 μM, 5 μM, 1 μM). I have already evaluated cellular metabolism, the resazurin itself, and possible interaction of the compound with resazurin, and none of these explains the failure to repeat the results. I suspect it could be my dilution; I used a colleague's spreadsheet that performs the calculation automatically. Could someone help me do the dilution directly, just so I can check whether it is correct? I have 5 g of the compound in powder form, which was diluted in 2305.34 μL of 100% DMSO, which according to the spreadsheet gave me a 6,000 μM solution; I don't know if that's correct.
obs: my controls (+/-) are responding well so I don't believe it's the resazurin or the plating
Thanks for all contributions!
I have attached the dilution table below.
Relevant answer
Answer
Sorry! I did not understand the calculations from the excel sheet as it is very complicated.
Could someone help me to do the dilution directly?”
Yes, let me make it simple.
The Molecular weight of the compound (C23N3OH27) is 361.48.
Then follow the sequence below.
361.48g -------- 1L -------- 1M
361.48g --------- 1L ------- 1000mM
0.36148g ---------- 1L ------ 1mM
361.48mg -------- 1000ml ------ 1mM
3.614 mg ----------- 10ml -------- 1mM
So, weigh 3.614mg of the compound in 10ml 100% DMSO to give 1mM stock.
You may prepare working solutions (30uM, 20uM, 10uM, 5uM, 1uM) as follows.
You may use the formula: C1V1=C2V2
C1= Concentration of stock solution (1mM)
V1= Volume of stock solution (X)
C2= Concentration of working solution (30uM)
V2= Volume of working solution (say 1ml)
Then,
1mM x X = 30uM x 1ml
1000uM x X = 30uM x 1ml
30/1000 = 0.03ml of stock i.e., add 30ul of stock solution to 970ul of media to give 1ml of 30uM working solution.
Similarly,
For 20uM
20/1000 = 0.02 ml of stock i.e., add 20ul of stock solution to 980ul of media to give 1ml of 20uM working solution.
For 10uM
10/1000= 0.01ml of stock i.e., add 10ul of stock solution to 990ul of media to give 1ml of 10uM working solution.
For 5uM
5/1000 = 0.005ml of stock i.e., add 5ul of stock solution to 995ul of media to give 1ml of 5uM working solution.
For 1uM
1/1000= 0.001ml of stock i.e., add 1ul of stock solution to 999ul of media to give 1ml of 1uM working solution.
Since 1ul is a very minute quantity to pipette, it may lead to error. So, you may dilute the stock by 1:10 to make a diluted stock (0.1mM). Then take 10ul of diluted stock (0.1mM) and add to 990ul of media to obtain 1uM working solution. Use this calculation for 1uM working solution instead of the above.
Best.
  • asked a question related to Tables
Question
1 answer
I need a refrigerant r600a property table for both saturated and superheated conditions.
Relevant answer
Answer
If you need anything else send me a message. I have a free Excel Add-In you can use too.
  • asked a question related to Tables
Question
1 answer
In the trend analysis, the ANOVA table together with R-squared and adjusted R-squared values were included. For the keyword analysis, keywords from the Scopus database were analyzed with the help of factor analysis, and descriptive statistics are shown in the paper.
Relevant answer
Answer
Hi,
Incorporating both CAGR in trend analysis and keyword analysis in one paper is permissible, provided the methods are sound. Compliance with the journal's specific guidelines, alongside the novelty and relevance of your work, is essential for consideration. Your employment of ANOVA, R-squared values, and Scopus keywords indicates a strong approach.
Hope this helps.
  • asked a question related to Tables
Question
1 answer
The question is related to a medication list, which I have embedded into a table. Current medications are asked at multiple follow-up points; however, I want participants to have the table pre-filled from their last response so they don't have to re-enter everything if there are any medication changes. Is this possible?
Relevant answer
Answer
Yes, it is possible to pre-fill a table with a participant's previous responses in follow-up points using various data collection and management tools. The method you choose would depend on the platform or software you are using for your data collection. Here are a few general approaches you could consider:
  1. Online Surveys and Forms Platforms (e.g., Google Forms, Qualtrics): Many online survey platforms allow you to pre-fill fields in a form based on previous responses. You can use logic and scripting to achieve this. For example, you could set up branching logic that checks for the participant's previous response and then fills the table with the previous medication list. This would require some scripting knowledge, and the exact steps would depend on the platform you're using.
  2. Databases and Data Management Systems (e.g., REDCap, Microsoft Access): If you're using a more advanced data management system, you could set up a database where you store participants' medication information. When a participant returns for a follow-up, you can retrieve their previous medication data from the database and pre-fill the table fields.
  3. Custom Software Development: If you have access to software development resources, you could build a custom solution where participants log in to an account, and their previous medication data is stored and retrieved for subsequent visits. This would provide a more tailored and seamless experience.
  4. Spreadsheet Software (e.g., Microsoft Excel): While not the most efficient method, you could potentially manage this manually using spreadsheet software. Each follow-up would be a new row, and you'd copy over the previous medication list for each participant.
  5. Programming and Scripting (e.g., Python, R): If you have programming skills, you could automate this process using programming languages like Python or R. You'd need to read and write data files to store and retrieve previous responses.
Remember, the exact steps and feasibility of these methods depend on the capabilities of the tools you're using and your technical proficiency. If you're not sure how to implement this in your specific context, it might be helpful to consult with someone who has experience with the platform you're using or seek assistance from a data management professional.
  • asked a question related to Tables
Question
3 answers
I am currently learning a new data analysis program and have found RStudio to be a user-friendly and efficient software. As I work, I am curious if there is a package or code available that can generate a comprehensive descriptive table encompassing frequencies and percentages of multiple categorical variables in a well-organized manner instead of manual table writing.
Relevant answer
Answer
You can try `tableone`, a package that can do almost everything you need: https://cran.r-project.org/web/packages/tableone/vignettes/introduction.html.
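A minimal example of its main call, with invented data; printing the object gives frequencies, percentages, and group-comparison tests by default:
```
# install.packages("tableone")
library(tableone)
set.seed(3)
d <- data.frame(
  group  = factor(sample(c("ctrl", "trt"), 150, replace = TRUE)),
  sex    = factor(sample(c("F", "M"), 150, replace = TRUE)),
  smoker = factor(rbinom(150, 1, 0.3), labels = c("no", "yes"))
)
CreateTableOne(vars = c("sex", "smoker"), strata = "group", data = d)
```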
  • asked a question related to Tables
Question
1 answer
Error [65]: Error, extent of vector too large or attribute table error." I uninstalled and reinstalled QGIS 3.26.2 but I still get the error. Any idea what is causing this?
Relevant answer
Answer
The error message you're encountering, "Error [65]: Error, extent of vector too large or attribute table error," typically indicates an issue with working with vector data (like shapefiles or other spatial formats) in a Geographic Information System (GIS) software like QGIS. This error can occur for various reasons. Here are a few things you can try to troubleshoot and resolve the issue:
  1. Data Corruption: The vector dataset you are trying to open might be corrupted. Try opening a different dataset to see if the issue persists. If the problem is specific to one dataset, it's likely that the data itself is corrupted. You might want to obtain a fresh copy of the dataset.
  2. File Size or Complexity: The error could be related to the size or complexity of the vector dataset. If the dataset is too large, it might exceed the memory capacity of your system, leading to this error. Try working with a smaller subset of the data or optimizing the dataset before loading it into QGIS.
  3. Attribute Table Issues: Sometimes, errors in the attribute table of a vector dataset can cause issues. You could try repairing or rebuilding the attribute table using tools available in QGIS or external software.
  4. Software Version Compatibility: Ensure that the version of QGIS you are using is compatible with your operating system and the dataset format you're working with, and check the release notes of your version for known issues.
  5. Memory Allocation: If your system's memory is constrained, it might struggle to handle large datasets. You can adjust the memory allocation settings in QGIS to allocate more memory for processing. Go to Settings > Options > System in QGIS and adjust the "Maximum feature count" and "Maximum editing buffer size" settings accordingly.
  6. Update or Reinstall: Although you mentioned reinstalling QGIS, make sure that you have the latest stable version installed. Sometimes, bugs are fixed in newer releases that might resolve the issue you're facing.
  7. Check Log Messages: QGIS usually provides log messages that can give more detailed information about the error. Check the QGIS log window or log file for more insights into what might be causing the error.
  8. Extensions or Plugins: If you have any extensions or plugins installed in QGIS, they might be contributing to the issue. Try disabling them and see if the error persists.
  9. Operating System Issues: Occasionally, operating system updates or changes can affect the functioning of software. Ensure that your operating system is up-to-date and doesn't have any compatibility issues with QGIS.
If none of these steps resolve the issue, you might want to seek assistance from the QGIS community forums or support channels. Providing specific details about the dataset you're working with and any additional error messages you encounter can help others provide more targeted assistance.
  • asked a question related to Tables
Question
1 answer
Hello, Professor. I want to know whether every essay should have tables, forms, and sheets. My major belongs to the humanities; how should I make the tables and forms that go in an essay? Thank you very much.
Relevant answer
Answer
In the context of an essay in the humanities, tables, forms, and sheets are not typically essential components. However, if you believe that incorporating visual elements such as tables or forms will enhance the understanding and presentation of your information, you can certainly include them.
Here are a few guidelines on how to create tables and forms in an essay:
Tables: If you need to present data or compare information in a structured format, tables can be helpful. To create a table, follow these steps:
a. Determine the appropriate data to include in the table. b. Decide on the number of columns and rows needed to organize the information effectively. c. Use a word processor or a specific software (e.g., Microsoft Word, Google Docs, or LaTeX) that provides table creation tools. d. Insert a table into your document and populate it with the relevant data. e. Format the table, including adjusting column widths, adding headers, and applying any necessary styling.
Forms: Forms are not commonly used in essays, but they can be applicable in certain cases. For example, if you conducted a survey and want to present the results in a structured manner, you can create a form. Here's how:
a. Determine the questions and response options you want to include in the form. b. Use a tool such as Google Forms or Microsoft Forms to create a digital form. c. Start by adding the necessary questions and response options to your form. d. Customize the form's appearance, if desired, to match the overall style of your essay. e. Once the form is complete, you can embed it in your essay or provide a link for readers to access it.
Remember to consider the purpose and relevance of including tables and forms in your essay. Ensure they add value to your arguments or enhance the reader's understanding. Additionally, follow any specific formatting guidelines provided by your instructor or academic institution.
I hope this helps! Let me know if you have any further questions.
  • asked a question related to Tables
Question
5 answers
I run a multinomial logistic regression. In the SPSS output, under the table "Parameter Estimates", there is a message "Floating point overflow occurred while computing this statistic. Its value is therefore set to system missing." How should I deal with this problem? Thank you.
Relevant answer
Answer
Hello Atikhom,
Which particular statistic in the output was omitted? Which version of spss are you using?
Are you willing to post your data and proposed model, so that others could attempt to recreate the condition?
Some obvious points to consider about your data set whenever unexpected results such as the one you report occur:
1. Are your data for any continuous variables suitably scaled? (so that the leading significant digit is not many orders of magnitude away from the decimal point)
2. For your categorical variables, do you have cases in each possible cell/level? (check frequencies for each such variable)
3. Do you have any instances of perfect relationships among categorical variables (perhaps due to empty cells)? (check cross-tabulations for variable pairs and sets)
4. Is one of the IVs redundant with another variable in the data set?
5. Do you have missing data (and are attempting some sort of imputation process)?
That may not cover the waterfront, but at least it may give you some ideas when checking your data.
Good luck with your work.
  • asked a question related to Tables
Question
1 answer
share the sample table
Relevant answer
Answer
Hi,
IV stands for independent variable, DV for dependent variable, MV for mediating variable, SE for standard error, LLCI and ULCI for lower and upper limits of the 95% confidence interval, respectively.
In the text, you might say something like: The independent variable (IV) significantly influenced both the mediating variable (MV; b = 0.50, p < .001) and the dependent variable (DV; b = 0.35, p < .001). The MV also significantly affected the DV (b = 0.25, p = .002).
Hope this helps.
  • asked a question related to Tables
Question
1 answer
1. For a given soil sample the following data were measured. During sample collection, water table was observed at a depth of 40 cm below the soil surface. Assume that the reference is placed at the water table. Based on this information and the one in the table, fill-in the missing values of component potentials and the total hydraulic head: 7 points
Depth (cm)   Gravitational head (cm)   Matric head (cm)   Pressure head (cm)   Hydraulic head H (cm)
0            ?                         -105               ?                    ?
10           ?                         -50                ?                    ?
20           ?                         -36                ?                    ?
30           ?                         -22                ?                    ?
40           ?                         0                  ?                    ?
50           ?                         0                  ?                    ?
60           ?                         0                  ?                    ?
70           ?                         0                  ?                    ?
(The given values appear to be the measured matric heads; cells marked "?" are to be filled in.)
Relevant answer
Answer
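Since the thread contains no worked answer, here is a minimal R sketch of the bookkeeping for the unsaturated zone, under the usual conventions (an assumption on my part, not stated in the thread): with the datum at the water table, gravitational head z = 40 - depth, and total head H = z + matric head:
```
depth  <- c(0, 10, 20, 30, 40)        # cm below the surface, down to the water table
matric <- c(-105, -50, -36, -22, 0)   # measured matric heads from the table (cm)
z      <- 40 - depth                  # gravitational head relative to the datum (cm)
H      <- z + matric                  # total hydraulic head, e.g. -65 cm at the surface
data.frame(depth, z, matric, H)
```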
  • asked a question related to Tables
Question
3 answers
After a literature review, I have come to know of two almost identical formulas for calculating the fluorescent/radiative decay rate, as given in the attached files. But the values I calculate according to the formulas given in the articles are different from those given in the table. Can anyone tell me what I am doing wrong? Thanks.
Relevant answer
Answer
Sir Aftab Hussain,
Kindly share the link to this paper.
Thanks!
  • asked a question related to Tables
Question
3 answers
Hello everyone
I have written a systematic review, but the plagiarism checker keeps flagging the keywords. For example, the phrase "TITLE-ABS-KEY", which is a search term for Scopus, is getting flagged as plagiarism.
Can I change it to a figure, so it won't cause a problem?
Relevant answer
Answer
It is unlikely that changing "TITLE-ABS-KEY" to a figure will solve the issue with plagiarism. The phrase "TITLE-ABS-KEY" is a common search term used in academic literature to refer to specific sections of a publication such as the title, abstract, and keywords. It is possible that the plagiarism checker is flagging this phrase as it is commonly used and appears in many different publications.
Instead of changing "TITLE-ABS-KEY" to a figure, you could consider paraphrasing the text or rewording the phrase to make it more unique to your review. Additionally, you could check if there are other phrases or terms that are being flagged for plagiarism and address them individually. It is important to make sure your review is original and does not contain any plagiarized content.
  • asked a question related to Tables
Question
3 answers
I have run EFA on a 50-item scale. The EFA supported a 35-item structure with 5 factors (eigenvalues > 1, 68% variance explained; the scree plot and PCA also supported 5 factors). Correlations among factors were quite low, and some were negative.
While running CFA, the items were reduced to 33 with the same model structure, and this is supported only when I run each factor independently in CFA.
Later I ran a second-order model in which all five factors showed good model fit indices.
I am struggling with how to present my findings in a paper/thesis. Do I need to present tables/figures for each item separately, as I did in CFA, or are there other options?
Any similar article/reference is appreciated.
Relevant answer
Answer
It seems that you developed a new scale (50 items). In that case, you should run the EFA and CFA on different samples. If your sample size is big enough, you can split it in half: run the EFA on one half and the CFA on the other half.
Then report the EFA results with all the items, and report the CFA results with the fit indices (without item-level tables). You can refer to articles published in top-tier journals; for example, the following article developed a new scale and reports both EFA and CFA.
·Kim, T.-Y., David, E., Chen, T., & Liang, Y. (2023). Authenticity or self-enhancement? Effects of self-presentation and authentic leadership on trust and performance. Journal of Management, 49(3), 944–973. https://doi.org/10.1177/01492063211063807.
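For the split-half approach suggested above, a minimal pandas sketch (the file name and its contents are hypothetical):

```python
import pandas as pd

# Load the full item-level dataset (file name is hypothetical)
df = pd.read_csv("scale_items.csv")

# Randomly split the sample in half: one half for EFA, the other for CFA
efa_half = df.sample(frac=0.5, random_state=1)
cfa_half = df.drop(efa_half.index)

print(f"EFA subsample: n = {len(efa_half)}; CFA subsample: n = {len(cfa_half)}")
# Run the EFA on efa_half (e.g., with the factor_analyzer package) and
# fit the CFA on cfa_half (e.g., with semopy), then report the fit indices.
```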
  • asked a question related to Tables
Question
1 answer
Hello,
I am trying to design a table (rectangular or circular, it doesn't really matter) that will hold an object at its center. The table will be connected to a shaft driven by a belt connected to a motor.
My goal is to rotate the table at a constant angular speed, say ω.
The calculation that I did is basically:
F = m·ω²·r
T = r·F = m·ω²·r²
where m is the total mass (object + table), r is the radius of the table, ω is the target angular speed, and T is the required torque.
Once I have the needed torque, I can choose a motor that provides enough of it.
Of course I will choose bearings that can carry the weight, and let's say the gear ratio is 1:1.
Is this calculation right? I added a picture of what I am trying to do; B is a bearing, so everything from the bearing up rotates and everything below is static.
My question is: is this a valid way to estimate the required motor?
Thanks.
Relevant answer
Hi friend, did you get an answer to your question? How did you solve the problem?
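As a side note not raised in the thread: m·ω²·r is the centripetal force on the rotating mass, which is reacted by the shaft and bearings rather than supplied by the motor; at constant ω the motor only has to overcome friction and windage. A common sizing estimate instead uses the spin-up torque T = I·α plus a friction allowance. A minimal sketch, with all numbers (mass, radius, speed, spin-up time, friction) assumed for illustration:

```python
import math

# Hypothetical parameters
m = 20.0                 # total mass of table + object (kg)
r = 0.30                 # table radius (m)
w_target = 2 * math.pi   # target angular speed: 1 rev/s (rad/s)
t_spinup = 3.0           # time allowed to reach w_target (s)
T_friction = 0.5         # assumed constant friction torque (N·m)

# Moment of inertia, approximating the table + load as a uniform disc
I = 0.5 * m * r**2

# Angular acceleration needed to reach the target speed in t_spinup
alpha = w_target / t_spinup

# Motor torque: accelerate the inertia and overcome friction
T_required = I * alpha + T_friction
print(f"I = {I:.3f} kg·m², required torque ≈ {T_required:.2f} N·m")
```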
  • asked a question related to Tables
Question
1 answer
How can I get the table tab in Amos v23?
Relevant answer
Answer
To add a published paper to your ResearchGate profile, you need to:
  1. Go to the Research tab on your profile.
  2. On the left, select Preprints and locate your publication.
  3. Click Add published version under the preprint title.
  4. Select the published work you want to link to if it’s already on ResearchGate, or create a new publication if it’s not.
  5. Click Add published version.
Alternatively, you can add a publication page to your profile by clicking the Add new button at the top right-hand corner of any ResearchGate page. I hope that helps!
  • asked a question related to Tables
Question
2 answers
How should I proceed in obtaining this information? Additionally, the article states that prior to analysis, each sample was spiked with the internal standard (Rh). Was the sample manually spiked with the internal standard? Also, what was the dilution factor used for each sample, as it was not mentioned in the article?
Thank you for your assistance.
Relevant answer
Answer
Thank you!
  • asked a question related to Tables
Question
1 answer
When I calculate the rates, the attribute table of transect_rates doesn't work: the calculated NSM, EPR, and LRR values don't appear. Does anyone know what the solution is?
Thank you
Relevant answer
Answer
Rina Amritasari, it seems like you are encountering an issue with DSAS (Digital Shoreline Analysis System) while trying to calculate rates and view the attribute table of the transect rates. The error message you mentioned indicates that the DSAS output file could not be found at the specified location.
To address this issue, I would suggest the following troubleshooting steps:
1. Verify the file path: Double-check the file path you provided for the output file. Make sure it is accurate and matches the location where DSAS should save the output file. It's possible that there might be a typo or an incorrect file path specified.
2. Check file permissions: Ensure that you have the necessary permissions to read and write files in the specified location. If you are running DSAS with limited user privileges, it might be causing the issue. Try running DSAS with administrative privileges or ensure that the user account you are using has the appropriate permissions.
3. Reinstall DSAS: If the issue persists, it might be worth reinstalling DSAS. Sometimes, software installations can encounter errors or files can become corrupted. By reinstalling DSAS, you can ensure that you have a clean installation, which might resolve the issue you are facing.
4. Seek help from DSAS support: If none of the above steps resolve the problem, it would be beneficial to reach out to DSAS support for assistance. They will have more specific knowledge about the software and can provide guidance tailored to your specific situation. They might be able to identify any known issues or provide further troubleshooting steps to resolve the problem.
Remember to provide as much detail as possible when seeking help from DSAS support, including the specific error message, the steps you followed, and any relevant information about your operating system and DSAS version. This will help them diagnose the issue more effectively and provide an appropriate solution.
I hope these suggestions help you resolve the issue with DSAS and enable you to calculate the rates and view the attribute table successfully.
  • asked a question related to Tables
Question
2 answers
Does anybody have a scan of the front matter, including the Table of Contents, of Volume 2 of the Proceedings of CCCT 2004, Austin, TX, USA?
It would be very much appreciated.
Relevant answer
Answer
Thanks, Mahesh Senadeera, for your reply. This is very much appreciated.
What I additionally need is the Table of Contents with:
AUTHOR LIST
TITLE
PAGE NUMBERS
Thanks a lot for finding this metadata.
  • asked a question related to Tables
Question
4 answers
I'm confused about which values to use from the table below; can you help me compute the Ca/P ratio?
Relevant answer
Answer
Ca/P = (Ca atomic %)/(P atomic %) = 44.22/22.65 = 1.95. That is somewhat high for stoichiometric hydroxyapatite (1.67). For better results, use a piece of stoichiometric synthetic HA as a standard and acquire data from multiple spots.
  • asked a question related to Tables
Question
4 answers
Dear ResearchGate Community,
I am seeking guidance regarding the appropriate statistical analysis for my research study. In my study, I have two groups (Control and Experimental) and two states (Pre and Post). I conducted a Repeated Measures ANOVA with the factors of states, States*group interaction, and error(states) to analyze my data. However, I am unsure if this is the most suitable test for comparing the differences between the two groups.
Additionally, I am seeking advice on how to effectively present these findings in a result table following the guidelines of the APA (American Psychological Association) style. Should I create two separate tables, one for descriptive statistics and the other for the ANOVA table? I would appreciate any assistance in formatting the result table specifically for the factors of States, States*group interaction, and error(states).
Thank you in advance for your valuable insights and assistance
Relevant answer
Answer
When reporting the results of a repeated measures ANOVA (Analysis of Variance) in APA (American Psychological Association) style, you generally include a table and a concise narrative summary. Here's a step-by-step guide on how to present the results:
  1. Table: Create a table to present the key statistical information. The table should be labeled with a number (e.g., Table 1) and include a descriptive title. Here's an example format:
Table 1
Descriptive Statistics and Repeated Measures ANOVA Results
The table should include the following columns:
  • Descriptive Statistics: Present the means and standard deviations (SD) for each condition or time point.
  • Mauchly's Test of Sphericity: If you have more than two levels of the within-subjects factor, include the results of the Mauchly's test to assess the assumption of sphericity. Report the degrees of freedom (df) and the p-value.
  • Greenhouse-Geisser Correction: If the assumption of sphericity is violated, include the Greenhouse-Geisser correction results, which adjust the degrees of freedom and p-values.
  • Tests of Within-Subjects Effects: Present the main effects and interaction effects. Include the degrees of freedom (df), the F-value, the p-value, and effect size measures like partial eta-squared (ηp²) or epsilon-squared (ε²). Report the effect size values alongside the F-value and p-value.
  2. Narrative Summary: Alongside the table, provide a brief narrative summary of the key findings. The summary should include the following information:
  • Describe the purpose of the analysis and the variables involved.
  • State whether the assumption of sphericity was met. If it was violated, mention that the Greenhouse-Geisser correction was applied.
  • Report the main effects and interaction effects, including the relevant F-values, degrees of freedom, p-values, and effect size measures.
  • Interpret the significant effects and provide a concise summary of the findings. Focus on the direction and magnitude of the effects.
  • If appropriate, discuss any post hoc tests or planned comparisons conducted following the ANOVA. Highlight significant pairwise comparisons and any patterns observed.
Here's an example narrative summary:
"The repeated measures ANOVA revealed a significant main effect of time, F(2, 30) = 7.21, p < .001, ηp² = .32. Post hoc tests using the Bonferroni correction indicated that the mean scores at time point 3 (M = 8.56, SD = 1.21) were significantly higher than at time point 1 (M = 5.34, SD = 0.98, p < .01) and time point 2 (M = 6.12, SD = 1.05, p < .05). However, there was no significant main effect of condition, F(1, 15) = 1.89, p = .18, ηp² = .11. Additionally, the interaction between time and condition was not significant, F(2, 30) = 1.45, p = .25, ηp² = .09. These results suggest that time had a significant impact on the variable of interest, with scores increasing significantly from time point 1 to time point 3."
Remember to adapt the example to fit your specific study and its findings.
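If you work in Python, the statistics for a two-group pre/post design like this can be produced with the pingouin package's mixed_anova function; the simulated data and column names below are hypothetical:

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Simulate a hypothetical long-format dataset: 30 participants,
# two groups (control/experimental), two states (pre/post)
rng = np.random.default_rng(0)
n_per_group = 15
rows = []
for group in ("control", "experimental"):
    for subj in range(n_per_group):
        sid = f"{group}_{subj}"
        pre = rng.normal(5.0, 1.0)
        gain = 1.5 if group == "experimental" else 0.2
        rows.append({"id": sid, "group": group, "state": "pre", "score": pre})
        rows.append({"id": sid, "group": group, "state": "post",
                     "score": pre + gain + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)

# Mixed ANOVA: 'state' is within-subjects, 'group' is between-subjects
aov = pg.mixed_anova(data=df, dv="score", within="state",
                     between="group", subject="id")
print(aov.round(3))  # F, df, p-values, and partial eta-squared (np2)
```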
  • asked a question related to Tables
Question
3 answers
From the link https://gtexportal.org/home/datasets, under V7, I'm trying to run R/Python analyses on the Gene TPM and Transcript TPM files. In these files (which I had to open with Universal Viewer, since they are too large for an app like Notepad), I see a set of sample IDs (e.g., GTEX-1117F-0226-SM-5GZZ7), followed by transcript IDs like ENSG00000223972.4, and then a large block of numbers like 0.02865 (which take up about 99% of the files). Can someone help me decipher what the numbers mean, please? And are the numbers supposed to be assigned to a specific sample ID? (The number of values far exceeds the number of samples, by the way.) I tried opening these files as tables in R, but I do not think R is parsing the contents correctly.
Relevant answer
Answer
GTEX-1117F-0226-SM-5GZZ7 is the sample ID, and ENSG00000223972.4 is an Ensembl gene ID (the corresponding HGNC/HUGO gene symbol is listed in a separate column). The numbers you are referring to are gene expression values. TPM (Transcripts Per Million) is a normalization method used to scale these values so that the expression of genes is comparable between samples.
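On the practical side of loading these files: the GTEx TPM matrices are distributed in GCT format, where line 1 is a version tag, line 2 gives the matrix dimensions, and the header row with sample IDs starts on line 3, so the reader has to skip the first two lines. A minimal pandas sketch (the local file name is hypothetical):

```python
import pandas as pd

# GTEx gene TPM matrices are plain-text GCT files: line 1 is a version
# tag ("#1.2"), line 2 holds the matrix dimensions, and the header row
# with sample IDs starts on line 3 -- hence skiprows=2.
tpm = pd.read_csv("GTEx_gene_tpm.gct.gz",  # hypothetical local file name
                  sep="\t", skiprows=2)

# 'Name' holds Ensembl gene IDs (e.g. ENSG00000223972.4) and
# 'Description' the gene symbols; the remaining columns are samples
# (e.g. GTEX-1117F-0226-SM-5GZZ7) whose values are TPM expression.
print(tpm.shape)
print(tpm.iloc[:5, :4])
```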
  • asked a question related to Tables
Question
2 answers
For your kind reference, I have attached some DOI links:
DOI: 10.1002/adfm.202107650, Table 1
Relevant answer
Answer
Dear @Afroj, that is not my question; please read the papers first to fully understand it. Thank you.
  • asked a question related to Tables
Question
2 answers
Hello. I am trying to implement space vector PWM control for a permanent magnet motor. In my work, the permanent magnet motor is represented by two 3-D lookup-table-based flux maps. I have generated the necessary switching signals for the Universal Bridge block, which works as an inverter. My plan is to measure the three phase voltages with a Three-Phase V-I Measurement block and then use the Park transformation to convert the abc voltages to d-q values, which, after some mathematical operations, will be the inputs to the lookup tables. However, I am facing two issues.
(1) I cannot connect the output of the Three-Phase V-I Measurement block to a multiplexer through which I could feed the three phase voltages to the abc-to-dq0 block (as highlighted in the attached image). Is a converter block required so that they can be connected?
(2) I need to measure the phase voltages (phase to ground); however, there is no ground connection in my model. Will I still be able to measure the phase voltages?
Relevant answer
Answer
Better: you can feed Vabc directly into the abc input of the abc-to-dq0 block without a demux; the Three-Phase V-I Measurement block outputs the three phase voltages as a single vectorized signal.
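For reference outside Simulink, the abc-to-dq conversion in question is the Park transform. Below is a minimal NumPy sketch under one common amplitude-invariant convention (the angle source and the test voltages are hypothetical):

```python
import numpy as np

def abc_to_dq0(v_abc, theta):
    """Park transform (amplitude-invariant convention, d-axis aligned
    with phase a at theta = 0). v_abc is a length-3 array; theta is the
    electrical rotor angle in radians."""
    a, b, c = v_abc
    two_thirds = 2.0 / 3.0
    d = two_thirds * (a * np.cos(theta)
                      + b * np.cos(theta - 2 * np.pi / 3)
                      + c * np.cos(theta + 2 * np.pi / 3))
    q = -two_thirds * (a * np.sin(theta)
                       + b * np.sin(theta - 2 * np.pi / 3)
                       + c * np.sin(theta + 2 * np.pi / 3))
    zero = (a + b + c) / 3.0
    return np.array([d, q, zero])

# Hypothetical balanced three-phase voltages at angle theta
theta = 0.7
v_abc = np.array([np.cos(theta),
                  np.cos(theta - 2 * np.pi / 3),
                  np.cos(theta + 2 * np.pi / 3)])
print(abc_to_dq0(v_abc, theta))  # ~[1, 0, 0] for this aligned case
```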