Logic - Science topic
An open group for the discussion of various logics and their applications
Questions related to Logic
Thought experiment:
A bus of length L0 = 10 m is parked at a bus stop.
Two cars drive past with speeds v1 = 100000 km/s and v2 = 200000 km/s respectively.
The two observers use the Lorentz factor (γ = 1/√(1 − v²/c²)) and calculate the length of the bus to be L1 = 9.43 m and L2 = 7.45 m.
Note: We only have one bus and three lengths!
Question: How long is the bus?
Please consult my article:
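For anyone checking the arithmetic, the two quoted figures follow from the length-contraction formula with c rounded to 300000 km/s (the rounding the post's numbers assume); a minimal sketch, with variable names of my own choosing:

```python
import math

C = 300000.0  # speed of light in km/s, rounded as the post's figures assume

def contracted_length(rest_length_m, v_km_s):
    """Length of the bus as measured in the frame of an observer moving at v."""
    gamma = 1.0 / math.sqrt(1.0 - (v_km_s / C) ** 2)
    return rest_length_m / gamma

l1 = contracted_length(10.0, 100000.0)  # ≈ 9.43 m
l2 = contracted_length(10.0, 200000.0)  # ≈ 7.45 m
```

Each value is the length in that observer's own frame; L0 = 10 m is the proper length measured in the bus's rest frame, so the three numbers answer three different (frame-dependent) questions.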
Hi, I am using a regularized XGBoost model. I have included industry and location dummies. Via regularization, the ML model is excluding a few city dummies and a few industry dummies, effectively treating them as insignificant contributors to the prediction.
Is there any literature on whether we can or cannot drop such dummies purely on the basis of the mathematics behind the regularization? One of my labmates says we need a strong rationale.
My personal opinion is that it should be fine, because we want significant predictors for feature-selection purposes. It therefore should not have the side effects we worry about in econometric modeling.
Any constructive suggestions are welcome from domain experts.
Regards,
Sahil
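To see mechanically how an L1-style penalty can zero out weak dummy coefficients, here is the soft-thresholding step behind lasso-type shrinkage, with made-up coefficients. This is a toy illustration of the shrinkage idea, not of XGBoost's actual tree-based regularization (`reg_alpha`/`reg_lambda` act on leaf weights):

```python
# Soft-thresholding: the proximal step behind L1 (lasso-style) penalties.
# Coefficients whose magnitude falls below alpha are set exactly to zero.
def soft_threshold(coef, alpha):
    if abs(coef) <= alpha:
        return 0.0
    return coef - alpha if coef > 0 else coef + alpha

coefs = [0.8, 0.05, -0.6, 0.02, -0.03]  # hypothetical dummy-variable coefficients
alpha = 0.1                              # regularization strength (assumed)
shrunk = [soft_threshold(c, alpha) for c in coefs]
kept = [c != 0.0 for c in shrunk]        # which dummies survive the penalty
```

Note that exclusion here reflects a weak marginal contribution at a given penalty level, not a significance test in the econometric sense, which is arguably the crux of your labmate's objection.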
Questions about the relationship between “interpreters” and “interpretants” in Peircean semiotics have broken out again. To put the matter as pointedly as possible — because I know someone or other is bound to — “In a theory of three‑place relations among objects, signs, and interpretant signs, where indeed is there any place for the interpretive agent?”
Resources —
Survey of Pragmatic Semiotic Information
Survey of Semiotics, Semiosis, Sign Relations
I've been working for quite some time on developing self-consistent extended-body electromagnetic theories, with the idea it may resolve some of the infinite self-interactions of the standard model.
The culmination of this is that if one simply considers a single 2-tensor on a manifold, and looks at the lowest order, quadratic Lagrangians that can produce gravity and electromagnetism from this tensor, one obtains a theory that can support such solutions:
It has a number of other interesting properties (for instance, magnetic charge is naturally introduced, but only interacts via curvature coupling, which sounds a lot like dark matter).
I'll be submitting it for peer review soon, and I'm getting some feedback from colleagues, but I'd be interested in feedback from a wider audience. General discussion is great, but I'm looking for two types of feedback in particular.
1. Mathematical or logical flaws with what I've done.
2. Areas where I need to improve how I am communicating what I'm doing.
Any feedback is appreciated.
The logic I aim to establish is rooted in utility theory, which suggests that utility maximisation can motivate farmers to adopt new technologies. In some countries, however, the adoption of new technologies remains low. Many studies attribute this to a lack of awareness regarding their economic benefits. To evaluate these benefits, specific indicators of economic sustainability were selected. If the hypothesis is validated—demonstrating that these technologies provide economic advantages for farmers—it would suggest that adopting them enables farmers to achieve higher utility compared to non-adopters. Consequently, this would not only reinforce the utility theory framework but also contribute to economic sustainability while advancing several Sustainable Development Goals (SDGs).
Does this logical framework seem correct, or could additional elements be included to strengthen it further?
On January 14th, 'World Logic Day' will be celebrated. In connection with this event, I would like to discuss the purpose and, furthermore, the necessity of the formal aspects of logic, which, even in the 21st century, should remain just one part of logic alongside the theory of argumentation and logical propaedeutics.
Data analysis is a fundamental aspect of academic research, enabling researchers to make sense of collected data, draw meaningful conclusions, and contribute to the body of knowledge in their field. This article examines the critical role of data analysis in academic research, discusses various data analysis techniques and their applications, and provides tips for interpreting and presenting data effectively.
Overview of Data Analysis in Research
Data analysis involves systematically applying statistical and logical techniques to describe, summarize, and evaluate data. It helps researchers identify patterns, relationships, and trends within the data, which are essential for testing hypotheses and making informed decisions. Effective data analysis ensures the reliability and validity of research findings, making it a cornerstone of academic research.
Descriptive vs. Inferential Statistics
1. Descriptive Statistics:
• Purpose: Descriptive statistics summarize and describe the main features of a dataset. They provide simple summaries about the sample and the measures.
• Techniques: Common techniques include measures of central tendency (mean, median, mode), measures of variability (range, variance, standard deviation), and graphical representations (histograms, bar charts, scatter plots).
• Applications: Descriptive statistics are used to present basic information about the dataset and to highlight potential patterns or anomalies.
2. Inferential Statistics:
• Purpose: Inferential statistics allow researchers to make inferences and predictions about a population based on a sample of data. They help determine the probability that an observed difference or relationship is due to chance.
• Techniques: Common techniques include hypothesis testing (t-tests, chi-square tests), confidence intervals, regression analysis, and ANOVA (analysis of variance).
• Applications: Inferential statistics are used to test hypotheses, estimate population parameters, and make predictions about future trends.
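The distinction between the two kinds of statistics can be made concrete with Python's standard `statistics` module (the data values below are invented):

```python
import math
import statistics

data = [4, 8, 6, 5, 3, 8, 9, 5, 8]  # a small made-up sample

# Descriptive statistics: summarize this particular sample
mean = statistics.mean(data)
median = statistics.median(data)
stdev = statistics.stdev(data)       # sample standard deviation (n - 1 divisor)

# Inferential statistics: a 95% confidence interval for the population mean
# (normal approximation; a t-interval is more appropriate for samples this small)
z = statistics.NormalDist().inv_cdf(0.975)    # two-sided 95% critical value ≈ 1.96
half_width = z * stdev / math.sqrt(len(data))
ci = (mean - half_width, mean + half_width)
```

The first block only describes the nine numbers at hand; the second makes a probabilistic claim about the population they were drawn from, which is exactly the descriptive/inferential divide.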
Qualitative Data Analysis Methods
1. Content Analysis:
• Purpose: Content analysis involves systematically coding and categorizing textual or visual data to identify patterns, themes, and meanings.
• Applications: Used in fields such as sociology, psychology, and media studies to analyze interview transcripts, open-ended survey responses, and media content.
2. Thematic Analysis:
• Purpose: Thematic analysis focuses on identifying and analyzing themes or patterns within qualitative data.
• Applications: Commonly used in social sciences to analyze interview data, focus group discussions, and qualitative survey responses.
3. Grounded Theory:
• Purpose: Grounded theory involves generating theories based on data collected during the research process. It is an iterative process of data collection and analysis.
• Applications: Used in fields such as sociology, education, and health sciences to develop new theories grounded in empirical data.
4. Narrative Analysis:
• Purpose: Narrative analysis examines the stories or accounts provided by participants to understand how they make sense of their experiences.
• Applications: Used in psychology, anthropology, and literary studies to analyze personal narratives, life histories, and case studies.
Tools and Software for Data Analysis
1. Statistical Software:
• SPSS: Widely used for statistical analysis in social sciences. It offers a range of statistical tests and data management tools.
• R: A powerful open-source software for statistical computing and graphics. It is highly extensible and widely used in academia.
• SAS: A comprehensive software suite for advanced analytics, multivariate analysis, and data management.
2. Qualitative Data Analysis Software:
• NVivo: A popular software for qualitative data analysis, offering tools for coding, categorizing, and visualizing qualitative data.
• ATLAS.ti: Another widely used software for qualitative research, providing tools for coding, memoing, and network visualization.
3. Data Visualization Tools:
• Tableau: A powerful data visualization tool that helps create interactive and shareable dashboards.
• Microsoft Power BI: A business analytics tool that provides interactive visualizations and business intelligence capabilities.
Tips for Interpreting and Presenting Data
1. Understand Your Data: Before analyzing data, ensure you have a thorough understanding of its source, structure, and limitations. This helps in selecting appropriate analysis techniques and interpreting results accurately.
2. Use Clear Visualizations: Visual representations such as charts, graphs, and tables can make complex data more accessible and understandable. Choose the right type of visualization for your data and ensure it is clear and well-labelled.
3. Contextualize Findings: Interpret your data in the context of existing literature and theoretical frameworks. Discuss how your findings align with or differ from previous research.
4. Report Limitations: Be transparent about the limitations of your data and analysis. Discuss potential sources of bias, measurement errors, and the generalizability of your findings.
5. Communicate Clearly: Present your data and findings in a clear and concise manner. Avoid jargon and technical language that may confuse readers. Use straightforward language and provide explanations for complex concepts.
In conclusion, data analysis plays a crucial role in academic research, enabling researchers to draw meaningful conclusions and contribute to their field. By understanding different data analysis techniques, utilizing appropriate tools, and following best practices for interpreting and presenting data, researchers can enhance the quality and impact of their work.
Is it logical to convert an effect measure reported as an odds ratio in one article into a hazard ratio so it can be pooled in a meta-analysis, given that the effect measures in the other articles are hazard ratios?
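Whether the conversion is valid is the substantive question: an odds ratio only approximates a hazard ratio when the event is rare, and a defensible conversion generally requires the baseline risk. Mechanically, though, once every study's effect is on the log-hazard-ratio scale, fixed-effect pooling is inverse-variance weighting; a sketch with invented numbers:

```python
import math

def pool_log_effects(log_effects, std_errors):
    """Fixed-effect inverse-variance pooling of log-scale effect sizes."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * x for w, x in zip(weights, log_effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical studies reporting HR = 0.8 (SE 0.1) and HR = 0.9 (SE 0.2)
log_hrs = [math.log(0.8), math.log(0.9)]
pooled, se = pool_log_effects(log_hrs, [0.1, 0.2])
pooled_hr = math.exp(pooled)
```

A sensitivity analysis excluding the converted study is a common safeguard when mixing effect measures like this.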
Differential Propositional Calculus • Overview
❝The most fundamental concept in cybernetics is that of “difference”, either that two things are recognisably different or that one thing has changed with time.❞
— W. Ross Ashby • An Introduction to Cybernetics
Differential logic is the component of logic whose object is the description of variation — the aspects of change, difference, distribution, and diversity — in universes of discourse subject to logical description. To the extent a logical inquiry makes use of a formal system, its differential component treats the use of a differential logical calculus — a formal system with the expressive capacity to describe change and diversity in logical universes of discourse.
In accord with the strategy of approaching logical systems in stages, first gaining a foothold in propositional logic and advancing on those grounds, we may set our first stepping stones toward differential logic in “differential propositional calculi” — propositional calculi extended by sets of terms for describing aspects of change and difference, for example, processes taking place in a universe of discourse or transformations mapping a source universe to a target universe.
What follows is the outline of a sketch on differential propositional calculus intended as an intuitive introduction to the larger subject of differential logic, which amounts in turn to my best effort so far at dealing with the ancient and persistent problems of treating diversity and mutability in logical terms.
Note. I'll give just the links to the main topic heads below. Please follow the link at the top of the page for the full outline.
Part 1 —
Casual Introduction
Cactus Calculus
Part 2 —
Formal Development
Elementary Notions
Special Classes of Propositions
Differential Extensions
• https://oeis.org/wiki/Differential_Propositional_Calculus_%E2%80%A2_Part_2#Differential_Extensions
Appendices —
References —
Logically, when calculating the Urbach energy, one should consider the region corresponding to the absorption edge and determine the slope there. However, I have found that many studies instead use the region below the energy corresponding to the bandgap.
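Whichever window is chosen, the extraction itself is a straight-line fit of ln(α) versus photon energy, with the Urbach energy given by the reciprocal slope (α = α₀·exp(E/E_U)). A self-contained sketch on synthetic data, with all numbers invented:

```python
import math

# Synthetic absorption edge obeying alpha = alpha0 * exp(E / E_U)
E_U_true = 0.05                                   # Urbach energy in eV (assumed)
energies = [1.00 + 0.01 * i for i in range(20)]   # photon energies in eV
ln_alpha = [math.log(1.0e3) + E / E_U_true for E in energies]

# Least-squares slope of ln(alpha) vs E over the fitting window
n = len(energies)
mean_E = sum(energies) / n
mean_y = sum(ln_alpha) / n
slope = (sum((E - mean_E) * (y - mean_y) for E, y in zip(energies, ln_alpha))
         / sum((E - mean_E) ** 2 for E in energies))
E_U = 1.0 / slope    # recovered Urbach energy
```

On real data the choice of fitting window changes the slope, which is precisely why the edge-region versus below-gap-region question matters.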
🔎 In response to the need for a systematic and comprehensive classification of organizational and management theories that can encompass a broad range of perspectives and theories, I have developed a new classification. This classification considers core parameters such as organizational performance (efficiency and effectiveness), factors influencing organizational performance (human beings, human groups, structure, environment, and society), and philosophical foundations of theories. The goal of this classification is to provide a more organized and practical framework for analyzing and comparing organizational and management theories.
The classification consists of five main categories as follows:
Category 1: Humans as Instruments of Organizational Efficiency
This category conceptualizes humans within the framework of industrial modernity, adopting a mechanistic perspective that treats them as the primary resource for organizational production and efficiency. The roots of this approach can be traced back to classical economic thought and scientific management theories, which emerged in the late 19th and early 20th centuries. In the theories under this category, humans are considered efficient machines tasked with delivering maximum output at minimal cost. Evidently, theories in this category view efficiency, as the organization's paramount goal, as the result of internal production factors. Consequently, external environmental factors hold no significant place in the analysis of organizational essence, performance, or management processes. Within this category, the metaphor of an organization as a machine and humans as tools of production dominates.
The level of analysis here is micro-level, focusing on individual employees. Philosophically, theories in this category regard the organization as a tangible entity and management as an objective process, both of which can be comprehended through a positivist approach. Humans, akin to other tools of production, are utilized within a deterministic cycle. Consequently, addressing organizational and managerial challenges in this category is pursued through rule-based solutions, leaving little room for innovation or inductive reasoning.
Core Criteria of Theories in This Category:
- Emphasis on the instrumental role of humans within the organization.
- Use of quantitative and scientific methods to optimize performance.
- Standardization of human behavior.
- Focus on individual productivity and cost reduction.
- Reliance on precise control mechanisms for human resources.
- Positivist assumptions and instrumental rationality underpinning managerial philosophy.
- Micro-level, individual-oriented unit of analysis.
Examples of Theories in This Category:
- Scientific Management
- Principles of Administrative Management
- Task-Oriented Management (Gilbreths)
- Production-Oriented Management Approach (Charles Babbage)
- Economic Motivation (Adam Smith)
Category 2: Human Groups as Instruments of Organizational Efficiency
This category represents a shift from a mechanistic view of humans to an appreciation of the complexities of human interactions within organizations, with a focus on groups and interpersonal relationships as the core drivers of organizational efficiency. Emerging primarily between the 1920s and 1950s, this perspective was advanced by researchers like Elton Mayo, who demonstrated that informal relationships, emotions, and group dynamics have a more profound impact on organizational efficiency than formal and mechanistic mechanisms. This viewpoint regards humans not as passive tools but as social beings who create meaning and efficiency through their interactions with others.
The level of analysis in this category is meso-level, concentrating on groups and informal structures within work settings. It highlights that organizational efficiency stems not from mechanistic control but from a deeper understanding of employees’ needs, expectations, and social relationships. Theories within this category draw on foundations from social psychology and organizational sociology, emphasizing that workgroups are the primary units for creating meaning and efficiency.
From a philosophical standpoint, these theories still perceive organizations as real phenomena and management as an objective process, which can be understood through a positivist lens. However, they recognize that individuals, while somewhat autonomous, are subjected to forms of compulsion imposed by groups and the organization itself. Methodologically, rule-based approaches continue to dominate research in this category, though there is a transition from purely quantitative measurements to a combination of observation and interpretation in data collection and analysis.
Core Criteria of Theories in This Category:
- Emphasis on human relationships and group interactions.
- Examination of informal relations within organizations.
- Focus on motivation, job satisfaction, and team collaboration.
- Viewing the organization as a social environment.
- Use of qualitative methods and analysis of interpersonal relationships.
- Aiming to enhance productivity through improved communication and interactions.
- Unit of analysis: the individual within the group.
Examples of Theories in This Category:
- Human Relations Theory (Elton Mayo)
- Hierarchy of Needs (Abraham Maslow)
- Theory X and Theory Y (Douglas McGregor)
- Motivation-Hygiene Theory (Frederick Herzberg)
- Three Needs Theory (David McClelland)
- Force Field Theory (Kurt Lewin)
- Systems Approach to Motivation (Chris Argyris)
- Participation Theory (Mary Parker Follett)
- Equity Theory (John Stacey Adams)
- Transformational Leadership Theory
Category 3: Organizational Structure as a Driver of Efficiency
In this category, rooted in organizational sociology and classical management theories, the organizational structure is regarded as the primary determinant of efficiency. These theories, which evolved from the early 20th century through the mid-century decades, assert that the precise design of organizational structures—including hierarchy, division of labor, coordination, and control—can maximize efficiency. Scholars such as Max Weber, with his concept of bureaucracy, and later theorists like James Thompson and Henry Mintzberg, sought to develop optimal organizational models. These models emphasize placing every component in its appropriate position, minimizing friction, and maximizing efficiency.
The level of analysis in this category is macro-level, focusing on the overall architecture of the organization. The approach underscores that efficiency arises from the intelligent design of systems, processes, and organizational relationships rather than solely from the performance of individuals or groups. Theories in this category draw inspiration from engineering sciences, cybernetics, and systems theory, while maintaining a closed-systems perspective. They posit that the closer the organizational structure aligns with a mechanical and machine-like model, the greater its efficiency. Philosophically, these theories follow the same objectivist epistemological foundations as the previous two categories, positioning the science of organization and management on the objective end of the epistemological spectrum.
Core Criteria of Theories in This Category:
- Emphasis on the design and optimization of formal organizational structures.
- Focus on standardized processes and rules.
- Application of structural theories to reduce inefficiencies.
- Emphasis on coordination and control in large and complex organizations.
- Use of formal organizational models to enhance productivity.
- Machine-like modeling of organizations.
- Unit of analysis: the organization as a whole.
Examples of Theories in This Category:
- Bureaucracy Theory (Max Weber)
- Organizational Structuralism (Henry Mintzberg)
- Organizational Design Theory
- Organizational Control Theory
- Systems Management Theory (Kenneth Blanchard)
- Coordination Mechanisms (Henry Mintzberg)
Category 4: Efficiency and Effectiveness as Products of Organizational and Societal Interaction
Theories in this category, influenced by the emerging role of corporate social responsibility, expand the traditional organizational goal of efficiency to include the concept of effectiveness. This category adopts a systemic and interactive view of efficiency and effectiveness, defining the organization not as a closed system but as a dynamic, living entity engaged in continuous interaction with its social environment. Emerging primarily in the 1970s and beyond, this perspective argues that organizational efficiency and effectiveness result not only from internal factors but also from complex, multifaceted interactions between the organization and society.
Thinkers who introduced the concept of open systems, as well as figures like Peter Drucker and Michael Porter who highlighted the importance of creating synergies between organizational and societal interests through social responsibility, transparency, and accountability, belong to this category.
The level of analysis in this category is trans-organizational and inter-organizational, focusing on the intricate relationships between organizations and their external environment. Efficiency and effectiveness are redefined not merely as economic productivity but as an organization’s ability to create social value and align with environmental demands. Theories in this category draw upon disciplines like sociology, institutional economics, and organizational studies, advocating that the boundary between organization and society is not a fixed line but a dynamic, interactive space.
Philosophically, the epistemology and methodology in this category lean towards interpretivist and subjectivist paradigms, though organizations are still largely perceived as real and objective entities. The inclusion of concepts such as social values, however, often necessitates the adoption of nominalist ontologies as well. Furthermore, this category grants individuals a higher level of autonomy within organizational contexts compared to prior categories.
Core Criteria of Theories in This Category:
- Emphasis on the dynamism and adaptability of the external environment.
- Analysis of reciprocal effects between the organization and its environment.
- Use of systemic approaches.
- Focus on aligning organizational actions with societal needs and conditions.
- Unit of analysis: the organizational environment.
Examples of Theories in This Category:
- Contingency Theory
- Open Systems Theory
- Theory of Collective Action
- Social Systems Theory
- Network Theory (Allen & Nau)
- Dynamic Environmental Analysis Theory
- Organizational Ecology Approach
- Organizational Learning Theory
Category 5: Organizational Effectiveness as a Product of Society
In this category, organizational effectiveness is regarded entirely as a product of broader social systems, where organizations have minimal agency in shaping their goals and directions. This radical approach, rooted in critical theories and post-structuralism, perceives organizations not as independent entities but merely as reflections of larger societal structures. Thinkers like Michel Foucault, Jean-François Lyotard, and Jacques Derrida argued that organizations are mere reproductions of dominant discourses, power relations, and societal structures, with little to no autonomous role in defining their objectives and trajectories.
The level of analysis in this category is distinctly macro, focusing on discursive and cultural dimensions and emphasizing the role of organizations in reproducing and mirroring social structures. Effectiveness is not defined by the achievement of organizational objectives but rather as a direct outcome of social, cultural, and historical processes. Theories within this category draw inspiration from critical theories, post-structuralism, and historical sociology, contending that organizations act as passive entities that merely reflect societal structures rather than independently shaping them.
This perspective shifts organizational knowledge entirely towards interpretivist and subjectivist philosophical foundations, spanning ontology, epistemology, methodology, and the view of human agency and values. However, regarding human agency, influenced by anarchist thought, the shift in perspective becomes even more radical.
Core Criteria of Theories in This Category:
- Fundamental critique of the concept of the organization and its societal role.
- Emphasis on the influence of social forces on organizational performance and objectives.
- Questioning the very nature of organizational structures.
- Viewing organizations as tools for social domination and reproduction.
- Employing radical and postmodern perspectives for analyzing organizations.
- Macro-level analysis.
Examples of Theories in This Category:
- Organizations in Marxist Theory
- Organizations in Postmodernism
- Critical Philosophical Theories
- Structuralist Theories
- Phenomenological Approaches to Organizations
- Anti-Organization Theories
- Theories of Social Capital Accumulation
- Anti-Systemic Theories
💬 Please share your views in the comments section. Your insights will be invaluable for refining and enhancing this academic framework!
DUMBING-DOWN THE WORLD
Who is responsible for the dumbing-down of the world's population? Language is the medium of consensus and of discord. As cultures seek a meaningful world-view, volubility increases with the need for more words. Concepts become redundant and are discarded; words are invented and reification changes reality. Modern science, religion – and contemporary education – are based on the presumption of Pluralism (not Realism or Materialism, as is commonly held). The primacy of a metaphysical perspective has all but been abandoned. The vast body of academic scholarship over the last century was predicated on the catastrophic Oxford Model and is mostly meaningless and redundant. It is an incomprehensible waste of resources and lives, supporting the crumbling edifice of Ivory Tower-based education. That culture, based on erroneous assumptions, must take responsibility for the World's stultified condition. The world-view of the vast majority is based on harm, through ignorance, lies and deception.
Are the foundations of science rock solid? Do we really understand the cosmogonies of the Ancients upon which the development and integrity of modern beliefs rest? Has the Truth been obscured, perhaps corrupted? It is widely held that Democritus was an atomist, but what did his concepts “atom and void” and “reality” mean in those times? Have “atoms” and “matter” been objectified through “reification”; originally understood as concepts, have they become the real things that distort the devolving, reified world-view? Why were the Ancient paradigms corrupted? Is reification the world's – and the individual's – greatest problem? Does matter exist?
I am looking for a quotation of the tale described below. I know it appeared at the beginning of an article or text on paraconsistent logic but I can't find that source nor any other authoritative source.
Two disputants come to a rabbi for a resolution. After hearing the first case, the rabbi says, “You are right.” When he hears the antagonist’s response, he says, “You are also right.” An observer says, “Rabbi, you said person A and person B are both right; they can't both be right!” The rabbi responds, “You are also right!”
Logic plays a crucial role in the process of thinking, analyzing facts, evaluating situations, etc. Many philosophers, psychologists, and others with a scientific background have mentioned the great importance of logic, emphasizing its everyday and timeless applications. However, logic, like other dimensions of human cognition, has its limits. In your perspective, what are the limits of logic?
Differential Logic • 1
Introduction —
Differential logic is the component of logic whose object is the description of variation — focusing on the aspects of change, difference, distribution, and diversity — in universes of discourse subject to logical description. A definition that broad naturally incorporates any study of variation by way of mathematical models, but differential logic is especially charged with the qualitative aspects of variation pervading or preceding quantitative models. To the extent a logical inquiry makes use of a formal system, its differential component governs the use of a “differential logical calculus”, that is, a formal system with the expressive capacity to describe change and diversity in logical universes of discourse.
Simple examples of differential logical calculi are furnished by “differential propositional calculi”. A differential propositional calculus is a propositional calculus extended by a set of terms for describing aspects of change and difference, for example, processes taking place in a universe of discourse or transformations mapping a source universe to a target universe. Such a calculus augments ordinary propositional calculus in the same way the differential calculus of Leibniz and Newton augments the analytic geometry of Descartes.
Resources —
Logic Syllabus
Survey of Differential Logic
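As a toy rendering of the idea in plain Python over Boolean tuples (not the cactus syntax used in the linked material), the enlargement operator Ef(x, dx) = f(x ⊕ dx) and the difference operator Df = Ef ⊕ f can be written directly:

```python
# Enlargement operator: evaluate f at the point displaced by dx (bitwise XOR)
def enlarge(f):
    return lambda x, dx: f(tuple(a ^ b for a, b in zip(x, dx)))

# Difference operator: does flipping the bits marked in dx change f's value?
def difference(f):
    Ef = enlarge(f)
    return lambda x, dx: Ef(x, dx) ^ f(x)

conj = lambda x: x[0] & x[1]     # f(p, q) = p AND q
D_conj = difference(conj)
```

Here `D_conj((1, 1), (1, 0))` is 1, since flipping p while q stays true changes the value of p ∧ q, whereas `D_conj((0, 0), (0, 1))` is 0: flipping q alone leaves 0 ∧ q false. Df is thus a proposition over the extended universe of positions and changes, which is the "expressive capacity to describe change" in miniature.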
Human consciousness includes cognitive patterns, logical notions, as well as rational and emotional elements. The complex process of thinking can be categorized as logical or factual (rational), and emotional. In your perspective, why is rational thinking important?
Comments on “Information = Comprehension × Extension”
Resources
Inquiry Blog • Survey of Pragmatic Semiotic Information
OEIS Wiki • Information = Comprehension × Extension
C.S. Peirce • Upon Logical Comprehension and Extension
We will time-travel to the time of Christ by looking at the language and culture of Afghanistan. In this PowerPoint, you will see a student’s wooden chalk board used to help Afghan children learn verses from the Koran, an Afghan Coloring Book, and a straw picture of Two Important Afghan Words:
“Muhammad” and “Allah”
The Afghan language (Dari) is rich in metaphor. The Afghan word for a Prickly Pear Cactus is “zabane mader showhar” (which translates as “mother-in-law’s tongue.” The Afghan word for Ostrich is “shotor-morgh” which translates as “elephant hen.” The Afghan word for Popcorn is “chos e fil” which means “elephant’s fart. Another Afghan word for Popcorn is “pof e fil” which means “elephant’s puff.” The Afghan word for Lady Bird (the insect) is “kafsh duzak” which means “little shoe-smith.” A Turkey in Afghan Persian is “fil morgh” which means “elephant chicken.” And a Turtle is “sang posht” which means “rock back.”
Our favorite Afghan metaphor is their word for Walnut. A Walnjt is called “chahar maghs” which means “four brains.” If you think about it, that’s what a Walnut actually looks like—four brains.
From 1967 to 1969, when the Nilsen family lived in Kabul, Afghanistan, we saw many beautiful English signs that were mostly misspelled, but the misspellings are perfectly logical. Our favorite sign advertised flowers, and the wording on the sign was “Flower and Buket Maker.” It took us a while to figure this one out. What the sign meant to say was “Flower and bouquet maker.”
Is there a logic which allows one to define functions and relations both through formulas and by algorithms?
Information = Comprehension × Extension • Preamble
Eight summers ago I hit on what struck me as a new insight into one of the most recalcitrant problems in Peirce’s semiotics and logic of science, namely, the relation between “the manner in which different representations stand for their objects” and the way in which different inferences transform states of information. I roughed out a sketch of my epiphany in a series of blog posts then set it aside for the cool of later reflection. Now looks to be a choice moment for taking another look.
A first pass through the variations of representation and reasoning detects the axes of iconic, indexical, and symbolic manners of representation on the one hand and the axes of abductive, inductive, and deductive modes of inference on the other. Early and often Peirce suggests a natural correspondence between the main modes of inference and the main manners of representation but his early arguments differ from his later accounts in ways deserving close examination, partly for the extra points in his line of reasoning and partly for his explanation of indices as signs constituted by convening the variant conceptions of sundry interpreters.
Resources
Inquiry Blog • Survey of Pragmatic Semiotic Information
OEIS Wiki • Information = Comprehension × Extension
C.S. Peirce • Upon Logical Comprehension and Extension
BACK DOORS MITIGATION IN IoT: Persistent attacks on the IoT, which is now part of our daily lives, are growing daily, especially as we sustain it with the coming generation of IoMT devices. No matter how many logical layers of controls we deploy to mitigate them, there is always a weak point when put to the test. What back-door control or measure would most likely assure no collusion with device manufacturers?
Which is the more logical admission age for medical study, the UK or the US criteria?
RESPECTFULLY, pan-dualism is more plausible than pan-deism. All entities either are unique, or too different for perfect prediction. Plus, humans may be bound by some rules (genetics, environment, circumstances, etc.) but, without the fundamental choice to focus on life, human reason would be impossible. Plus, humans can lose all their cells yet survive and retain their individual identities. So, at least humans have some immaterial tracker (maybe souls). Pan-deism depends on the unlikely premise that a creator destroyed itself (thus, all existence is dubiously the creator’s debris). We don’t know who created us or how. Thus, pan-dualism has the most evidence, while making the least assumptions.
Let's consider two egregious cases:
- Christian G. Wolf "iSpacetime"
- Gerd Pommerenke "The Metric Universe" and "Accelerated Expansion: A Fallacy"
Christian G. Wolf
Christian Wolf is cynical. When he was presented with the derivation showing that his “proposed formula” for G hides the value of G in one of its factors, F_g, he just laughed.
In addition to other tautologies and the claim that he derived Cosmological Constants with infinite precision (a tautology like G = G, whose precision depends only on the number of bytes used), he also indulges in numerology (converting constants to rational numbers).
The reason is simple: The proposed expression for G contains G. The rest of the expression simplifies to 1. I provided the Python Sympy derivations. Anyone can check it.
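As an illustration of the kind of check involved (using a hypothetical stand-in expression, not Wolf's actual formula), SymPy makes the collapse to G = G easy to verify:

```python
# A minimal sketch of the tautology check described above, with a
# HYPOTHETICAL stand-in formula: if a "derived" expression for G secretly
# contains G inside one factor, everything else simplifies to 1 and the
# "prediction" is just G = G.
import sympy as sp

G, c, h = sp.symbols('G c h', positive=True)

# Hypothetical factor that hides G inside it:
F_g = G * h / c**3
# "Proposed formula" built from that factor:
proposed_G = F_g * c**3 / h

assert sp.simplify(proposed_G / G) == 1   # the rest simplifies to 1
assert sp.simplify(proposed_G - G) == 0   # i.e., the claim reduces to G = G
```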
So, in his "work" you see idiotic ideas like: Time Unit is 1/6961 iSpace-second...:)
Gerd Pommerenke
In the case of Gerd Pommerenke, the model has a log-spiral photon path in a 4D spatial manifold. I explained to Gerd that all log-spiral models (where the 4D k-vector is at a constant angle with respect to the 3D hypersurface) produce ZERO Redshift.
You can see that from the attached picture ("LightPropagationin4D_left.png").
So, any claim that the model fits the Supernova Observations is fraudulent.
In addition, the model is a simple one-dimensional waveguide model, where a conductance kappa is arbitrarily introduced and given values that create the tautology H_0 = H_0.
In other words, when Gerd claims that he calculated the Hubble Constant, he is faking it.
I say faking it because he has been given this information and decided to disregard it. The same goes for Christian G. Wolf.
Nobody is to blame for making a mistake in a calculation. To persist after learning of the mistake and refuse to retract the work is scientific fraud.
The problem is that the kappa, when inserted into the proposed formula for the Hubble Constant, yields a tautology H_0=H_0.
The reason is simple: The proposed expression contains Kappa_0. Kappa_0 contains H_0. The rest of the expression simplifies to 1. I provided the Python Sympy derivations. Anyone can check it.
Gerd knows that H_0 cannot be a cosmological constant since it is equal to the inverse of the universe's age. Gerd lifted the Universe as a Lightspeed Expanding Hyperspherical Hypersurface from my work without referencing it.
The problem is that, in my theory, all particles are surfing the Inner Dilation Layer, and thus, they cannot move radially at will. They are being dragged by the Inner Dilation Layer.
That is not the case in Gerd's model, so Gerd cannot explain why particles don't diffuse in 4D or why Gravity and Electromagnetism do not decay with cubic distances.
In addition, Gerd's value of vacuum impedance is imaginary, meaning that light would be absorbed as it travels over billions and billions of light-years, resulting in a totally dark sky.
At that value of the impedance, we wouldn't be able to see the Sun!
So, the model grossly failed, and the contrived value of kappa was designed to create the tautology.
Hence, in addition to not reproducing redshifts and containing circular reasoning, Gerd's claim of deriving Cosmological Constants is fraudulent.
Circular reasoning and tautology fraud are very common, so it is worthwhile to evaluate these two models.
Feel free to ask questions. I provided the Python Notebooks showcasing the tautologies.
If you follow the rules of classical logic, it establishes a structure of the universe that produces the results of the MME experiment and the observations of GTR and QM in 3-space. I would be curious if anyone can find a flaw in the theory (page 23).
Hi all,
Recently, I conducted Hall-effect measurements at high temperatures on highly p-doped diamond with a doping concentration of 2.5×10^20 cm−3.
During these experiments, I observed two key phenomena:
Carrier concentration: as the temperature increased, the carrier concentration decreased.
Resistivity: despite the decrease in carrier concentration, the resistivity of the material also decreased with increasing temperature.
The decrease in resistivity is logical, but why is the carrier concentration decreasing?
How can you create a logical and coherent structure for your literature review to guide readers through complex information?
The delta function seems to produce logical contradictions when analyzed at a fundamental level. I would be curious whether anyone else agrees.
If not, logic derives from another entity.
Britannica, The Editors of Encyclopaedia. "Yahweh". Encyclopedia Britannica, 22 Jul. 2024, https://www.britannica.com/topic/Yahweh. Accessed 22 July 2024.
Britannica, The Editors of Encyclopaedia. "logos". Encyclopedia Britannica, 1 Jun. 2024, https://www.britannica.com/topic/logos. Accessed 22 July 2024.
Aguirre, Anthony. "multiverse". Encyclopedia Britannica, 25 Jun. 2024, https://www.britannica.com/science/multiverse. Accessed 22 July 2024.
Several researchers, using different approaches, have obtained the correct fields produced by an electric charge moving with constant rectilinear velocity without using the relativistic machinery (Heaviside, Landau, Jefimenko, Dimitriyev, Ogiba, …).
On the other hand, if electric charges are placed in external electromagnetic fields, asymmetries appear in the Maxwell equations (see the introduction of Einstein’s original paper) that can only be resolved by applying the Lorentz transformations.
I believe that if we get a true answer for the first case (based on first principles), the second case will be easily (and logically) understood. Moreover, the number of relativistic paradoxes will be drastically reduced.
I am interested in knowing how an AI detection tool works and what the main logic behind it is. How is it able to detect the presence of AI-generated work?
Yes, because critical rationalism recognizes substance, parsimony, and identity (it adjusts premises upon contradiction), while skeptical empiricism believes everything results from impressions. Skeptical empiricism also believes the self is an illusion.
Needed: research papers or other sources for learning logic-circuit design for ternary logics.
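As a starting point while gathering sources, the basic balanced-ternary gates that much of the ternary-logic literature builds on can be sketched in Python (illustrative definitions, assuming balanced ternary with trit values -1, 0, +1):

```python
# Basic balanced-ternary gates over trit values {-1, 0, +1}
# (illustrative definitions, not from a specific paper).

def t_not(a):
    # ternary inverter: negation of the trit
    return -a

def t_min(a, b):
    # ternary AND analogue: minimum of the two trits
    return min(a, b)

def t_max(a, b):
    # ternary OR analogue: maximum of the two trits
    return max(a, b)

assert t_not(1) == -1 and t_not(0) == 0
assert t_min(1, 0) == 0 and t_max(-1, 0) == 0
```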
In the article linked below, a derivation is done using strictly classical logic and experimental results as the premises. Through logic alone, processes that mimic superposition, quantum tunneling, black holes, etc. are derived, implying that quantum mechanics is in fact very classical. I would be curious whether there are any mistakes in the derivations of existence and time.
Critical rationalism respects the law of identity. https://www.researchgate.net/publication/381469939_Critical_Rationalist_Physics
All good derives from bad. Disincentives are everything. Deduction is more rigorous than induction.
My impression:
I: Attempts are made to explain or describe the known empirical facts, at least by the known logics and/or the respective mathematical relations.
II: The obvious logical or empirical contradictions are then glossed over by complex mathematical or experimental procedures, which themselves consist of the same elementary steps.
The simulation theory is NOT parsimonious because at least partial free will is self-evident. Reason would not exist without the fundamental choice to focus on life. Even animals probably make decisions thus, have souls.
If you examine General Theory of Relativity (GTR), it operates under the premise of a constant speed of light. Similarly, Quantum Mechanics (QM) is built upon the notion of particles existing in multiple states simultaneously. Following the logical pathways from these premises often leads to logically valid conclusions, but the soundness of these conclusions depends on the accuracy of the initial assumptions.
Now, if one were to construct a theory solely on the assumption that classical logic remains consistent, would it necessarily align with all empirical observations? In other words, does soundness imply empirical adequacy?
Quantum mechanics focuses more on probability and specific units, which seems more empirical, whereas relativity is more theoretical and thus rationalist.
Violating [(tradition)' = (risk analysis)' = (skin in the game)'] = ethics has many risks.
1)LONG-term higher SELF.
2)Morality is more about concrete empathy than the abstract kind.
3)Criminals risk A LOT.
4)More parsimonious: given the law of identity, and that time is an illusion, the individual is more likely eternal than abstract ideas are.
5)We probably realize, upon death, time is an illusion.
6)People evolved to be more easily bored by the abstract than concrete. So, applied mathematics may help teach math.
There is a logical relationship among the three mentioned methods; however, distinguishing which one is comprehensive and which one is specialized is essential in a fluid mechanics context.
When a polymer is soluble in water, how is it able to interact with the adsorbate molecules dispersed in the solution? I need some logical explanation regarding this phenomenon along with relevant literature. Thanks in advance.
We contend that in the Kritik A70-76 (B95-101) Kant attempted to give what in modern terms would be a formal definition of the syntax of his logic (i.e. an inductive definition of judgment).
The question we wish to ask is: given such an analysis of Kant's logic, is the said logic sufficiently expressive (with regard to multiple generality) to formulate Kant's own analogies of experience?
The original version of the second analogy in A was: everything that happens presupposes something which it follows, according to a rule. Alles, was geschieht (anhebt zu sein), setzt etwas voraus, worauf es nach einer Regel folgt.
for all x. (if Happens(x) then there exists y such that Follows_by_a_rule(x,y))
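For concreteness, the same quantified sentence can be written down in a proof assistant; the following Lean 4 sketch uses hypothetical names (`Event`, `Happens`, `FollowsByRule`) purely for illustration:

```lean
-- Hypothetical formalization of the second analogy (A version);
-- `FollowsByRule x y` reads: x follows upon y according to a rule.
def secondAnalogy (Event : Type) (Happens : Event → Prop)
    (FollowsByRule : Event → Event → Prop) : Prop :=
  ∀ x : Event, Happens x → ∃ y : Event, FollowsByRule x y
```

The nested quantifier (a universal followed by an existential) is exactly the multiple generality at issue in the question.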
We think this question might interest researchers who are interested in how multiple generality might have been dealt with in ancient and medieval logic.
My best strategy is to make my body of work on metaphysics so big and rigorous that people will ponder, "How would he have done this without a doctorate?"
1)
Data Metaphysics BA
2)
Data Metaphysics MA
3)
4)
Data Metaphysics PHD
5)
Showing ABSTRACT moral absolutes probably don’t exist, women evolved to try to fix men’s flaws. Very few women cite Prince Charming as the sexiest man. God maybe designed women to desire the men they could fix or, all females would wait until the second coming to reproduce. Many women are sexually attracted to serial killers.
1)
Results show that these tests are not significant for any of the personal and functional data, even though logically there should be a difference!
I have gender (male, female)
Position (3 choices)
Experience (4 choices)
How should I treat this, and did I use the wrong ANOVA test?
Hello everyone,
in our team we have established the use of NGN2 to differentiate hiPSCs into neurons. We do so by transducing the hiPSCs with two lentiviruses, one with inducible NGN2 and puro resistance and another with rtTA and neo resistance. We then do a 5-day selection: 2 µg/mL puro and 250 µg/mL G418 for two days, then 3 more days with 1 µg/mL puro and 250 µg/mL G418.
We start the differentiation by seeding the cells with mtesr with 4ug/mL of doxy and then next day we move to N2 media (dmem, NEA, N2, bdnf, hnt3, laminin, doxy). Next day we add rat astrocytes and then the next day we change to B27 media with AraC. Adapted from here:
The thing is that during the culture with N2 media, some clusters of hiPSCs develop in the culture that only disappear after treatment with AraC (logical), leaving a lot of debris in the culture. It also limits the densities we can work at, because the higher the density, the bigger the clusters and the dirtier the culture.
Has anyone had the same problem? Does anyone have any recommendations to get rid of these cells? We think they are cells with basal expression of the neo resistance but without enough expression to trigger the differentiation once we add the doxy. How would you address this?
Thank you guys.
Many times in physics, the equations accurately model certain aspects of reality, but the explanation is not compatible with philosophical truths. For example, in GTR, time is often believed to have begun with the universe, and this violates reason: how did the universe begin without time? In the link shown below, I set up a framework in which logic is never violated, and the conclusions are compatible with STR, GTR, and QM. I would value your opinion on them. Thanks.
Hello,
I have master's level training in logic and meta-logic, high marks, A's, but am not a practicing logician at all, I am a neo-empiricist pluralist (epistemic and ontological) who does not think any single CTM based model of the mind can be reduced to logic exactly because of a categorical ambiguity that gets contingently invoked.
Neurons appear and so must first be modelled as objects with endogenous functions rendered over many internal sub-relations describing their emergent dynamics (behaviour). BUT, when we switch to building a bit model of human cognition in terms of neural patterns and inter-dynamics, especially when thinking of the brain as composed of neural bit maps made morphic to structures in reality "outside", we then need to categorize neurons in more purely exogenous and relational terms too.
I do not believe this is allowed (it does not lead to wffs) in any model built over a single, strictly reduced, monist, classical-logic-based model, but I am not versed enough in these technical terms to be certain about the argumentative or procedural complexities involved, and would welcome any logician's insights here (I am a "fan" of semi-classical model building, though!).
I am looking for a practiced logician, with expertise in Model Theory (building structures of interpretation) who might have a peripheral interest in philosophy of mind, but who is willing to consider non-classical approaches.
Quid pro quo!
If you want a primer on my overall concern with the limits of finalizing totalized logical models (TOE's), please give this a go:
Thanks, Brian
Consider the following quotation by Pierre Boutroux (1880 - 1922):
"Logic is invincible because in order to combat logic it is necessary to use logic."
I would like to add that to prove logical principles, we also need logical principles.
Can logical principles prove their own truthfulness?
I think the only way is to test their suitability by experience. Accordingly, we can only accept a logical principle provided that it works well to describe the real world.
Long-winded Speculation 1:
Long-winded Speculation 2:
TRYING to BEGIN a concise chart:
Maybe we should identify what is the most parsimonious afterlife. Expanding the law of identity, maybe physics can determine the exact afterlife all have coming.
My previous attempts:
Guessing what the afterlife broadly is:
Guessing what the afterlife is NOT.
3)
4)