Science topics: Mathematics
Mathematics, Pure and Applied Math
Questions related to Mathematics
  • asked a question related to Mathematics
Question
3 answers
Need CBSE India 10th board examination data on the mathematics subject.
Relevant answer
Answer
Hello @Durgesh ,
Websites usually have a "Contact Us" option which sometimes provides telephone and e-mail contact info. The website I identified before does so.
For direct access to that website's Contact Us page, you can enter this web address: https://www.cbse.gov.in/cbsenew/contact-us.html
You can then direct your informational search where it might do some good.
Best,
Paul
  • asked a question related to Mathematics
Question
4 answers
I want a qualitative scale containing questions that reveal mathematics teachers' perceptions of beauty and simplicity in mathematics.
Relevant answer
Answer
To create a qualitative scale that reveals mathematics teachers’ perceptions of beauty and simplicity in mathematics, consider including open-ended questions such as:
1. How do you define beauty in mathematics?
2. Can you provide an example of a mathematical concept you find particularly elegant or simple?
3. In what ways do you think simplicity enhances understanding in mathematics?
4. Describe a moment when you experienced beauty in a mathematical solution or proof.
5. How does your perception of beauty in mathematics influence your teaching practices?
These questions will facilitate deep reflections on the aesthetic aspects of mathematics and their pedagogical implications.
  • asked a question related to Mathematics
Question
1 answer
In the attached paper titled "Approximate expressions for BER performance in downlink mmWave MU-MIMO hybrid systems with different data detection approaches", the mathematical operator ℝ{.} is used with a minus sign in Equation (14). Can anybody help explain the meaning of this operator? Why is the minus sign used?
Relevant answer
Answer
IYH Dear Neeraj Sharma,
tl;dr: skip to point 3 (Sign Convention) in the section "Why the Minus Sign?" below.
Anent the meaning of the mathematical operator and the minus sign in Equation (14) of the paper you provided, some preliminaries:
Understanding the Operator ℝ{.}
In the context of complex numbers, the operator ℝ{z} represents the real part of a complex number z. If a complex number z is expressed as z = a + jb, where a and b are real numbers and j is the imaginary unit (√-1), then:
ℝ{z} = a
In other words, the operator extracts the real component of the complex number, discarding the imaginary part.
Context of Equation (14)
Equation (14) in the paper is part of a derivation to approximate the probability of a pairwise error event in a communication system. Let's look at the relevant part of the equation:
−ℝ{(Δ̂ − Δ)^H A_k ñ_k} ~ N(0, (Δ̂ − Δ)^H A_k K_k A_k^H (Δ̂ − Δ) / 2)
  1. Δ: This represents the difference between two possible transmitted signal vectors.
  2. A_k: This is a matrix related to the data detection approach used.
  3. ñ_k: This is a complex vector representing the noise and interference.
  4. The term inside ℝ{}: The expression (Δ̂ − Δ)^H A_k ñ_k is a complex number.
  5. ℝ{}: The real part of this complex number is taken.
  6. ~ N(0, ...): This indicates that the real part of the expression is approximated as a Gaussian random variable with a mean of 0 and a variance given by the expression after the comma.
Now to your actual question:
Why the Minus Sign?
The minus sign in front of the ℝ{.} operator is crucial for the derivation and has a specific purpose:
  1. Error Event Condition: The derivation is based on the condition for a pairwise error event. This event occurs when the decision metric for the incorrect signal is greater than the decision metric for the correct signal. The decision metric is related to the received signal and the data detection approach.
  2. Decision Metric Difference: The term inside the ℝ{.} operator, (Δ̂ − Δ)^H A_k ñ_k, represents the difference between the decision metrics for the two signals.
  3. Sign Convention: The minus sign is introduced to ensure that the error event condition is correctly represented. The error event occurs when the real part of the difference is negative. The minus sign flips the sign of the real part, so that the probability of the error event can be calculated using the Gaussian Q-function.
  4. Gaussian Approximation: The Gaussian approximation is used to calculate the probability of the error event. The Gaussian distribution is defined for real numbers, so the real part of the complex number is used.
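A numerical sketch of point 4: once the real part is modeled as N(0, σ²), the pairwise error probability reduces to evaluating the Gaussian Q-function. The decision distance d and variance sigma2 below are hypothetical illustration values, not taken from the paper.

```python
import math

def q_function(x):
    """Gaussian Q-function Q(x) = P(N(0,1) > x), via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# If the real part in Equation (14) is ~ N(0, sigma2), the error event occurs
# when it falls beyond the hypothetical decision distance d, with probability Q(d / sigma).
d = 1.0        # hypothetical decision-metric distance (illustration only)
sigma2 = 0.25  # hypothetical variance (illustration only)
p_error = q_function(d / math.sqrt(sigma2))
```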
  • asked a question related to Mathematics
Question
1 answer
Conventional fragility is defined as a probability of failure. Based on concise mathematics, it follows that if fragility is the probability of collapse, then the design curve is the probability of survival. The sum of these two equals 1. Consequently, if a member (structure) is designed based on a given curve, then its fragility of collapse is also known!
Scale the horizontal axis of a fragility curve (s) of a structure between 0 and 1. Then:
what is the probability of collapse at s = 0.5?
what is the probability of survival at s = 0.5?
Do you agree with the above findings? Why?
Relevant answer
Answer
Semantically collapse and survival do look like mutually exclusive events, thus indeed their probabilities should add up to one (complementarity). Unless of course there were alternative ways of interpreting "collapse" and/or "survival", but if they are both in the same context, I don't see how.
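A small numerical check of this complementarity, assuming a hypothetical lognormal fragility curve with median intensity 0.5 and dispersion 0.4 (so s = 0.5 gives probability 0.5 only because it is the assumed median, not as a general fact):

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def fragility(s, median=0.5, beta=0.4):
    """Hypothetical lognormal fragility: P(collapse | intensity s), s > 0."""
    return normal_cdf(math.log(s / median) / beta)

def survival(s, median=0.5, beta=0.4):
    """Complement of fragility: P(survival | intensity s)."""
    return 1.0 - fragility(s, median, beta)

# Collapse and survival are complementary at every intensity,
# and at the assumed median intensity each probability equals 0.5.
for s in (0.1, 0.25, 0.5, 0.75, 1.0):
    assert abs(fragility(s) + survival(s) - 1.0) < 1e-12
```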
  • asked a question related to Mathematics
Question
3 answers
Can the physical reality be represented mathematically?
Well, actual physics can be represented mathematically with the Basic Systemic Unit, based on Euler's relation, with its most remarkable property of remaining the same in spite of change, which permits deducing the fundamental equations of physics such as:
* that of the pendulum, a real harmonic oscillator
* that of the gravitational field, including that of the planet Mercury obtained by Einstein, but in this case obtained with a mathematical tool not as complicated as Tensor Analysis
* those of SR in another approach, in which linear motion is just a special case of the more general solution obtained with the BSU concept, in which covariance is included as a consequence of the isomorphic property of Euler's relation mentioned above, and finally
* Schrödinger's wave equation
For those interested in how all this is obtained, see my papers:
QUANTUM PHYSICS
A QUANTUM THEORY OF GRAVITATION
SPECIAL RELATIVITY WITH ANOTHER APPROACH
which I really hope will contribute to overcoming the great crisis of physics caused by the great incompatibility between QM and GR.
So yes, actual physics can be represented mathematically in a really coherent way, but for that a real paradigm shift is necessary.
Edgar Paternina
retired electrical engineer
Relevant answer
Answer
Thank you for sending your question. From what you wrote, I think you are right.
The article is translated into English:
I haven't uploaded it yet, but if you're interested I'll send it to you.
In China, a simple experiment was conducted that confirms what was imagined. The following video was made before the experiments were conducted:
And there are more things that could be done.
Regards,
Laszlo
  • asked a question related to Mathematics
Question
10 answers
Einstein overcomplicated the theory of special and general relativity simply because he did not define time correctly.
A complete universal or physical space is a space where the Cartesian coordinates x, y, z are mutually orthogonal (independent) and time t is orthogonal to x, y, z.
Once found, this space would be able to solve almost all problems of classical and quantum physics as well as most of mathematics without discontinuities [A*].
Note that R^4 mathematical spaces such as Minkowski, Hilbert, Riemann, etc. are all incomplete.
Schrödinger space may or may not be complete.
Heisenberg matrix space is neither statistical nor complete.
All the above mathematical constructions are not complete spaces in the sense that they do not satisfy the A* condition.
In conclusion, although Einstein pioneered the 4-dimensional unitary x-t space, he missed the correct definition of time.
Universal time t* must be redefined as an inseparable dimensionless integer woven into a 3D geometric space.
Here, universal time t* = Ndt* where N is the dimensionless integer of iterations or the number of steps/jumps dt*.
Finally, it should be clarified that the purpose of this article is not to underestimate Einstein's great achievements in theoretical physics such as the photoelectric effect equation, the Einstein Bose equation, the laser equation, etc. but only to discuss and explain the main aspects and flaws of his theory of relativity, if any.
Relevant answer
Dear, nothing in Science is FINAL; hence Science is called a SELF-CORRECTING SUBJECT. It means it is quite possible that even Albert Einstein did not fully understand the Theory of Relativity.
  • asked a question related to Mathematics
Question
1 answer
Computational topology of solitons
The well-established research area of algebraic topology currently goes interdisciplinary with computer science in many directions. Topological Data Analysis gives new opportunities in visualization for modeling and special mapping. Studies of the metrics used and of simplicial complexes are reliable grounds for future results in mathematics. Today, machine learning is, on one side, a tool for analysis in topology optimization, topological persistence, and optimal homology problems; on the other side, topological features in machine learning are a new area of research: topological layers in neural networks, topological autoencoders, and topological analysis for the evaluation of generative adversarial networks are general aspects of topological machine learning. From a practical point of view, results in this area are important for research on solitary-like waves, biomedical image analysis, neuroscience, physics, and many other fields. This gives us the opportunity to establish and scale up an interdisciplinary team of researchers to apply for funding for fundamental science research in this interdisciplinary field.
Relevant answer
Answer
Dear Dr. Galina Momcheva
I am a PhD candidate at the Department of Mathematics of the University of Rajshahi. My current research interest is Topological Data Analysis and its applications, which is quite parallel to the project you have mentioned here. I have been working on this topic since 2021 and have completed 2 projects as a research assistant. Please visit my ResearchGate profile for a glance at my research outputs. Nowadays, I am very interested in developing a TDA-based ML/DL model to introduce a new framework for data analysis in different fields of interest.
I have noticed that the job recruitment is currently closed. However, I am interested in continuing my research as a postdoc fellow in a project similar to this. Please feel free to email me at mbshiraj@gmail.com.
  • asked a question related to Mathematics
Question
8 answers
In a project with analysis of log-linear outcomes I have not found the solution to this problem. (log is the natural logarithm)
I assume it is simple, but I am out of clues, and I hope someone more mathematically proficient can help.
Relevant answer
Answer
Thank you, Jan Schulz.
Good to know this website.
Best
  • asked a question related to Mathematics
Question
1 answer
Yes, it is the beauty of math in number theory.
Relevant answer
Answer
2025 = 45² = 27² + 36² = 5² + 20² + 40²
2025 = 5²·9² = 3²·15²
Your statement 3 is the sum of 8 squares...
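The identities above are easy to verify directly, for example:

```python
# Checking the decompositions of 2025 given in the answer above.
assert 2025 == 45**2
assert 2025 == 27**2 + 36**2          # sum of two squares
assert 2025 == 5**2 + 20**2 + 40**2   # sum of three squares
assert 2025 == 5**2 * 9**2 == 3**2 * 15**2
```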
  • asked a question related to Mathematics
Question
5 answers
Nominations are expected to open in the early part of the year for the Breakthrough Prize in Fundamental Physics. Historically nominations are accepted from early/mid-January to the end of March, for the following year's award.
Historically, the foundation has also had a partnership with ResearchGate:
The foundation also awards major prizes for Life Sciences and for Mathematics, and has further prizes specific to younger researchers.
So who would you nominate?
Relevant answer
Answer
Dear Berndt Barkholz,
Unfortunately, awards are usually used to intellectually manipulate communities! It's nice to see you again! Just two days ago I was thinking about you; where did you disappear to? So telepathy works. How can we explain this physically?
Dear Eric Baird,
Do you think our young people are capable of overcoming the nonsense they receive from our education systems?
If a young person breaks out of the vicious circle, they will be fired from their job!
Most talented young people who want to stay in research have to accept the false narrative!
Times are changing! The collective West has lost its way! The Global South has 'already' advanced!
Regards,
Laszlo
  • asked a question related to Mathematics
Question
37 answers
Differential Propositional Calculus • Overview
❝The most fundamental concept in cybernetics is that of “difference”, either that two things are recognisably different or that one thing has changed with time.❞
— W. Ross Ashby • An Introduction to Cybernetics
Differential logic is the component of logic whose object is the description of variation — the aspects of change, difference, distribution, and diversity — in universes of discourse subject to logical description. To the extent a logical inquiry makes use of a formal system, its differential component treats the use of a differential logical calculus — a formal system with the expressive capacity to describe change and diversity in logical universes of discourse.
In accord with the strategy of approaching logical systems in stages, first gaining a foothold in propositional logic and advancing on those grounds, we may set our first stepping stones toward differential logic in “differential propositional calculi” — propositional calculi extended by sets of terms for describing aspects of change and difference, for example, processes taking place in a universe of discourse or transformations mapping a source universe to a target universe.
What follows is the outline of a sketch on differential propositional calculus intended as an intuitive introduction to the larger subject of differential logic, which amounts in turn to my best effort so far at dealing with the ancient and persistent problems of treating diversity and mutability in logical terms.
Note. I'll give just the links to the main topic heads below. Please follow the link at the top of the page for the full outline.
Part 1 —
Casual Introduction
Cactus Calculus
Part 2 —
Formal Development
Elementary Notions
Special Classes of Propositions
Differential Extensions
Appendices —
References —
Relevant answer
Answer
Differential Propositional Calculus • 37
Foreshadowing Transformations • Extensions and Projections of Discourse —
❝And, despite the care which she took to look behind her at every moment, she failed to see a shadow which followed her like her own shadow, which stopped when she stopped, which started again when she did, and which made no more noise than a well‑conducted shadow should.❞
— Gaston Leroux • The Phantom of the Opera
Many times in our discussion we have occasion to place one universe of discourse in the context of a larger universe of discourse. An embedding of the type [X] → [Y] is implied any time we make use of one basis X which happens to be included in another basis Y. When discussing differential relations we usually have in mind that the extended alphabet Y has a special construction or a specific lexical relation with respect to the initial alphabet X, one which is marked by characteristic types of accents, indices, or inflected forms.
Resources —
Differential Logic and Dynamic Systems
Differential Logic • Foreshadowing Transformations
  • asked a question related to Mathematics
Question
84 answers
It is provable that quantum mechanics, quantum field theory, and general relativity violate the axioms of the mathematics used to create them. This means that none of these theories has a mechanism for the processes they describe to be feasible in a way that is consistent with the rules used to develop the math on which they are based. Thus, these theories and mathematics cannot both be true. This is proven, with a $500 reward for disproving it (details in the link). So I can prove that the above-mentioned theories are mathematical nonsense, and I produce a theory that makes the same predictions without the logical mistakes. https://theframeworkofeverything.com/
Relevant answer
Answer
Juan Weisz I am always in a learner role. You are welcome to point out mistakes in my work as it would be appreciated. Thank you
  • asked a question related to Mathematics
Question
2 answers
Within a specific problem, without the whole picture?
Relevant answer
Answer
1. Algorithm Design and Analysis:
  • Time and Space Complexity: Mathematicians and computer scientists analyze algorithms to determine their efficiency in terms of time and space resources. This involves using techniques like asymptotic analysis (Big O notation) to identify the best algorithms for specific tasks.
  • Graph Theory: Graph theory provides mathematical tools to model and analyze networks, which are essential in many programming applications, from social networks to transportation systems. Optimizing graph-based algorithms often involves finding shortest paths, maximum flows, or minimum spanning trees.
2. Machine Learning and Artificial Intelligence:
  • Optimization Algorithms: Machine learning algorithms, such as gradient descent and stochastic gradient descent, rely on mathematical optimization techniques to minimize error functions and find optimal parameter values.
  • Statistical Modeling: Statistical models, like linear regression and logistic regression, are used to analyze data and make predictions. These models often involve solving optimization problems to find the best-fitting parameters.
3. Game Development:
  • Physics Engines: Physics engines simulate real-world physical phenomena, such as gravity, collisions, and fluid dynamics. These simulations often rely on mathematical models and numerical methods to optimize performance and accuracy.
  • Pathfinding Algorithms: Pathfinding algorithms, like A* search, are used to find the shortest or most efficient path between two points in a game world. These algorithms often involve mathematical techniques like graph theory and heuristic functions.
4. Computer Graphics:
  • Ray Tracing: Ray tracing is a rendering technique that simulates the behavior of light to create realistic images. It involves solving complex mathematical equations to determine the color and intensity of light rays as they interact with surfaces.
  • 3D Modeling: 3D modeling relies on mathematical concepts like linear algebra and geometry to represent and manipulate 3D objects.
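As a concrete instance of the optimization techniques mentioned under point 2, here is a minimal gradient-descent sketch; the quadratic objective and step size are purely illustrative:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimal gradient descent: repeatedly step against the gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3); the minimum is at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

The same loop structure underlies stochastic gradient descent in machine learning, where `grad` is estimated from mini-batches of data instead of computed exactly.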
  • asked a question related to Mathematics
Question
1 answer
I am currently working on optimizing our inventory management system and need to calculate the monthly safety stock for various SKUs. I have already generated weekly safety stock values based on historical data and lead times. However, I need to adjust these values for a monthly period considering several factors:
1. SKU Contribution Ratio: This ratio indicates the importance of each SKU. A higher ratio means the SKU is more critical and should have a higher safety stock.
2. CCF Factor: This factor reflects our past ability to fulfill orders based on historical order and invoice data.
3. Monthly Stock Reduction Percentage: This percentage shows how much stock is typically left at the end of each month. If this value is 100% for four consecutive months, it indicates no need to keep that much inventory for the respective SKU. Conversely, if the values are decreasing, it suggests that the safety stock has been used and needs to be adjusted.
Given these factors, I need to determine a safety factor for the month, which will be used to adjust the weekly safety stock values to monthly values.
Could you suggest scientific methodologies or models that can effectively integrate these factors to calculate the monthly safety stock?
Relevant answer
Answer
Hi Sachin,
A simple multiplicative approach (validate it against your data):
1. Scale by importance: Adjusted SS = Weekly SS × SKU contribution ratio.
2. Apply the fulfillment factor: Adjusted SS = Adjusted SS × CCF, recalculating the CCF repeatedly from new order/invoice data to account for variability and uncertainty.
3. Convert to a monthly value: Monthly SS = Adjusted SS × monthly stock reduction percentage.
Hope this will help you.
Regards,
IH
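A minimal sketch of a multiplicative adjustment combining the three factors from the question. The function name, the 4.33 weeks-per-month constant, and the (1 − reduction) form are assumptions for illustration, to be validated against real inventory data:

```python
def monthly_safety_stock(weekly_ss, contribution_ratio, ccf, monthly_reduction_pct,
                         weeks_per_month=4.33):
    """Hypothetical monthly safety-stock adjustment: scale the weekly value to a
    month, then weight by SKU importance, fulfillment ability, and how much
    stock is typically left over (more leftover stock -> less safety stock)."""
    safety_factor = contribution_ratio * ccf * (1.0 - monthly_reduction_pct)
    return weekly_ss * weeks_per_month * safety_factor

# Example: weekly safety stock of 100 units, contribution ratio 1.2,
# CCF 0.9, and 30% of stock typically left at month end.
mss = monthly_safety_stock(100, 1.2, 0.9, 0.30)
```

Note the boundary behavior matches the description in the question: if 100% of stock is left every month, the adjusted safety stock drops to zero.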
  • asked a question related to Mathematics
Question
1 answer
please help me to find a theory that supports my study about storytelling in teaching mathematics
Relevant answer
Answer
Keith Devlin has written a whole book ('The Math Gene') to illustrate the point that story-telling makes learning mathematics easier, and that if used well, story telling can make everyone very talented at understanding and doing math.
  • asked a question related to Mathematics
Question
12 answers
Famous mathematicians are failing every day to prove the Riemann Hypothesis, even though the Clay Mathematics Institute offers a prize of one million dollars for the proof.
A proof of the Riemann Hypothesis would allow us to better understand the distribution of prime numbers among all numbers and would also allow its official application in quantum physics. However, many famous scientists still refuse the use of the Riemann Hypothesis in quantum physics, as I read in an article in Quanta Magazine.
Why is this hypothesis so difficult to prove? And is the zeta extension really useful for physics, especially for quantum physics? Are quantum scientists using the wrong mathematical tools when applying the Riemann Hypothesis? Is the Riemann Hypothesis announcing "the schism" between abstract mathematics and physics? Can anyone propose a disproof of the Riemann Hypothesis based on physical facts?
Here is the link to the article by Natalie Wolchover:
The zeros of the Riemann zeta function could also be caused by the use of rearrangements when trying to find an image under the extension, since the Lévy–Steinitz theorem can apply when fixing a and b.
Suppositions or axioms should be made before trying to use the extension, depending on the scientific field where it is demanded, and we should be sure whether all the possible methods (rearrangements of series terms) give the same image for a known s = a + ib.
You should also know that the Lévy–Steinitz theorem was formulated in 1905 and 1913, whereas the Riemann Hypothesis was formulated in 1859. This means that Riemann, who died in 1866, and even the famous Euler never knew the Lévy–Steinitz theorem.
Relevant answer
  • asked a question related to Mathematics
Question
17 answers
Differential Logic • 1
Introduction —
Differential logic is the component of logic whose object is the description of variation — focusing on the aspects of change, difference, distribution, and diversity — in universes of discourse subject to logical description. A definition that broad naturally incorporates any study of variation by way of mathematical models, but differential logic is especially charged with the qualitative aspects of variation pervading or preceding quantitative models. To the extent a logical inquiry makes use of a formal system, its differential component governs the use of a “differential logical calculus”, that is, a formal system with the expressive capacity to describe change and diversity in logical universes of discourse.
Simple examples of differential logical calculi are furnished by “differential propositional calculi”. A differential propositional calculus is a propositional calculus extended by a set of terms for describing aspects of change and difference, for example, processes taking place in a universe of discourse or transformations mapping a source universe to a target universe. Such a calculus augments ordinary propositional calculus in the same way the differential calculus of Leibniz and Newton augments the analytic geometry of Descartes.
Resources —
Logic Syllabus
Survey of Differential Logic
Relevant answer
Answer
Differential Logic • 18
Tangent and Remainder Maps —
If we follow the classical line which singles out linear functions as ideals of simplicity then we may complete the analytic series of the proposition f = pq : X → B in the following way.
The next venn diagram shows the differential proposition df = d(pq) : EX → B we get by extracting the linear approximation to the difference map Df = D(pq) : EX → B at each cell or point of the universe X. What results is the logical analogue of what would ordinarily be called “the differential” of pq but since the adjective “differential” is being attached to just about everything in sight the alternative name “tangent map” is commonly used for df whenever it's necessary to single it out.
Tangent Map d(pq) : EX → B
To be clear about what's being indicated here, it's a visual way of summarizing the following data.
d(pq)
= p ∙ q ∙ (dp , dq)
+ p ∙ (q) ∙ dq
+ (p) ∙ q ∙ dp
+ (p) ∙ (q) ∙ 0
To understand the extended interpretations, that is, the conjunctions of basic and differential features which are being indicated here, it may help to note the following equivalences.
• (dp , dq) = dp ∙ (dq) + (dp) ∙ dq
• dp = dp ∙ dq + dp ∙ (dq)
• dq = dp ∙ dq + (dp) ∙ dq
Capping the analysis of the proposition pq in terms of succeeding orders of linear propositions, the final venn diagram of the series shows the “remainder map” r(pq) : EX → B, which happens to be linear in pairs of variables.
Remainder r(pq) : EX → B
Reading the arrows off the map produces the following data.
r(pq)
= p ∙ q ∙ dp ∙ dq
+ p ∙ (q) ∙ dp ∙ dq
+ (p) ∙ q ∙ dp ∙ dq
+ (p) ∙ (q) ∙ dp ∙ dq
In short, r(pq) is a constant field, having the value dp ∙ dq at each cell.
Resources —
Logic Syllabus
Survey of Differential Logic
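The decomposition sketched above, difference map = tangent map + constant remainder dp·dq, can be checked by brute force over all 16 cells of EX. This is a small Python sketch; the function names are mine, and XOR (`^`) plays the role of exclusive disjunction:

```python
from itertools import product

def f(p, q):            # the proposition f = pq
    return p & q

def Df(p, q, dp, dq):   # difference map: f after the change, XOR f before
    return f(p ^ dp, q ^ dq) ^ f(p, q)

def df(p, q, dp, dq):   # tangent map, read off the data for d(pq) above
    if p and q:
        return dp ^ dq  # (dp , dq): exactly one of dp, dq
    if p and not q:
        return dq
    if not p and q:
        return dp
    return 0            # the (p)(q) cell contributes 0

def r(p, q, dp, dq):    # remainder map: the constant field dp·dq
    return dp & dq

# Df = df + r (mod 2) at every cell of EX
assert all(Df(p, q, dp, dq) == df(p, q, dp, dq) ^ r(p, q, dp, dq)
           for p, q, dp, dq in product((0, 1), repeat=4))
```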
  • asked a question related to Mathematics
Question
1 answer
Program Description:
A program that converts mathematical equations from PDF files into editable equations within Word documents. The program relies on Optical Character Recognition (OCR) technology for mathematical equations, ensuring accuracy in retrieving symbols and mathematical formulas. It allows users to easily edit the equations directly in Word and provides support for various mathematical writing formats, such as LaTeX or MathType.
Program Features:
Accurate Conversion: Can read complex mathematical equations from PDF files.
Word Integration: Offers direct import options into Word documents.
Mathematical Format Support: Supports multiple formats such as MathML and LaTeX.
User-Friendly Interface: A simple design suitable for researchers and students.
Multi-Platform Compatibility: Works on operating systems like Windows and macOS.
Examples of programs that may meet this description include:
Mathpix Snip
InftyReader
You can try one of them to find the best solution for your needs.
Relevant answer
Answer
Try using mathpix, it does it so well
  • asked a question related to Mathematics
Question
2 answers
I have started an investigation into the use of AI for teaching mathematics and physics.
In this framework, I would welcome any insights and previous findings.
Please send me similar studies.
Thank you in advance.
Relevant answer
Answer
I've been doing some thinking around this subject and would like to work on it as a future project:
  1. Personalized & Adaptive Mathematical Learning
  2. Virtual Tutoring along with automated grading
  • asked a question related to Mathematics
Question
1 answer
What about generating 3D shapes in different ways: GANs, mathematics with Python, LLMs, or LSTMs? And what related work exists on this?
Relevant answer
Answer
I think these are the steps involved in using LLM.
  • Craft a detailed prompt describing the 3D shape.
  • Input it to a LLM like Gemini.
  • The model outputs code, say a Python script.
  • Execute the generated code to create the 3D shape.
  • The generated 3D shape may require additional refinement using 3D modeling software.
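For contrast with the LLM route, here is a minimal purely mathematical sketch: generating a torus point cloud in Python from its parametric equations. The dimensions and sampling density are arbitrary illustration values:

```python
import math

def torus_points(R=2.0, r=0.7, n=24, m=12):
    """Sample a torus (major radius R, tube radius r) as a 3D point cloud
    from its parametric equations -- a purely mathematical route, no GAN or LLM."""
    pts = []
    for i in range(n):
        u = 2 * math.pi * i / n          # angle around the main ring
        for j in range(m):
            v = 2 * math.pi * j / m      # angle around the tube
            x = (R + r * math.cos(v)) * math.cos(u)
            y = (R + r * math.cos(v)) * math.sin(u)
            z = r * math.sin(v)
            pts.append((x, y, z))
    return pts

points = torus_points()  # 24 * 12 = 288 vertices, ready for export to OBJ/PLY
```

The same pattern works for spheres, surfaces of revolution, or implicit surfaces; refinement in 3D modeling software can then follow, as noted above.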
  • asked a question related to Mathematics
Question
12 answers
Dear Esteemed Colleagues,
I hope this message finds you well. I am writing to invite your review and insights on what I believe to be a significant development in our understanding of the Riemann Hypothesis. After extensive work, I have arrived at a novel proof for the hypothesis, using a generalization of the integral test applicable to non-monotone series, as outlined in the attached document.
As a lead AI specialist at Microsoft, specializing in math-based AI, I have employed both traditional mathematical techniques and AI-based verification algorithms to rigorously validate the logical steps and conclusions drawn in this proof. The AI models have thoroughly checked the derivations, ensuring consistency in the logic and approach.
The essence of my proof hinges on an approximation for the zeta function that results in an error-free evaluation of its imaginary part at $x = \frac{1}{2}$, confirming this as the minimal point for both the real and imaginary components. I am confident that this new method is a significant step forward and stands up to scrutiny, but as always, peer review is a cornerstone of mathematical progress.
I warmly invite your feedback, comments, and any questions you may have regarding the methods or conclusions. I fully stand by this work and look forward to a robust, respectful discussion of the implications it carries. My goal is not to offend or overstate the findings but to contribute meaningfully to this ongoing conversation in the mathematical community.
Thank you for your time and consideration. I look forward to your responses and the productive discussions that follow.
Sincerely,
Rajah Iyer
Lead AI Specialist, Microsoft
Relevant answer
Answer
I was briefly reviewing your proof and noticed something unusual in this part.
  • asked a question related to Mathematics
Question
3 answers
Can anyone answer my question?
Relevant answer
Answer
The most important biodiversity measures in field studies are species richness (the count of distinct species), species evenness (distribution uniformity of species), and species diversity indices like Shannon and Simpson indices. Mathematically, species richness is a simple count, while evenness ratios measure uniformity. Diversity indices, like Shannon's, are logarithmic calculations of proportional abundance, whereas Simpson’s index focuses on dominance, calculating the probability of two randomly selected individuals belonging to the same species.
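The indices described above can be sketched in a few lines; the counts below are hypothetical abundances per species:

```python
import math

def species_richness(counts):
    """Richness: the number of distinct species present."""
    return sum(1 for c in counts if c > 0)

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over proportional abundances."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def simpson_index(counts):
    """Simpson's index D = sum(p_i^2): probability that two randomly chosen
    individuals (with replacement) belong to the same species."""
    total = sum(counts)
    return sum((c / total) ** 2 for c in counts)

# Four species with equal abundances: maximal evenness,
# so H' = ln(4) and Simpson's D = 1/4.
counts = [25, 25, 25, 25]
```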
  • asked a question related to Mathematics
Question
1 answer
I am interested in the study of visual subcompetence in education, specifically how visual tools and technologies can be integrated into the educational process to enhance the development of professional competencies in future teachers, particularly in mathematics education.
I am looking for research and definitions that highlight and specify the concept of visual subcompetence in education. Specifically, I am interested in how visual subcompetence is distinguished as part of the broader professional competence, particularly in the context of mathematics teacher education.
Relevant answer
Answer
One should consider a greater number of case studies regarding visual subcompetence in education.
  • asked a question related to Mathematics
Question
2 answers
Can you suggest any study that uses Ethnographic Research design?
Relevant answer
Answer
Marjun Abear A notable study using ethnographic research design in teaching and learning is Paul Willis' "Learning to Labor" (1977). Willis observed working-class boys in a British school to explore how their social interactions and cultural attitudes shaped their educational experiences and future job prospects. This ethnography highlights how education can reinforce class inequalities, providing deep insights into the relationship between culture, learning, and social reproduction.
  • asked a question related to Mathematics
Question
60 answers
I apologize to you all! The question was asked incorrectly—my mistake. Now everything is correct:
In a circle with center O, chords AB and CD are drawn, intersecting at point P.
In each segment of the circle, other circles are inscribed with corresponding centers O_1; O_2; O_3; O_4.
Find the measure of angle ∠O_1 PO_2.
Relevant answer
Answer
  • asked a question related to Mathematics
Question
1 answer
Can you explain the mathematical principles behind the Proof of Stake (PoS) algorithm, including how validator selection probabilities, stake adjustments, and reward calculations are determined
Relevant answer
Answer
Dear Hiba, you can look up the references I put far below, or search Wikipedia if you have not already; I am not sure something is there. Here is what I can summarize; first, let's break down the mathematical principles behind the Proof of Stake (PoS) algorithm as I understood them from the existing literature:
1. Validator Selection Probabilities
In PoS, validators are chosen to create new blocks based on the amount of cryptocurrency they hold and are willing to “stake” as collateral. The selection process is typically pseudo-random and influenced by several factors:
  • Stake Amount: The more coins a validator stakes, the higher their chances of being selected. Mathematically, if a validator (i) stakes (S_i) coins out of a total staked amount (S_total), their probability (P_i) of being selected is: P_i = S_i / S_total
  • Coin Age: Some PoS systems also consider the age of the staked coins. The longer the coins have been staked, the higher the chances of selection. This can be represented as: P_i = (S_i × A_i) / Σ_{j=1..N} (S_j × A_j), where (A_i) is the age of the coins staked by validator (i).
  • Randomization: To prevent predictability and enhance security, a randomization factor is often introduced. This can be achieved through a hash function or a random number generator.
2. Stake Adjustments
Stake adjustments occur when validators add or remove their staked coins. The total stake (S_total) is updated accordingly, which in turn affects the selection probabilities. If a validator adds (ΔS) coins to their stake, their new stake (S_i') becomes:
S_i' = S_i + ΔS
The new total stake (S_total') is:
S_total' = S_total + ΔS
3. Reward Calculations
Validators receive rewards for creating new blocks, which are typically proportional to their stake. The reward (R_i) for validator (i) can be calculated as:
R_i = R_total × S_i / S_total
where ( R_{total} ) is the total reward distributed for the block.
Some PoS systems also include penalties for malicious behavior or downtime, which can reduce the rewards or even the staked amount.
Example
Let’s consider a simple example with three validators:
  • Validator A stakes 40 coins.
  • Validator B stakes 30 coins.
  • Validator C stakes 30 coins.
The total stake ( S_{total} ) is 100 coins. The selection probabilities are:
  • P_A = 40/100 = 0.4
  • P_B = 30/100 = 0.3
  • P_C = 30/100 = 0.3
If the total reward for a block is 10 coins, the rewards are:
  • R_A = 10 × 0.4 = 4 coins
  • R_B = 10 × 0.3 = 3 coins
  • R_C = 10 × 0.3 = 3 coins
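A minimal Python sketch of the stake-weighted selection and proportional reward split described above, using the three-validator example (the stdlib `random.choices` stands in for a real chain's verifiable random selection, which is much more elaborate):

```python
import random

stakes = {"A": 40, "B": 30, "C": 30}   # coins staked per validator
total = sum(stakes.values())           # S_total = 100

# Selection probability: P_i = S_i / S_total
probs = {v: s / total for v, s in stakes.items()}

# Pseudo-random, stake-weighted choice of the next block proposer
chosen = random.choices(list(stakes), weights=list(stakes.values()), k=1)[0]

# Proportional reward split: R_i = R_total * S_i / S_total
R_total = 10
rewards = {v: R_total * s / total for v, s in stakes.items()}

print(probs)    # {'A': 0.4, 'B': 0.3, 'C': 0.3}
print(rewards)  # {'A': 4.0, 'B': 3.0, 'C': 3.0}
```

Many real chains instead give the whole block reward to the chosen proposer; the proportional split here follows the formula in the answer above.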
Hope you will find this quite helpful.
  • asked a question related to Mathematics
Question
3 answers
An explanation of how to apply green education in mathematics for children.
Relevant answer
Answer
Applying green education in mathematics for children involves integrating environmental themes and sustainability principles into math lessons. Here are some strategies to make this connection:
1. **Use Environmental Data in Math Problems**
- **Real-world examples**: Incorporate environmental statistics, such as data on pollution, recycling rates, and energy consumption, into math problems. This not only teaches mathematical concepts like percentages, averages, and data analysis but also raises awareness about environmental issues.
- **Hands-on projects**: Have children collect local environmental data (e.g., water usage, electricity consumption) and analyze it to learn about graphing, patterns, and calculations.
2. **Explore Geometry Through Nature**
- **Shapes in nature**: Use examples from nature like leaves, flowers, and snowflakes to teach geometric concepts like symmetry, fractals, and patterns.
- **Eco-friendly architecture**: Introduce geometric principles through sustainable design, such as how solar panels are angled to maximize sunlight or how certain shapes reduce waste in construction.
3. **Problem Solving with Environmental Impact**
- **Sustainability challenges**: Set up problem-solving activities where students must calculate the environmental impact of various actions. For instance, ask them to calculate the savings in resources when using recycled paper versus new paper.
- **Optimization tasks**: Use problems that involve optimizing energy use or waste reduction, showing how math can help create more sustainable solutions.
4. **Promote Critical Thinking on Environmental Issues**
- **Math and decision making**: Present scenarios where students need to make environmentally conscious decisions, such as calculating carbon footprints for different transportation methods or comparing the efficiency of renewable vs. non-renewable energy sources.
- **Game theory and resource use**: Introduce simple concepts of game theory or optimization to help children think about resource allocation and how different decisions impact the environment.
5. **Project-Based Learning with a Green Focus**
- **Eco-friendly projects**: Encourage students to work on projects like creating a garden, where they can use math for measurement, planning, and budgeting. This not only teaches practical math but also instills responsibility for the environment.
- **Sustainable design challenges**: Have students design eco-friendly solutions like a rainwater collection system, where they calculate the volume of water that can be saved based on local rainfall data.
6. **Use Visual and Interactive Tools**
- **Green apps and games**: Use interactive math apps and games that focus on environmental topics. For instance, apps that simulate resource management or renewable energy can teach math concepts while promoting green education.
- **Field trips and nature walks**: Incorporate math lessons into outdoor activities, where children measure plant growth, calculate the height of trees, or estimate the number of species in a given area.
7. **Introduce Mathematical Concepts Through Climate Change**
- **Climate data analysis**: Analyze real-world data on climate change, like global temperature rise or CO2 emissions. This fosters an understanding of trends and how math can model and predict future changes.
- **Carbon footprint calculation**: Teach students how to calculate their own carbon footprint using math, helping them understand the impact of their actions and encouraging more sustainable behavior.
By integrating green education into math, children not only gain math skills but also learn to think critically about environmental issues and sustainability, which can inspire them to take positive actions for the planet.
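As one concrete classroom illustration of the "carbon footprint calculation" idea above, a short Python sketch (the per-km emission factors are made-up teaching numbers, not real data):

```python
# Hypothetical per-km CO2 emission factors in kg, for teaching purposes only
EMISSION_FACTORS = {"car": 0.20, "bus": 0.08, "bicycle": 0.0}

def trip_footprint(mode: str, distance_km: float) -> float:
    """CO2 in kg for one trip: emissions = factor * distance."""
    return EMISSION_FACTORS[mode] * distance_km

# Compare a week of 5 km school commutes (2 trips/day, 5 days)
weekly_km = 5 * 2 * 5
for mode in EMISSION_FACTORS:
    print(mode, trip_footprint(mode, weekly_km), "kg CO2")
```

Children can substitute their own distances and compare modes, practicing multiplication and unit reasoning at the same time.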
  • asked a question related to Mathematics
Question
3 answers
Hello,
I am currently working on a research project on the use of mathematical optimization to determine the optimal policy rate in monetary policy. I would like to know whether there is recent research, or specific models, that has addressed this topic. I am also looking for advice on how to structure my model and choose relevant variables for this type of analysis. Any reading suggestions or expertise would be greatly appreciated.
Thank you in advance for your help.
Relevant answer
Answer
Research on the use of mathematical optimization to determine the policy rate includes models such as the Taylor Rule, which sets the policy rate based on inflation and output gaps, and dynamic stochastic general equilibrium (DSGE) models that incorporate optimization techniques to evaluate the impacts of monetary policy. Other studies utilize linear programming and mixed-integer optimization methods to analyze trade-offs in policy decisions and macroeconomic stability. These models help central banks effectively balance inflation control and economic growth.
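As a minimal illustration of the Taylor Rule mentioned above, here is the classic Taylor (1993) form i = π + r* + 0.5(π − π*) + 0.5y, with the usual illustrative assumptions of a 2% neutral real rate r* and a 2% inflation target π*:

```python
def taylor_rate(inflation: float, target_inflation: float = 2.0,
                neutral_real_rate: float = 2.0, output_gap: float = 0.0) -> float:
    """Taylor (1993) rule: i = pi + r* + 0.5*(pi - pi*) + 0.5*y (all in %)."""
    return (inflation + neutral_real_rate
            + 0.5 * (inflation - target_inflation)
            + 0.5 * output_gap)

# Inflation at 4% with a 1% positive output gap:
print(taylor_rate(4.0, output_gap=1.0))  # 7.5
```

DSGE models embed a rule like this inside an optimizing equilibrium framework; the coefficients 0.5 are Taylor's original illustrative weights, not estimates.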
  • asked a question related to Mathematics
Question
4 answers
As an academic working and pursuing a PhD degree in Egypt, both in private and public universities respectively, I wanted to put forward a simple question:
What is the role of universities, and other academic institutions, today? Was there ever a time where universities were agents of revolutionary action and change, or was it only a subject of the overall consumerist system?
We can take many steps back till the Ancient Egyptian times, where scribes and priests were taught writing, mathematics, and documentation of daily exchanges. All the way till today's era of digital globalization and mass education, where knowledge production process has become more of a virtual canvas rather than actual knowledge. Has knowledge ever served its purpose? Have academic institutions, and of course academic scholars, ever delivered the true purpose of education?
Was, and still is, education's sole main purpose the economic prosperity of certain classes, and hence socio-economic segregation?
Relevant answer
Answer
Today's global societies are very competitive in so many ways; as a result, this trend has a ripple effect on global educational institutions as well. Specifically speaking, without an MA/MS degree, one cannot compete for a decent job in the professional-level job market. Thus, universities are driven to restructure their institutions to meet this demand.
  • asked a question related to Mathematics
Question
6 answers
Scientists believe theories must be proven by experiments. Does their faith in the existence of objective reality mean they are classical scientists who reject quantum mechanics' statements that observers and the observed are permanently and inextricably united? In this case, scientists would unavoidably and unconsciously influence every experiment and form of mathematics. In the end, they may be unavoidably and unconsciously influencing the universe which is the home of all experiments and all mathematics.
Relevant answer
Answer
Dear colleagues,
QM experiments, and probably even higher-level systems, are definitely proving to be affected by observers; see, e.g., Dean Radin's research on the deviation of the mean value of quantum random number generators.
On the other hand, large systems are often in a state of decoherence, and hence, quantum effects have no impact on the behavior of such macroscopic objects and processes. The line between those two extreme cases is blurry and constantly shifting. 
What is astounding is that the bulk of research confirming that consciousness is impacting reality is constantly growing. It has far-reaching consequences. One of the most profound impacts is our innate ability to alter our well-being and health and even heal from serious diseases. 
A list of important publications describing quantum biology functioning follows. This research has gained impetus in the last couple of years. According to my understanding, from this research, we can start to understand the principles of coupling between consciousness and quantum systems outcomes. 
What is your take on this exciting area of research?
References:
[1] Madl, P.; Renati, P. Quantum Electrodynamics Coherence and Hormesis: Foundations of Quantum Biology. Int. J. Mol. Sci. 2023, 24, 14003. https://doi.org/10.3390/ijms241814003
[2] Grozinger, L.; Amos, M.; Carbonell, P.; Gorochowski, T. E.; Oyarzún, D. A.; Fellermann, H.; Stoof, R.; Zuliani, P.; Tas, H.; Goñi-Moreno, A. Pathways to Cellular Supremacy in Biocomputing. Nature Communications 10(1) (2019).
  • asked a question related to Mathematics
Question
4 answers
There exists a neural network model designed to predict a specific output, detailed in a published article. The model comprises 14 inputs, each normalized with minimum and maximum parameters specified for normalization. It incorporates six hidden layers, with the article providing the neural network's weight parameters from the input to the hidden layers, along with biases. Similarly, the parameters from the output layer to the hidden layers, including biases, are also documented.
The primary inquiry revolves around extracting the mathematical equation suitable for implementation in Excel or Python to facilitate output prediction.
Relevant answer
Answer
A late reply. I hope this helps you!
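Since the answer above gives no detail: in general each layer of such a network is just a_k = act(W_k · a_{k−1} + b_k), so the "equation" can be written out directly once the published weights are typed in. A minimal pure-Python sketch, assuming min-max normalization to [0, 1] and tanh hidden activations (both are assumptions; substitute whatever the article specifies):

```python
import math

def dense(x, W, b, activation=None):
    """One layer: y_j = act(sum_i W[j][i] * x[i] + b[j])."""
    y = [sum(w_ij * x_i for w_ij, x_i in zip(row, x)) + bj
         for row, bj in zip(W, b)]
    return [activation(v) for v in y] if activation else y

def predict(x_raw, x_min, x_max, hidden, output):
    # Min-max normalization (the article may use [-1, 1]; adjust if so)
    x = [(v - lo) / (hi - lo) for v, lo, hi in zip(x_raw, x_min, x_max)]
    for W, b in hidden:                # the article's six hidden layers
        x = dense(x, W, b, math.tanh)  # assumed tanh; check the paper
    W_out, b_out = output
    return dense(x, W_out, b_out)      # linear output layer

# Tiny placeholder network (1 input, 1 hidden layer, 1 output):
hidden = [([[1.0]], [0.0])]
output = ([[1.0]], [0.0])
print(predict([1.0], [0.0], [2.0], hidden, output))
```

The same formula transfers to Excel cell-by-cell: one column per layer, each cell a SUMPRODUCT of the previous column with a weight row, plus the bias, wrapped in TANH.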
  • asked a question related to Mathematics
Question
7 answers
It seems it is common to combine basic observations to create new observables, which are then used for PPP and other applications. Basic observations such as pseudorange and carrier-phase observations are real measurements from GNSS. These real observations are combined to create an entirely new observable which is not direct, physical, or real. Amazingly, these new observables solve real problems such as PPP (e.g., the ionosphere-free combination).
  • What is the theory behind this?
  • Any similar approach like this in other scientific field or any simple analogous explanation?
  • You could direct me to resources such as videos, or literature.
Relevant answer
Answer
Furthermore, once a satellite is locked for recording the phases, a counter starts, adding 1 whenever a whole cycle of the carrier wave has passed, based on the time one wavelength corresponds to. So the unknown ambiguity for a satellite remains constant (associated with the time instant when the lock starts) as long as this satellite is being locked without interruption. In case of an interruption, a different ambiguity will have to be resolved in the estimation.
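A minimal sketch of the dual-frequency ionosphere-free pseudorange combination mentioned in the question. The first-order ionospheric delay scales as 1/f², so the weighted difference P_IF = (f1²·P1 − f2²·P2) / (f1² − f2²) cancels it (standard GPS L1/L2 frequencies; the example values are made-up):

```python
F1 = 1575.42e6  # GPS L1 carrier frequency, Hz
F2 = 1227.60e6  # GPS L2 carrier frequency, Hz

def iono_free(p1: float, p2: float) -> float:
    """P_IF = (f1^2*P1 - f2^2*P2) / (f1^2 - f2^2): removes the 1/f^2 term."""
    return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)

# Two pseudoranges sharing one geometric range, plus frequency-dependent
# ionospheric delay (the L2 delay is (f1/f2)^2 times the L1 delay):
rho, I1 = 20_000_000.0, 5.0            # metres
p1 = rho + I1
p2 = rho + I1 * (F1 / F2) ** 2
print(iono_free(p1, p2))               # recovers ~20000000.0
```

This is the sense in which a "non-physical" observable solves a real problem: the combination is a linear functional of real measurements chosen so an error term drops out, at the cost of amplified noise.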
  • asked a question related to Mathematics
Question
9 answers
In triangle ∆ABC (with ∠C = 90°), the angle CBA is equal to 2α.
A line AD is drawn to the leg BC at an angle α (∠BAD = α).
The length of the hypotenuse is 6, and the segment CD is equal to 3.
Find the measure of the angle α.
This problem can be solved using three methods: trigonometric, algebraic, and geometric. I suggest you find the geometric method of solution!
Relevant answer
Answer
After a trigonometric solution (a pretty tedious route, not worth publication) I found a geometric way, the same as Dinu Teodorescu's, but in a somewhat different order:
If one adds to the picture by Liudmyla Hetmanenko the center O of AB, then the following becomes clear:
property 1. AO = CO, which implies ∠CAO = ∠ACO = 2α
property 2. CD = CO, which implies that D and O lie on a circle with center at C
property 3. ∠DBO = α = 0.5·∠DCO, which implies that D, O and B lie on a circle with center at C, which in turn implies that CB = CD, which means that
3α = π/4 radians, hence α = π/12 = 15°
  • asked a question related to Mathematics
Question
14 answers
A minion is a low-level official protecting a bureaucracy from challengers.
A Kuhnian minion (after Thomas Kuhn's Structure of Scientific Revolutions) is a low-power scientist who dismisses any challenge to existing paradigm.
A paradigm is a truth structure that partitions scientific statement as true to the paradigm or false.
Recently, I posted a question on Physics Stack Exchange that serves as a summary of the elastic string paradigm. My question was: “Is it possible there can be a non-Fourier model of string vibration? Is there an exact solution?”
To explain, I asked if they knew the Hamiltonian equation for the string vibration. They did not agree it must exist. I pointed out there are problems with the elastic model of vibration: with its two degrees of freedom, the equations of motion are unsolvable and can only be approximated by numerical methods. I said elasticity makes superposition the 4th Newtonian law. How can a string vibrate in an infinite number of modes without violating energy conservation?
Here are some comments I got in response:
“What does string is not Fourier mean? – Qmechanic
“ ‘String modes cannot superimpose!’ Yet, empirically, they do.” – John Doty
“ A string has an infinite number of degrees of freedom, since it can be modeled as a continuous medium. If you manage to force only the first harmonic, the dynamics of the system only involve the first harmonic and it’s a standing wave: this solution does depend on time, being (time dependence in the amplitude of the sine). No 4th Newton’s law. I didn’t get the question about Hamilton equation.
“What do you mean with ‘archaic model’? Can I ask you what’s your background that makes you do this sentence? Physics, Math, Engineering? You postulate nothing here. You have continuum mechanics here. You have PDEs under the assumption of continuum only. You have exact solutions in simple problems, you have numerical methods approximating and solving exact equations. And trust me: this is how the branch of physics used in many engineering fields, from mechanical, to civil, to aerospace engineering.” – basics
I want to show the rigid versus elastic dichotomy goes back to the calculus wars. Quoting here from Euler and Modern Science, published by the Mathematical Association of America:
"We now turn to the most famous disagreement between Euler and d’Alembert … over the particular problem of the theory of elasticity concerning a string whose transverse vibrations are expressed through second-order partial differential equations of a hyperbolic type later called the wave equation. The problem had long been of interest to mathematicians. The first approach worthy of note was proposed by B. Taylor, … A decisive step forward was made by d’Alembert in … the differential equation for the vibrations, its general solution in the form of two “arbitrary functions” arrived at by means original with d’Alembert, and a method of determining these functions from any prescribed initial and boundary conditions.”
[Editorial Note: The boundary conditions were taken to be the string endpoints. The use of the word hyperbolic is, I believe, a clear reference to Taylor's string. A string with constant curvature can only have one mathematical form, which is the cycloid, which is defined by the hyperbolic cosh x function. The cosh x function is the only class of solutions allowed if the string cannot elongate. The Taylor/Euler-d'Alembert dispute was over whether the string is trigonometric or hyperbolic.]
Continuing the quote from Euler and Modern Science:
"The most crucial issue dividing d’Alembert and Euler in connection with the vibrating string problem was the compass of the class of functions admissible as solutions of the wave equation, and the boundary problems of mathematical physics generally, D’Alembert regarded it as essential that the admissible initial conditions obey stringent restrictions or, more explicitly, that the functions giving the initial shape and speed of the string should over the whole length of the string be representable by a single analytical expression … and furthermore be twice continuously differentiable (in our terminology). He considered the method invalid otherwise.
"However, Euler was of a different opinion … maintaining that for the purposes of physics it is essential to relax these restrictions: the class of admissible functions or, equivalently, curves should include any curve that one might imagine traced out by a “free motion of the hand”…Although in such cases the analytic method is inapplicable, Euler proposed a geometric construction for obtain the shape of the string at any instant. …
Bernoulli proposed finding a solution by the method of superimposition of simple trigonometric functions, i.e. using trigonometric series, or, as we would now say, Fourier series. Although Daniel Bernoulli’s idea was extremely fruitful—in other hands--, he proved unable to develop it further.
Another example is Euler's manifold of the musical key and pitch values as a torus. To be fair, Euler did not assert the torus but only drew a network showing that the key and pitch can move independently. This was before Möbius's classification theorem.
My point is it should be clear the musical key and pitch do not have different centers of harmonic motion. But in my experience, the minions will not allow Euler to be challenged by someone like me. Never mind Euler's theory of music was crackpot!
Relevant answer
Answer
Physics Stack Exchange is not peer review, it is sneer review. I showed that their answers are not correct, but I am shut out.
  • asked a question related to Mathematics
Question
15 answers
The need of a paradigm shift in physics
Is it possible in a world as fragmented as ours to present a new concept of Unity in which Science, Philosophy and Spirituality or Ontology can be conceived working in Complete Harmony?
In this respect the late Thomas S. Kuhn wrote in his
The Structure of Scientific Revolutions
"Today research in parts of philosophy, psychology, linguistic, and even art history, all converge to suggest that the traditional paradigm is somehow askew. That failure to fit is also increasingly apparent by the historical study of science to which most of our attention is necessarily directed here."
And even the father of Quantum Physics complained strongly in his 1952 colloquia, when he wrote:
"Let me say at the outset, that in this speech, I am opposing not a few special statements claims of quantum mechanics held today, I am opposing its basic views that has been shaped 25 years ago, when Max Born put forward his probability interpretation, which was accepted by almost everybody. It has been worked out in great detail to form a scheme of admirable logical consistency which has since been inculcated in all young students of theoretical physics."
Where is the source of this "crisis of physics" as has been called?
Certainly the great incompatibility between General Relativity and Quantum Mechanics is in a certain sense, one of the reasons, of that great crisis, and that shows clearly the real need of a paradigm shift.
As one that comes from the Judeo-Christian tradition, that need of a real paradigm shift was of course a real need too. Philosophers such as Teilhard de Chardin, Henry Bergson, Charles Pierce and Ken Wilber, all of them worked for it!.
Ken Wilber said that goal of postmodernity should be the Integration of the Big Three, Science, Philosophy and Spirituality, and a scientist as Eric J. Lerner in his The Big Bang Never Happened, show clearly in it, how a paradigm shift was in cosmology is a real need too.
My work about that need started in 1968, when I first encountered an equation that has been called the most beautiful equation of mathematics: Euler's relation, found by him in 1745 while working with infinite series. It was this equation that led me in 1991 to define what I now call a Basic Systemic Unit, which has the remarkable property of remaining the same in spite of change, exactly the definition of a quantum as given by professor Art Hobson in his book Tales of the Quantum, and which the University of Ottawa encountered when working with that strange concept that frightened Einstein, entanglement, which seemed to violate Special Relativity.
Where is the real cause of the incompatibility between GR and QM?
For GR Tensor Analysis was used, a mathematical tool based on real numbers, and with it there was the need to solve ten functions representing the gravitational field:
"Thus, according to the general theory of relativity, gravitation occupies an exceptional position with regards to other forces, particularly the electromagnetic forces, since the ten functions representing the gravitational field at the same time define the metrical properties of the space measured."
THE FOUNDATION OF THE GENERAL THEORY OF RELATIVITY
By A. Einstein
Well, the point is that in the metric that defines GR, time is just another variable, like space, and so has the same symmetry properties, to the point that it can take both positive and negative signs, so that time travel could be conceived of just like space travel, in any direction. In fact Stephen Hawking, in his A BRIEFER HISTORY OF TIME, writes:
"It is possible to travel to the future. That is, relativity shows that it is possible to create a time machine that will jump you forward in time." Page 105
This is exactly the point that has turned physics into some sort of metaphysics, and so created the great crisis of physics. While QM is based on Schrödinger's complex wave equation, that is, on complex numbers, in which the symbol sqrt(-1) serves to separate two different orders of reality, such as Time and Space, GR is based on real numbers alone.
The Basic Systemic Unit concept, based on Euler's relation is in fact the definition of a Quantum, and as so it can be used to deduce all fundamental equations of physics as can be seen in my paper... resolving in this way that great crisis of physics
Quantum Physics
Edgar Paternina
retired electrical engineer
Relevant answer
Answer
In fact, in electrical engineering, in Power Systems, when dealing with three-phase systems we reduce them to a one-phase equivalent, and for the power system to work properly in steady state the three phases must be balanced to avoid a blackout.
  • asked a question related to Mathematics
Question
6 answers
I have been seeing and following a lot of work on these topics, it even seems that there are more results on them than on the corresponding classical topics, particularly on general topology.
What could be the cause of such results?
Relevant answer
Answer
Dear Colleagues,
If U and E are fixed sets, A is a subset of E, and F is a function from A to the power set of U, then F should be called a soft set over U and E, instead of the pair (F, A).
Thus, since set-valued functions can be identified with relations, a soft set over U and E is actually a relation on E to U, that is, a subset of the product set of E and U.
Therefore, several definitions and theorems on soft sets, consuming a lot of pages, are superfluous. Moreover, notations and terminology can be simplified.
  • asked a question related to Mathematics
Question
23 answers
Relevant answer
Answer
<<Einstein's Geometrical versus Feynman's Quantum-Field Approaches to Gravity Physics>>
If we turn to the already mentioned simplification of space in the form of a helix of a cylinder, then gravity is the force generated by the limit cycle, which tightens the pitch of the helix to zero at the point where the helix degenerates into a circle. As for quantized fields, these are limit cycles in dual space, so they are not responsible for gravity.
  • asked a question related to Mathematics
Question
6 answers
Has our mathematical knowledge progressed as much as contemporary science?
1- Assume a rectangle in the second dimension; this rectangle's components are lines. Its geometric characteristics are perimeter and area.
2- Assume a cube in the third dimension. Its components are the plane. Its geometric characteristics are area and volume.
3- What are the names of its components when this figure is transferred to the 4th dimension? And what are the names of its geometric characteristics? With the transfer to the 5th and higher dimensions, our mathematics has nothing to say. A rectangle is just a simple shape; what about complex geometric shapes?
According to new physical theories such as strings theory, we need to study different dimensions.
Relevant answer
Answer
Dear Yousef, we cannot give "names" for each dimension n>3, because we would need an infinite number of names! (How would we name the cube in a space with 357 dimensions?) If n>3, it suffices to add the prefix "hyper", and every mathematician will understand the sense correctly!
The best description is in the case of dimension n=3. We have the cube, having as faces 6 bounded pieces of planes (that is, 6 equal squares situated in 6 different planes).
The analogue of the cube in dimension n=2 is the square (not the rectangle), having as "faces" 4 equal segments situated on 4 different lines.
The analogue of the cube in every other n-dimensional space R^n with n>3 is called a hypercube.
The hypercube in 4 dimensions has equal cubes as faces, and each such face is situated in a 3-dimensional space R^3.
The hypercube in 5 dimensions has equal hypercubes from R^4 as faces.
No contradiction, all clear!
The analogue holds for the sphere: sphere in 3 dimensions, circle in 2 dimensions, hypersphere in every dimension n>3. Here the equations defining all such mathematical objects are obviously similar.
Hypercubes and hyperspheres have hypervolumes!
So, to study string theory efficiently and seriously, you need more and more advanced mathematics!
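The "hypervolume" remark above can be made concrete with the standard closed forms: an n-cube of side s has volume s^n, and an n-ball of radius r has volume π^(n/2) r^n / Γ(n/2 + 1). A small Python sketch:

```python
import math

def hypercube_volume(side: float, n: int) -> float:
    """Volume of the n-dimensional hypercube: side**n."""
    return side ** n

def hyperball_volume(radius: float, n: int) -> float:
    """Volume of the n-ball: pi^(n/2) * r^n / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) * radius ** n / math.gamma(n / 2 + 1)

print(hypercube_volume(2.0, 4))   # 16.0: the 4-D hypercube of side 2
print(hyperball_volume(1.0, 2))   # pi: area of the unit circle
print(hyperball_volume(1.0, 3))   # 4*pi/3: volume of the unit sphere
```

One formula covers every dimension, which is exactly why no dimension-specific "names" are needed.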
  • asked a question related to Mathematics
Question
4 answers
Modifying the original Feistel structure will it be feasible to design a lightweight and robust encryption algorithm. Somehow changing the structure's original flow and adding some mathematical functions there. I welcome everyone's view.
Relevant answer
Answer
Yes, it is indeed feasible to design a lightweight algorithm based on the Feistel structure. The Feistel network is a popular symmetric structure used in many modern cryptographic algorithms, such as DES (Data Encryption Standard). The design of a lightweight Feistel-based algorithm can effectively balance security and efficiency, making it suitable for environments with constrained resources, such as IoT devices and resource-limited systems.
Key Considerations for Designing a Lightweight Feistel-Based Algorithm
Feistel Structure Basics:
The Feistel structure divides the data into two halves and applies a series of rounds where the right half is modified using a function (often called the round function) combined with a subkey derived from the main key.
The left and right halves are then swapped after each round, employing the same round function iteratively over several rounds.
Lightweight Design Goals:
Reduced Resource Usage: The algorithm should minimize memory and processing requirements, which are crucial in lightweight applications.
Efficient Implementation: It should have efficient implementations in hardware (e.g., FPGAs, ASICs) as well as software (e.g., microcontrollers).
Security: While optimizing for lightweight design, the algorithm must maintain a sufficient level of security against common attacks (such as differential and linear cryptanalysis).
Steps in Designing a Lightweight Feistel Algorithm
Key Design Choices:
Number of Rounds: Determine the optimal number of rounds needed to achieve desired security without excessive computational cost. For lightweight applications, 4 to 8 rounds may be sufficient.
Block Size: Choose a block size that is suitable for the intended application. Smaller block sizes (e.g., 64 or 128 bits) may be appropriate for constrained environments.
Key Size: Develop a flexible key size that provides adequate security while keeping the implementation lightweight. A key size between 80 and 128 bits is commonly used for lightweight designs.
Round Function Design:
Simplicity and Efficiency: The round function should be computationally efficient, possibly utilizing modular arithmetic or simple logical operations (AND, OR, XOR) to enhance speed and reduce footprint.
Subkey Generation: Efficient and secure key scheduling is essential to generate round keys from the primary key, ensuring that each round has a unique key.
Attack Resistance:
Differential and Linear Cryptanalysis: Analyze the design for vulnerabilities to these forms of attacks. The choice of S-boxes in the round function can significantly enhance resistance.
Avalanche Effect: Ensure that a small change in the input or the key results in a significant change in the output.
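The avalanche requirement above can be checked empirically: flipping a single input bit should change roughly half of the output bits. A minimal sketch, using SHA-256 as a stand-in primitive (an assumption for illustration — any candidate round function or cipher could be substituted):

```python
# Empirical avalanche check: flip one input bit at a time and count how
# many output bits change. SHA-256 is used here only as a stand-in for a
# well-designed primitive; a candidate Feistel cipher could be dropped in.
import hashlib

def hamming(a: bytes, b: bytes) -> int:
    # Number of differing bits between two equal-length byte strings.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = bytearray(b"avalanche effect demo message!!!")   # 32-byte test input
base = hashlib.sha256(msg).digest()

diffs = []
for bit in range(64):                     # flip each of the first 64 input bits
    flipped = bytearray(msg)
    flipped[bit // 8] ^= 1 << (bit % 8)
    diffs.append(hamming(base, hashlib.sha256(flipped).digest()))

avg = sum(diffs) / len(diffs)             # should be near 128 of 256 output bits
```

For a 256-bit output, an average far from 128 changed bits would signal a weak diffusion layer.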
Performance Optimization:
Implementation Flexibility: Design the algorithm to allow for easy adaptation for different platforms (hardware vs. software) to maximize performance.
Minimalistic Approach: Reduce unnecessary complexity in the algorithm to lower resource consumption, focusing only on essential components.
Example Lightweight Feistel Structure
While developing a specific algorithm, you could consider a structure similar to the following:
function LightweightFeistelEncrypt(plaintext, key):
    Split plaintext into left (L0) and right (R0)
    For i from 1 to n (number of rounds):
        Ri = Li−1 XOR F(Ri−1, Ki)
        Li = Ri−1
    return (Ln, Rn)

function F(input, k):
    // Simple round function using lightweight operations
    // Example could include small S-boxes and XOR operations
    return output
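A runnable version of that structure, in Python. The round function, rotation constant, and subkeys here are illustrative assumptions, not a vetted cipher:

```python
# Runnable sketch of a 64-bit Feistel network (illustrative only: the toy
# round function F and the example subkeys are assumptions, NOT a vetted
# lightweight cipher).

MASK32 = 0xFFFFFFFF

def F(half, subkey):
    # Toy round function: XOR in the subkey, then a cheap rotate-and-add
    # mix (only XOR, shift, and addition -- lightweight-friendly ops).
    x = (half ^ subkey) & MASK32
    rot = ((x << 5) & MASK32) | (x >> 27)   # rotate left by 5
    return (rot + x) & MASK32

def encrypt(block64, subkeys):
    L, R = block64 >> 32, block64 & MASK32
    for k in subkeys:
        # Li = R(i-1); Ri = L(i-1) XOR F(R(i-1), Ki)
        L, R = R, L ^ F(R, k)
    return (L << 32) | R

def decrypt(block64, subkeys):
    L, R = block64 >> 32, block64 & MASK32
    for k in reversed(subkeys):             # undo rounds in reverse key order
        L, R = R ^ F(L, k), L
    return (L << 32) | R
```

Because each round only swaps halves and XORs in F's output, decryption reuses the same F with the key schedule reversed — F itself never needs to be invertible, which is the core appeal of the Feistel structure for lightweight designs.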
  • asked a question related to Mathematics
Question
1 answer
Homomorphic encryption is a type of encryption that lets you perform mathematical operations on encrypted data without decrypting it first. This means that the raw data remains encrypted while it is being processed, analyzed, and run through algorithms.
Relevant answer
Answer
Homomorphic encryption (HE) is a form of encryption that allows computations to be performed on ciphertext, producing an encrypted result that, when decrypted, matches the result of operations performed on the corresponding plaintext. This property makes HE particularly appealing in the context of forensic investigations, where data privacy and confidentiality are paramount.
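As a toy illustration of that property (not any scheme from the research below, and certainly not production-grade), unpadded "textbook" RSA is multiplicatively homomorphic: multiplying two ciphertexts yields the encryption of the product of the plaintexts.

```python
# Toy demonstration of a homomorphic property using unpadded ("textbook")
# RSA, which is multiplicatively homomorphic. Illustrative only: textbook
# RSA with tiny primes is completely insecure.

p, q = 61, 53
n, e = p * q, 17                          # toy public key (n = 3233)
d = pow(e, -1, (p - 1) * (q - 1))         # private exponent (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 41
c_prod = (enc(a) * enc(b)) % n            # operate on ciphertexts only
# Decrypting the ciphertext product recovers the plaintext product:
# (a^e * b^e) mod n == (a*b)^e mod n
```

Real HE schemes used in the research below (e.g., Paillier for addition, BGV/CKKS for richer arithmetic) generalize this idea to whole computations.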
Overview of Latest Research on Homomorphic Encryption in Forensics:
  1. Privacy-Preserving Data Processing:Recent research focuses on using HE to enable privacy-preserving analysis of forensic data. This involves allowing investigators to conduct queries or calculations on encrypted data without needing to decrypt it, thus protecting sensitive information during the analysis process.
  2. Secure Data Sharing:HE facilitates secure data sharing among different forensic entities (e.g., law enforcement, legal teams, and forensic analysts) without exposing the underlying sensitive data. Research is exploring protocols that leverage HE for collaborative forensic investigations while maintaining data integrity and confidentiality.
  3. Digital Evidence Management:HE is being investigated for use in managing digital evidence, particularly in cases involving cloud storage or shared environments. Ensuring that digital evidence can be processed while remaining encrypted is crucial for maintaining chain-of-custody and preventing tampering.
  4. Forensic Data Analysis:Some studies focus on applying HE to specific forensic analysis tasks, such as statistical analysis of digital evidence. This allows investigators to perform necessary computations (e.g., data aggregation, anomaly detection) on encrypted data sets, thereby increasing the privacy of the involved parties.
  5. Adoption of Machine Learning:Research is being conducted on leveraging HE in conjunction with machine learning models for forensic analysis. For instance, HE can allow training and inference on sensitive datasets without exposing the actual data, which is crucial when dealing with personal information, such as in cases involving digital crimes or cyberbullying.
  6. Performance Optimization:A significant focus of recent research has been on improving the efficiency and practical applicability of HE in forensic scenarios. This includes optimizing encryption schemes, reducing computation time, and minimizing resource consumption, making HE feasible for real-time forensic applications.
  7. Non-Interactive Proofs and HE:Research is also exploring the use of non-interactive proofs combined with HE to provide verification of computations while keeping the data encrypted. This approach can ensure that the results obtained from forensic computations are valid without revealing the underlying data.
  8. Legal and Regulatory Considerations:There is growing attention to the legal implications of using HE in forensics, including compliance with data protection regulations such as GDPR. Research is exploring how HE can align forensic practices with legal requirements for data security and privacy.
Challenges and Future Directions:
  • Performance Limitations: While HE offers strong privacy guarantees, it has inherent computational overheads. Future research will need to focus on advancing cryptographic techniques to enhance performance without compromising security.
  • Standardization Issues: Developing standards for using HE in forensic applications is vital. Researchers are working on frameworks that can guide the integration of HE into existing forensic practices.
  • Interoperability: Ensuring that HE-based forensic tools can work seamlessly with existing forensic software and tools remains a challenge that needs to be addressed.
  • asked a question related to Mathematics
Question
10 answers
Yes, it is.
Relevant answer
Answer
The idea of this Q&A work is to replace current mathematics and theoretical physics, which live and operate in an R^4 spatial manifold, with counterparts operating in a modern 4D x-t space.
This would make it possible to resolve, in a simple manner, almost all classical and quantum physical situations, and also to introduce new general rules in science.
We first define the discrete 4D unit space from the numerical statistical theory of matrix chains B of the Cairo techniques and compare it to the current 4D unit spaces of Einstein and Markov.
Next, you need to introduce what is called the modern Laplacian theorem in 4D unit space.
Finally, you explain the unexpected and striking relationship between the speed of light c=3E8 and the thermal diffusivity of metals when both live in 4D unit space.
The accuracy and precision of the numerical results show beyond doubt that the proposed 4D unit space is the one in which mother nature operates. This space forms the basis of a unified field theory of all types of energy density, whereas the classical manifold space R^4 is inferior and incomplete.
In conclusion, we recommend the proposed 4D unit space which constitutes a real breakthrough in the search for a new 4D numerical statistical theory to replace the classical incomplete R^4 mathematical space.
Note that throughout this work, the author uses his own double-precision algorithm. In other words, no Python or MATLAB library is required.
  • asked a question related to Mathematics
Question
6 answers
Given:
In an isosceles triangle, the lengths of the two equal sides are each 1, and the base of the triangle is m.
A circle is circumscribed around the triangle.
Find the chord of the circle that intersects the two equal sides of the triangle and is divided into three equal segments by the points of intersection.
Relevant answer
Answer
Alex Ravsky Sorry for not reading the last part of your comment carefully! Yes, in fact many triangles with side lengths 1, 1, m do not admit the second solution. To have two solutions, the condition 2−2cosA ≤ 1/2, i.e. cosA ≥ 3/4, is necessary. Very restrictive, and cosA ≥ 3/4 is an odd limitation on angle A.
Liudmyla Hetmanenko A nice and interesting problem!
  • asked a question related to Mathematics
Question
2 answers
What is the mathematical difference between AI and AC?
Standardization of AI and AC based on the DIKWP model (Beginner's Edition)
Relevant answer
Answer
Thanks for asking such a thought-provoking question. To my knowledge, the mathematical distinction between AI and AC is best described in terms of complexity and the depth of the cognitive function emulated. AI, by and large, remains confined to the Data-Information-Knowledge (DIK) levels, where it computes on data patterns to arrive at decisions. According to the DIKWP model, AC adds the wisdom and purpose layers, thereby emulating higher-order cognitive processes closer to human consciousness. The DIKWP model insists on purposeful transformation of data, which makes AC more sophisticated and adaptive than AI (MDPI).
  • MDPI Applied Sciences - Exploring Artificial Consciousness through DIKWP Model
  • asked a question related to Mathematics
Question
3 answers
I am interested in the existence of intelligent tutoring systems for teaching physics and mathematics in secondary schools or artificial intelligence tools that can be used in the classroom for student-teacher collaboration in these subjects, preferably with free access.
I am also interested in any relevant studies/research or information on the above topic.
Relevant answer
Answer
Dear Aikaterini Bounou , there are many AI tools available, and most of them are free of charge.
"How can AI help you improve mathematics learning and teaching? In this post, we will dive into various use cases of AI for learning math and some of the popular AI tools for learning and teaching math. Mathematics can be overwhelming for some, but with AI's personalization powers, it can become an accessible and fun topic to learn for all. Learners of any age can be better at math with AI's help..."
Also, have a look in this article where you will find more info on AI tools for science:
  • asked a question related to Mathematics
Question
1 answer
National Achievement Tests are given to learners, for example in Grade 10, in subjects such as Science, Mathematics, and English, to assess how much students have learned in specific disciplines.
Relevant answer
Answer
First, recognize that the "tests" are testing acceptance of the academic indoctrination of the instruction. However, as this indoctrination differs from the student's real-world experience, interest fades and the test results fade with it. The more academic the instruction, the less interest for all.
Make high school applicable to student's lives.
  • asked a question related to Mathematics
Question
3 answers
Let me share a quote from my own essay:
"Dynamic flows on a seven-dimensional sphere that do not coincide with the globally minimal vector field, but remain locally minimal vector fields of matter velocities, we interpret as physical fields and particles. At the same time, if in the space of the evolving 3-sphere $\mathbb{R}^{4}$ the vector field forms singularities (compact inertial manifolds in which flows are closed), then they are associated with fermions, and if flows are closed only in the dual space $\mathbb{R}^{4}$ with an inverse metric, then the singularities are associated with bosons. For example, a photon is a limit cycle (circle) of a dual space, which in Minkowski space translationally moves along an isotropic straight line lying in an arbitrary plane $(z,t)$, and rotates in the planes $(x,t)$, $(y,t)$." (p. 12 MATHEMATICAL NOTES ON THE NATURE OF THINGS)
Relevant answer
Answer
Unlike the Kaluza theory, where the fifth dimension serves as a source of electromagnetic vector potential, we have a ring (circle) in additional dimensions, the movement of which in Minkowski space is equivalent to the movement of a polarized photon. However, this does not mean that our dynamical system is unable to cope, like Kaluza's theory, with the assignment of a vector potential.
  • asked a question related to Mathematics
Question
3 answers
Hi all
i was wondering if anyone knew of a valid and reliable assessment of task-based engagement that could be used to compare student engagement across different types of tasks in mathematics?
thanks
James
Relevant answer
Answer
Hello James,
Here are a few suggestions.
How about Motivated Strategies for Learning Questionnaire (MSLQ) by Pintrich et al. (1991). It includes scales for cognitive engagement. Sample items: “When I study for this class, I try to put together the information from class and from the book” and “I often find that I have to reread material several times to understand it.” The scale has been adapted for different subjects including mathematics.
There is Engagement vs. Disaffection with Learning (EvsD) by Skinner et al. (2009). It measures both behavioural and emotional engagement, along with disaffection (i.e., withdrawal and disengagement). Example items: Engagement: "When we work on something in class, I get involved."; Disaffection: "When I’m in class, I think about other things."
Academic Engagement Scale by Reeve & Tseng (2011) assess four dimensions of student engagement: behavioural, emotional, cognitive, and agentic. Example items: Behavioural: "I try hard to do well in this class."; Emotional: "I enjoy learning new things in this class."; Cognitive: "When I study, I try to understand the material better."; Agentic: "I ask the teacher questions when I don’t understand."
Classroom Engagement Inventory (CEI) by Wang et al. (2014). This inventory scale assesses different dimensions of student engagement, including cognitive, emotional, behavioural, and social engagement. Example items: Cognitive: "I try to connect what I am learning to things I have learned before."; Behavioural: "I pay attention in class."; Emotional: "I feel happy when I am in class." It’s a bit more tuned to the classroom setting.
A slightly different tack may be looking at how immersed they are in the task; sometimes referred to as “flow”. The Flow State Scale (FSS) by Jackson and Marsh (1996) measures student “flow”. Flow is associated with deep cognitive engagement and intrinsic motivation. Some example items: "I feel just the right amount of challenge." and “I am completely focused on the task at hand." While originally developed for sports and physical activities, the FSS has been adapted for educational settings and can be used to assess engagement in challenging mathematics tasks.
These are a bit general for learning tasks. I’m not sure, but you might be looking at comparing different approaches to teaching mathematics in a classroom. In which case you might prefer something more task-specific in the measure (?). How about some of these:
There is Mathematics Engagement Instrument (MEI) by Wang et al. (2016) which deals with different aspects of student engagement in mathematics tasks. It includes scales for behavioural, emotional, and cognitive engagement more tailored to mathematics activities.
Example items: Behavioral Engagement: "I try to understand the mathematics problems even when they are hard."; Emotional Engagement: "I feel happy when I solve a math problem."; Cognitive Engagement: "I put a lot of effort into learning the math material."
The Task Engagement Survey (TES) by Sinatra et al. (2015) really zeroes in on the specified task. It assesses how deeply students are involved in the task, how much effort they exert, and how interested they are in the task. Example items: "I found this task interesting."; "I worked hard to complete the task."; "I was fully engaged while working on this task." This might be useful to compare engagement across different activities or instructional strategies.
The Task Motivation and Engagement Scale (TMES) was developed by Martin et al. (2017) to assess students’ motivation and engagement in specific academic tasks. It includes items on effort, persistence, task absorption, and emotional engagement specific to a task. Example items: "I tried to do my best on this task."; "I kept working even when the task was difficult."; "I enjoyed working on this task." This scale can be used to compare student engagement across different mathematics tasks or teaching methods within the classroom.
In the same vein, the Task-Specific Engagement Measure (TSEM) by Scherer et al. (2016) is a scale developed to measure engagement specifically in learning tasks. Example items: "I felt interested in the task I was doing."; "I focused hard on solving the task."; "I felt absorbed in the task." The TSEM is particularly relevant for mathematics education, where different types of tasks (e.g., problem-solving, conceptual understanding, procedural fluency) require different levels of engagement.
I hope these are helpful to your research.
Kind regards,
Tim
  • asked a question related to Mathematics
Question
4 answers
In triangle ABC, the median BM_2 intersects the bisector AL_1 at point P.
The side BC is divided by the base of the bisector AL_1 into segments CL_1=m and BL_1=n.
Determine the ratio of the segments AP to PL_1.
Relevant answer
Answer
Dear colleagues,
I am grateful for your interest in my geometric problem. Both of your solutions are correct and they are excellent!
I also want to express my heartfelt appreciation for your words of support for my country during these difficult times.
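For readers looking for the closed form: mass-point geometry gives AP : PL1 = (m + n) : n. A minimal numeric sketch confirms this for one case; the specific triangle BC = 3, AB = 2, AC = 4 is an assumed example, chosen so that the bisector from A gives BL1 = n = 1 and CL1 = m = 2:

```python
# Numeric check of AP : PL1 = (m + n) : n for the median/bisector problem.
# Assumed example triangle: BC = 3, AB = 2, AC = 4, so the bisector from A
# splits BC as BL1 = 1 (= n), CL1 = 2 (= m); expected ratio (m+n)/n = 3.
import math

A = (-0.5, math.sqrt(3.75))                   # satisfies AB = 2, AC = 4
B, C = (0.0, 0.0), (3.0, 0.0)
L1 = (1.0, 0.0)                               # foot of the bisector from A
M2 = ((A[0] + C[0]) / 2, (A[1] + C[1]) / 2)   # midpoint of AC

# Solve A + s*(L1 - A) = B + t*(M2 - B) for s (2x2 linear system in s, t).
ax, ay = L1[0] - A[0], L1[1] - A[1]
bx, by = M2[0] - B[0], M2[1] - B[1]
det = -ax * by + bx * ay
s = (A[0] * by - bx * A[1]) / det

ratio = s / (1 - s)                           # AP / PL1
```

With equal segments m = n (the equilateral case, where both cevians are medians), the formula gives the familiar centroid ratio 2 : 1.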
  • asked a question related to Mathematics
Question
1 answer
Dear research community members, I would like to post a preprint of my article in Mathematics, but I need an endorsement. If anyone can do this for me, I will greatly appreciate it. Secondly, I will send you my new paper with explanations.
Endorsement Code: SP84WZ
Thanks a lot.
P.S. I don't know the procedure. The moderator sent me a link:
Ruslan Pozinkevych should forward this email to someone who's registered as an endorser for the cs.IT (Information Theory) subject class of arXiv,
or alternatively visit
and enter SP84WZ.
Once again, thank you, and apologies for the bother.
Relevant answer
Answer
Right there on arXiv, among the authors of papers, find those who can give an endorsement. At the bottom of each paper it says: "Which authors of this paper are endorsers?" Click it, find the endorsers, and send them your article.
  • asked a question related to Mathematics
Question
3 answers
How do we improve the inflation prediction using mathematics?
Relevant answer
Answer
To assess inflation dynamics mathematically, economists use models like the Phillips Curve to relate inflation to unemployment, monetary policy rules like the Taylor Rule to link interest rates with inflation and output, DSGE models to capture the effects of economic shocks, and time series methods like ARIMA or VAR to analyze historical data and forecast future inflation
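The time-series part of that answer can be sketched with a minimal AR(1) fit. The data here are synthetic and the parameter values are assumptions for illustration; real forecasting would use actual CPI data and an ARIMA/VAR library:

```python
# Minimal AR(1) inflation-forecast sketch. Illustrative only: the series
# is synthetic (pi_t = 2 + 0.8*pi_{t-1} + noise); real work would fit
# ARIMA/VAR models to actual inflation data.
import numpy as np

rng = np.random.default_rng(0)

pi = np.empty(200)
pi[0] = 3.0
for t in range(1, 200):
    pi[t] = 2.0 + 0.8 * pi[t - 1] + rng.normal(0, 0.2)

# Least-squares fit of pi_t on pi_{t-1}: pi_t ≈ const + phi * pi_{t-1}
phi, const = np.polyfit(pi[:-1], pi[1:], 1)

forecast = const + phi * pi[-1]       # one-step-ahead forecast
```

The same regression view underlies the Phillips Curve and Taylor Rule mentioned above: each is a linear relation whose coefficients are estimated from historical data.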
  • asked a question related to Mathematics
Question
2 answers
Are there any research projects and grants in Mathematics for individuals, without the involvement of an employer?
Relevant answer
Answer
@Pratheepa Cs
I am not able to understand what you want to convey.
  • asked a question related to Mathematics
Question
18 answers
We assume that the difference is huge and that it is not possible to compare the two spaces.
The R^4 mathematical space considers time as an external controller and the space itself is immobile in its description or definition in the face of curl and divergence operators.
On the other hand, in the 4D x-t unit space, time t is woven into the 3D geometric space as a dimensionless integer.
Here, the curl and divergence operators are just extensions of their original definitions in 3D geometric space.
Relevant answer
Answer
Finally, here is a simple understanding of spaces:
The Laplacian theorem lives and functions in the 4D-unit space but not in the classical 3D+t space.
The theory of relativity and the speed of light c lives and functions in 4D unit space but not in classical 3D+t space.
. . etc.
  • asked a question related to Mathematics
Question
1 answer
Is there a galactic rotation anomaly? Is it possible to find out the speed and time of the galactic rotation anomaly?
Abstract: Orbital speeds of stars far from the centre of a galaxy are found to be roughly constant, instead of showing the reductions predicted by current gravitational theories (applied on galactic and cosmological scales). This is called the anomalous rotation of galaxies. This article intends to show that constant angular speeds of all macro bodies in a galaxy are a natural phenomenon and there is no mystery about it.
Keywords: Galaxy, Stable galaxy, rotational anomaly.
A planetary system is a group of macro bodies, moving at certain linear speed in circular path around galactic centre. Central body of planetary system is by far the largest and controls mean linear speeds of all other members. Gravitational attractions between macro bodies of planetary system cause perturbations in their directions of motion, resulting in additional curvatures of their paths. When perturbed paths of smaller macro bodies are related to central body in assumed static state, we get apparent orbital paths of planetary bodies. They appear to revolve around static central body in elliptical/circular paths. Apparent orbital paths are unreal constructs about imaginary static state of central body. They are convenient to find relative positions of macro bodies in the system and to predict cyclic phenomena occurring annually. In reality, planetary bodies do not orbit around central body but they move in wavy paths about the central body. Central and planetary bodies move at a mean linear speed along their curved path around galactic centre.
Perturbations of the orbital paths of macro bodies in a planetary system are related directly to their matter-content and inversely to the square of their distance from the central body. Distance from the central body has the greater effect on the magnitude of perturbation. Hence, normally, paths of planetary bodies at greater distance from the central body are perturbed by lesser magnitudes. Curvatures, and thus angular speeds, of their apparent orbits reduce as distance from the central body increases. Since a planetary system has no real spin motion, this is an imaginary phenomenon. However, many learned cosmologists seem to take the spin motion of a planetary system as a real phenomenon and consider that members of all spinning groups of macro bodies should behave in a similar manner, i.e. the angular (spin) speed of members should reduce as their distance from the centre of the system increases.
Stable galaxy consists of many macro bodies revolving around its centre. This group can be considered as a spinning fluid macro body, rotating at a constant angular speed. Gravitational collapse initiates spin motion of galactic cloud and maintains constant spin speed of outer parts of stable galaxy. Centre part of galaxy, which is usually hidden, may or may not be spinning. We can observe only visible stars and their angular speeds about galactic centre. Linear motions of macro bodies, caused by gravitational attractions towards other macro bodies in the system, have two components each. One component, due to additional linear work invested in association with it, produces macro body’s linear motion, in a direction slightly deflected away from centre of circular path. Other component, towards centre of its circular path, is caused by additional angular work invested in association with it. This component produces angular motion of macro body.
All matter-particles in a fluid macro body, spinning at constant speed, have constant angular speeds. Consider a matter-particle at O, in figure 1, moving in circular path AOB. XX is the tangent to the circular path at O. The instantaneous linear speed of the matter-particle is represented by arrow OC, in magnitude and direction. It has two components: OD, along tangent XX, and DC, perpendicular to tangent XX and directed away from the centre of the circular path. The component DC represents the centrifugal action on the matter-particle due to its motion in the circular path. In order to maintain constant curvature of the path, the matter-particle has to have an instantaneous linear (centripetal) motion equal to CE toward the centre of the circular path.
[Figure 1: matter-particle at O on circular arc AOB, with tangent XX and points C, D, E marking the velocity components.]
If the magnitudes and directions of the instantaneous motions are as shown in figure 1, the matter-particle maintains its motion along circular path AOB at constant angular speed.
Should the matter-particle increase its instantaneous linear speed for any reason, both components OD and DC would increase. Component OD tends to move matter-particle at greater linear speed along tangent XX. Outward component DC tends to move matter-particle away from centre of its circular path. The matter particle tends to increase radius of curvature of its path. This action is usually assigned to imaginary ‘centrifugal force’. In reality expansion of radius of curvature of path is caused by centrifugal component of linear motion. Reduction in centripetal action also produces similar results.
Should the matter-particle decrease its instantaneous linear speed for any reason, both components OD and DC would reduce. Component OD tends to move matter-particle at lesser linear speed along tangent XX. Reduction in outward component DC tends to move matter-particle towards centre of its circular path. The matter particle tends to reduce radius of curvature of its path. Reduction of radius of curvature of path is caused by reduction in centrifugal component of linear motion. Increase in centripetal action also produces similar results.
In other words, the matter-particle regulates its distance from the centre of its circular path so that its angular speed remains constant. This is the reason centrifuges work: as the linear speeds of matter-particles increase, they move outwards, in an effort to keep their angular speed constant.
Additional work, done for linear motion of a matter-particle and additional work, done for its angular motion are entirely separate and distinct. Additional work for linear motion of a matter-particle can produce only linear motion and additional work for angular motion can produce only angular motion. In the case, explained above, increased in linear speed of matter-particle is considered. That is, additional work invested in association with matter-particle is of linear nature. It can only increase its linear motion. As no additional work for angular motion is invested matter-particle cannot change its angular speed. Instead, matter-particle is compelled to move away from centre of its rotation, so that it can increase magnitude of linear motion while keeping magnitude of angular motion constant.
Similarly, an increase in centripetal effort invests additional work required for the angular motion of the matter-particle. The matter-particle tends to increase the magnitude of its angular motion. The curvature of its path increases by reducing its distance from the centre of the circular path. The matter-particle tends to move towards the centre of the circular path, so that it can increase its angular speed while keeping its linear speed constant.
Every macro body in a stable galaxy behaves in a manner similar to matter-particle, represented in figure 1. They tend to position themselves in the system, so that their linear and angular speeds match corresponding works associated with them. Macro bodies strive to maintain their angular speeds constant by keeping appropriate distance from centre of rotation. Macro bodies towards the central region may experience additional centripetal effort. They might increase their angular motion and move towards central point to merge with black hole present there. In due course of time, macro bodies on outer fringes move away from galaxy and destroy its stability.
In a galaxy, various macro bodies arrive at their relative positions gradually, by trial and error, during which their relative positions and linear and angular speeds are stabilized. A galaxy, as a whole, stabilizes only when its constituent macro bodies have reached steady relative positions and motions. In order to maintain stability, it is essential to maintain the relative positions of all constituent macro bodies by having constant and equal angular speeds, and linear speeds corresponding to their distances from the galactic centre. A change in the relative position, linear speed, or angular speed of even one macro body is liable to destabilize the galaxy.
As and when superior 3D matter-particles at the fringe of galaxies attain linear speeds approaching the speed of light, they break down into primary 3D matter-particles and produce a halo around the equatorial region. Halos of neighbouring stable galaxies interact to prevent their translational movements and maintain the steady state of the universe.
Therefore constant angular speeds of constituent macro bodies of stable galaxies are their natural states. There are no mysteries or anomalies about them. This phenomenon is mystified by those who consider imaginary spin motions of planetary systems are real. Therefore, assumptions of dark matter, time dilation, modification of gravitational laws, etc and complicated mathematical exercises are irrational and unnecessary to prove non-existing rotation anomaly of galaxies.
Conclusion:
Galactic rotation anomaly is a non-existing phenomenon derived from imaginary spin motions of planetary systems about their central bodies in assumed static states. Constant angular speeds of stars in a galaxy confirm static state of galactic center (in space), rather than produce an anomaly.
Reference:
[1] Nainan K. Varghese, MATTER (Re-examined), http://www.matterdoc.info
Chuck A Arize added a reply
5 hours ago
Yes, there is a galactic rotation anomaly observed as the discrepancy between the predicted and actual rotation speeds of galaxies. This anomaly, often attributed to dark matter, shows that the outer regions of galaxies rotate faster than expected. Measuring the speed and time of this rotation anomaly involves detailed observations of galactic rotation curves and modeling, which reveal the velocity profile and suggest the presence of unseen mass influencing the rotation.
Abdul Malek added a reply
3 hours ago
Abbas Kashani > "Is there a galactic rotation anomaly?"
There is a galactic rotation anomaly, but only according to officially accepted theories of gravity and the (Big Bang) theory of the formation of the galaxies inferred for a finite, closed and a created (in the finite past) universe.
But all these theories based on causality and theology are wrong! The dialectical and scientific view is that the universe is Infinite, Eternal and Ever-changing, mediated by dialectical chance and necessity. Gravity is a dialectical contradiction of the unity of the opposites of attraction and repulsion (due to inherent free motion of matter particles, vis viva). In short (human) time scale, new galaxies are seen to be formed through the dissipation and/or ejection of matter in the form of stars, star clusters or even a large part of the galaxy as quasars from the existing galaxies.
So, the observed high orbital velocities of the stars, star clusters, etc. at the periphery of the galaxies, and of the planets at the periphery of the planetary systems within the galaxies, are just natural phenomena, and there is no anomaly!
"Ambartsumian, Arp and the Breeding Galaxies" : http://redshift.vif.com/JournalFiles/V12NO2PDF/V12N2MAL.pdf
KEPLER -NEWTON -LEIBNIZ -HEGEL Portentous and Conflicting Legacies in Theoretical Physics, Cosmology and in Ruling https://www.rajpub.com/index.php/jap/article/view/9106
"THE CONCEPTUAL DEFECT OF THE LAW OF UNIVERSAL GRAVITATION OR ‘FREE FALL’: A DIALECTICAL REASSESSMENT OF KEPLER’S LAWS":
Article THE CONCEPTUAL DEFECT OF THE LAW OF UNIVERSAL GRAVITATION OR...
Preston Guynn added a reply
4 days ago
Your discussion question statement is:
  • "Is there a galactic rotation anomaly? Is it possible to find out the speed and time of the galactic rotation anomaly? Orbital speeds of stars, far from centre of a galaxy, are found roughly constant, instead of reductions predicted by current gravitational theories (applied on galactic and cosmological scales). This is called the anomalous rotation of galaxies."
The limit of galactic rotation velocity is expected because rotation minus precession has a maximum velocity. Our solar system's relative rotation velocity with respect to the Milky Way galaxy is at this maximum, and as a fraction of speed of light the observed velocity can be designated vg/c, and is determined in the single page proof of the quantum of resistance:
Article The Physical Basis of the Quantum of Resistance is Relativis...
The detailed proofs are in:
Article Thomas Precession is the Basis for the Structure of Matter and Space
Note that the observed velocity is the difference between rotation and precession.
Dale Fulton added a reply
3 days ago
The galactic rotation "anomaly" (flat rotation curve) is actually a misinterpretation of the measurements of the galactic rotations, when performed with spectrographic (redshift) measurements. This has misled astronomers since the inception of the spectrographic velocity measurements, as being totally doppler shift, whereas they contain many non-linear components of redshift due to gases and other effects from each galaxy. Recent measurements of the Milky Way galaxy rotation curve prove that this is the case, i.e, that spectrographic velocities are misleading, and that proper motion or parallax is the only way to accurately measure those velocities.
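To make the point above concrete, here is a minimal sketch of the pure-Doppler interpretation the answer criticizes: a measured redshift z is converted into a radial velocity using the special-relativistic Doppler formula. The formula itself is standard; the illustrative z values are arbitrary.

```python
C = 299_792.458  # speed of light, km/s

def doppler_velocity(z):
    """Radial velocity inferred from redshift z under the usual
    special-relativistic pure-Doppler interpretation."""
    return C * ((1 + z) ** 2 - 1) / ((1 + z) ** 2 + 1)

for z in (0.0005, 0.001, 0.01):
    print(f"z = {z}: inferred v = {doppler_velocity(z):8.1f} km/s")
# The answer's point: if part of z is not Doppler at all (gas, intrinsic
# effects), the inferred v overstates the true orbital velocity, which
# only proper-motion or parallax measurements can check independently.
```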
André Michaud added a reply
20 hours ago
There is no galactic rotation anomaly. Such a concept emerges from a lack of careful study of the historical discoveries about orbital structures in the universe, established since Tycho Brahe first collected his data about the planetary orbits in the solar system, from which Johannes Kepler abstracted his three laws, which Newton then confirmed mathematically.
The galactic rotation parameters are well known to those who have studied the true foundations of astrophysics. They are put in perspective in this article:
Article Inside Planets and Stars Masses
Abbas Kashani added a reply
44 seconds ago
Preston Guynn
Dale Fulton
André Michaud
Greetings and respect to the distinguished professors and astronomers. I am very grateful for your efforts, dear colleagues. Thank you all.
Abbas Kashani
Mohaghegh Ardabili University
Relevant answer
Answer
Of course there is. It can easily be determined for spiral galaxies. Many have what is called a flat rotation curve, meaning the velocity of their stars is roughly the same throughout the galaxy, regardless of distance from the galactic center, totally contrary to mainstream gravity theory.
The anomaly concerns stellar velocities, i.e. distance traveled per unit time. The mainstream presently attributes this anomaly to what they assert to be unknown matter. But this is a very weak hypothesis, since it requires about six times more unseen matter than observable matter, and even then it is a very poor predictor of stellar velocities. There are many better predictors of stellar velocities than dark matter, which may be the worst of all. But most alternatives are Modified Gravity proposals, which are usually much better predictors but have their own serious theoretical problems.
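The contrast the answer describes can be sketched numerically: Newtonian gravity predicts the orbital speed around an enclosed central mass falls as 1/sqrt(r), while observed curves stay roughly flat. The enclosed mass below is an illustrative placeholder (roughly 10^11 solar masses), not a measured value.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 2.0e41      # illustrative enclosed mass (~1e11 solar masses), kg
KPC = 3.086e19  # metres per kiloparsec

def keplerian_speed(r):
    """Circular orbital speed predicted by Newtonian gravity
    for a central mass M at radius r (metres)."""
    return math.sqrt(G * M / r)

for r_kpc in (5, 10, 20, 40):
    v = keplerian_speed(r_kpc * KPC) / 1000.0  # km/s
    print(f"r = {r_kpc:2d} kpc -> Keplerian v = {v:6.1f} km/s")
# The predicted speed halves every time r quadruples; observed spiral
# rotation curves instead stay roughly constant out to large radii.
```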
  • asked a question related to Mathematics
Question
1 answer
Is there a mathematical relation for this, or can anyone suggest some research articles?
Relevant answer
The total number of members in a vehicle platoon can significantly affect the overall cybersecurity of the platoon. As the number of vehicles in the platoon increases, the complexity of maintaining secure communication and ensuring overall cybersecurity also increases. Here are some key considerations:
### 1. **Increased Attack Surface**:
- **More Entry Points**: Each additional vehicle adds an entry point for potential cyberattacks. Attackers might exploit vulnerabilities in one vehicle's communication system, sensors, or control units, which could compromise the entire platoon.
- **Network Complexity**: With more vehicles, the communication network becomes more complex. Managing secure data transmission among a larger number of vehicles increases the risk of data breaches, man-in-the-middle attacks, or jamming.
### 2. **Inter-Vehicle Communication (V2V)**:
- **Bandwidth and Latency**: A larger platoon requires more data to be exchanged between vehicles. This can strain the network, potentially leading to delays in communication, which attackers might exploit. Securing the communication channel against eavesdropping or spoofing becomes more challenging as the number of vehicles increases.
- **Synchronization and Consistency**: Ensuring that all vehicles in a large platoon receive and process data simultaneously and consistently is difficult. Any lag or inconsistency can be exploited by attackers to create confusion or to insert malicious commands.
### 3. **Consensus and Decision-Making**:
- **Complexity in Consensus Algorithms**: In larger platoons, decision-making (such as adjusting speed or direction) must account for more vehicles, making the consensus algorithms more complex. This complexity can lead to vulnerabilities if the algorithms are not properly secured.
- **Potential for Misinformation**: In a large platoon, if an attacker compromises one vehicle, they could inject false information into the network, leading to incorrect decisions by other vehicles. This could result in collisions or unsafe maneuvers.
### 4. **Scalability of Security Protocols**:
- **Challenges in Encryption**: As the number of vehicles grows, the encryption and decryption processes for secure communication need to scale efficiently. More vehicles mean more keys and certificates to manage, which can increase the risk of key management failures or exploitation of weaker encryption protocols.
- **Authentication Overhead**: Ensuring that every vehicle in a large platoon is authenticated and trusted adds overhead to the system. The complexity of maintaining a robust authentication process can increase the risk of vulnerabilities.
### 5. **Impact of a Single Compromised Vehicle**:
- **Cascading Effects**: In larger platoons, the impact of a single compromised vehicle can be more severe. A hacked vehicle might send malicious commands or false data, affecting the behavior of other vehicles. The more vehicles there are, the more widespread the potential disruption.
- **Difficulty in Isolation**: Identifying and isolating a compromised vehicle in a large platoon is more challenging. The larger the platoon, the more difficult it becomes to quickly detect and mitigate the impact of a cybersecurity breach.
### 6. **Cooperative Adaptive Cruise Control (CACC)**:
- **Increased Dependency**: Larger platoons often rely on Cooperative Adaptive Cruise Control, where vehicles communicate to maintain speed and distance. If the cybersecurity of CACC is compromised, the entire platoon could be at risk. The complexity of securing CACC increases with the number of vehicles.
### 7. **Redundancy and Fault Tolerance**:
- **Need for Redundant Systems**: As the platoon grows, redundancy in communication and control systems becomes more important to ensure that the platoon can operate safely even if some vehicles are compromised. However, implementing and managing redundancy increases the system's complexity and potential points of failure.
- **Resilience to Attacks**: A larger platoon must be more resilient to attacks. Ensuring that the platoon can continue to function safely even if part of the system is under attack is critical, but more difficult to achieve as the number of vehicles increases.
### 8. **Human Factors and Response**:
- **Increased Coordination Challenges**: In larger platoons, coordinating human responses to cybersecurity threats becomes more difficult. Drivers or operators may have less control or understanding of the overall platoon’s state, complicating responses to potential attacks.
- **Training and Awareness**: Ensuring that all drivers or operators in a large platoon are adequately trained in cybersecurity practices is more challenging, leading to potential weaknesses in the human element of security.
In summary, as the number of vehicles in a platoon increases, so do the challenges in maintaining cybersecurity. The increased attack surface, complexity of communication, and need for robust authentication, encryption, and redundancy make it more difficult to secure the platoon against cyber threats. Addressing these challenges requires advanced security protocols, real-time monitoring, and resilient system designs.
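The key-management point above can be quantified: with pairwise symmetric keys, a fully meshed platoon of n vehicles needs n(n-1)/2 distinct keys, so key-management effort grows quadratically with platoon size. A minimal sketch:

```python
def pairwise_keys(n):
    """Unique symmetric keys needed for fully meshed V2V links
    among n vehicles: one key per unordered pair."""
    return n * (n - 1) // 2

for n in (2, 5, 10, 20, 50):
    print(f"{n:3d} vehicles -> {pairwise_keys(n):5d} pairwise keys")
# A 50-vehicle platoon already needs 1225 keys. Certificate-based
# schemes trade this quadratic growth for per-vehicle certificates,
# at the cost of authentication and revocation overhead.
```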
  • asked a question related to Mathematics
Question
4 answers
During Kurt Gödel's lecture on the occasion of Albert Einstein's birthday in 1945, this question was already raised by John Wheeler. Gödel did not comment on it. Neither Einstein nor Gödel believed in quantum theory. Is there currently any reference or article that relates to this question? From today's scientific perspective, is there a relationship between Heisenberg's Uncertainty Principle and Gödel's Incompleteness Theorem, even though the two principles arise from different theoretical frameworks and serve different purposes within their respective domains? Please provide references.
Relevant answer
Answer
The uncertainty principle is closely linked to the Fourier properties of waves. There is no such link for incompleteness in arithmetic.
There are a few similarities, though: for example, both results hinge on how operations combine (non-commuting observables in quantum mechanics; addition together with multiplication in arithmetic, since arithmetic with addition alone is complete and decidable).
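The Fourier link mentioned above can be demonstrated numerically: for a Gaussian pulse, the product of its RMS time width and the RMS width of its spectrum stays at the Fourier lower bound of 1/2, so squeezing the pulse in time necessarily broadens its spectrum. A minimal sketch using NumPy's FFT:

```python
import numpy as np

def width_product(sigma, n=4096, dt=0.01):
    """RMS time-width times RMS frequency-width of a Gaussian pulse,
    with widths computed from the intensity profiles |f|^2 and |F|^2."""
    t = (np.arange(n) - n // 2) * dt
    f = np.exp(-t**2 / (2 * sigma**2))            # pulse of width sigma
    F = np.abs(np.fft.fftshift(np.fft.fft(f)))    # its spectrum (magnitude)
    w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, dt))

    def spread(x, weight):
        p = weight / weight.sum()
        mu = (x * p).sum()
        return np.sqrt(((x - mu) ** 2 * p).sum())

    return spread(t, f**2) * spread(w, F**2)

for sigma in (0.2, 0.5, 1.0):
    print(f"sigma = {sigma}: width product = {width_product(sigma):.3f}")
# The product is pinned near 1/2 regardless of sigma: narrow in time
# means wide in frequency, which is the mathematical core of the
# Heisenberg uncertainty relation for conjugate variables.
```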
  • asked a question related to Mathematics
Question
2 answers
Can we apply theoretical computer science to prove theorems in mathematics?
Relevant answer
Answer
The pumping lemma is a valuable theoretical tool for understanding the limitations of finite automata and regular languages. It is not used for solving computational problems directly but is important for proving non-regularity and understanding the boundaries of regular languages.
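The pumping lemma's use as a proof tool can be illustrated by brute force for the classic non-regular language L = { a^n b^n }: for the string a^p b^p, every admissible split x y z (with |xy| <= p and |y| >= 1) pumps out of the language, which is exactly the contradiction the lemma requires. A minimal sketch:

```python
def in_lang(s):
    """Membership test for L = { a^n b^n : n >= 0 }."""
    n = len(s) // 2
    return s == "a" * n + "b" * n

def pumping_breaks(s, p):
    """Check that for every split s = xyz with |xy| <= p and |y| >= 1,
    some pumped string x y^i z leaves the language."""
    for xy_len in range(1, p + 1):
        for y_len in range(1, xy_len + 1):
            x, y, z = s[:xy_len - y_len], s[xy_len - y_len:xy_len], s[xy_len:]
            if all(in_lang(x + y * i + z) for i in (0, 2)):
                return False  # this split survives pumping: no contradiction
    return True

p = 7
assert pumping_breaks("a" * p + "b" * p, p)
print("every split of a^p b^p fails to pump: L = {a^n b^n} is not regular")
```

Since |xy| <= p forces y to consist only of a's, pumping y down (i = 0) or up (i = 2) unbalances the counts of a's and b's, so every split fails, witnessing non-regularity for this p.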
  • asked a question related to Mathematics
Question
1 answer
How do primary-school mathematics teachers design their assessments? What conceptions do they hold of the concept of assessment?
Relevant answer
Answer
They work from a pre-established assessment-design grid, which they try to follow formally.
  • asked a question related to Mathematics
Question
6 answers
Would someone be kind enough to explain why there is a Pareto Principle in primary-school mathematics grades (please, without counter-questions such as who says so or where it is written). If primary mathematics education lasts, for example, 8 years (the number varies between countries), then in the first three grades 80% of pupils have excellent or very good marks in mathematics and 20% have average or poor ones. In the fourth grade this ratio is approximately 50:50. From the fifth to the eighth grade, however, the relationship is reversed: only 20% have excellent or very good marks, and 80% average or poor ones. Sometimes, for some reason, the ratio is 70:30 instead of 80:20, but the relationship and its regularity persist. Thank you in advance for your reply, and for your kindness and time.
Relevant answer
Answer
Reply to George McIlvaine (from ChatGPT)
The analogy of students as "producers of math grades" within the framework of the Law of Diminishing Returns (LODR) offers a valuable perspective on why mathematics performance might decline as complexity increases. To explore this hypothesis and determine if it provides a better explanation than the Pareto Principle (PP), we can consider the following steps and implications:
### Understanding LODR in Mathematics Education
The Law of Diminishing Returns suggests that as students continue to invest effort into learning mathematics, the incremental improvement in their grades diminishes over time due to increasing complexity and the natural limits of their cognitive abilities. This concept can be visualized through a smooth logarithmic decline in performance:
1. **Initial High Returns**: In the early stages of education, students quickly grasp basic concepts, leading to high grades.
2. **Decreasing Returns**: As the curriculum becomes more complex, each additional unit of effort results in smaller improvements in grades.
3. **Plateauing Performance**: Eventually, many students reach a point where additional effort does not significantly improve their grades, leading to a plateau or gradual decline.
### Testing the Hypotheses: PP vs. LODR
To determine whether the PP or LODR better describes the observed decline in math grades, we can analyze the grade distribution and performance trends over time:
1. **Data Collection**: Collect longitudinal data on math grades from a cohort of students across multiple years (e.g., from first grade to eighth grade).
2. **Analyze Decline Patterns**: Plot the grade distributions and overall performance trends to see if they follow a smooth logarithmic decline (indicative of LODR) or show a clear 80/20 distribution at different stages (indicative of PP).
3. **Statistical Modeling**: Apply statistical models to fit the data to both hypotheses. A logarithmic regression model can test the LODR, while a distribution analysis can test for PP patterns.
### Implications of LODR
If the LODR hypothesis holds true, it suggests several educational strategies to mitigate the diminishing returns and support students' continuous improvement:
1. **Adaptive Learning**: Implement adaptive learning technologies that tailor the complexity of problems to the individual student's level, providing a more personalized learning experience.
2. **Incremental Challenge**: Design the curriculum to introduce complexity in smaller, more manageable increments to avoid overwhelming students and to sustain their motivation and performance.
3. **Continuous Support**: Provide ongoing support and resources, such as tutoring and mentoring, to help students navigate the increasing complexity of the material.
4. **Emphasize Mastery**: Focus on mastery learning, where students are given the time and support needed to fully understand each concept before moving on to the next.
### Combining PP and LODR
It is also possible that both principles play a role in mathematics education:
- **Initial Stages (PP)**: In the early years, the Pareto Principle might dominate, with a small group of students quickly excelling while others keep up at a more basic level.
- **Later Stages (LODR)**: As complexity increases, the Law of Diminishing Returns might take over, leading to a gradual decline in performance as students reach their cognitive limits and the returns on additional effort decrease.
### Conclusion
By examining the patterns in student performance data and applying appropriate models, educators can better understand the underlying principles affecting math grades. This understanding can inform targeted interventions and curriculum design to help all students maximize their potential in mathematics. Ultimately, the goal is to create a learning environment where students can achieve a level of mathematical proficiency that allows them to contribute meaningfully to society.
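The "statistical modeling" step above can be sketched concretely: fit grade data to a logarithmic-decline model (the LODR hypothesis) and inspect the goodness of fit. The cohort averages below are hypothetical placeholder numbers, not real data.

```python
import numpy as np

# Hypothetical cohort averages: mean math grade (1-5 scale) per school year.
years  = np.arange(1, 9)
grades = np.array([4.6, 4.4, 4.1, 3.6, 3.2, 3.0, 2.9, 2.8])

# LODR hypothesis: grade ~ a + b*ln(year), i.e. linear in log(year).
b, a = np.polyfit(np.log(years), grades, 1)
pred = a + b * np.log(years)
r2 = 1 - ((grades - pred) ** 2).sum() / ((grades - grades.mean()) ** 2).sum()
print(f"fit: grade = {a:.2f} + {b:.2f} * ln(year), R^2 = {r2:.3f}")
# A high R^2 for this smooth logarithmic model, versus a better fit for
# a two-regime (80/20) split model, would favour diminishing returns
# over a Pareto-style threshold as the explanation.
```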
  • asked a question related to Mathematics
Question
3 answers
With the term “gravity”, we refer to the phenomenon of the gravitational interaction between material bodies.
How that phenomenon manifests itself in the case of the interaction of two mass particles at rest relative to an inertial reference frame (IRF) was, in the framework of classical physics, described mathematically by Isaac Newton. Oliver Heaviside, Oleg Jefimenko and others did the same for bodies moving relative to an IRF. They described the effects of the kinematics of the gravitating objects, assuming that the interaction between massive objects in space is mediated by "the gravitational field".
In that context, the gravitational field is defined as a vector field having a field- and an induction-component (Eg and Bg) simultaneously created by their common sources: time-variable masses and mass flows. This vector-field (a mathematical construction) is an essential element of the mathematical description of the gravitational phenomena, and as such an element of our thinking about nature.
One cannot avoid the question of whether a physical entity is being described by the vector field (Eg, Bg) and, if so, what the nature of that entity is.
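For reference, the Maxwell-like (Heaviside) description of the field (Eg, Bg) alluded to above is often written as follows; this is a sketch in one common sign convention, not necessarily the notation of the cited papers, with ρ the mass density and J the mass-current density:

```latex
\nabla \cdot \mathbf{E}_g = -4\pi G \rho , \qquad
\nabla \cdot \mathbf{B}_g = 0 ,
\nabla \times \mathbf{E}_g = -\frac{\partial \mathbf{B}_g}{\partial t} , \qquad
\nabla \times \mathbf{B}_g = \frac{1}{c^{2}}\left(-4\pi G\,\mathbf{J} + \frac{\partial \mathbf{E}_g}{\partial t}\right)
```

The minus signs reflect that gravity is attractive for positive mass, in contrast to the electromagnetic case.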
In the framework of "the theory of informatons"[1],[2],[3], the substance of the gravitational field, which in that context is considered a substantial element of nature, is identified as "gravitational information" or "g-information", i.e. information carried by informatons. The term "informaton" refers to the constituent element of g-information. It is a massless, energyless granular entity rushing through space at the speed of light and carrying information about the position and velocity of its source, a mass element of a material body.
References
[1] Acke, A. (2024) Newtons Law of Universal Gravitation Explained by the Theory of Informatons. https://doi.org/10.4236/jhepgc.2024.103056
[2] Acke, A. (2024) The Gravitational Interaction between Moving Mass Particles Explained by the Theory of Informatons. https://doi.org/10.4236/jhepgc.2024.103060
[3] Acke, A. (2024) The Maxwell-Heaviside Equations Explained by the Theory of Informatons. https://doi.org/10.4236/jhepgc.2024.103061
Relevant answer
Answer
Dear Preston,
Thanks for your quick response.
1. You wrote: "The 'informatons' in your theory are not necessary for the interaction of matter because space and time alone are sufficient and a theory of 'informatons' is therefore redundant or superfluous." In the context of classical physics, time and space are considered elements of our thinking about nature. They do not participate in what is happening. They are conceived as constructions of our thinking that allow us to locate and date events in an objective manner (art. [1], §2). So they cannot be the cause of physical phenomena.
2. You wrote: “If you look at my one page summary paper
I will look at your papers, but I need more than a few hours to form my opinion.
Regards,
Antoine