Science topics: Mathematics
Science topic

Mathematics - Science topic

Mathematics, Pure and Applied Math
Questions related to Mathematics
  • asked a question related to Mathematics
Question
3 answers
Can anyone answer my question?
Relevant answer
Answer
The most important biodiversity measures in field studies are species richness (the count of distinct species), species evenness (distribution uniformity of species), and species diversity indices like Shannon and Simpson indices. Mathematically, species richness is a simple count, while evenness ratios measure uniformity. Diversity indices, like Shannon's, are logarithmic calculations of proportional abundance, whereas Simpson’s index focuses on dominance, calculating the probability of two randomly selected individuals belonging to the same species.
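A minimal sketch of these three measures in Python (the sample abundances below are illustrative, not field data):

```python
from collections import Counter
from math import log

def diversity_indices(individuals):
    """Compute species richness, Shannon H', Simpson's D, and Pielou evenness."""
    counts = Counter(individuals)
    n = sum(counts.values())
    props = [c / n for c in counts.values()]
    richness = len(counts)                      # number of distinct species
    shannon = -sum(p * log(p) for p in props)   # H' = -sum p_i ln p_i
    simpson = sum(p * p for p in props)         # D = sum p_i^2 (dominance)
    evenness = shannon / log(richness) if richness > 1 else 0.0
    return richness, shannon, simpson, evenness

# Illustrative sample: 3 species with counts 4, 3, 3
sample = ["A"] * 4 + ["B"] * 3 + ["C"] * 3
r, h, d, j = diversity_indices(sample)
```

Note that a low Simpson's D here means low dominance (high diversity); some texts report 1 − D or 1/D instead.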
  • asked a question related to Mathematics
Question
1 answer
I am interested in the study of visual subcompetence in education, specifically how visual tools and technologies can be integrated into the educational process to enhance the development of professional competencies in future teachers, particularly in mathematics education.
I am looking for research and definitions that highlight and specify the concept of visual subcompetence in education. Specifically, I am interested in how visual subcompetence is distinguished as part of the broader professional competence, particularly in the context of mathematics teacher education.
Relevant answer
Answer
It would help to examine a larger number of case studies on visual subcompetence in education.
  • asked a question related to Mathematics
Question
2 answers
Can you suggest any study that uses Ethnographic Research design?
Relevant answer
Answer
Marjun Abear A notable study using ethnographic research design in teaching and learning is Paul Willis' "Learning to Labor" (1977). Willis observed working-class boys in a British school to explore how their social interactions and cultural attitudes shaped their educational experiences and future job prospects. This ethnography highlights how education can reinforce class inequalities, providing deep insights into the relationship between culture, learning, and social reproduction.
  • asked a question related to Mathematics
Question
60 answers
I apologize to you all! The question was asked incorrectly—my mistake. Now everything is correct:
In a circle with center O, chords AB and CD are drawn, intersecting at point P.
In each segment of the circle, other circles are inscribed with corresponding centers O_1; O_2; O_3; O_4.
Find the measure of angle ∠O_1 PO_2.
Relevant answer
Answer
  • asked a question related to Mathematics
Question
1 answer
Can you explain the mathematical principles behind the Proof of Stake (PoS) algorithm, including how validator selection probabilities, stake adjustments, and reward calculations are determined
Relevant answer
Answer
Dear Hiba, you can look up the references I put further below, or search Wikipedia if you have not already (I am not sure how much is there). Here is what I can summarize; first, let's break down the mathematical principles behind the Proof of Stake (PoS) algorithm as I understand them from the existing literature:
1. Validator Selection Probabilities
In PoS, validators are chosen to create new blocks based on the amount of cryptocurrency they hold and are willing to “stake” as collateral. The selection process is typically pseudo-random and influenced by several factors:
  • Stake Amount: The more coins a validator stakes, the higher their chances of being selected. Mathematically, if a validator (i) stakes ( S_i ) coins out of a total staked amount ( S_{total} ), their probability ( P_i ) of being selected is: ( P_i = \frac{S_i}{S_{total}} )
  • Coin Age: Some PoS systems also consider the age of the staked coins. The longer the coins have been staked, the higher the chances of selection. This can be represented as: ( P_i = \frac{S_i \times A_i}{\sum_{j=1}^{N} S_j \times A_j} ) where ( A_i ) is the age of the coins staked by validator (i).
  • Randomization: To prevent predictability and enhance security, a randomization factor is often introduced. This can be achieved through a hash function or a random number generator.
2. Stake Adjustments
Stake adjustments occur when validators add or remove their staked coins. The total stake ( S_{total} ) is updated accordingly, which in turn affects the selection probabilities. If a validator adds ( \Delta S ) coins to their stake, their new stake ( S_i' ) becomes:
( S_i' = S_i + \Delta S )
The new total stake ( S_{total}' ) is:
( S_{total}' = S_{total} + \Delta S )
3. Reward Calculations
Validators receive rewards for creating new blocks, which are typically proportional to their stake. The reward ( R_i ) for validator (i) can be calculated as:
( R_i = R_{total} \times \frac{S_i}{S_{total}} )
where ( R_{total} ) is the total reward distributed for the block.
Some PoS systems also include penalties for malicious behavior or downtime, which can reduce the rewards or even the staked amount.
Example
Let’s consider a simple example with three validators:
  • Validator A stakes 40 coins.
  • Validator B stakes 30 coins.
  • Validator C stakes 30 coins.
The total stake ( S_{total} ) is 100 coins. The selection probabilities are:
  • ( P_A = \frac{40}{100} = 0.4 )
  • ( P_B = \frac{30}{100} = 0.3 )
  • ( P_C = \frac{30}{100} = 0.3 )
If the total reward for a block is 10 coins, the rewards are:
  • ( R_A = 10 \times 0.4 = 4 ) coins
  • ( R_B = 10 \times 0.3 = 3 ) coins
  • ( R_C = 10 \times 0.3 = 3 ) coins
Hope you will find this quite helpful.
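To make the formulas above concrete, here is a minimal sketch in Python using the same three-validator example (real PoS implementations add randomness beacons, slashing, coin age, and much more; this is only the basic proportional model):

```python
import random

def selection_probabilities(stakes):
    """P_i = S_i / S_total for each validator i."""
    total = sum(stakes.values())
    return {v: s / total for v, s in stakes.items()}

def pick_validator(stakes, rng=random.random):
    """Weighted pseudo-random selection proportional to stake."""
    total = sum(stakes.values())
    r = rng() * total
    for v, s in stakes.items():
        r -= s
        if r <= 0:
            return v
    return v  # fallback for floating-point edge cases

def block_rewards(stakes, r_total):
    """R_i = R_total * S_i / S_total."""
    total = sum(stakes.values())
    return {v: r_total * s / total for v, s in stakes.items()}

stakes = {"A": 40, "B": 30, "C": 30}
probs = selection_probabilities(stakes)   # {'A': 0.4, 'B': 0.3, 'C': 0.3}
rewards = block_rewards(stakes, 10)       # {'A': 4.0, 'B': 3.0, 'C': 3.0}
```

The `rng` parameter is injectable so the selection can be driven by a verifiable random source in a real chain, or a fixed value in tests.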
  • asked a question related to Mathematics
Question
10 answers
Dear Esteemed Colleagues,
I hope this message finds you well. I am writing to invite your review and insights on what I believe to be a significant development in our understanding of the Riemann Hypothesis. After extensive work, I have arrived at a novel proof for the hypothesis, using a generalization of the integral test applicable to non-monotone series, as outlined in the attached document.
As a lead AI specialist at Microsoft, specializing in math-based AI, I have employed both traditional mathematical techniques and AI-based verification algorithms to rigorously validate the logical steps and conclusions drawn in this proof. The AI models have thoroughly checked the derivations, ensuring consistency in the logic and approach.
The essence of my proof hinges on an approximation for the zeta function that results in an error-free evaluation of its imaginary part at $x = \frac{1}{2}$, confirming this as the minimal point for both the real and imaginary components. I am confident that this new method is a significant step forward and stands up to scrutiny, but as always, peer review is a cornerstone of mathematical progress.
I warmly invite your feedback, comments, and any questions you may have regarding the methods or conclusions. I fully stand by this work and look forward to a robust, respectful discussion of the implications it carries. My goal is not to offend or overstate the findings but to contribute meaningfully to this ongoing conversation in the mathematical community.
Thank you for your time and consideration. I look forward to your responses and the productive discussions that follow.
Sincerely,
Rajah Iyer
Lead AI Specialist, Microsoft
Relevant answer
Answer
Continuously differentiable. That part was not included in the 2019 version.
  • asked a question related to Mathematics
Question
2 answers
Please, I need this paper:
Banas, J. and Lecko, M. (2002) Fixed Points of the Product of Operators in Banach Algebras. Panamerican Mathematical Journal, 12, 101-109
Relevant answer
Answer
Try to request full-text from the author(s)...
  • asked a question related to Mathematics
Question
3 answers
How can green education be applied in mathematics for children?
Relevant answer
Answer
Applying green education in mathematics for children involves integrating environmental themes and sustainability principles into math lessons. Here are some strategies to make this connection:
1. **Use Environmental Data in Math Problems**
- **Real-world examples**: Incorporate environmental statistics, such as data on pollution, recycling rates, and energy consumption, into math problems. This not only teaches mathematical concepts like percentages, averages, and data analysis but also raises awareness about environmental issues.
- **Hands-on projects**: Have children collect local environmental data (e.g., water usage, electricity consumption) and analyze it to learn about graphing, patterns, and calculations.
2. **Explore Geometry Through Nature**
- **Shapes in nature**: Use examples from nature like leaves, flowers, and snowflakes to teach geometric concepts like symmetry, fractals, and patterns.
- **Eco-friendly architecture**: Introduce geometric principles through sustainable design, such as how solar panels are angled to maximize sunlight or how certain shapes reduce waste in construction.
3. **Problem Solving with Environmental Impact**
- **Sustainability challenges**: Set up problem-solving activities where students must calculate the environmental impact of various actions. For instance, ask them to calculate the savings in resources when using recycled paper versus new paper.
- **Optimization tasks**: Use problems that involve optimizing energy use or waste reduction, showing how math can help create more sustainable solutions.
4. **Promote Critical Thinking on Environmental Issues**
- **Math and decision making**: Present scenarios where students need to make environmentally conscious decisions, such as calculating carbon footprints for different transportation methods or comparing the efficiency of renewable vs. non-renewable energy sources.
- **Game theory and resource use**: Introduce simple concepts of game theory or optimization to help children think about resource allocation and how different decisions impact the environment.
5. **Project-Based Learning with a Green Focus**
- **Eco-friendly projects**: Encourage students to work on projects like creating a garden, where they can use math for measurement, planning, and budgeting. This not only teaches practical math but also instills responsibility for the environment.
- **Sustainable design challenges**: Have students design eco-friendly solutions like a rainwater collection system, where they calculate the volume of water that can be saved based on local rainfall data.
6. **Use Visual and Interactive Tools**
- **Green apps and games**: Use interactive math apps and games that focus on environmental topics. For instance, apps that simulate resource management or renewable energy can teach math concepts while promoting green education.
- **Field trips and nature walks**: Incorporate math lessons into outdoor activities, where children measure plant growth, calculate the height of trees, or estimate the number of species in a given area.
7. **Introduce Mathematical Concepts Through Climate Change**
- **Climate data analysis**: Analyze real-world data on climate change, like global temperature rise or CO2 emissions. This fosters an understanding of trends and how math can model and predict future changes.
- **Carbon footprint calculation**: Teach students how to calculate their own carbon footprint using math, helping them understand the impact of their actions and encouraging more sustainable behavior.
By integrating green education into math, children not only gain math skills but also learn to think critically about environmental issues and sustainability, which can inspire them to take positive actions for the planet.
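As a concrete illustration of the carbon-footprint exercise in point 7, here is a small classroom-style Python script. The emission factors are made-up placeholder values for teaching the arithmetic, not official figures:

```python
# Illustrative classroom exercise: estimate weekly CO2 from transport choices.
# Emission factors (kg CO2 per km) are placeholder values for teaching only.
EMISSION_FACTORS = {"car": 0.20, "bus": 0.10, "bicycle": 0.0, "walking": 0.0}

def weekly_footprint(trips):
    """trips: list of (mode, km) tuples -> total kg CO2 for the week."""
    return sum(EMISSION_FACTORS[mode] * km for mode, km in trips)

week = [("car", 30), ("bus", 20), ("bicycle", 10)]
total = weekly_footprint(week)   # 30*0.20 + 20*0.10 + 10*0.0 = 8.0 kg
```

Children can then compare scenarios (e.g., replacing the car trips with bus trips) and discuss the difference, which exercises multiplication, summation, and comparison in one task.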
  • asked a question related to Mathematics
Question
3 answers
Hello,
I am currently working on a research project on the use of mathematical optimization to determine the optimal policy rate in monetary policy. I would like to know whether there is recent research, or specific models, that address this topic. I am also looking for advice on how to structure my model and choose relevant variables for this type of analysis. Any reading suggestions or expertise would be greatly appreciated.
Thank you in advance for your help.
Relevant answer
Answer
Research on the use of mathematical optimization to determine the policy rate includes models such as the Taylor Rule, which sets the policy rate based on inflation and output gaps, and dynamic stochastic general equilibrium (DSGE) models that incorporate optimization techniques to evaluate the impacts of monetary policy. Other studies utilize linear programming and mixed-integer optimization methods to analyze trade-offs in policy decisions and macroeconomic stability. These models help central banks effectively balance inflation control and economic growth.
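As a concrete illustration of the first model mentioned, here is a minimal sketch of the classic Taylor (1993) rule in Python. The 0.5/0.5 coefficients and the 2% equilibrium real rate and inflation target follow Taylor's original calibration; treat this as a teaching sketch, not a policy tool:

```python
def taylor_rule(inflation, output_gap, real_rate=2.0, target_inflation=2.0):
    """Classic Taylor (1993) rule, all quantities in percentage points:
       i = r* + pi + 0.5*(pi - pi*) + 0.5*(output gap)."""
    return (real_rate + inflation
            + 0.5 * (inflation - target_inflation)
            + 0.5 * output_gap)

# With inflation at 3% and output 1% above potential:
rate = taylor_rule(inflation=3.0, output_gap=1.0)   # 2 + 3 + 0.5 + 0.5 = 6.0
```

An optimization-based approach would instead choose the rate path minimizing a loss function (e.g., squared inflation and output gaps) subject to a model of the economy, which is what the DSGE literature formalizes.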
  • asked a question related to Mathematics
Question
4 answers
As an academic working and pursuing a PhD degree in Egypt, both in private and public universities respectively, I wanted to put forward a simple question:
What is the role of universities and other academic institutions today? Was there ever a time when universities were agents of revolutionary action and change, or were they only ever a part of the overall consumerist system?
We can go back as far as Ancient Egyptian times, when scribes and priests were taught writing, mathematics, and the documentation of daily exchanges, all the way to today's era of digital globalization and mass education, where the knowledge-production process has become more of a virtual canvas than actual knowledge. Has knowledge ever served its purpose? Have academic institutions, and of course academic scholars, ever delivered the true purpose of education?
Was, and is, education's sole main purpose the economic prosperity of certain classes, and hence socio-economic segregation?
Relevant answer
Answer
Today's global societies are very competitive in so many ways; as a result, this trend has a ripple effect on global educational institutions as well. Specifically speaking, without an MA/MS degree, one cannot compete for a decent job in the professional-level job market. Thus, universities are driven to restructure their institutions to meet this demand.
  • asked a question related to Mathematics
Question
4 answers
There exists a neural network model designed to predict a specific output, detailed in a published article. The model comprises 14 inputs, each normalized using the minimum and maximum parameters specified for normalization. It incorporates six hidden layers; the article provides the network's weight parameters from the inputs to the hidden layers, along with biases, and likewise the parameters from the hidden layers to the output layer, including biases.
The primary inquiry revolves around extracting the mathematical equation suitable for implementation in Excel or Python to facilitate output prediction.
Relevant answer
Answer
A late reply. I hope this helps you!
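Since no explicit equation was given above, here is a hedged sketch of how the published weights could be turned into a forward-pass computation in Python. The layer sizes, the tanh activation, and the min-max normalization to [-1, 1] are assumptions for illustration; substitute whatever the article actually specifies:

```python
import numpy as np

def minmax_scale(x, x_min, x_max):
    """Normalize raw inputs to [-1, 1] (a common choice; adjust to the article)."""
    return 2 * (x - x_min) / (x_max - x_min) - 1

def forward(x, weights, biases, activation=np.tanh):
    """Generic MLP forward pass: a = act(W a + b) for each hidden layer,
    then a linear output layer. `weights`/`biases` come from the published tables."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = activation(W @ a + b)
    return weights[-1] @ a + biases[-1]   # linear output layer

# Tiny hypothetical example: 2 inputs -> 3 hidden units -> 1 output.
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(3, 2)), rng.normal(size=(1, 3))]
bs = [np.zeros(3), np.zeros(1)]
x = minmax_scale(np.array([5.0, 10.0]), np.array([0.0, 0.0]),
                 np.array([10.0, 20.0]))
y = forward(x, Ws, bs)
```

The same expression can be flattened into a single Excel formula per layer (SUMPRODUCT of a row of weights with the previous layer's outputs, plus the bias, wrapped in TANH), but Python with NumPy is far less error-prone for six hidden layers.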
  • asked a question related to Mathematics
Question
7 answers
It seems common to combine basic observations to create new observables, which are then used for PPP and other applications. Basic observations such as pseudorange and carrier-phase observations are real measurements from GNSS. These real observations are combined to create an entirely new observable that is not a direct, physical, real measurement. Amazingly, these new observables solve real problems such as PPP (e.g., the ionosphere-free combination).
  • What is the theory behind this?
  • Any similar approach like this in other scientific field or any simple analogous explanation?
  • You could direct me to resources such as videos, or literature.
Relevant answer
Answer
Furthermore, once a satellite is locked for recording the phases, a counter starts, adding 1 whenever a whole carrier-wave cycle has passed, based on the time one wavelength corresponds to. So the unknown ambiguity of a satellite remains constant (associated with the instant the lock started) as long as that satellite stays locked without interruption. If there is an interruption, a different ambiguity has to be resolved in the estimation.
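As a concrete illustration of such a combination, here is a minimal sketch of the dual-frequency ionosphere-free pseudorange combination for GPS L1/L2 (the standard textbook formula; the range and TEC values below are made up):

```python
# GPS carrier frequencies (Hz)
F1 = 1575.42e6   # L1
F2 = 1227.60e6   # L2

def iono_free(p1, p2):
    """Ionosphere-free pseudorange combination:
    P_IF = (f1^2 * P1 - f2^2 * P2) / (f1^2 - f2^2).
    The first-order ionospheric delay (proportional to 1/f^2) cancels."""
    return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)

# Made-up example: geometric range of 20,000 km plus an ionospheric delay
# that scales as 1/f^2 (40.3 * TEC / f^2, in metres).
rho, tec = 20_000_000.0, 2e17
p1 = rho + 40.3 * tec / F1**2
p2 = rho + 40.3 * tec / F2**2
p_if = iono_free(p1, p2)   # recovers rho up to numerical precision
```

This is the "theory" in miniature: a linear combination is designed so that one error term cancels exactly, at the cost of amplified noise in the new observable.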
  • asked a question related to Mathematics
Question
9 answers
In triangle ∆ABC (with ∠C = 90°), the angle CBA is equal to 2α.
A line AD is drawn to the leg BC at an angle α (∠BAD = α).
The length of the hypotenuse is 6, and the segment CD is equal to 3.
Find the measure of the angle α.
This problem can be solved using three methods: trigonometric, algebraic, and geometric. I suggest you find the geometric method of solution!
Relevant answer
Answer
After a trigonometric solution (a pretty tedious route, not worth publishing) I found a geometric way, the same as Dinu Teodorescu's, but in a somewhat different order:
If one adds the midpoint O of AB to the picture by Liudmyla Hetmanenko, then the following becomes clear:
property 1. AO = CO, which implies ∠CAO = ∠ACO = 2α;
property 2. CD = CO, which implies that D and O lie on a circle with center at C;
property 3. ∠DBO = α = 0.5 ∠DCO, which implies that D, O and B lie on a circle with center at C, which in turn implies CB = CD, which means that
3α = 45°, so α = π/12 radians = 15°.
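The answer α = 15° can be checked numerically against the original data (AB = 6, CD = 3, ∠C = 90°, ∠CBA = 2α, ∠BAD = α). The script below is just a sanity check on the trigonometric relations, not part of the geometric proof:

```python
from math import sin, tan, radians

def cd_for(alpha_deg, ab=6.0):
    """With angle C = 90 deg, angle B = 2a, angle BAD = a, compute CD:
    AC = AB*sin(2a); angle CAD = 90 deg - 3a; CD = AC*tan(angle CAD)."""
    a = radians(alpha_deg)
    ac = ab * sin(2 * a)
    return ac * tan(radians(90 - 3 * alpha_deg))

cd = cd_for(15.0)   # should equal 3 when alpha = 15 degrees
```

Indeed, at α = 15° we get AC = 6·sin 30° = 3 and ∠CAD = 45°, so CD = 3·tan 45° = 3, matching the given data.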
  • asked a question related to Mathematics
Question
6 answers
Scientists believe theories must be proven by experiments. Does their faith in the existence of objective reality mean they are classical scientists who reject quantum mechanics' statements that observers and the observed are permanently and inextricably united? In this case, scientists would unavoidably and unconsciously influence every experiment and form of mathematics. In the end, they may be unavoidably and unconsciously influencing the universe which is the home of all experiments and all mathematics.
Relevant answer
Answer
If (once again: if!) thoughts can influence objective reality, then both the thoughts of the adherents of the Copenhagen interpretation and the thoughts of its opponents possess this property. Let us hope that these influences are in dynamic equilibrium.
  • asked a question related to Mathematics
Question
14 answers
The need of a paradigm shift in physics
Is it possible in a world as fragmented as ours to present a new concept of Unity in which Science, Philosophy and Spirituality or Ontology can be conceived working in Complete Harmony?
In this respect the late Thomas S. Kuhn wrote in his
The Structure of Scientific Revolutions
"Today research in parts of philosophy, psychology, linguistic, and even art history, all converge to suggest that the traditional paradigm is somehow askew. That failure to fit is also increasingly apparent by the historical study of science to which most of our attention is necessarily directed here."
And even the father of Quantum Physics complained strongly in his 1952 colloquia, when he wrote:
"Let me say at the outset, that in this speech, I am opposing not a few special statements claims of quantum mechanics held today, I am opposing its basic views that has been shaped 25 years ago, when Max Born put forward his probability interpretation, which was accepted by almost everybody. It has been worked out in great detail to form a scheme of admirable logical consistency which has since been inculcated in all young students of theoretical physics."
Where is the source of this "crisis of physics" as has been called?
Certainly the great incompatibility between General Relativity and Quantum Mechanics is, in a certain sense, one of the reasons for that great crisis, and it shows clearly the real need for a paradigm shift.
As one who comes from the Judeo-Christian tradition, that need for a real paradigm shift was of course a real need for me too. Philosophers such as Teilhard de Chardin, Henri Bergson, Charles Peirce and Ken Wilber all worked for it!
Ken Wilber said that the goal of postmodernity should be the integration of the Big Three: Science, Philosophy and Spirituality; and a scientist such as Eric J. Lerner, in his The Big Bang Never Happened, shows clearly how a paradigm shift in cosmology is a real need too.
My work on that need started in 1968, when I found for the first time an equation that has been declared the most beautiful equation of mathematics, I mean Euler's relation, found by him in 1745 when working with infinite series. It was this equation that led me in 1991 to define what I now call a Basic Systemic Unit, which has the most remarkable property of remaining the same in spite of change, exactly the definition of a quantum as given by professor Art Hobson in his book Tales of the Quantum, and the same property the University of Ottawa found when working with that strange concept that frightened Einstein, entanglement, which seemed to violate Special Relativity.
Where is the real cause of the incompatibility between GR and QM?
For GR Tensor Analysis was used, a mathematical tool based on real numbers, and with it there was the need to solve ten functions representing the gravitational field:
"Thus, according to the general theory of relativity, gravitation occupies an exceptional position with regards to other forces, particularly the electromagnetic forces, since the ten functions representing the gravitational field at the same time define the metrical properties of the space measured."
THE FOUNDATION OF THE GENERAL THEORY OF RELATIVITY
By A. Einstein
Well, the point is that in the metric that defines GR, time is just another variable, just like space, and so has the same symmetry properties, to the point that it can take both signs, positive and negative, so that time travel could be conceived of just like space travel, in any direction; in fact Stephen Hawking, in his A BRIEFER HISTORY OF TIME, writes:
"It is possible to travel to the future. That is, relativity shows that it is possible to create a time machine that will jump you forward in time." Page 105
This is exactly the point that has turned physics into some sort of metaphysics, and so created the great crisis of physics. While QM is based on Schrödinger's complex wave equation, that is, on complex numbers, in which the symbol sqrt(-1) separates two different orders of reality, such as Time and Space, GR is based on real numbers alone.
The Basic Systemic Unit concept, based on Euler's relation, is in fact the definition of a quantum, and as such it can be used to deduce all the fundamental equations of physics, as can be seen in my paper, resolving in this way that great crisis of physics.
Quantum Physics
Edgar Paternina
retired electrical engineer
Relevant answer
Answer
In fact, in electrical engineering, in power systems, when dealing with three-phase systems we reduce them to a one-phase system, and for the power system to work properly in steady state the three phases must be balanced, to avoid blackouts.
  • asked a question related to Mathematics
Question
6 answers
I have been seeing and following a lot of work on these topics; it even seems that there are more results on them than on the corresponding classical topics, particularly in general topology.
What could be the cause of such results?
Relevant answer
Answer
Dear Colleagues,
If U and E are fixed sets, A is a subset of E, and F is a function
from A to the power set of U, then F should be called a soft set
over U and E, instead of the pair (F, A).
Thus, since set-valued functions can be identified
with relations, a soft set over U and E is actually a relation
on E to U, that is, a subset of the product set of E and U.
Therefore, several definitions and theorems on soft sets,
consuming a lot of pages, are superfluous. Moreover, notations
and terminology can be simplified.
  • asked a question related to Mathematics
Question
23 answers
Relevant answer
Answer
<<Einstein's Geometrical versus Feynman's Quantum-Field Approaches to Gravity Physics>>
If we turn to the already mentioned simplification of space in the form of a helix of a cylinder, then gravity is the force generated by the limit cycle, which tightens the pitch of the helix to zero at the point where the helix degenerates into a circle. As for quantized fields, these are limit cycles in dual space, so they are not responsible for gravity.
  • asked a question related to Mathematics
Question
6 answers
Has our mathematical knowledge progressed as much as contemporary science?
1- Assume a rectangle in the second dimension; this rectangle's components are lines. Its geometric characteristics are perimeter and area.
2- Assume a cube in the third dimension. Its components are planes. Its geometric characteristics are area and volume.
3- When we transfer this figure to the 4th dimension, what are the names of its components, and what are its geometric characteristics called? With the transfer to the 5th and higher dimensions, our mathematics has nothing to say. A rectangle is just a simple shape; what about complex geometric shapes?
According to new physical theories, such as string theory, we need to study different dimensions.
Relevant answer
Answer
Dear Yousef, we cannot give "names" to each dimension n > 3, because we would need an infinite number of names! (How would one name the cube in the space with 357 dimensions?) If n > 3, it is sufficient to add the prefix "hyper", and every mathematician will understand the sense correctly.
The best description is in the case of dimension n = 3. We have the cube, having as faces 6 bounded pieces of planes (that is, 6 equal squares situated in 6 different planes).
The analogue of the cube in dimension n = 2 is the square (not the rectangle), having as "faces" 4 equal segments situated on 4 different lines.
The analogue of the cube in any other n-dimensional space R^n with n > 3 is called a hypercube.
The hypercube in 4 dimensions has equal cubes as faces, and each such face is situated in a 3-dimensional space R^3.
The hypercube in 5 dimensions has equal hypercubes from R^4 as faces, and so on.
No contradiction, all clear!
The analogue holds for the sphere: sphere in 3 dimensions, circle in 2 dimensions, hypersphere in every dimension n > 3. Here the equations defining all such mathematical objects are obviously similar.
Hypercubes and hyperspheres have hypervolumes!
So, to study string theory efficiently and seriously you need more and more advanced mathematics!
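The face structure described above can even be tabulated: an n-cube has C(n, k) · 2^(n−k) faces of dimension k. A small Python sketch (my own illustration, added to the answer above):

```python
from math import comb

def k_faces(n, k):
    """Number of k-dimensional faces of an n-cube: C(n, k) * 2**(n - k)."""
    return comb(n, k) * 2 ** (n - k)

# Square (n=2): 4 vertices, 4 edges; cube (n=3): 8 vertices, 12 edges, 6 faces;
# tesseract (n=4): 16 vertices, 32 edges, 24 squares, 8 cubic cells.
square = [k_faces(2, k) for k in range(2)]     # [4, 4]
cube = [k_faces(3, k) for k in range(3)]       # [8, 12, 6]
tesseract = [k_faces(4, k) for k in range(4)]  # [16, 32, 24, 8]
```

So the components of the 4-dimensional "rectangle" have perfectly good names (vertices, edges, square faces, cubic cells), and the counting formula works uniformly in every dimension.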
  • asked a question related to Mathematics
Question
4 answers
By modifying the original Feistel structure, would it be feasible to design a lightweight and robust encryption algorithm, somehow changing the structure's original flow and adding some mathematical functions? I welcome everyone's views.
Relevant answer
Answer
Yes, it is indeed feasible to design a lightweight algorithm based on the Feistel structure. The Feistel network is a popular symmetric structure used in many modern cryptographic algorithms, such as DES (Data Encryption Standard). The design of a lightweight Feistel-based algorithm can effectively balance security and efficiency, making it suitable for environments with constrained resources, such as IoT devices and resource-limited systems.
Key Considerations for Designing a Lightweight Feistel-Based Algorithm
Feistel Structure Basics:
The Feistel structure divides the data into two halves and applies a series of rounds where the right half is modified using a function (often called the round function) combined with a subkey derived from the main key.
The left and right halves are then swapped after each round, employing the same round function iteratively over several rounds.
Lightweight Design Goals:
Reduced Resource Usage: The algorithm should minimize memory and processing requirements, which are crucial in lightweight applications.
Efficient Implementation: It should have efficient implementations in hardware (e.g., FPGAs, ASICs) as well as software (e.g., microcontrollers).
Security: While optimizing for lightweight design, the algorithm must maintain a sufficient level of security against common attacks (such as differential and linear cryptanalysis).
Steps in Designing a Lightweight Feistel Algorithm
Key Design Choices:
Number of Rounds: Determine the optimal number of rounds needed to achieve desired security without excessive computational cost. For lightweight applications, 4 to 8 rounds may be sufficient.
Block Size: Choose a block size that is suitable for the intended application. Smaller block sizes (e.g., 64 or 128 bits) may be appropriate for constrained environments.
Key Size: Develop a flexible key size that provides adequate security while keeping the implementation lightweight. A key size between 80 and 128 bits is commonly used for lightweight designs.
Round Function Design:
Simplicity and Efficiency: The round function should be computationally efficient, possibly utilizing modular arithmetic or simple logical operations (AND, OR, XOR) to enhance speed and reduce footprint.
Subkey Generation: Efficient and secure key scheduling is essential to generate round keys from the primary key, ensuring that each round has a unique key.
Attack Resistance:
Differential and Linear Cryptanalysis: Analyze the design for vulnerabilities to these forms of attacks. The choice of S-boxes in the round function can significantly enhance resistance.
Avalanche Effect: Ensure that a small change in the input or the key results in a significant change in the output.
Performance Optimization:
Implementation Flexibility: Design the algorithm to allow for easy adaptation for different platforms (hardware vs. software) to maximize performance.
Minimalistic Approach: Reduce unnecessary complexity in the algorithm to lower resource consumption, focusing only on essential components.
Example Lightweight Feistel Structure
While developing a specific algorithm, you could consider a structure similar to the following:
function LightweightFeistelEncrypt(plaintext, key):
    Split plaintext into left (L0) and right (R0)
    For i from 1 to n (number of rounds):
        Ri = Li−1 XOR F(Ri−1, Ki)
        Li = Ri−1
    return (Ln, Rn)

function F(input, k):
    // Simple round function using lightweight operations
    // Example could include small S-boxes and XOR operations
    return output
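As a concrete (toy, insecure) illustration of that structure, here is a minimal balanced Feistel cipher in Python with a simple add-and-rotate round function and a naive key schedule; it is a sketch of the Feistel idea only, not a vetted lightweight design:

```python
MASK = 0xFFFFFFFF  # work on 32-bit halves (64-bit block)

def F(x, k):
    """Toy round function: add the round key, then rotate left by 5 bits.
    A real design would use S-boxes and stronger mixing."""
    x = (x + k) & MASK
    return ((x << 5) | (x >> 27)) & MASK

def round_keys(key, rounds=8):
    """Naive key schedule (illustrative only): reuse bytes of the main key."""
    return [(key >> (8 * (i % 4))) & MASK for i in range(rounds)]

def encrypt(left, right, key, rounds=8):
    for k in round_keys(key, rounds):
        left, right = right, left ^ F(right, k)
    return left, right

def decrypt(left, right, key, rounds=8):
    # Invert each round in reverse key order; note F itself need not be
    # invertible -- this is the defining property of a Feistel network.
    for k in reversed(round_keys(key, rounds)):
        left, right = right ^ F(left, k), left
    return left, right

l, r = encrypt(0x01234567, 0x89ABCDEF, key=0xDEADBEEFCAFEBABE)
# decrypt(l, r, key=...) returns the original (0x01234567, 0x89ABCDEF)
```

Because decryption reuses F rather than its inverse, F can be made as cheap as the target platform requires, which is exactly why the Feistel structure suits lightweight designs.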
  • asked a question related to Mathematics
Question
1 answer
Homomorphic encryption is a type of encryption that lets you perform mathematical operations on encrypted data without decrypting it first. This means that the raw data remains encrypted while it is being processed, analyzed, and run through algorithms.
Relevant answer
Answer
Homomorphic encryption (HE) is a form of encryption that allows computations to be performed on ciphertext, producing an encrypted result that, when decrypted, matches the result of operations performed on the corresponding plaintext. This property makes HE particularly appealing in the context of forensic investigations, where data privacy and confidentiality are paramount.
Overview of Latest Research on Homomorphic Encryption in Forensics:
  1. Privacy-Preserving Data Processing: Recent research focuses on using HE to enable privacy-preserving analysis of forensic data. This involves allowing investigators to conduct queries or calculations on encrypted data without needing to decrypt it, thus protecting sensitive information during the analysis process.
  2. Secure Data Sharing: HE facilitates secure data sharing among different forensic entities (e.g., law enforcement, legal teams, and forensic analysts) without exposing the underlying sensitive data. Research is exploring protocols that leverage HE for collaborative forensic investigations while maintaining data integrity and confidentiality.
  3. Digital Evidence Management: HE is being investigated for use in managing digital evidence, particularly in cases involving cloud storage or shared environments. Ensuring that digital evidence can be processed while remaining encrypted is crucial for maintaining chain-of-custody and preventing tampering.
  4. Forensic Data Analysis: Some studies focus on applying HE to specific forensic analysis tasks, such as statistical analysis of digital evidence. This allows investigators to perform necessary computations (e.g., data aggregation, anomaly detection) on encrypted data sets, thereby increasing the privacy of the involved parties.
  5. Adoption of Machine Learning: Research is being conducted on leveraging HE in conjunction with machine learning models for forensic analysis. For instance, HE can allow training and inference on sensitive datasets without exposing the actual data, which is crucial when dealing with personal information, such as in cases involving digital crimes or cyberbullying.
  6. Performance Optimization: A significant focus of recent research has been on improving the efficiency and practical applicability of HE in forensic scenarios. This includes optimizing encryption schemes, reducing computation time, and minimizing resource consumption, making HE feasible for real-time forensic applications.
  7. Non-Interactive Proofs and HE: Research is also exploring the use of non-interactive proofs combined with HE to provide verification of computations while keeping the data encrypted. This approach can ensure that the results obtained from forensic computations are valid without revealing the underlying data.
  8. Legal and Regulatory Considerations: There is growing attention to the legal implications of using HE in forensics, including compliance with data protection regulations such as GDPR. Research is exploring how HE can align forensic practices with legal requirements for data security and privacy.
Challenges and Future Directions:
  • Performance Limitations: While HE offers strong privacy guarantees, it has inherent computational overheads. Future research will need to focus on advancing cryptographic techniques to enhance performance without compromising security.
  • Standardization Issues: Developing standards for using HE in forensic applications is vital. Researchers are working on frameworks that can guide the integration of HE into existing forensic practices.
  • Interoperability: Ensuring that HE-based forensic tools can work seamlessly with existing forensic software and tools remains a challenge that needs to be addressed.
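The defining property described above (computing on ciphertexts so that decryption yields the plaintext result) can be sketched with a toy Paillier scheme, which is additively homomorphic. This is an illustrative sketch only: the tiny hard-coded primes are completely insecure, and real deployments use 2048-bit moduli and a vetted library.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic). Parameters are
# deliberately tiny and NOT secure -- for illustration only.
p, q = 61, 53                      # toy primes
n = p * q                          # public modulus
n2 = n * n
g = n + 1                          # standard generator choice
lam = math.lcm(p - 1, q - 1)       # private key
mu = pow(lam, -1, n)               # precomputed decryption constant

def encrypt(m: int) -> int:
    """Encrypt m (0 <= m < n) with fresh randomness r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Recover m via m = L(c^lam mod n^2) * mu mod n, L(x) = (x-1)//n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts.
a, b = 12, 30
c_sum = (encrypt(a) * encrypt(b)) % n2
assert decrypt(c_sum) == (a + b) % n   # 42
```

This additive property is exactly what enables the privacy-preserving aggregation mentioned in point 4: an analyst can sum encrypted counts contributed by different parties without ever seeing the individual values.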
  • asked a question related to Mathematics
Question
10 answers
Yes, it is.
Relevant answer
Answer
The idea of this Q&A work is to replace current mathematics and theoretical physics, which live and operate in an R^4 spatial manifold, with counterparts operating in a modern 4D x-t space.
This would make it possible to resolve almost all classical and quantum physical situations in a simple manner and also to introduce new general rules in science.
We first define the discrete 4D unit space from the numerical statistical theory of matrix chains B of the Cairo techniques and compare it to the current 4D unit spaces of Einstein and Markov.
Next, you need to introduce what is called the modern Laplacian theorem in 4D unit space.
Finally, you explain the unexpected and striking relationship between the speed of light c=3E8 and the thermal diffusivity of metals when both live in 4D unit space.
The accuracy and precision of the numerical results show beyond doubt that the proposed 4D unit space is the one in which mother nature operates. This space forms the basis of a unified field theory of all types of energy density, whereas the classical manifold space R^4 is inferior and incomplete.
In conclusion, we recommend the proposed 4D unit space which constitutes a real breakthrough in the search for a new 4D numerical statistical theory to replace the classical incomplete R^4 mathematical space.
Note that throughout this work, the author uses his own double-precision algorithm. In other words, no Python or MATLAB library is required.
  • asked a question related to Mathematics
Question
14 answers
A minion is a low-level official protecting a bureaucracy from challengers.
A Kuhnian minion (after Thomas Kuhn's Structure of Scientific Revolutions) is a low-power scientist who dismisses any challenge to the existing paradigm.
A paradigm is a truth structure that partitions scientific statements as true or false relative to the paradigm.
Recently, I posted a question on Physics Stack Exchange that serves as a summary of the elastic string paradigm. My question was: “Is it possible there can be a non-Fourier model of string vibration? Is there an exact solution?”
To explain, I asked if they knew the Hamiltonian equation for string vibration. They did not agree that it must exist. I pointed out that there are problems with the elastic model of vibration: it has two degrees of freedom, and its unsolvable equations of motion can only be approximated by numerical methods. I said elasticity makes superposition the fourth Newtonian law. How can a string vibrate in an infinite number of modes without violating energy conservation?
Here are some comments I got in response:
“What does string is not Fourier mean? – Qmechanic
“ ‘String modes cannot superimpose!’ Yet, empirically, they do.” – John Doty
“ A string has an infinite number of degrees of freedom, since it can be modeled as a continuous medium. If you manage to force only the first harmonic, the dynamics of the system only involve the first harmonic and it’s a standing wave: this solution does depend on time, being (time dependence in the amplitude of the sine). No 4th Newton’s law. I didn’t get the question about Hamilton equation.
“What do you mean with ‘archaic model’? Can I ask you what’s your background that makes you do this sentence? Physics, Math, Engineering? You postulate nothing here. You have continuum mechanics here. You have PDEs under the assumption of continuum only. You have exact solutions in simple problems, you have numerical methods approximating and solving exact equations. And trust me: this is how the branch of physics used in many engineering fields, from mechanical, to civil, to aerospace engineering.” – basics
I want to show the rigid versus elastic dichotomy goes back to the calculus wars. Quoting here from Euler and Modern Science, published by the Mathematical Association of America:
"We now turn to the most famous disagreement between Euler and d’Alembert … over the particular problem of the theory of elasticity concerning a string whose transverse vibrations are expressed through second-order partial differential equations of a hyperbolic type later called the wave equation. The problem had long been of interest to mathematicians. The first approach worthy of note was proposed by B. Taylor, … A decisive step forward was made by d’Alembert in … the differential equation for the vibrations, its general solution in the form of two “arbitrary functions” arrived at by means original with d’Alembert, and a method of determining these functions from any prescribed initial and boundary conditions.”
[Editorial Note: The boundary conditions were taken to be the string endpoints. The use of the word hyperbolic is, I believe, a clear reference to Taylor’s string. A string with constant curvature can only have one mathematical form, the cycloid, which is defined by the hyperbolic cosh x function. The cosh x function is the only class of solutions allowed if the string cannot elongate. The Taylor/Euler-d’Alembert dispute was over whether the string is trigonometric or hyperbolic.]
Continuing the quote from Euler and Modern Science:
"The most crucial issue dividing d’Alembert and Euler in connection with the vibrating string problem was the compass of the class of functions admissible as solutions of the wave equation, and the boundary problems of mathematical physics generally, D’Alembert regarded it as essential that the admissible initial conditions obey stringent restrictions or, more explicitly, that the functions giving the initial shape and speed of the string should over the whole length of the string be representable by a single analytical expression … and furthermore be twice continuously differentiable (in our terminology). He considered the method invalid otherwise.
"However, Euler was of a different opinion … maintaining that for the purposes of physics it is essential to relax these restrictions: the class of admissible functions or, equivalently, curves should include any curve that one might imagine traced out by a “free motion of the hand”…Although in such cases the analytic method is inapplicable, Euler proposed a geometric construction for obtain the shape of the string at any instant. …
Bernoulli proposed finding a solution by the method of superposition of simple trigonometric functions, i.e. using trigonometric series or, as we would now say, Fourier series. Although Daniel Bernoulli’s idea was extremely fruitful (in other hands), he proved unable to develop it further.
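Bernoulli's superposition idea is straightforward to check numerically: the wave equation is linear, so any sum of standing-wave modes is again a solution. A minimal sketch (the mode amplitudes, the wave speed c = 1, and the sample point are arbitrary choices for illustration):

```python
import math

c = 1.0  # wave speed (arbitrary choice)

def u(x, t):
    # superposition of the first two standing-wave modes of a unit string
    return (math.sin(math.pi * x) * math.cos(math.pi * c * t)
            + 0.5 * math.sin(2 * math.pi * x) * math.cos(2 * math.pi * c * t))

h = 1e-4
def d2(f, z):
    # central second difference, approximates f''(z)
    return (f(z + h) - 2 * f(z) + f(z - h)) / h**2

# check u_tt = c^2 * u_xx at an interior point
x0, t0 = 0.3, 0.7
utt = d2(lambda t: u(x0, t), t0)
uxx = d2(lambda x: u(x, t0), x0)
assert abs(utt - c**2 * uxx) < 1e-3
```

The check passes because each mode satisfies the wave equation individually and the equation is linear; nothing about the superposition violates energy conservation, since the total energy is just the sum of the (constant) modal energies.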
Another example is Euler's manifold of musical key and pitch values as a torus. To be fair, Euler did not assert the torus but only drew a network showing that key and pitch can move independently. This was before Möbius's classification theorem.
My point is that it should be clear the musical key and pitch do not have different centers of harmonic motion. But in my experience, the minions will not allow Euler to be challenged by someone like me. Never mind that Euler's theory of music was crackpot!
Relevant answer
Answer
I have not got the chance to go through each step/detail but it looks like you may be right.
  • asked a question related to Mathematics
Question
6 answers
Given:
In an isosceles triangle, the lengths of the two equal sides are each 1, and the base of the triangle is m.
A circle is circumscribed around the triangle.
Find the chord of the circle that intersects the two equal sides of the triangle and is divided into three equal segments by the points of intersection.
Relevant answer
Answer
Alex Ravsky Sorry for not reading the last part of your comment carefully! Yes, in fact many triangles with side lengths 1, 1, m do not admit the second solution! To have 2 solutions, the condition 2-2cosA<=1/2, i.e. cosA>=3/4, is necessary! Very restrictive, and cosA>=3/4 is a strange limitation for angle A.
Liudmyla Hetmanenko A nice and interesting problem!
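For readers who want to sanity-check the symmetric solution numerically, here is a sketch. The coordinate placement (apex A at the origin, base BC horizontal below it) is an assumption of the sketch, not part of the problem: the circumcenter then sits at (0, -k) with k = 1/(2h), and a chord at height y = -d meets the equal sides at x = ±md/(2h), so trisection reduces to a one-variable root-finding problem.

```python
import math

def trisecting_chord(m):
    """Find the base-parallel chord that sides AB, AC cut into three equal parts."""
    h = math.sqrt(1 - m * m / 4)    # triangle height (equal sides have length 1)
    k = 1 / (2 * h)                 # circumcenter (0, -k); circumradius R = k
    R = k

    # trisection condition: half-chord sqrt(R^2 - (k-d)^2) equals 3*m*d/(2h)
    def f(d):
        return R * R - (k - d) ** 2 - (3 * m * d / (2 * h)) ** 2

    lo, hi = 1e-9, h                # f > 0 near the apex, f < 0 at the base
    for _ in range(200):            # bisection
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    d = (lo + hi) / 2
    L = math.sqrt(R * R - (k - d) ** 2)
    return 2 * L, m * d / h         # full chord length, middle-segment length

chord, middle = trisecting_chord(1.0)   # equilateral case
assert abs(chord - 3 * middle) < 1e-9   # chord is split into three equal parts
```

This only finds the symmetric chord; as noted in the answer above, a second (non-symmetric) solution exists only under the restrictive condition cos A >= 3/4.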
  • asked a question related to Mathematics
Question
2 answers
What is the mathematical difference between AI and AC?
Standardization of AI and AC based on the DIKWP model (beginner's edition)
Relevant answer
Answer
Thanks for asking such a thought-provoking question. To my knowledge, the mathematical distinction between AI and AC is best described in terms of complexity and the depth of the cognitive function emulated. AI by and large remains confined to the data-information-knowledge (DIK) levels, operating through computations on data patterns to arrive at decisions. According to the DIKWP model, AC adds the wisdom and purpose layers, thereby emulating higher-order cognitive processes somewhat similar to human consciousness. The DIKWP model insists on purposeful transformation of data, making AC more sophisticated and adaptive than AI (MDPI).
  • MDPI Applied Sciences - Exploring Artificial Consciousness through DIKWP Model
  • asked a question related to Mathematics
Question
3 answers
I am interested in the existence of intelligent tutoring systems for teaching physics and mathematics in secondary schools or artificial intelligence tools that can be used in the classroom for student-teacher collaboration in these subjects, preferably with free access.
I am also interested in any relevant studies/research or information on the above topic.
Relevant answer
Answer
Dear Aikaterini Bounou, there are many AI tools available, most of them free of charge.
"How can AI help you improve mathematics learning and teaching? In this post, we will dive into various use cases of AI for learning math and some of the popular AI tools for learning and teaching math. Mathematics can be overwhelming for some, but with AI's personalization powers, it can become an accessible and fun topic for all. Learners of any age can become better at math with AI's help..."
Also, have a look in this article where you will find more info on AI tools for science:
  • asked a question related to Mathematics
Question
1 answer
National Achievement Tests are given to learners, for example in Grade 10, in subjects such as Science, Mathematics, and English, to assess how much students have learned in specific disciplines.
Relevant answer
Answer
First, recognize that the "tests" are testing acceptance of the academic indoctrination of the instruction. However, as this indoctrination differs from the students' real-world experience, interest and the test results will fade. The more academic the instruction, the less interest for all.
Make high school applicable to student's lives.
  • asked a question related to Mathematics
Question
3 answers
Let me share a quote from my own essay:
"Dynamic flows on a seven-dimensional sphere that do not coincide with the globally minimal vector field, but remain locally minimal vector fields of matter velocities, we interpret as physical fields and particles. At the same time, if in the space of the evolving 3-sphere $\mathbb{R}^{4}$ the vector field forms singularities (compact inertial manifolds in which flows are closed), then they are associated with fermions, and if flows are closed only in the dual space $\mathbb{R}^{4}$ with an inverse metric, then the singularities are associated with bosons. For example, a photon is a limit cycle (circle) of a dual space, which in Minkowski space translationally moves along an isotropic straight line lying in an arbitrary plane $(z,t)$, and rotates in the planes $(x,t)$, $(y,t)$." (p. 12 MATHEMATICAL NOTES ON THE NATURE OF THINGS)
Relevant answer
Answer
Unlike the Kaluza theory, where the fifth dimension serves as a source of electromagnetic vector potential, we have a ring (circle) in additional dimensions, the movement of which in Minkowski space is equivalent to the movement of a polarized photon. However, this does not mean that our dynamical system is unable to cope, like Kaluza's theory, with the assignment of a vector potential.
  • asked a question related to Mathematics
Question
3 answers
Hi all
I was wondering if anyone knew of a valid and reliable assessment of task-based engagement that could be used to compare student engagement across different types of tasks in mathematics?
Thanks,
James
Relevant answer
Answer
Hello James,
Here are a few suggestions.
How about Motivated Strategies for Learning Questionnaire (MSLQ) by Pintrich et al. (1991). It includes scales for cognitive engagement. Sample items: “When I study for this class, I try to put together the information from class and from the book” and “I often find that I have to reread material several times to understand it.” The scale has been adapted for different subjects including mathematics.
There is Engagement vs. Disaffection with Learning (EvsD) by Skinner et al. (2009). It measures both behavioural and emotional engagement, along with disaffection (i.e., withdrawal and disengagement). Example items: Engagement: "When we work on something in class, I get involved."; Disaffection: "When I’m in class, I think about other things."
The Academic Engagement Scale by Reeve & Tseng (2011) assesses four dimensions of student engagement: behavioural, emotional, cognitive, and agentic. Example items: Behavioural: "I try hard to do well in this class."; Emotional: "I enjoy learning new things in this class."; Cognitive: "When I study, I try to understand the material better."; Agentic: "I ask the teacher questions when I don’t understand."
Classroom Engagement Inventory (CEI) by Wang et al. (2014). This inventory scale assesses different dimensions of student engagement, including cognitive, emotional, behavioural, and social engagement. Example items: Cognitive: "I try to connect what I am learning to things I have learned before."; Behavioural: "I pay attention in class."; Emotional: "I feel happy when I am in class." It’s a bit more tuned to the classroom setting.
A slightly different tack may be looking at how immersed they are in the task; sometimes referred to as “flow”. The Flow State Scale (FSS) by Jackson and Marsh (1996) measures student “flow”. Flow is associated with deep cognitive engagement and intrinsic motivation. Some example items: "I feel just the right amount of challenge." and “I am completely focused on the task at hand." While originally developed for sports and physical activities, the FSS has been adapted for educational settings and can be used to assess engagement in challenging mathematics tasks.
These are a bit general for learning tasks. I’m not sure, but you might be looking at comparing different approaches to teaching mathematics in a classroom. In which case you might prefer something more task-specific in the measure (?). How about some of these:
There is Mathematics Engagement Instrument (MEI) by Wang et al. (2016) which deals with different aspects of student engagement in mathematics tasks. It includes scales for behavioural, emotional, and cognitive engagement more tailored to mathematics activities.
Example items: Behavioral Engagement: "I try to understand the mathematics problems even when they are hard."; Emotional Engagement: "I feel happy when I solve a math problem."; Cognitive Engagement: "I put a lot of effort into learning the math material."
The Task Engagement Survey (TES) by Sinatra et al. (2015) really zeroes in on the specified task. It assesses how deeply students are involved in the task, how much effort they exert, and how interested they are in the task. Example items: "I found this task interesting."; "I worked hard to complete the task."; "I was fully engaged while working on this task." This might be useful to compare engagement across different activities or instructional strategies.
The Task Motivation and Engagement Scale (TMES) was developed by Martin et al. (2017) to assess students’ motivation and engagement in specific academic tasks. It includes items on effort, persistence, task absorption, and emotional engagement specific to a task. Example items: "I tried to do my best on this task."; "I kept working even when the task was difficult."; "I enjoyed working on this task." This scale can be used to compare student engagement across different mathematics tasks or teaching methods within the classroom.
In the same vein, the Task-Specific Engagement Measure (TSEM) by Scherer et al. (2016) is a scale developed to measure engagement specifically in learning tasks. Example items: "I felt interested in the task I was doing."; "I focused hard on solving the task."; "I felt absorbed in the task." The TSEM is particularly relevant for mathematics education, where different types of tasks (e.g., problem-solving, conceptual understanding, procedural fluency) require different levels of engagement.
I hope these are helpful to your research.
Kind regards,
Tim
  • asked a question related to Mathematics
Question
4 answers
In triangle ABC, the median BM_2 intersects the bisector AL_1 at point P.
The side BC is divided by the foot of the bisector AL_1 into segments CL_1=m and BL_1=n.
Determine the ratio of the segments AP to PL_1.
Relevant answer
Answer
Dear colleagues,
I am grateful for your interest in my geometric problem. Both of your solutions are correct and they are excellent!
I also want to express my heartfelt appreciation for your words of support for my country during these difficult times.
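For what it's worth, mass-point reasoning suggests AP : PL_1 = (m + n) : n (the angle-bisector theorem gives AB/AC = n/m, and when m = n this reduces to the familiar 2 : 1 centroid ratio). A numerical sketch on an arbitrary triangle, where the coordinates are illustrative choices, not part of the problem:

```python
import math
import random

random.seed(1)
A = (random.uniform(-1, 1), random.uniform(1, 2))  # arbitrary apex
B = (-1.0, 0.0)
C = (2.0, 0.0)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

c, b = dist(A, B), dist(A, C)                # AB, AC
# angle-bisector theorem: foot L1 divides BC with BL1/L1C = AB/AC
t = c / (b + c)
L1 = (B[0] + t * (C[0] - B[0]), B[1] + t * (C[1] - B[1]))
M2 = ((A[0] + C[0]) / 2, (A[1] + C[1]) / 2)  # midpoint of AC

# intersect line AL1 with line BM2: solve A + s*d1 = B + u*d2 (2x2 system)
d1 = (L1[0] - A[0], L1[1] - A[1])
d2 = (M2[0] - B[0], M2[1] - B[1])
det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
rhs = (B[0] - A[0], B[1] - A[1])
s = (rhs[0] * (-d2[1]) - (-d2[0]) * rhs[1]) / det
P = (A[0] + s * d1[0], A[1] + s * d1[1])

m, n = dist(C, L1), dist(B, L1)              # CL1 = m, BL1 = n
ratio = dist(A, P) / dist(P, L1)
assert abs(ratio - (m + n) / n) < 1e-9       # AP : PL1 = (m + n) : n
```

Changing the seed or the fixed vertices leaves the assertion passing, which is consistent with the ratio depending only on m and n.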
  • asked a question related to Mathematics
Question
1 answer
Dear research community members, I would like to post a preprint of my article in Mathematics but need an endorsement. If anyone can do this for me, I will greatly appreciate it. Secondly, I will send you my new paper with explanations.
Endorsement Code: SP84WZ
Thanks a lot
P.S. I don't know the procedure. The moderator sent me a link:
Ruslan Pozinkevych should forward this email to someone who's registered as an endorser for the cs.IT (Information Theory) subject class of arXiv
or alternatively visit
and enter SP84WZ
Once again, thank you, and apologies for bothering you.
Relevant answer
Answer
Right there on arXiv, among the authors of papers, find those who can give an endorsement. At the bottom of every paper it says: "Which authors of this paper are endorsers?" Click it, look for endorsers, and send them your article.
  • asked a question related to Mathematics
Question
3 answers
How do we improve the inflation prediction using mathematics?
Relevant answer
Answer
To assess inflation dynamics mathematically, economists use models like the Phillips curve to relate inflation to unemployment, monetary-policy rules like the Taylor rule to link interest rates with inflation and output, DSGE models to capture the effects of economic shocks, and time-series methods like ARIMA or VAR to analyze historical data and forecast future inflation.
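The Taylor rule mentioned above can be written as i = r* + π + 0.5(π − π*) + 0.5·y, where y is the output gap. A minimal sketch; the 0.5 coefficients are the textbook Taylor (1993) values, and the 2% values for r* and π* are illustrative assumptions:

```python
def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Nominal policy rate (percent) suggested by the classic Taylor rule.

    r_star:  assumed equilibrium real interest rate (illustrative 2%)
    pi_star: assumed inflation target (illustrative 2%)
    """
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# inflation 3%, output gap 1%: 2 + 3 + 0.5*(3-2) + 0.5*1 = 6.0
assert taylor_rate(3.0, 1.0) == 6.0
```

In practice such rule-based rates are compared against time-series forecasts (e.g. ARIMA or VAR) rather than used in isolation.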
  • asked a question related to Mathematics
Question
2 answers
Are there any research projects and grants in Mathematics for individuals, without involvement of an employer?
Relevant answer
Answer
@Pratheepa Cs
I am not able to grasp what you want to convey.
  • asked a question related to Mathematics
Question
1 answer
Is there a galactic rotation anomaly? Is it possible to find out the speed and time of the galactic rotation anomaly?
Abstract: Orbital speeds of stars, far from centre of a galaxy, are found roughly constant, instead of reductions predicted by current gravitational theories (applied on galactic and cosmological scales). This is called the anomalous rotation of galaxies. This article intends to show that constant angular speeds of all macro bodies in a galaxy are natural phenomenon and there is no mystery about it.
Keywords: Galaxy, Stable galaxy, rotational anomaly.
A planetary system is a group of macro bodies, moving at certain linear speed in circular path around galactic centre. Central body of planetary system is by far the largest and controls mean linear speeds of all other members. Gravitational attractions between macro bodies of planetary system cause perturbations in their directions of motion, resulting in additional curvatures of their paths. When perturbed paths of smaller macro bodies are related to central body in assumed static state, we get apparent orbital paths of planetary bodies. They appear to revolve around static central body in elliptical/circular paths. Apparent orbital paths are unreal constructs about imaginary static state of central body. They are convenient to find relative positions of macro bodies in the system and to predict cyclic phenomena occurring annually. In reality, planetary bodies do not orbit around central body but they move in wavy paths about the central body. Central and planetary bodies move at a mean linear speed along their curved path around galactic centre.
Perturbations of orbital paths of macro bodies in planetary system are related directly to their matter-content and inverse square of distance from central body. Distance from central body has greater effect on magnitudes of perturbations. Hence, normally, paths of planetary bodies at greater distance from central body are perturbed by lesser magnitudes. Curvatures and thus angular speeds of their apparent orbits reduce as distance from central body increases. Since planetary system has no real spin motion, this is an imaginary phenomenon. However, many learned cosmologists seem to take spin motion of planetary system as real phenomenon and consider that members of all spinning groups of macro bodies should behave in similar manner, i.e. angular (spin) speed of members should reduce as their distance from centre of system increases.
Stable galaxy consists of many macro bodies revolving around its centre. This group can be considered as a spinning fluid macro body, rotating at a constant angular speed. Gravitational collapse initiates spin motion of galactic cloud and maintains constant spin speed of outer parts of stable galaxy. Centre part of galaxy, which is usually hidden, may or may not be spinning. We can observe only visible stars and their angular speeds about galactic centre. Linear motions of macro bodies, caused by gravitational attractions towards other macro bodies in the system, have two components each. One component, due to additional linear work invested in association with it, produces macro body’s linear motion, in a direction slightly deflected away from centre of circular path. Other component, towards centre of its circular path, is caused by additional angular work invested in association with it. This component produces angular motion of macro body.
All matter-particles in a fluid macro body, spinning at constant speed, have constant angular speeds. Consider a matter-particle at O, in figure 1, moving in circular path AOB. XX is the tangent to the circular path at O. The instantaneous linear speed of the matter-particle is represented by arrow OC, in magnitude and direction. It has two components: OD, along tangent XX, and DC, perpendicular to tangent XX and directed away from the centre of the circular path. The component DC represents the centrifugal action on the matter-particle due to its motion in the circular path. In order to maintain constant curvature of the path, the matter-particle has to have an instantaneous linear (centripetal) motion equal to CE toward the centre of the circular path. If the magnitudes and directions of the instantaneous motions are as shown in figure 1, the matter-particle maintains its motion along circular path AOB at constant angular speed. [Figure 1, showing point O on circular path AOB with tangent XX and velocity components OC, OD, DC, CE, is not reproduced here.]
Should the matter-particle increase its instantaneous linear speed for any reason, both components OD and DC would increase. Component OD tends to move matter-particle at greater linear speed along tangent XX. Outward component DC tends to move matter-particle away from centre of its circular path. The matter particle tends to increase radius of curvature of its path. This action is usually assigned to imaginary ‘centrifugal force’. In reality expansion of radius of curvature of path is caused by centrifugal component of linear motion. Reduction in centripetal action also produces similar results.
Should the matter-particle decrease its instantaneous linear speed for any reason, both components OD and DC would reduce. Component OD tends to move matter-particle at lesser linear speed along tangent XX. Reduction in outward component DC tends to move matter-particle towards centre of its circular path. The matter particle tends to reduce radius of curvature of its path. Reduction of radius of curvature of path is caused by reduction in centrifugal component of linear motion. Increase in centripetal action also produces similar results.
In other words, the matter-particle regulates its distance from the centre of its circular path so that its angular speed remains constant. This is the reason for the action of centrifuges. As the linear speeds of matter-particles increase, they move outwards, in an effort to keep their angular speeds constant.
Additional work, done for linear motion of a matter-particle and additional work, done for its angular motion are entirely separate and distinct. Additional work for linear motion of a matter-particle can produce only linear motion and additional work for angular motion can produce only angular motion. In the case, explained above, increased in linear speed of matter-particle is considered. That is, additional work invested in association with matter-particle is of linear nature. It can only increase its linear motion. As no additional work for angular motion is invested matter-particle cannot change its angular speed. Instead, matter-particle is compelled to move away from centre of its rotation, so that it can increase magnitude of linear motion while keeping magnitude of angular motion constant.
Similarly, an increase in centripetal effort invests the additional work required for angular motion of the matter-particle. The matter-particle tends to increase the magnitude of its angular motion. The curvature of its path increases by reducing its distance from the centre of the circular path. The matter-particle tends to move towards the centre of the circular path, so that it can increase its angular speed while keeping its linear speed constant.
Every macro body in a stable galaxy behaves in a manner similar to matter-particle, represented in figure 1. They tend to position themselves in the system, so that their linear and angular speeds match corresponding works associated with them. Macro bodies strive to maintain their angular speeds constant by keeping appropriate distance from centre of rotation. Macro bodies towards the central region may experience additional centripetal effort. They might increase their angular motion and move towards central point to merge with black hole present there. In due course of time, macro bodies on outer fringes move away from galaxy and destroy its stability.
In a galaxy, various macro bodies arrive at their relative position gradually by error and trial, during which their relative positions and linear and angular speeds are stabilized. Galaxy, as a whole, stabilizes only when constituent macro bodies have reached their steady relative positions and motions. In order to maintain stability, it is essential to maintain relative positions of all constituent macro bodies by having constant and equal angular speeds and linear speeds corresponding to their distances from galactic centre. Change in relative position or linear or angular speed of even one macro body is liable to destabilize the galaxy.
As and when superior 3D matter-particles at the fringe of galaxies attain linear speeds approaching speed of light, they break-down into primary 3D matter-particles and produce halo around equatorial region. Halos of neighbouring stable galaxies interact to prevent their translational movements and maintain steady state of universe.
Therefore constant angular speeds of constituent macro bodies of stable galaxies are their natural states. There are no mysteries or anomalies about them. This phenomenon is mystified by those who consider imaginary spin motions of planetary systems are real. Therefore, assumptions of dark matter, time dilation, modification of gravitational laws, etc and complicated mathematical exercises are irrational and unnecessary to prove non-existing rotation anomaly of galaxies.
Conclusion:
Galactic rotation anomaly is a non-existing phenomenon derived from imaginary spin motions of planetary systems about their central bodies in assumed static states. Constant angular speeds of stars in a galaxy confirm static state of galactic center (in space), rather than produce an anomaly.
Reference:
[1] Nainan K. Varghese, MATTER (Re-examined), http://www.matterdoc.info
Reply to this discussion
Chuck A Arize added a reply
5 hours ago
Yes, there is a galactic rotation anomaly observed as the discrepancy between the predicted and actual rotation speeds of galaxies. This anomaly, often attributed to dark matter, shows that the outer regions of galaxies rotate faster than expected. Measuring the speed and time of this rotation anomaly involves detailed observations of galactic rotation curves and modeling, which reveal the velocity profile and suggest the presence of unseen mass influencing the rotation.
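The discrepancy described in this reply can be illustrated with the textbook Keplerian expectation: for mass concentrated at the galactic centre, circular orbital speed should fall off as 1/√r, which is exactly what the roughly flat observed curves contradict. A rough sketch; the mass value (~10^11 solar masses treated as a central point mass) is an illustrative assumption:

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M = 2e41                 # kg, ~1e11 solar masses as a central point mass (assumption)
kpc = 3.086e19           # meters per kiloparsec

def keplerian_speed(r_m):
    """Circular-orbit speed (m/s) around a point mass M at radius r (meters)."""
    return math.sqrt(G * M / r_m)

v5 = keplerian_speed(5 * kpc)    # ~294 km/s at 5 kpc under these assumptions
v20 = keplerian_speed(20 * kpc)
assert v20 < v5                        # prediction: speed drops with radius
assert abs(v20 - v5 / 2) < 1e-6 * v5   # 4x the radius -> half the speed
```

Observed rotation curves instead stay near-constant out to large radii, which is the gap that dark-matter halos (or, in the views above, alternative explanations) are invoked to fill.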
Abdul Malek added a reply
3 hours ago
Abbas Kashani > "Is there a galactic rotation anomaly?"
There is a galactic rotation anomaly, but only according to officially accepted theories of gravity and the (Big Bang) theory of galaxy formation, inferred for a finite, closed universe created in the finite past.
But all these theories based on causality and theology are wrong! The dialectical and scientific view is that the universe is Infinite, Eternal and Ever-changing, mediated by dialectical chance and necessity. Gravity is a dialectical contradiction of the unity of the opposites of attraction and repulsion (due to inherent free motion of matter particles, vis viva). In short (human) time scale, new galaxies are seen to be formed through the dissipation and/or ejection of matter in the form of stars, star clusters or even a large part of the galaxy as quasars from the existing galaxies.
So, the observed high orbital velocities of the stars, star clusters, etc. at the periphery of the galaxies, and of the planets at the periphery of the planetary systems within the galaxies, are just natural phenomena and there is no anomaly!
"Ambartsumian, Arp and the Breeding Galaxies" : http://redshift.vif.com/JournalFiles/V12NO2PDF/V12N2MAL.pdf
KEPLER -NEWTON -LEIBNIZ -HEGEL Portentous and Conflicting Legacies in Theoretical Physics, Cosmology and in Ruling https://www.rajpub.com/index.php/jap/article/view/9106
"THE CONCEPTUAL DEFECT OF THE LAW OF UNIVERSAL GRAVITATION OR ‘FREE FALL’: A DIALECTICAL REASSESSMENT OF KEPLER’S LAWS":
Article THE CONCEPTUAL DEFECT OF THE LAW OF UNIVERSAL GRAVITATION OR...
Preston Guynn added a reply
4 days ago
Your discussion question statement is:
  • "Is there a galactic rotation anomaly? Is it possible to find out the speed and time of the galactic rotation anomaly? Orbital speeds of stars, far from centre of a galaxy, are found roughly constant, instead of reductions predicted by current gravitational theories (applied on galactic and cosmological scales). This is called the anomalous rotation of galaxies."
The limit of galactic rotation velocity is expected because rotation minus precession has a maximum velocity. Our solar system's relative rotation velocity with respect to the Milky Way galaxy is at this maximum, and as a fraction of speed of light the observed velocity can be designated vg/c, and is determined in the single page proof of the quantum of resistance:
Article The Physical Basis of the Quantum of Resistance is Relativis...
The detailed proofs are in:
Article Thomas Precession is the Basis for the Structure of Matter and Space
Note that the observed velocity is the difference between rotation and precession.
Dale Fulton added a reply
3 days ago
The galactic rotation "anomaly" (the flat rotation curve) is actually a misinterpretation of galactic rotation measurements when they are performed spectrographically (via redshift). Since the inception of spectrographic velocity measurements, astronomers have been misled into treating them as purely Doppler shift, whereas they contain many non-linear redshift components due to gases and other effects in each galaxy. Recent measurements of the Milky Way's rotation curve prove that this is the case, i.e. that spectrographic velocities are misleading, and that proper motion or parallax is the only way to measure those velocities accurately.
André Michaud added a reply
20 hours ago
There is no galactic rotation anomaly. Such a concept emerges from a lack of careful study of the historical discoveries about orbital structures in the universe, established since Tycho Brahe first collected his data on the planetary orbits of the solar system, from which Johannes Kepler abstracted his three laws, which were then mathematically confirmed by Newton.
The galactic rotation parameters are well known to those who have studied the true foundations of astrophysics. They are put in perspective in this article:
Article Inside Planets and Stars Masses
Abbas Kashani added a reply
44 seconds ago
Preston Guynn
Dale Fulton
André Michaud
Greetings and respect to the distinguished professors and astronomers. I am very grateful for your efforts, dear colleagues. Thank you.
Abbas Kashani
Mohaghegh Ardabili University
Relevant answer
Answer
Of course there is. It can be easily determined for spiral galaxies. Many have what is called a flat rotation curve, meaning the velocity of their stars is roughly the same across the galaxy, regardless of distance from the galactic center, totally contrary to mainstream gravity theory.
The anomaly is simply in the stellar velocities, i.e. distance traveled per unit time. The mainstream presently attributes this anomaly to what they assert to be unknown matter. But this is a very weak hypothesis, since it requires about 6 times more unseen matter than observable matter, and even then it is a very poor predictor of stellar velocities. There are many better predictors of stellar velocities than dark matter, which may be the worst of all predictors. Most alternatives are Modified Gravity proposals, which are usually much better predictors but have their own serious theoretical problems.
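The mismatch described above can be illustrated with a minimal sketch. The numbers here are illustrative assumptions, not measurements: a point-mass Keplerian model with an assumed 1e11 solar masses of visible matter, compared against a typical ~220 km/s flat curve.

```python
import math

G = 4.30091e-6  # Newton's constant in units of kpc * (km/s)^2 / M_sun

def keplerian_speed(r_kpc, enclosed_mass_msun):
    """Circular orbital speed v = sqrt(G*M/r) when the enclosed mass dominates."""
    return math.sqrt(G * enclosed_mass_msun / r_kpc)

M_visible = 1e11  # assumed visible mass in solar masses (illustrative only)
for r in (2, 5, 10, 20, 40):  # galactocentric radius in kpc
    v = keplerian_speed(r, M_visible)
    print(f"r = {r:2d} kpc: predicted v = {v:6.1f} km/s, observed ~220 km/s (flat)")
```

In the Keplerian picture, quadrupling the radius should halve the speed; observed flat curves show no such decline, which is exactly the discrepancy being debated in this thread.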
  • asked a question related to Mathematics
Question
1 answer
It will be better if there will be any mathematical relation or suggest some research articles?
Relevant answer
The total number of members in a vehicle platoon can significantly affect the overall cybersecurity of the platoon. As the number of vehicles in the platoon increases, the complexity of maintaining secure communication and ensuring overall cybersecurity also increases. Here are some key considerations:
### 1. **Increased Attack Surface**:
- **More Entry Points**: Each additional vehicle adds an entry point for potential cyberattacks. Attackers might exploit vulnerabilities in one vehicle's communication system, sensors, or control units, which could compromise the entire platoon.
- **Network Complexity**: With more vehicles, the communication network becomes more complex. Managing secure data transmission among a larger number of vehicles increases the risk of data breaches, man-in-the-middle attacks, or jamming.
### 2. **Inter-Vehicle Communication (V2V)**:
- **Bandwidth and Latency**: A larger platoon requires more data to be exchanged between vehicles. This can strain the network, potentially leading to delays in communication, which attackers might exploit. Securing the communication channel against eavesdropping or spoofing becomes more challenging as the number of vehicles increases.
- **Synchronization and Consistency**: Ensuring that all vehicles in a large platoon receive and process data simultaneously and consistently is difficult. Any lag or inconsistency can be exploited by attackers to create confusion or to insert malicious commands.
### 3. **Consensus and Decision-Making**:
- **Complexity in Consensus Algorithms**: In larger platoons, decision-making (such as adjusting speed or direction) must account for more vehicles, making the consensus algorithms more complex. This complexity can lead to vulnerabilities if the algorithms are not properly secured.
- **Potential for Misinformation**: In a large platoon, if an attacker compromises one vehicle, they could inject false information into the network, leading to incorrect decisions by other vehicles. This could result in collisions or unsafe maneuvers.
### 4. **Scalability of Security Protocols**:
- **Challenges in Encryption**: As the number of vehicles grows, the encryption and decryption processes for secure communication need to scale efficiently. More vehicles mean more keys and certificates to manage, which can increase the risk of key management failures or exploitation of weaker encryption protocols.
- **Authentication Overhead**: Ensuring that every vehicle in a large platoon is authenticated and trusted adds overhead to the system. The complexity of maintaining a robust authentication process can increase the risk of vulnerabilities.
### 5. **Impact of a Single Compromised Vehicle**:
- **Cascading Effects**: In larger platoons, the impact of a single compromised vehicle can be more severe. A hacked vehicle might send malicious commands or false data, affecting the behavior of other vehicles. The more vehicles there are, the more widespread the potential disruption.
- **Difficulty in Isolation**: Identifying and isolating a compromised vehicle in a large platoon is more challenging. The larger the platoon, the more difficult it becomes to quickly detect and mitigate the impact of a cybersecurity breach.
### 6. **Cooperative Adaptive Cruise Control (CACC)**:
- **Increased Dependency**: Larger platoons often rely on Cooperative Adaptive Cruise Control, where vehicles communicate to maintain speed and distance. If the cybersecurity of CACC is compromised, the entire platoon could be at risk. The complexity of securing CACC increases with the number of vehicles.
### 7. **Redundancy and Fault Tolerance**:
- **Need for Redundant Systems**: As the platoon grows, redundancy in communication and control systems becomes more important to ensure that the platoon can operate safely even if some vehicles are compromised. However, implementing and managing redundancy increases the system's complexity and potential points of failure.
- **Resilience to Attacks**: A larger platoon must be more resilient to attacks. Ensuring that the platoon can continue to function safely even if part of the system is under attack is critical, but more difficult to achieve as the number of vehicles increases.
### 8. **Human Factors and Response**:
- **Increased Coordination Challenges**: In larger platoons, coordinating human responses to cybersecurity threats becomes more difficult. Drivers or operators may have less control or understanding of the overall platoon’s state, complicating responses to potential attacks.
- **Training and Awareness**: Ensuring that all drivers or operators in a large platoon are adequately trained in cybersecurity practices is more challenging, leading to potential weaknesses in the human element of security.
In summary, as the number of vehicles in a platoon increases, so do the challenges in maintaining cybersecurity. The increased attack surface, complexity of communication, and need for robust authentication, encryption, and redundancy make it more difficult to secure the platoon against cyber threats. Addressing these challenges requires advanced security protocols, real-time monitoring, and resilient system designs.
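The key-management scaling mentioned under point 4 can be made concrete with a small sketch, under the simplifying assumption that the platoon uses either pairwise symmetric keys or per-vehicle certificates (the function names are illustrative):

```python
def pairwise_keys(n):
    """Unique symmetric keys needed if every vehicle pair shares its own key."""
    return n * (n - 1) // 2

def pki_credentials(n):
    """With a PKI, each vehicle needs only its own certificate/key pair."""
    return n

for n in (3, 5, 10, 50):
    print(f"{n:2d} vehicles: {pairwise_keys(n):4d} pairwise keys "
          f"vs {pki_credentials(n):2d} certificates")
```

The quadratic growth of the pairwise scheme is one concrete reason why key management becomes a liability as platoon size grows, while certificate-based schemes scale linearly but add the authentication overhead discussed above.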
  • asked a question related to Mathematics
Question
4 answers
During Kurt Gödel's lecture on the occasion of Albert Einstein's birthday in 1945, this question was already raised by John Wheeler. Gödel did not comment on it. Neither Einstein nor Gödel believed in quantum theory. Is there currently any reference or article that relates to this question? From today's scientific perspective, is there a relationship between Heisenberg's Uncertainty Principle and Gödel's Incompleteness Theorem, even though the two principles arise from different theoretical frameworks and serve different purposes within their respective domains? Please provide references.
Relevant answer
Answer
The uncertainty principle is closely linked to the Fourier properties of waves; incompleteness in arithmetic is not. There are, at best, a few loose analogies. For example, incompleteness only arises once addition and multiplication are combined in arithmetic (Presburger arithmetic, with addition alone, is complete and decidable), much as uncertainty only arises for pairs of non-commuting observables.
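The Fourier side of this can be checked numerically. This is a sketch under an assumed unit-width Gaussian wave packet (with hbar = 1): the product of the position and wavenumber spreads comes out at the minimum value 1/2.

```python
import numpy as np

# Gaussian wave packet psi(x) with position spread sigma_x = 1 (hbar = 1).
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 4)  # |psi|^2 = exp(-x^2/2), so sigma_x = 1

# Numerical Fourier transform psi_k(k) = integral of psi(x) * exp(-i k x) dx.
k = np.linspace(-5, 5, 1001)
psi_k = np.array([(psi * np.exp(-1j * kk * x)).sum() * dx for kk in k])

def spread(grid, amplitude):
    """Standard deviation of the normalized density |amplitude|^2 on the grid."""
    step = grid[1] - grid[0]
    p = np.abs(amplitude)**2
    p /= p.sum() * step
    mean = (grid * p).sum() * step
    return np.sqrt(((grid - mean)**2 * p).sum() * step)

product = spread(x, psi) * spread(k, psi_k)
print(product)  # ~0.5: the Fourier (uncertainty) lower bound on sigma_x * sigma_k
```

Narrower packets in x would come out wider in k, and vice versa, with the product never dropping below 1/2; nothing analogous exists for incompleteness, which is the contrast made above.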
  • asked a question related to Mathematics
Question
2 answers
Can we apply the theoretical computer science for proofs of theorems in Math?
Relevant answer
Answer
The pumping lemma is a valuable theoretical tool for understanding the limitations of finite automata and regular languages. It is not used for solving computational problems directly but is important for proving non-regularity and understanding the boundaries of regular languages.
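As a concrete illustration of how the lemma proves non-regularity, here is the standard textbook example (not tied to any specific problem in this thread):

```latex
% Standard pumping-lemma proof that L = { a^n b^n : n >= 0 } is not regular.
\textbf{Claim.}\ L=\{a^n b^n : n\ge 0\}\ \text{is not regular.}\\
\textbf{Proof sketch.}\ \text{Assume } L \text{ is regular with pumping length } p.
\text{ Choose } w=a^p b^p\in L.\\
\text{Any split } w=xyz \text{ with } |xy|\le p \text{ and } |y|\ge 1
\text{ forces } y=a^k \text{ for some } k\ge 1.\\
\text{Pumping once gives } xy^2z=a^{p+k}b^p\notin L,
\text{ contradicting the lemma, so } L \text{ is not regular.}
```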
  • asked a question related to Mathematics
Question
17 answers
We assume that the difference is huge and that it is not possible to compare the two spaces.
The R^4 mathematical space treats time as an external controller, and the space itself is static in its description when faced with the curl and divergence operators.
In the unit 4D x-t space, by contrast, time t is woven into the 3D geometric space as a dimensionless integer.
Here, the curl and divergence operators are just extensions of their original definitions in 3D geometric space.
Relevant answer
Answer
I suppose we are working within a continuum assumption in order to accept the concepts of continuity and differentiation. If you introduce time as a discrete variable, the same must be assumed for space, and no classical partial derivative should be introduced.
General Relativity uses a proper metric for time, but within the hypothesis of differentiable time.
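The point about discrete variables can be illustrated with a small sketch (not tied to any specific model): replacing d/dt by a forward difference introduces an O(h) discrepancy that vanishes only in the continuum limit.

```python
import numpy as np

# f(t) = t^2 on a discrete time grid: the forward difference
# (f(t+h) - f(t)) / h equals 2t + h, i.e. the continuum derivative 2t
# plus an O(h) discretization term that only disappears as h -> 0.
for h in (1.0, 0.1, 0.01):
    t = np.arange(0.0, 5.0 + h, h)
    f = t**2
    fd = (f[1:] - f[:-1]) / h
    err = np.max(np.abs(fd - 2 * t[:-1]))
    print(f"h = {h}: max |forward diff - df/dt| = {err:.6f}")
```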
  • asked a question related to Mathematics
Question
1 answer
How do primary school mathematics teachers design their assessments? What conceptions do they hold of the concept of assessment?
Relevant answer
Answer
They work from a pre-established assessment-design grid, which they try to follow in a formal way.
  • asked a question related to Mathematics
Question
6 answers
Would someone be kind enough to answer the question why there is a Pareto Principle related to Grades in Primary Education in Mathematics, but please without counter-questions such as who says or where it is written that it is so. If Primary Education in Mathematics lasts, for example, 8 years (the number varies between countries), then in the first three grades 80% have excellent and very good grades in Mathematics, and 20% good and bad. In the fourth grade, this ratio is approximately 50%:50%. However, from the fifth to the eighth grade, the relationship is reversed and only 20% have excellent and very good grades, and 80% good and bad. Sometimes, for some reason, the ratio is 70:30 instead of 80:20, but the relationship and regularity exists. I thank you in advance for your reply, as well as for your kindness and time.
Relevant answer
Answer
Reply for George McIlvaine from ChatGPT
The analogy of students as "producers of math grades" within the framework of the Law of Diminishing Returns (LODR) offers a valuable perspective on why mathematics performance might decline as complexity increases. To explore this hypothesis and determine if it provides a better explanation than the Pareto Principle (PP), we can consider the following steps and implications:
### Understanding LODR in Mathematics Education
The Law of Diminishing Returns suggests that as students continue to invest effort into learning mathematics, the incremental improvement in their grades diminishes over time due to increasing complexity and the natural limits of their cognitive abilities. This concept can be visualized through a smooth logarithmic decline in performance:
1. **Initial High Returns**: In the early stages of education, students quickly grasp basic concepts, leading to high grades.
2. **Decreasing Returns**: As the curriculum becomes more complex, each additional unit of effort results in smaller improvements in grades.
3. **Plateauing Performance**: Eventually, many students reach a point where additional effort does not significantly improve their grades, leading to a plateau or gradual decline.
### Testing the Hypotheses: PP vs. LODR
To determine whether the PP or LODR better describes the observed decline in math grades, we can analyze the grade distribution and performance trends over time:
1. **Data Collection**: Collect longitudinal data on math grades from a cohort of students across multiple years (e.g., from first grade to eighth grade).
2. **Analyze Decline Patterns**: Plot the grade distributions and overall performance trends to see if they follow a smooth logarithmic decline (indicative of LODR) or show a clear 80/20 distribution at different stages (indicative of PP).
3. **Statistical Modeling**: Apply statistical models to fit the data to both hypotheses. A logarithmic regression model can test the LODR, while a distribution analysis can test for PP patterns.
### Implications of LODR
If the LODR hypothesis holds true, it suggests several educational strategies to mitigate the diminishing returns and support students' continuous improvement:
1. **Adaptive Learning**: Implement adaptive learning technologies that tailor the complexity of problems to the individual student's level, providing a more personalized learning experience.
2. **Incremental Challenge**: Design the curriculum to introduce complexity in smaller, more manageable increments to avoid overwhelming students and to sustain their motivation and performance.
3. **Continuous Support**: Provide ongoing support and resources, such as tutoring and mentoring, to help students navigate the increasing complexity of the material.
4. **Emphasize Mastery**: Focus on mastery learning, where students are given the time and support needed to fully understand each concept before moving on to the next.
### Combining PP and LODR
It is also possible that both principles play a role in mathematics education:
- **Initial Stages (PP)**: In the early years, the Pareto Principle might dominate, with a small group of students quickly excelling while others keep up at a more basic level.
- **Later Stages (LODR)**: As complexity increases, the Law of Diminishing Returns might take over, leading to a gradual decline in performance as students reach their cognitive limits and the returns on additional effort decrease.
### Conclusion
By examining the patterns in student performance data and applying appropriate models, educators can better understand the underlying principles affecting math grades. This understanding can inform targeted interventions and curriculum design to help all students maximize their potential in mathematics. Ultimately, the goal is to create a learning environment where students can achieve a level of mathematical proficiency that allows them to contribute meaningfully to society.
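The model comparison suggested in the "Statistical Modeling" step can be sketched as follows. The cohort data here are synthetic, invented only to mimic the decline described in the question, so the resulting fit quality is illustrative, not evidence for either hypothesis.

```python
import numpy as np

# Synthetic (illustrative, not real) cohort data: share of "excellent/very good"
# math grades per school year, chosen to mimic the decline described above.
years = np.arange(1, 9)                      # grades 1..8
share = np.array([0.82, 0.80, 0.78, 0.52, 0.35, 0.28, 0.24, 0.21])

# LODR hypothesis: smooth logarithmic decline, share = a + b*ln(year) with b < 0.
b, a = np.polyfit(np.log(years), share, 1)
pred = a + b * np.log(years)
rss_log = ((share - pred)**2).sum()

# A crude "step" model in the spirit of the 80/20 description: a high plateau
# and a low plateau, with the break after grade 4.
step = np.where(years <= 4, share[:4].mean(), share[4:].mean())
rss_step = ((share - step)**2).sum()

print(f"log model RSS = {rss_log:.4f}, step model RSS = {rss_step:.4f}")
```

With real longitudinal data, the residual sums of squares (or an information criterion) would indicate which shape tracks the observed decline better.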
  • asked a question related to Mathematics
Question
3 answers
With the term “gravity”, we refer to the phenomenon of the gravitational interaction between material bodies.
How that phenomenon manifests itself in the case of the interaction of two mass particles at rest relative to an inertial reference frame (IRF) has, in the framework of classical physics, mathematically been described by Isaac Newton. And Oliver Heaviside, Oleg Jefimenko and others did the same in the case of bodies moving relative to an IRF. They described the effects of the kinematics of the gravitating objects assuming that the interaction between massive objects in space is possible through the mediation of “the gravitational field”.
In that context, the gravitational field is defined as a vector field having a field- and an induction-component (Eg and Bg) simultaneously created by their common sources: time-variable masses and mass flows. This vector-field (a mathematical construction) is an essential element of the mathematical description of the gravitational phenomena, and as such an element of our thinking about nature.
One cannot avoid the question of whether or not a physical entity is being described by the vector field (Eg, Bg) and what, if any, is the nature of that entity.
In the framework of “the theory of informatons”[1],[2],[3], the substance of the gravitational field, which in that context is considered a substantial element of nature, is identified as “gravitational information” or “g-information”, i.e. information carried by informatons. The term “informaton” refers to the constituent element of g-information. It is a massless and energyless granular entity rushing through space at the speed of light and carrying information about the position and the velocity of its source, a mass element of a material body.
References
[1] Acke, A. (2024) Newtons Law of Universal Gravitation Explained by the Theory of Informatons. https://doi.org/10.4236/jhepgc.2024.103056
[2] Acke, A. (2024) The Gravitational Interaction between Moving Mass Particles Explained by the Theory of Informatons. https://doi.org/10.4236/jhepgc.2024.103060
[3] Acke, A. (2024) The Maxwell-Heaviside Equations Explained by the Theory of Informatons. https://doi.org/10.4236/jhepgc.2024.103061
Relevant answer
Answer
Dear Preston,
Thanks for your quick response.
1. You wrote: ”The "informatons" in your theory are not necessary for the interaction of matter because space and time alone are sufficient and a theory of "informations" is therefore redundant or superfluous.” In the context of classical physics, time and space are considered elements of our thinking about nature. They do not participate in what is happening. They are conceived as constructions of our thinking that allow us to locate and date events in an objective manner (art. [1], §2). So they cannot be the cause of the physical phenomena.
2. You wrote: “If you look at my one page summary paper
I will look at your papers but I need more than a few hours to form my opinion.
Regards,
Antoine
  • asked a question related to Mathematics
Question
10 answers
In his article "More is different", Anderson said that new laws of physics "emerge" at each physical level and new properties appear [1]; Wheeler, when claiming that "law without law" and "order comes out of disorder", argued that chaotic phenomena " generate" different laws of physics [2][3]. What they mean is that the laws, parameters, and constants of the upper level of physics appear to be independent of the laws of physics of the lower level. Is this really the case? Are we ignoring the conditions that form the physical hierarchy, thus leading to this illusion?
Let's suppose a model. The conditions for the formation of new levels are at least two: first, the existence of low-level things A, B, ... and of interaction modes a, b, ...; second, the existence of a sufficient number of low-level things, N×A, M×B, .... Then, when they are brought together, there are many possible combinations, e.g. (AA), (AAA), (AAA)', ..., (AB), (BA), (AAB)', (BAB), .... This then escalates to [(AA)(AA)], [(AB)(ABA)], .... What this actually leads to is a change in the structure of things and a corresponding change in the way they interact. The result of the "change" is the appearance of new physical phenomena, new forces, and so on.
Physics is an exact match for math, so let's use math as an example of this phenomenon. Suppose we have a number of strings (threads) that can be regarded as underlying things. When a string is curled into a circle, L = 2πR, the law relating the length of the string to its radius, and the irrational constant π, appear; when two strings are in cascade, L = l₁ + l₂, the law that the total length equals the sum of the individual string lengths (principle of superposition) appears; when three strings form a right triangle, the Pythagorean law c² = a² + b², the law of the sum of the interior angles of a triangle, ∠A + ∠B + ∠C = 180°, and the irrational constant √2 appear; the transcendental number e appears when the string length L grows in a fixed proportion (continuous compound interest) [4]; when a string vibrates, sine waves (sin ωt) appear; when two strings are orthogonal, i appears; and when more kinds of vibrating strings are superimposed under specific conditions, more phenomena appear*.
All these "qualitative changes" do not seem to be caused by "quantitative changes", but more by the need to change the structure. As mathematical theorems emerge, so must the laws of physics, and it is impossible for physics to transcend mathematics. Therefore, as long as there is a change of structure in physics, i.e. the possibility of symmetry breaking [5]**, new "symmetries", new "laws", new "forces", new "constants", new "parameters" are almost inevitable.
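The continuous compound interest route to e mentioned above can be checked numerically:

```python
import math

# Continuous compounding: (1 + 1/n)^n approaches e as n grows.
for n in (1, 10, 1000, 10**6):
    print(n, (1 + 1/n)**n)

print(math.e)  # 2.718281828...
```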
Can we try to attribute all physical phenomena to emergence under hierarchical structural conditions? For example, the fine structure constant‡ and the Pauli exclusion principle emerge because of the formation of atomic structure; the "nuclear force" emerges because of the combination of protons and neutrons; the "strong interaction force" and "weak interaction force" appeared because of the structure of protons and neutrons. We should pay attention to the causal relationship here. Without structure, there would be no new phenomena; it is the more fundamental interactions that form structure, not these new "phenomena".
-----------------------------
Notes
* e.g. Blackbody radiation law, Bose statistics, Fermi statistics, etc.
** Should there be "spontaneous symmetry breaking"? Any change in symmetry should have a cause and a condition.
‡ What does it mean for physics if e appears everywhere and the individual mathematical constants appear so simply? They must likewise appear at the most fundamental level of physics.
-----------------------------
2024-07-27 Addendum
In addition to the structure and statistics generated by the interactions that result in new laws of physics, the expression of the different orders of differentials and integrals of such generating processes is another important way of making the laws of physics emerge.
Typical examples of such expressions can be seen in Ingo D. Mane's “On the Origin and Unification of Electromagnetism, Gravitation, and Quantum Mechanics”:
-----------------------------
References
[1] Anderson, P. W. (1972). More Is Different: Broken symmetry and the nature of the hierarchical structure of science. Science, 177(4047), 393-396. https://doi.org/10.1126/science.177.4047.393
[2] Wheeler, J. A. (1983). ‘‘On recognizing ‘law without law,’’’Oersted Medal Response at the joint APS–AAPT Meeting, New York, 25 January 1983. American Journal of Physics, 51(5), 398-404.
[3] Wheeler, J. A. (2018). Information, physics, quantum: The search for links. Feynman and computation, 309-336.
[4] Reichert, S. (2019). e is everywhere. Nature Physics, 15(9), 982-982. https://doi.org/10.1038/s41567-019-0655-9;
[5] Nambu, Y. (2009). Nobel Lecture: Spontaneous symmetry breaking in particle physics: A case of cross fertilization. Reviews of Modern Physics, 81(3), 1015.
Relevant answer
Answer
The way the laws of physics emerge
In addition to the structure and statistics generated by the interactions that result in new laws of physics, the expression of the different orders of differentials and integrals of such generating processes is another important way of making the laws of physics emerge.
One example of such expressions can be seen in Ingo D. Mane's paper: “On the Origin and Unification of Electromagnetism, Gravitation, and Quantum Mechanics“:
Preprint On the Origin and Unification of Electromagnetism, Gravitati...
  • asked a question related to Mathematics
Question
1 answer
Please advise me; I want to present my research work.
Relevant answer
Answer
I don't know exactly about that, but I know of some Telegram channels that post updates about conferences.
These are the links; you can refer to them.
  • asked a question related to Mathematics
Question
2 answers
How can I mathematically calculate the COD concentration if I add 1 g of glucose to 1 L of distilled water?
Relevant answer
Answer
Unfortunately, you cannot directly determine the measured COD (Chemical Oxygen Demand) concentration solely from the amount of glucose added to distilled water.
Here's why:
  • COD (Chemical Oxygen Demand) is a measure of the oxygen equivalent required to chemically oxidize the organic matter in a water sample, typically determined with a dichromate digestion; it is defined by the measurement procedure rather than by a formula. While glucose is readily oxidizable, the COD actually measured depends on factors not captured by its nominal concentration alone.
  • These factors include: Presence of other organic compounds: distilled water would not contain any other organic matter, but real-world water samples may contain additional organic material that influences COD. Degree of oxidation: not all organic matter is oxidized equally in the test; some complex structures resist the digestion, affecting the oxygen demand measured as COD.
However, there are alternative approaches to estimate COD:
  1. Conduct a Standard COD Test: This laboratory test is the most accurate method for determining COD concentration. It involves a specific digestion process and measurement of oxygen depletion.
  2. Literature Values: If your scenario involves a specific type of water with known characteristics (e.g., wastewater from a particular industry), you might find literature values for the COD/glucose ratio. However, applying these values to other situations requires caution due to potential variations.
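That said, for a pure glucose solution a theoretical oxygen demand (ThOD) can be computed from stoichiometry, and it is commonly used as an upper-bound reference value for the measured COD. A minimal sketch:

```python
# Theoretical oxygen demand (ThOD) of glucose from the stoichiometry
# C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O: 6 mol O2 fully oxidize 1 mol glucose.
MW_GLUCOSE = 180.16   # g/mol
MW_O2 = 32.00         # g/mol

def thod_mg_per_l(glucose_g: float, volume_l: float) -> float:
    """Upper-bound oxygen demand in mg O2 per litre of solution."""
    moles = glucose_g / MW_GLUCOSE
    o2_g = moles * 6 * MW_O2
    return o2_g / volume_l * 1000  # convert g/L to mg/L

print(thod_mg_per_l(1.0, 1.0))  # ~1066 mg O2/L for 1 g glucose in 1 L
```

A properly run dichromate COD test on a glucose standard typically recovers close to this ThOD value, which is why glucose (or potassium hydrogen phthalate) solutions are used to check COD test performance.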
  • asked a question related to Mathematics
Question
6 answers
I need someone to collaborate with me on an article, especially for an international journal.
Relevant answer
Answer
You have to wait for that. When you submit your work to an international journal, you will get the reviewers' response. They will analyze everything, and then you either revise your work or stand by it as is. If it is rejected, rework the abstract and title and submit to another journal. The process is challenging; you may have to wait at least 5-6 months for the reviewers' response.
If you want to do a PhD, you must first clear the NET exam and then find a university with proper accommodation, or you can go abroad. You need a proper guide for this.
Most importantly, your own contribution to your paper is a must. If you think it is valid, then you have to keep working on it; otherwise research is not for you. Negative comments from reviewers and rejection are common here. It totally depends on you.
  • asked a question related to Mathematics
Question
3 answers
The video "The Biggest Question Physicists Aren't Asking" (https://www.youtube.com/watch?v=iVyl8pGd44I) says the Michelson-Morley experiment of 1887 didn't disprove the existence of the aether. It reminds us that Hendrik Lorentz explained the experiment's negative results with his Lorentz ether theory (LET), which he initially developed in 1892 and 1895. The theory was improved in 1905 and 1906 by Henri Poincaré - and it was based on the aether theory of Augustin-Jean Fresnel, Maxwell's equations, and the electron theory of Rudolf Clausius. The video says Albert Einstein's alternative to the aether in Special Relativity is preferred by physicists because it makes fewer assumptions. Interestingly, Einstein wondered in later years about the possibility of the aether actually existing.
Just as the building blocks in chemistry are atoms and molecules, the building blocks physics could use to determine what the aether is might be binary digits and topology. There are two clues to this conclusion. First - in his book "A Brief History of Time", Stephen Hawking states that quantum spin tells us what particles actually look like. A particle of matter has spin 1/2 and must be completely rotated twice (720 degrees) to look the same. Added to this is - a Mobius strip must be travelled around twice in order to reach the starting point. The second clue was supplied by a paper Einstein published in 1919. That paper asks if gravitation and electromagnetism play a role in forming elementary particles.
The clues can produce the following hypothesis. The BITS or binary digits of one and zero code for a Mobius strip (similar to the way that topological figure can be viewed on the Internet). Then two Mobius figures are joined to create a Klein bottle: possibly, the doughnut-shaped figure-8 version of the Klein. The Klein bottle is immersed in the 3rd dimension, with binary digits filling in any holes or gaps to produce a technically flat and simply-connected result. This procedure is similar to computer art's Sky Replacement, where the 1s and 0s can make a smooth blue sky stretching from horizon to horizon. The 1s and 0s naturally exist on quantum scales, and imaginary numbers are essential in quantum mechanics. So the complex (real+imaginary) numbers of Wick rotation could be given a practical use by being a subroutine of the Mobius strips and becoming the 4th dimension of time which can't be separated from the dimensions of space. Trillions of Mobius strips could form a photon while trillions of more complicated figure-8 Klein bottles might form the more complicated graviton. Interaction of photons and gravitons (in a process called Vector-Tensor-Scalar [VTS] Geometry) creates the Mobius-based matter particles. In this scenario, the aether - the medium waves travel through - wouldn't be an abstract thing called space filled with alleged Virtual Particles which can't be detected and may not even exist. The medium would be a sea filled with photons and gravitons.
Another possibility is that there is no medium for the gravitational and electromagnetic waves, and that there truly is no aether. In that case, waves would not merely be described by mathematics but would literally be the result of maths. A 3D (three dimensional) cube can be regarded as a reality coded on a 2D surface - in other words, the cube is a projection from a square. The 2D square would be a nonlinear (angular) math object resulting from adding four lines, each one being separated from those adjoining it by 90 degrees. The cubic shape would result from adding, in one direction, multiple layers of the information in the square. Instead of programming a set of points to follow a straight line, they can be represented curvilinearly as a waveform and described by Fourier analysis, v=f(lambda), etc. Interacting particles can produce waves just as masses can curve spacetime to produce gravity and gravitational waves. VTS Geometry plausibly explains the inverse - it doesn't solely regard mass as the producer of gravity but also regards gravity, partnering with electromagnetism, as producer of mass. Inverting quantum mechanics, gravitational and electromagnetic waves create particles with mass (protons, neutrons, quarks, electrons, etc - even the Higgs boson). As Stephen Hawking and Leonard Mlodinow point out in their book "The Grand Design", ultimate reality does not have to be described with quarks though it certainly can be. In this paragraph, the idea of curved space is replaced by gravitational and electromagnetic waveforms travelling on curved trajectories.
Relevant answer
Answer
The Michelson-Morley experiment DID measure an effect, but it was in the wrong direction and of much smaller magnitude than a luminiferous ether would cause. This doesn't rule out all ether models, however - just the luminiferous model.
The transparent mask experiment shows the ether is a real thing in our universe.
  • asked a question related to Mathematics
Question
2 answers
What is the difference between green production and regular production in inventory control, in theoretical and mathematical terms?
Relevant answer
Answer
Green production integrates environmental considerations into inventory control, leading to additional costs and altered metrics compared to regular production. These adjustments are necessary to achieve sustainability goals and comply with environmental regulations. The mathematical expressions for green production often include additional terms to account for environmental costs, reflecting the theoretical emphasis on sustainability.
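To make the "additional terms" concrete, here is a minimal sketch contrasting the classical EOQ with a carbon-tax-adjusted ("green") EOQ, a form that appears in the sustainable-inventory literature. All parameter values and names (carbon price, per-order and per-unit-held emissions) are illustrative assumptions, not data from any specific model in this thread.

```python
import math

def eoq(D, K, h):
    """Classical economic order quantity: minimizes K*D/Q + h*Q/2."""
    return math.sqrt(2 * D * K / h)

def green_eoq(D, K, h, p, e_order, e_hold):
    """EOQ under a carbon tax p: emissions e_order per order and
    e_hold per unit held per year are priced into the cost terms."""
    return math.sqrt(2 * D * (K + p * e_order) / (h + p * e_hold))

D, K, h = 1000.0, 100.0, 2.0        # annual demand, fixed order cost, holding cost
p, e_o, e_h = 0.5, 10.0, 0.4        # carbon price, order emissions, holding emissions

Q_reg = eoq(D, K, h)                     # classical optimum
Q_grn = green_eoq(D, K, h, p, e_o, e_h)  # environmentally adjusted optimum
print(Q_reg, Q_grn)
```

With these illustrative numbers the green order quantity comes out smaller than the regular one, because the priced-in holding emissions raise the effective holding cost more than the order emissions raise the effective order cost.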
  • asked a question related to Mathematics
Question
14 answers
What is going on with physics?
Two excellent books written by experts in the field:
- Eric J. Lerner and his “THE BIG BANG NEVER HAPPENED”,
- and Lee Smolin and his The Trouble with Physics,
show that the Great Crisis of Physics will not be solved unless we change our frame of reference, abandoning once and for all the way we are currently trying to solve those fundamental problems of physics. Institutions such as ResearchGate certainly are a way to make that goal easier to reach, but then it is necessary that people open their minds to new ways of “seeing reality”… not the ones they are using in mainstream physics, such as GR, the Big Bang, and so on…
“THE STRANGE CAREER OF MODERN COSMOLOGY
In our century the cosmological pendulum has swung back. The universe of present-day cosmology is more like that of Ptolemy and Augustine than that of Galileo and Kepler. Like the medieval cosmos, the modern universe is finite in time-it began in the Big Bang, and will end either in a Big Crunch or in a slow decay and dissipation of all matter. Many versions, like Stephen Hawking's, are finite in space as well, a perfect self-enclosed four-dimensional sphere. There is a gap between the heavens and the earth: in space there exist strange entities, governed by the pure and ethereal mathematics of general relativity-black holes, cosmic strings, axions-which cannot, even in principle, be studied on earth.
The nineteenth-century universe evolved by laws still in action today, as did that of the Jonians, yet the universe of modern cosmology is the product of a single, unique event, qualitatively different from anything occurring today-just as the medieval cosmos was the product of the creation. While scientists of a century ago saw a universe of continuous change, evolution, and progress, today's researchers see a degenerating universe, the ashes of a primordial explosion.
To earlier scientists, and to most of today's scientists outside cosmology, mathematical laws are descriptions of nature, not the true reality that lies behind appearances. Yet today cosmologists assume, as did Plato and Ptolemy, that the universe is the embodiment of preexisting mathematical laws, that a few simple equations, a Theory of Everything, can explain the cosmos except for what "breathed fire" into these equations to make them come alive.
Big Bang cosmology does not begin with observations but with mathematical derivations from unquestionable assumptions. When further observations conflict with theory, as they have repeatedly during the past decades, new concepts are introduced to "save the phenomenon"-dark matter, WIMPs, cosmic strings-the "epicycles" of current astronomy.”
“Alfven wrote sixty years later, "The people were told that the true nature of the physical world could not be understood except by Einstein and a few other geniuses who were able to think in four dimensions. Science was something to believe in, not something which should be understood. Soon the bestsellers among the popular science books became those that presented scientific results as insults to common sense. One of the consequences was that the limit between science and pseudo-science began to be erased. To most people it was increasingly difficult to find any difference between science and science fiction." Worse still, the constant reiteration of science's incomprehensibility could not fail to turn many against science and encourage anti-intellectualism.”
THE BIG BANG NEVER HAPPENED
Eric J. Lerner
“THE TROUBLE WITH PHYSICS
In this illuminating book, the renowned theoretical physicist Lee Smolin argues that fundamental physics – the search for the laws of nature – is losing its way. Ambitious ideas about extra dimensions, exotic particles, multiple universes, and strings have captured the public’s imagination – and the imagination of experts. But these ideas have not been tested experimentally, and some, like string theory, seem to offer no possibility of being tested. Yet these speculations dominate the field… As Smolin points out, the situation threatens to impede the very progress of science…” – Brian Appleyard, Sunday Times (London)
Edgar Paternina
Retired electrical engineer
Relevant answer
Answer
YES. As a researcher in new physics, I completely agree that these are extremely simple, I would even say banal, means of calculation, which were the basis of the various calculations describing the creation of the universe, in which there are infinitely small objects and infinitely large objects.
  • asked a question related to Mathematics
Question
209 answers
Updated information of my thoughts and activities.
This is meant to be a one-way blog, albeit you can contribute with your recommendations and comments.
Relevant answer
Answer
Use the book QUICKEST CALCULUS (free PDF, also in print), which teaches by programmed instruction. The integral is immediate: just the inverse of differentiation, plus a constant.
  • asked a question related to Mathematics
Question
10 answers
How do mathematicians regard the article entitled "On the Nature of Some Euler's Double Equations Equivalent to Fermat's Last Theorem", published in Mathematics, Volume 10, Issue 23 (December-1 2022) by MDPI?
Relevant answer
Answer
Dear Spiros Konstantogiannis,
The author has already produced and sent a note that clarifies all the issues relating to his work.
Let's give the journal's editorial committee the time it needs (we hope no more than three months) to complete its investigation as thoroughly as possible.
I am confident of a surprising result.
LONG LIVE P. DE FERMAT AND LONG LIVE LEONHARD EULER !!!
ANDREA OSSICINI, AL HASIB
  • asked a question related to Mathematics
Question
2 answers
By the way, have you ever heard of "math-bounded" and "reality-bounded" thinking? For example, it happens when students interpret the solution of a problem in terms of the mathematics. If you have experience or research on this, please let me know and let's discuss it. Thank you!
Relevant answer
Answer
Perhaps you may check on bounded rationality, a term coined by Herbert A. Simon, who proposed it as an alternative basis for the mathematical and neoclassical economic modelling of decision-making, as used in economics, political science, and related disciplines,
and related research.
  • asked a question related to Mathematics
Question
6 answers
Greetings mathematicians and physicists,
As we know, assuming a space dimension higher than the natural three is effective in solving many physical and mathematical problems.
Here, I'll pose some philosophical questions about " time ", and I don't know where they'll lead us when we try to answer them.
  1. Why isn't time expressed in two or more variables, like space?
  2. Do we really live in one time?
  3. What happens to ODE and PDE when we impose temporal duality in the unknown, such as u(t,s) and u(t,s,x)?
  4. Now, what is the exact solution of this differential equation: u_t(t,s)+u_s(t,s)-u(t,s)=0, u(0,0)=u_0, t,s>0.
I look forward to hearing from you soon. Thank you so much.
Best from Algeria,
Khaldi Said, PhD student in mathematics.
Relevant answer
Answer
These questions are tricky in physics, but I am not sure pure maths has the same limitations:
1. Isn't it because time flows only toward the future or the past, which keeps it on a single axis, hence one dimension?
2. If someone's time flowed along several time axes, it would mean the person lives in parallel time realities simultaneously. I have not heard of an experiment proving this; my understanding is that Einstein's general relativity shows time can flow faster or slower due to gravity (slower as you approach a black hole, for instance, as shown by the Schwarzschild solution), but maybe I am wrong.
3. Unless you have specific cases with limited intervals and boundaries, the usual solution methods should apply. For instance, what would be wrong with using the Laplace transform to solve the equation in your point 4?
4. Use the Laplace transform.
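For point 4, here is a sketch via the method of characteristics, as an alternative to the Laplace transform; note in particular the non-uniqueness caused by prescribing data only at the single point (0,0).

```latex
% General solution of u_t + u_s - u = 0 by characteristics:
% along the lines t - s = const, the PDE reduces to du/dt = u, so
u(t,s) = e^{t} F(t-s), \qquad F \in C^{1} \text{ arbitrary},
\quad\text{since}\quad u_t + u_s = e^{t}F + e^{t}F' - e^{t}F' = u .
% The single condition u(0,0) = u_0 only fixes F(0) = u_0,
% so the solution is NOT unique; a symmetric representative is
u(t,s) = u_0 \, e^{(t+s)/2}.
```

Prescribing u along a full non-characteristic curve, e.g. u(t,0) = g(t), would determine F completely and make the problem well posed.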
  • asked a question related to Mathematics
Question
1 answer
Recent research papers and journals on indigenous knowledge systems
Relevant answer
Answer
Indigenous knowledge systems (IKS) offer a rich and practical lens for approaching mathematics problems. Here's how:
Estimation and Approximation: Indigenous communities often rely on honed observation skills and approximation techniques to navigate their environment and manage resources. These skills translate well to estimating quantities, solving for unknowns, and developing a feel for proportions.
Holistic Problem-Solving: IKS emphasizes connections and relationships within a system. This encourages students to consider the bigger picture and approach problems from different angles, fostering critical thinking and creative solutions.
By incorporating IKS into math education, we can:
- Improve student engagement and understanding, particularly for those who struggle with traditional methods.
- Bridge the gap between classroom learning and real-world application.
In general, IKS provides valuable tools to enhance math problem-solving and promote a more inclusive learning environment.
  • asked a question related to Mathematics
Question
2 answers
Is there any geometrically derived mathematical expression for the duration of the natural day at any latitude and any time of the year?
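Yes: the standard geometric result is the sunrise equation, cos(w0) = -tan(latitude) * tan(declination), where w0 is the hour angle at sunrise and the day length is 2*w0 converted at 15 degrees per hour. A minimal sketch follows; it ignores atmospheric refraction and the Sun's finite disc, and uses a common cosine approximation for the solar declination, so treat the numbers as approximate.

```python
import math

def day_length_hours(lat_deg, day_of_year):
    """Approximate daylight duration from the sunrise equation:
    cos(w0) = -tan(latitude) * tan(solar declination).
    Ignores refraction and the Sun's angular size."""
    # Common approximation for solar declination (degrees).
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    x = -math.tan(math.radians(lat_deg)) * math.tan(math.radians(decl))
    if x >= 1.0:
        return 0.0           # polar night
    if x <= -1.0:
        return 24.0          # midnight sun
    w0 = math.degrees(math.acos(x))   # hour angle at sunrise, in degrees
    return 2.0 * w0 / 15.0            # 15 degrees of hour angle per hour

print(day_length_hours(0.0, 80))     # equator: exactly 12 h on any day
print(day_length_hours(52.0, 172))   # ~midsummer at 52 N: roughly 16.5 h
```

At the equator tan(latitude) = 0, so w0 = 90 degrees and the day is always 12 hours, which is a handy sanity check for the formula.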
  • asked a question related to Mathematics
Question
9 answers
In the isosceles triangle ABC (AC=AB), the angle at the vertex is 20°.
Point D is chosen on the side AB such that AD=BC.
Find the measure of angle CDB.
Relevant answer
Answer
Another way: construct an equilateral triangle ADE on side AD, with point E located outside triangle ABC (on the right side of line AC). Then find two congruent triangles:
ABE and BAC,
because AB is a common side, AE = AD = BC, and the angle between these corresponding sides is 80 degrees; therefore BE = AC.
Next, triangle ADB is congruent to triangle EDB,
because both triangles have 3 equal sides;
thus, angle ABD = angle EBD = half of 20 degrees ... etc.
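The classical answer to this problem is angle CDB = 30 degrees. A quick coordinate sanity check (the placement of the triangle is my own illustrative choice, with AB = AC = 1):

```python
import math

# Isosceles triangle with AB = AC = 1 and apex angle A = 20 degrees.
A = (0.0, 0.0)
B = (1.0, 0.0)
C = (math.cos(math.radians(20)), math.sin(math.radians(20)))

BC = 2.0 * math.sin(math.radians(10))   # base length, since BC = 2 sin(10)
D = (BC, 0.0)                           # D on AB with AD = BC

def angle_deg(p, q, r):
    """Angle at vertex q in triangle p-q-r, in degrees."""
    v1 = (p[0] - q[0], p[1] - q[1])
    v2 = (r[0] - q[0], r[1] - q[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

print(angle_deg(C, D, B))   # ~30 degrees
```

The numerical value agrees with the synthetic construction above to floating-point precision.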
  • asked a question related to Mathematics
Question
2 answers
Please give answer. Also explain mathematical equations behind this.
Relevant answer
Answer
This describes how innovations (products, opinions, attitudes) spread from one person to another over time within a social network.
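The question asked for the mathematical equations; assuming it refers to innovation diffusion, the canonical formulation is the Bass diffusion model, whose cumulative adoption fraction has the closed form F(t) = (1 - e^(-(p+q)t)) / (1 + (q/p) e^(-(p+q)t)). The parameter values below are typical illustrative choices, not fitted data.

```python
import math

def bass_cumulative(t, p=0.03, q=0.38):
    """Cumulative fraction of eventual adopters at time t in the Bass
    diffusion model: p = innovation (external influence) coefficient,
    q = imitation (word-of-mouth) coefficient. Defaults are common
    illustrative values from the diffusion literature."""
    e = math.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

for year in (0, 5, 10, 20):
    print(year, round(bass_cumulative(year), 3))
```

The curve is S-shaped: adoption starts slowly (driven by p), accelerates through imitation (driven by q), and saturates at the full market.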
  • asked a question related to Mathematics
Question
2 answers
In the recently published JCR, the journal Mathematical Biosciences and Engineering does not appear. I have searched everywhere and have found nothing, it does not appear in the list of excluded journals. The journal had an impact factor of 2.6 in the previous JCR (Mathematics and Computational Biology). I have contacted the journal, but they have not responded.
Does anyone know anything or can advise where to look?
Regards
Relevant answer
Answer
I am afraid they are delisted from Clarivate’s SCIE index.
In the enclosed file you can see that the journal “Mathematical Biosciences and Engineering” is delisted; more precisely, it states “Editorial delisting”, which means, and I quote from the first link, “If a journal is removed from coverage because it no longer meets the quality criteria, it will be removed from the Master Journal List and appear as an ‘Editorial De-listing’ in the next Monthly Changes file available from the Monthly Changes Archive.”
I think it has to do with the suspicious increase in published papers over the last two years (as can be seen, for example, in the “Scopus Content Coverage” here: https://www.scopus.com/sourceid/5200152802 ). Most likely this was caused by the excessive use of special issues, which led to the delisting of a large number of journal titles last year; see for example https://www.researchgate.net/post/Web_of_Science_de-listed_82_journals_Is_it_the_beginning
Best regards.
  • asked a question related to Mathematics
Question
1 answer
Consider a circle of radius R with center O.
Two other circles are internally tangent to this circle and intersect at points A and B.
Find the sum of the radii of the other two circles, given that ∠OAB = 90°.
Relevant answer
Answer
Short answer: R = AO_1 + AO_2.
Let points D and E denote the tangent points where the big circle touches the two small circles, and let point C be the intersection of the tangent lines at D and E.
First, there is a circle passing through the points O, D, C, E, and A, because the segment OC is seen from each of D, E, and A at a right angle.
Next, angle O_2AE = angle AEO_2 = a (say) = angle ADO = angle DAO_1; thus angle OO_1A = angle OO_2A = 2a.
Second, angle OAO_2 = angle AOO_1, because
angle OAO_2 is measured by the arcs -OA + OA + AEC + CD = -OA + pi/2 + CD,
and similarly
angle AOO_1 is measured by the arc DCEA = DC + CEAO - AO = DC + pi/2 - AO.
Hence triangles AOO_1 and AOO_2 are congruent: they share a common side and have two equal angles.
Thus AO_1 = OO_2, so AO_1 + AO_2 = OO_2 + AO_2 = OO_2 + EO_2 = OE = R.
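A numerical sanity check of the result r1 + r2 = R on one concrete configuration. The coordinates below are my own illustrative construction (R = 1, r1 = 0.6, r2 = 0.4), built so that the tangency conditions hold and the circles meet at A = (0.5, 0); the second intersection B is A reflected across the line of centers.

```python
import math

# Big circle: center O = (0, 0), radius R = 1.
# Small circles: centers C1, C2, radii r1, r2 with r1 + r2 = R,
# internal tangency means |O C1| = R - r1 and |O C2| = R - r2.
R, r1, r2 = 1.0, 0.6, 0.4
y = math.sqrt(0.1575)          # common height of the two centers
C1 = (0.05, y)
C2 = (0.45, y)
A = (0.5, 0.0)
B = (0.5, 2.0 * y)             # A reflected across the line C1 C2
O = (0.0, 0.0)

# Verify the configuration really satisfies the hypotheses.
assert abs(math.hypot(*C1) - (R - r1)) < 1e-12               # tangency 1
assert abs(math.hypot(*C2) - (R - r2)) < 1e-12               # tangency 2
assert abs(math.hypot(A[0] - C1[0], A[1] - C1[1]) - r1) < 1e-12  # A on circle 1
assert abs(math.hypot(A[0] - C2[0], A[1] - C2[1]) - r2) < 1e-12  # A on circle 2

# Angle OAB: the dot product of A->O and A->B should vanish.
dot = (O[0] - A[0]) * (B[0] - A[0]) + (O[1] - A[1]) * (B[1] - A[1])
print(dot)   # 0.0, i.e. angle OAB = 90 degrees, consistent with r1 + r2 = R
```

So with r1 + r2 = R the right angle at A appears exactly, as the synthetic proof predicts.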
  • asked a question related to Mathematics
Question
2 answers
IIHIs
Relevant answer
Answer
Will the even Fibonacci series bring a revolution over the Fibonacci series in the future?
  • asked a question related to Mathematics
Question
1 answer
In triangle ABC, the bisector AL₁ is drawn.
Points O₁, O₂, O are the centers of the circles circumscribed around triangles ACL₁, ABL₁, ABC, respectively.
The radii are denoted as R₁, R₂, R for the respective circles.
The task is to find OO₁ and OO₂.
Given: ∠CAL₁ = ∠BAL₁; γ₁ (O₁; R₁ = O₁ A); γ₂ (O₂; R₂ = O₂ A); γ₀ (O; R = OA).
Find: OO₁, OO₂
Relevant answer
Answer
Apparently, both share the same length: OO_1 = OO_2 = sqrt(R*R - R1*R2).
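A numerical check of the claimed identity OO_1 = OO_2 = sqrt(R^2 - R1*R2) on a concrete triangle; the coordinates are an illustrative choice of mine (a 3-4-5 right triangle), and the circumcenter formula is the standard determinant form.

```python
import math

def circumcircle(P, Q, S):
    """Center and radius of the circle through three points."""
    ax, ay = P; bx, by = Q; cx, cy = S
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

A, B, C = (0.0, 0.0), (4.0, 0.0), (0.0, 3.0)
c_len = math.dist(A, B)                  # side AB
b_len = math.dist(A, C)                  # side AC
# Foot L1 of the bisector from A divides BC in the ratio AB : AC.
t = c_len / (c_len + b_len)
L1 = (B[0] + t * (C[0] - B[0]), B[1] + t * (C[1] - B[1]))

O,  Rr = circumcircle(A, B, C)           # circumcircle of ABC
O1, R1 = circumcircle(A, C, L1)          # circumcircle of ACL1
O2, R2 = circumcircle(A, B, L1)          # circumcircle of ABL1

print(math.dist(O, O1), math.dist(O, O2), math.sqrt(Rr * Rr - R1 * R2))
# all three come out equal (25/14 for this triangle)
```

For this triangle the three printed values coincide at 25/14, supporting the formula above.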
  • asked a question related to Mathematics
Question
2 answers
Fermat and his genius !!!
Below is a rework of the chapter “The Prize” from Simon Singh's book “Fermat’s Last Theorem: The story of a riddle that confounded the world's greatest minds for 358 years”:
<<Fermat wrote that his proof would not fit into the margin of his copy of Arithmetica, and Wiles’s 100 pages of dense mathematics certainly fulfils this criterion, but surely the Frenchman did not invent modular forms, the Taniyama-Shimura conjecture, Galois Groups and the Kolyvagin-Flach method centuries before anyone else.
If Fermat did not have Wiles’s proof then what did he have?
Mathematicians are divided into two camps:
The sceptics believe that Fermat’s Last Theorem was the result of a rare moment of weakness by the 17th-century genius.
They claim that although Fermat wrote, ”I have discovered a truly marvellous proof”, he had in fact found only a flawed proof.
Other mathematicians, the romantic optimists, believe that Fermat may have had a genuine proof.
Whatever this proof might have been, it would have been based on 17th-century techniques, and would have involved an argument so cunning that it has eluded everybody.
Indeed, there are plenty of mathematicians who believe that they can still achieve fame and glory by discovering Fermat’s original proof.
In my case it is pure passion for the Mathematics and the desire to do justice to Fermat and his genius !!! >>
For this reason I recommend carefully reading the following document entitled "Fundamental elements of a proof” relating to the recently elementary proof of Fermat Last Theorem has been given by Andrea Ossicini.
This article, entitled "On the Nature of Some Euler's Double Equations Equivalent to Fermat's Last Theorem", effectively provides a reformulation of Fermat's Last Theorem and was published in 2022 in the journal "Mathematics" by the publisher MDPI (Multidisciplinary Digital Publishing Institute).
The journal "Mathematics" is indexed in SCOPUS, with an impact factor of 2.4. It is ranked JCR Q1 (Mathematics) / CiteScore 3.5, Q1 (General Mathematics).
Ossicini's article is designated by Mathematics as a "Feature Paper".
This label is used to represent the most advanced investigations which can have a significant impact in the field.
A Feature Paper should be an original contribution that involves several techniques or approaches, provides an outlook for future research directions and describes possible research applications.
Feature papers are submitted upon individual invitation or recommendation by the scientific editors and must receive positive feedback from the reviewer.
Relevant answer
Answer
Dear Sir/Madam,
I am sending you my article as my friends have suggested. I will now present my friend's speech from the IAEA:
Fermat's Theorem - A Proof by Fermat Himself
(c) Yurkin Pavel, IAEA
The Russian nuclear physicist Grigoriy Leonidovich Dedenko has reconstructed the original reasoning of Pierre Fermat, which led Fermat to conclude that the sum of two identical natural powers of rational numbers, raised to an exponent greater than two, is not representable. This is known as Fermat's Last Theorem.
As you may know, in 1637, Fermat wrote a note in the margins of his copy of Diophantus's "Arithmetic" stating his discovery and adding, "I have discovered a truly marvelous proof, but this margin is too narrow to contain it."
According to G.L. Dedenko, Fermat analyzed power differences using a method that was novel at the time: decomposing these differences into a sum of pairwise products, later known as the "Newton binomial". Fermat discovered that the coefficients in this expansion satisfied simple conditions equivalent to a logarithmic equation (a concept still developing in the mid-17th century) for the degree of the sum (or difference). This equation has only two solutions: the numbers one and two.
Thus, the margins of the book were indeed too narrow to contain the complete proof. Fermat's proof would have required the introduction and development of new concepts, such as expansion with combinatorial coefficients (Newton's binomial) and logarithms. It remains unclear whether Fermat ever wrote down his detailed reasoning, and if so, whether this record survives in an unexpected archive. Historians of natural history are encouraged to search anew.
Sincerely, Grigoriy Dedenko
15.jun.2024
Final Release No.25 in PDF (MS Word & LaTeX versions)
  • asked a question related to Mathematics
Question
3 answers
I'm reading an article titled "Scientists Seek Life Across the Multiverse" and it says,
"If the multiverse hypothesis is correct, physicists would no longer have to find explanations for the absurdly improbable fine-tuning of the laws of nature that has made our existence possible. We are just lucky to live in a good universe among many different ones. One universe fine-tuned for life is an unlikely fluke. But one habitable universe among many is to be expected."
Another way of phrasing this: scientists are so eager to avoid any notion of Intelligent Design of the cosmos that they are willing to deny Earth's own scientific potential ... and their own intelligence.
Evolution can be observed in the form of adaptation of structure and function to the environment but there’s no reason to extrapolate this theory in order for it to account for life’s origin. In future centuries, human technology will develop terraforming and incredibly advanced bioengineering of cells - amino acids, proteins, water, nucleic acids, etc which were gathered in space or on planets and combined (science already knows these molecules exist out there). This could account for life’s origin since it agrees with 19th-century chemist Louis Pasteur’s proving that life can only originate from life. The origin-of-life hypothesis presented here obviously needs time travel back to a time when there was no life. This is feasible using General Relativity's concept of curved time (which is made circular via Wick rotation and future warping of space-time).
It's convenient to say Wick rotation is a form of mathematical trickery but explanation of the photoelectric effect seems to have sprung directly from Max Planck's idea of quanta - now called photons - which was also regarded for years as a mathematical convenience. Could an extension of evolution spring directly from the supposed math trickery of Wick rotation? We only need to be open to our current interpretations of science and maths not being set in stone. History has shown that presently accepted theories always change. And we are not the endpoint of history - we're simply one more step passing through it.
Relevant answer
Answer
The main aspect of this problem is the frequency and intensity of interaction between the elements of the Multiverse. If these parameters are large enough, then natural selection can produce results that would seem unprofitable and very strange in our Universe, but quite rational in one or more parallel ones.
  • asked a question related to Mathematics
Question
3 answers
"Paper 4: Mathematical Framework for the Alcubierre Drive Using New Quarks: Unifying Dark Energy and Dark Matter"
The Dodecahedron Linear String Field Hypothesis (DLSFH) provides a viable theoretical foundation for the Alcubierre drive. By defining new quarks with the necessary properties to generate negative energy density and manipulate spacetime, this framework supports the feasibility of faster-than-light travel.
I invite the community to discuss this grand idea and our shared need to explore new physics that can fill the gaps Quantum Mechanics currently leaves open!
A theoretical physicist, above all things, must have imagination and be a philosopher before he can impart any knowledge of the Universe!