Science topic

# Pure Mathematics - Science topic

Explore the latest questions and answers in Pure Mathematics, and find Pure Mathematics experts.

Questions related to Pure Mathematics

I am aware that every totally bounded metric space is separable, and that a metric space is compact iff it is totally bounded and complete. But is every totally bounded metric space locally compact? If not, please give an example of a metric space that is totally bounded but not locally compact.


I am trying to solve a differential equation. I was able to solve it when the function P is constant, independent of r and z, but I am not able to solve it when P is a function of r and z, or of r only (IMAGE 1).

Any general solution for IMAGE 2?

Kindly help me with this. Thanks

Is the reciprocal of the inverse tangent $\frac{1}{\arctan x}$ a (logarithmically) completely monotonic function on the right-half line?

If $\frac{1}{\arctan x}$ is a (logarithmically) completely monotonic function on $(0,\infty)$, can one give an explicit expression for the measure $\mu(t)$ in the integral representation in the Bernstein--Widder theorem for $f(x)=\frac{1}{\arctan x}$?

These questions are stated in detail at https://math.stackexchange.com/questions/4247090

Hello

Can someone help me to solve this?

I really don't know how to approach these problems and still haven't been able to solve them.

But I am still curious about the solutions.

Hopefully you can provide all the solutions.

Sincerely

Wesley

Many proposals for solving RH have been suggested, but has it been solved? What do you think?

- I would like to post this question to clarify my doubts, as two different answers both seem to be correct. Two experts (one a faculty member in an applied mathematics department, the other in a pure mathematics department) have different opinions. Question: Find the limits of integration in the double integral over R, where R is the region in the first quadrant (i) bounded by x = 1, y = 1 and y^2 = 4x; (ii) bounded by x = 1, y = 0 and y^2 = 4x.
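For what it is worth, the limits can be checked by integrating in both orders and confirming the results agree. A sketch with sympy for region (ii), which is the less ambiguous case (region (i) depends on which enclosed subregion is meant, which is presumably the source of the two different answers):

```python
import sympy as sp

x, y = sp.symbols('x y', nonnegative=True)

# Region (ii): first quadrant, bounded by x = 1, y = 0 and y**2 = 4*x.
# Order dy dx: for 0 <= x <= 1, y runs from 0 up to the parabola y = 2*sqrt(x).
area_dydx = sp.integrate(sp.integrate(1, (y, 0, 2*sp.sqrt(x))), (x, 0, 1))

# Order dx dy: for 0 <= y <= 2, x runs from the parabola x = y**2/4 to x = 1.
area_dxdy = sp.integrate(sp.integrate(1, (x, y**2/4, 1)), (y, 0, 2))

print(area_dydx, area_dxdy)  # both 4/3
```

If the two orders give different values, at least one set of limits does not describe the intended region.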

More precisely, if the Orlik-Solomon algebras A(A_1) and A(A_2) are isomorphic in such a way that the standard degree-1 generators, associated to the hyperplanes, correspond to each other, does this imply that the corresponding Milnor fibers $F(A_1)$ and $F(A_2)$ have the same Betti numbers?

When A_1 and A_2 are in C^3 and the corresponding line arrangements in P^2 have only double and triple points, the answer seems to be positive by the results of Papadima and Suciu.

See also Example 6.3 in A. Suciu's survey in Rev. Roumaine Math. Pures Appl. 62 (2017), 191-215.

We study various laws in group theory and ring theory in algebra, but where are they used?

Can we apply theoretical computer science to prove theorems in mathematics?

By dynamical systems, I mean systems that can be modeled by ODEs.

For linear ODEs, we can investigate the stability by eigenvalues, and for nonlinear systems as well as linear systems we can use the Lyapunov stability theory.

I want to know whether there are other methods to investigate the stability of dynamical systems.
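A minimal numerical sketch of the two methods already mentioned, for reference (the matrix A below is just an assumed example, the companion matrix of a damped oscillator):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Eigenvalue test: x' = A x is asymptotically stable iff every eigenvalue
# of A has negative real part (A is Hurwitz).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # companion matrix of x'' + 3x' + 2x = 0
eigvals = np.linalg.eigvals(A)
stable = bool(np.all(eigvals.real < 0))

# Lyapunov test: solve A^T P + P A = -Q with Q positive definite;
# the system is stable iff the solution P is positive definite,
# giving the quadratic Lyapunov function V(x) = x^T P x.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)
lyapunov_stable = bool(np.all(np.linalg.eigvals(P) > 0))

print(stable, lyapunov_stable)  # True True
```

Beyond these, answers might mention methods such as LaSalle's invariance principle or numerical bifurcation analysis, but the sketch above covers the two cited in the question.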

A careful reading of The Absolute Differential Calculus by Tullio Levi-Civita (Blackie & Son Limited, 50 Old Bailey, London, 1927), together with Plato's cosmology, strongly suggests that gravity is actually real-world mathematics; in other words, is gravitation a purely experimental mathematics?

Please share your opinion.

Mathematics is the queen of sciences. It deals with the scientific approach to getting useful solutions in multifarious fields. It is the backbone of modern science. Ever since its inception it has been developing in manifold directions. In these days of advanced development, it is interlinked with every important branch of technical and modern science. Pure mathematics and applied mathematics are the two eyes of mathematics. Both play an equal and significant role in research.

Computing nontrivial zeros of the Riemann zeta function is an algebraically complex task. However, if someone were able to prove that an iterative formula yields approximations of all nontrivial zeros, its value would be limitless. Proving such an iterative formula is a huge challenge. If somebody proved such a formula, what impact would it have on the Riemann hypothesis? Also, how accurate must approximately calculated nontrivial zeros be to be accepted as close to the true nontrivial zeros?

I have calculated and attached the first 50 approximate nontrivial zeros using such an iterative formula that I have proved. It can also produce millions of nontrivial zeros. But I am very worried about their accuracy. Are these calculations OK?
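One way to check the accuracy of an approximate formula is to compare against high-precision reference zeros; mpmath computes these directly, so any iterative approximation can be measured against them:

```python
from mpmath import mp, zetazero, zeta

mp.dps = 30  # working precision (decimal digits)

# zetazero(n) returns the n-th nontrivial zero 1/2 + i*t_n to the working
# precision; it serves as a reference value for any approximate formula.
rho1 = zetazero(1)
print(rho1)             # 0.5 + 14.1347251417...j
print(abs(zeta(rho1)))  # essentially 0 at this precision
```

Comparing each of the attached 50 approximations to `zetazero(1)` through `zetazero(50)` would quantify the error directly.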

What are your opinions and suggestions regarding my research (development of a mathematical model for the accounting system in stocks)?

Is our use of pure mathematics in this research a shift in the field of modern accounting,

and a continuous development of analytical accounting curricula?

Is there a difference between pure and applied mathematics?

In Wikipedia, we can find the following definition:

**Number theory** (or **arithmetic**, or **higher arithmetic** in older usage) is a branch of **pure mathematics** devoted primarily to the study of the integers.

But number theory is mostly applied, for example, in modern data encryption techniques (cryptography).

**So, is there really a difference between pure and applied mathematics?**

Dear All,

I am hoping that one of you has the **first edition** of this book (PDF): **Introduction to Real Analysis** by Bartle and Sherbert.

The other editions are already available online. I need the first edition only.

It would be a great help to me!

Thank you so much in advance.

Sarah

I'm a teacher and struggling a lot to complete my MS. I need to write an MS-level research thesis. I can work in decision making (research on preference relations), artificial intelligence, semigroups or Γ-semigroups, computing, soft computing, soft sets, MATLAB-related projects, etc. Kindly help me; I would be very grateful. Thanks.

A function assumes a direct and an inverse law. What do we know about the inverse function? Never mind: it is just the shadow of the direct function. Why don't we use the inverse function as well as the direct one?

I propose the concept of an unrelated function as an extended concept of the inverse function. There is a sum of intervals on each of which the function is invertible (strictly monotonic): a nondegenerate function. For any sum of intervals, there is an interval where the function is irreversible: a degenerate function.

Mathematics has always been one of the most active fields for researchers, but most attention goes to one or a few subjects at a time, for several years or decades. I'd like to know: what are the most active research areas in mathematics today?

**Is the canonical unit 2-simplex (the standard probability simplex)**, the convex hull of the equilateral triangle in three-dimensional Cartesian space whose vertices are (1,0,0), (0,1,0) and (0,0,1) in Euclidean coordinates, closed under all and only convex combinations of probability vectors, that is, the set of all triples of non-negative real numbers that sum to 1?

Do any unit probability vectors go missing? For example, might **<p1=0.3, p2=0.2, p3=0.5>** fail to be an element of the domain if the probability simplex in barycentric/probability coordinates, as a function of p1, p2, p3, is not constructed appropriately?

Here each vector <p1, p2, p3> has entries pi >= 0 with p1+p2+p3 = 1. In the x,y,z coordinates, the plane x = 1/3, for example, denotes the set of probability vectors whose first entry is p1 = 1/3, i.e. <1/3, p2, p3> with p2+p3 = 2/3.

**Does using absolute barycentric coordinates rule out this possibility of a vector going missing**, where <p1=0.3, p2=0.2, p3=0.5> is the vector located at (p1, p2, p3) in absolute barycentric coordinates?

Given that it is a convex hull, it is the smallest set inscribed in the equilateral triangle that is closed under all convex combinations of the vertices (I presume this means that all and only triples of non-negative pi summing to 1 are included, so that no vectors with negative entries appear and no vectors go missing) when it is traditionally described as the **convex hull of the three standard unit vectors** (1,0,0), (0,1,0) and (0,0,1). Or can this only be guaranteed by representing it in this fashion?

In QFT, computations are done with plane-wave free solutions of the Dirac equation: if one were to consider full solutions of the Dirac equation in interaction with its own electrodynamic field, even without field quantization, what would one obtain? Does anybody know of full solutions, even at a classical level?

NOTE: the question is of purely mathematical interest, so I am not interested in reading that we do not commonly do this in standard computations; I would like to know what happens when the problem is treated with mathematical rigour.

People usually say that a number greater than any assignable quantity is infinity, and probably the same holds for −∞.

We deal with infinity (∞) in our mathematical and statistical calculations; sometimes we assume it, sometimes we arrive at it. But what is the physical significance of infinity?

Or

Anyone with some philosophical comments?

Today, every educational field or domain contains several branches. You first choose one branch of your field to prepare your Master's and Ph.D. degrees. Which system is preferable for you:

1- studying the same branch for both the Master's and the Ph.D. degree

or

2- studying a different branch for the Ph.D. than for the Master's

And, Why?

All of us have a different point of view when we study a topic. Some of us study only the deepest results in the work, whereas others think one must investigate all results associated with the topic, regardless of their difficulty or ease. Some scholars focus on both.

*What is your opinion?*

State-dependent additivity versus state-independent additivity?

This is akin to **Cauchy additivity** versus local **Kolmogorov additivity/normalization of subjective credence/utility**, in a simplex representation of subjective probability (or utility) ranked by objective probability. That is, in the unit simplex of two or more dimensions (at least three atomic outcomes on each unit probability vector, a finitely additive space), where every event is ranked globally, within vectors and between distinct vectors, by <, > and especially **=**. I presume that one notion gives mere representability and the other uniqueness.

The distinction is between the trivial properties:

**(1)** F(x)+F(y)+F(z) = 1, for x, y, z mutually exclusive and exhaustive;

**(2)** F(A ∪ B) = F(A)+F(B), for A, B disjoint events on the same vector;

**(3)** F(A)+F(Aᶜ) = 1, for A, Aᶜ disjoint and mutually exclusive on the same unit vector;

and uniqueness properties such as the following, for all events x, y in the simplex:

**(3.A)** F(x+y) = F(x)+F(y): Cauchy additivity, for x and y arbitrary in the simplex of interest (whether or not they lie on the same vector or probability state). This needs no explaining.

**(3.B)** x+y = z+m implies F(x)+F(y) = F(z)+F(m): any arbitrary two or more events with the same objective sum must have the same credence sum, same vector or not, disjoint or not (almost Jensen's equality).

**(3.C)** F(1−x−y)+F(x)+F(y) = 1: any arbitrary three events in the simplex, same vector or not, must sum to one in credence if they sum to one in objective chance.

**(3.D)** F(1−x)+F(x) = 1: any arbitrary two events whose chances sum to one must sum to 1 in credence, same probability space/state/vector or not.

This is a global symmetry (distinct from complement additivity): it applies to non-disjoint events on distinct vectors through the equalities in the rank. Rank equalities plus complement additivity give rise to this in a two-outcome system. It seems to be entailed by a global, modal, cross-world rank, so long as there are at least three outcomes, without use of mixtures, unions or tradeoffs, iff one's domain is the entire simplex: that is, adding up function values of sums of events on distinct vectors to the value of some other event on some non-commuting (arguably) probability vector, F(x+y) = F(x)+F(y).

This arises in the context of certain probabilistic and/or utility uniqueness theorems, where one takes one objective probability function and tries to show that any other probability function, given one's constraints, must be the same function.
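A toy illustration of the gap between the local and global properties: the function F below is a hypothetical credence function of my own choosing (an assumption, not from the question). It satisfies complement additivity F(x)+F(1−x) = 1, yet fails Cauchy additivity, so the local property alone does not force F to be the identity:

```python
import math

# Hypothetical credence function F(x) = x + c*sin(2*pi*x).
# Since sin(2*pi*(1-x)) = -sin(2*pi*x), complements still sum to one,
# i.e. the local normalization property (3) holds; but F is not
# Cauchy-additive (3.A), so F is not the identity function.
c = 0.1
F = lambda x: x + c * math.sin(2 * math.pi * x)

print(F(0.3) + F(0.7))                 # 1.0: complement additivity holds
print(F(0.2 + 0.3), F(0.2) + F(0.3))   # differ: Cauchy additivity fails
```

This is only a one-dimensional sketch, but it shows why the global (cross-vector) additivity properties are the ones that can pin down uniqueness.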

Given $a_1, a_2, \ldots, a_n$ positive real numbers, and defining

$$p_k = a_k \prod_{i=1,\, i \neq k}^{n} \left(a_i^2 - a_k^2\right),$$

how can one prove that $\sum_i \frac{1}{p_i}$ is positive?

I had an idea for a proof, but I am not sure it would work. I have the idea written in the attached .png file.

EDIT: See the .png file here.
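Before attempting a proof, the claim can be tested exactly; a small sketch in exact rational arithmetic (the sample values of the a_i are assumptions, and the a_i must be distinct so no p_k vanishes):

```python
from fractions import Fraction
from math import prod

def p(a, k):
    """p_k = a_k * prod over i != k of (a_i**2 - a_k**2)."""
    return a[k] * prod(a[i]**2 - a[k]**2 for i in range(len(a)) if i != k)

def s(a):
    """Exact value of sum_k 1/p_k for distinct positive integers a_k."""
    return sum(Fraction(1, p(a, k)) for k in range(len(a)))

print(s([1, 2, 3]))     # 1/60
print(s([1, 2, 3, 4]))  # 1/1008
```

Note that individual p_k are negative (the signs alternate when the a_i are sorted), so the positivity of the sum is genuinely nontrivial.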

In order to get a homogeneous population, I inspected two conditions and filtered the entire population (all possible members) according to these two conditions, then used all the remaining filtered members in the research. Is it still a population, or is it a sample (and what is it called)?

Working on a mathematical equation, adding another part to it, then finding the solution and applying it to the real world: can we generalize its result to other real-world settings?

It would be of immense help if you could suggest some papers and books related to this topic.

(1) How can we find the partial sum of n^{1000} instantly?

(2) Is there a simple method to find the partial sum of a sequence f(n)?

(3) Is there a general method to compute partial sums of sequences?

(4) What is the value of such a method if we have a good approximation for every differentiable sequence?
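For fixed powers, question (1) has a classical answer: the partial sum of k^m is a Faulhaber polynomial in n, which a CAS can derive once and then evaluate instantly. A sketch with sympy (using m = 10 to keep the output readable; m = 1000 works in principle, just with a much larger polynomial):

```python
import sympy as sp

n, k = sp.symbols('n k', integer=True, positive=True)

# Closed form for sum_{k=1}^{n} k**10: sympy derives the Faulhaber
# polynomial symbolically via Bernoulli numbers.
formula = sp.summation(k**10, (k, 1, n))
print(sp.factor(formula))

# sanity check at n = 3: 1 + 2**10 + 3**10 = 60074
print(formula.subs(n, 3))  # 60074
```

For a general f(n), question (3) is answered (when a closed form exists) by the theory of Gosper/Zeilberger summation, which `sp.summation` also uses; otherwise one falls back on approximations such as Euler-Maclaurin.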

In basic numerical analysis, it is shown that Aitken's method improves on the basic iteration method in speed of convergence in the asymptotic sense (see detail below if desired). Now it seems that this should be meaningless in practice, giving no guarantees of 'faster' for any finite number of iterations. I found that this is not only my feeling, but that this concern is echoed in the related Wikipedia article:

*Although strictly speaking, a limit does not give information about any finite first part of the sequence, this concept is of practical importance in dealing with a sequence of successive approximations for an iterative method, as then typically fewer iterations are needed to yield a useful approximation if the rate of convergence is higher. This may even make the difference between needing ten or a million iterations insignificant.*

My questions then are

1. Barring empirical evidence, is there ANY formal way of turning the asymptotic result into a result in terms of finite iterations? Even at least probabilistically? Even when conditions are added? This would be an example of what I have in mind: given a function with such-and-such conditions (smooth, etc.), the convergence is indeed faster within n iterations with probability p(n).

2. If there are such results, can you point me to some of them?

3. If there are no such results, should there be no interest in trying to find them? If not why not?

4. Doesn't this state of affairs 'bother' numerical analysts? If not shouldn't it?

5. Do people reading this have their own 'intuitions' about when the speed of convergence holds in practice? What are these intuitions? Why not try to formalise them?

**Detail**

When the sequence {x_n} generated by x_i = f(x_{i−1}) converges under the usual conditions, Aitken's method generates the sequence {x′_n} using

x′_n = x_n − (x_{n+1} − x_n)² / (x_{n+2} − 2x_{n+1} + x_n),

which converges faster in the sense that, with s being the solution of f(s) = s, we have

(x′_n − s) / (x_n − s) → 0 as n → ∞.
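On the empirical side of the question, the acceleration is easy to observe at finite n; a sketch on the standard fixed-point problem cos(s) = s (the choice of f, x0 and iteration count are assumptions for illustration):

```python
import math

def fixed_point_seq(f, x0, n):
    """Plain fixed-point iteration x_i = f(x_{i-1})."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

def aitken(xs):
    """Aitken's delta-squared transform of a sequence."""
    return [a - (b - a)**2 / (c - 2*b + a)
            for a, b, c in zip(xs, xs[1:], xs[2:])]

f = math.cos                   # fixed point s solves cos(s) = s
s = 0.7390851332151607         # reference value of the fixed point
xs = fixed_point_seq(f, 0.5, 12)
ys = aitken(xs)

print(abs(xs[-1] - s))  # plain error after 12 iterations
print(abs(ys[-1] - s))  # Aitken error from the same iterates: far smaller
```

Of course this is exactly the empirical evidence the question wants to go beyond; it only illustrates that, for linearly convergent iterations with a stable error ratio, the benefit appears after very few steps, which is the intuition one would want to formalise.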

I was wondering whether there are sets of n (n ≥ 3) continuous, or somewhat smooth, functions (certain polynomials), all with the same domain [0,1],

f_i : [0,1] → [min_i, max_i], where max_i > min_i and min_i, max_i ∈ [0,1],

that have these properties:

1. For all v ∈ [0,1], the sum of the function values equals 1: Σ_{i=1}^{n} f_i(v) = 1 at every point of the domain, and for all possible min and max values of the ranges of the n functions.

2. Each function ranges continuously between its min and max, and reaches its maximum at some point, but only once (if possible).

3. Each function attains its global maximum only at a point of the domain where all the other functions attain their global minimum. The global minima of each f_i may occur at multiple points of [0,1].

4. For each point of the domain at which some function f_j attains its global maximum, every other function f_i (i ≠ j) attains its global minimum at that same point. Correspondingly, each f_i has at least n−1 points at which it attains its global minimum, if each of the n functions has a single point at which it attains its global maximum (the n−1 points corresponding to the distinct points at which the n−1 other functions f_j ≠ f_i attain their global maxima).

5. The maximum of each function is set by the minima of the others: max f_i = 1 − Σ_{j≠i} min f_j.

6. Whenever the minimum values are set so as to sum to one, every function becomes a flat line (min = max for each function).

7. This should be possible for every combination of minimum values in [0,1] that sums to at most one: one should be able to set the min of any function to any value in [0,1], so long as the joint sum of minima does not exceed one. Likewise, presumably, if any such function is set so that min = max, all of them are, and their minima sum to one. The functions should not have to change form to make this work: for all admissible min/max range values and all v in the common domain [0,1], the values Σ_i f_i(v) sum to one.

8. Most importantly, the construction must also allow the nontrivial case where the minima do not sum to one (so the functions are not all flat lines), yet the function values still sum to one at every point of the common domain [0,1]. The functions should then range between unequal minimum and maximum values in a continuous and smooth fashion (no gaps, steps or spikes). The sum of the minimum values may then be any positive value smaller than one; combinations fail only when a prescribed max would fall below the corresponding min (e.g. with 7 functions each of min ≈ 0.16…, the minima sum to more than one, forcing max f_i = 0 < 0.16… = min f_i), not because the pointwise sum fails to be one.

9. I was wondering whether for every n ≥ 3 (n = 3, 4, 5, 6, 7, …) there are sets of such functions that do this.

Given this, it must also be possible to modify the functions so that a function has minimum 0 only if its maximum is 0, without having to set the sum of the minima to 1; and likewise so that a function reaches a maximum of one only if its minimum is one, except where unavoidable for the same reason. What is most important is that if any function has min and max set to one, all the other functions sit at zero for every v in [0,1]. It must also be the case that if the maximum range value of one function f1 is larger than that of another function f2 in the set, then f1's minimum is larger than f2's minimum, and conversely.

To restate the core question: are there, for every N ≥ 3, sets of N **surjective**, uniformly continuous functions, each with domain [0,1] and range [0,1], which sum to one at every point of the domain: ∀v ∈ [0,1], Σ_{i=1}^{N} f_i(v) = 1? Moreover, are there arbitrarily many such sets for every N ≥ 3? By nontrivial I mean nonlinear (and presumably not quadratic) functions which do not sum to 1 merely because the algebraic sum cancels to the constant 1, as with x and 1−x (error-correcting functions), and not merely because the terms happen to line up on the domain [0,1]; rather, the sum should be one by the nature of the functions (presumably of their derivatives), ideally even if the functions are weighted.

The maximum and minimum points should coincide as follows: the element of the domain at which f_i takes the value 1 is an element at which the other n−1 functions take the value 0, and these n maximum points (one per function, for n ≥ 3) are distinct elements of the domain. So for n = 3 there is one maximum point for each function, and presumably n−1 = 2 minimum points: there are three distinct domain points c ≠ c1 ≠ c2 in [0,1] with (f1,f2,f3)(c) = <1,0,0>, (f1,f2,f3)(c1) = <0,1,0>, (f1,f2,f3)(c2) = <0,0,1>. The functions should be continuous (no gaps) and uniformly continuous (no spikes), and surjective: for every r ∈ [0,1] there is at least one c ∈ [0,1] with f_i(c) = r.

One could not make use of such functions, if one wanted to weight them otherwise, when their sums either (a) cancel out to a constant, or (b) do not cancel to a constant but just happen to line up because the domain is [0,1]. Likewise the functions should have a similar form: one does not want one of them to have two maxima while the other two have one maximum and two minima. Perhaps Bernstein polynomials could be so weighted, but I do not know; the linear forms cancel out, but their weighted Bézier forms seem to be a little unstable from what I have read.
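One candidate construction for the core (min = 0, max = 1) case, offered as a sketch rather than a verified answer to all the conditions: normalized squared Lagrange bumps. For any n distinct peak points c_1, …, c_n in [0,1], these are smooth rational functions that sum to one everywhere, with f_i(c_i) = 1 and f_i(c_j) = 0 for j ≠ i, and they are not of the error-correcting x, 1−x type:

```python
import numpy as np

def bump_partition(centers):
    """Normalized squared-Lagrange bumps: smooth f_i : [0,1] -> [0,1] with
    sum_i f_i(x) = 1 everywhere, f_i(c_i) = 1 and f_i(c_j) = 0 for j != i.
    The denominator never vanishes because the centers are distinct."""
    centers = np.asarray(centers, dtype=float)

    def term(k, x):
        others = np.delete(centers, k)
        return np.prod([(x - c)**2 for c in others], axis=0)

    def make(i):
        def f(x):
            terms = [term(k, x) for k in range(len(centers))]
            return terms[i] / np.sum(terms, axis=0)
        return f

    return [make(i) for i in range(len(centers))]

fs = bump_partition([0.0, 0.5, 1.0])           # n = 3, peaks at 0, 1/2, 1
grid = np.linspace(0, 1, 101)
print(np.allclose(sum(f(grid) for f in fs), 1))  # True: pointwise sum is 1
print(fs[0](0.0), fs[1](0.0), fs[2](0.0))        # 1.0 0.0 0.0
```

By continuity and the intermediate value theorem each f_i is surjective onto [0,1]. Whether the construction can be rescaled to arbitrary (min_i, max_i) profiles satisfying condition 5 is a further question I have not checked; the affine rescaling f_i → min_i + (max_i − Σ_{j≠i} min_j adjustments) would need verification.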

Dear professors:

Good afternoon. I am researching the teaching of the triangle inequality. Are there papers about theorem production (or formulation) by secondary-level students? Which theoretical framework in mathematics education may be suitable for studying the conditions for constructing a triangle from three segments?

Best regards from Peru!

Luis

It is possible to write a set of quaternionic partial differential equations that are similar to Maxwell equations. For example:

The quaternionic nabla ∇ acts like a multiplying operator. The (partial) differential ∇ ψ represents the full first order change of field ψ.

ϕ = ∇ ψ = ϕᵣ + 𝟇 = (∇ᵣ + 𝞩 ) (ψᵣ + 𝟁) = ∇ᵣ ψᵣ − ⟨𝞩,𝟁⟩ + ∇ᵣ 𝟁 + 𝞩 ψᵣ ±𝞩 × 𝟁

The terms at the right side show the components that constitute the full first order change.

They represent subfields of field ϕ and often they get special names and symbols.

𝞩 ψᵣ is the gradient of ψᵣ

⟨𝞩,𝟁⟩ is the divergence of 𝟁.

𝞩 × 𝟁 is the curl of 𝟁

The equation is a quaternionic first order partial differential equation.

ϕᵣ = ∇ᵣ ψᵣ − ⟨𝞩,𝟁⟩ (This is not part of Maxwell equations!)

𝟇 = ∇ᵣ 𝟁 + 𝞩 ψᵣ ±𝞩 × 𝟁

𝜠 = −∇ᵣ 𝟁 − 𝞩 ψᵣ

𝜝 = 𝞩 × 𝟁

From the above formulas follows that the Maxwell equations do not form a complete set.

Physicists use gauge equations to make Maxwell equations more complete.

χ = ∇* ∇ ψ = (∇ᵣ − 𝞩 )(∇ᵣ + 𝞩 ) (ψᵣ + 𝟁) = (∇ᵣ ∇ᵣ + ⟨𝞩,𝞩⟩) ψ

and

ζ = (∇ᵣ ∇ᵣ − ⟨𝞩,𝞩⟩) ψ

are quaternionic second order partial differential equations.

χ = ∇* ϕ

and

ϕ = ∇ ψ

split the first second order partial differential equation into two first order partial differential equations.

The other second order partial differential equation cannot be split into two quaternionic first order partial differential equations. This equation offers waves as part of its set of solutions. For that reason it is also called a wave equation.

In odd numbers of participating dimensions, both second order partial differential equations offer shape-keeping fronts as part of their sets of solutions.

After integration over a sufficient period the spherical shape keeping front results in the Green’s function of the field under spherical conditions.

𝔔 = (∇ᵣ ∇ᵣ − ⟨𝞩,𝞩⟩) is equivalent to d'Alembert's operator.

⊡ = ∇* ∇ = ∇ ∇* = (∇ᵣ ∇ᵣ + ⟨𝞩,𝞩⟩) describes the variance of the subject

Maxwell equations must be extended by gauge equations in order to derive the second order partial wave equation.

Maxwell equations use coordinate time, where quaternionic differential equations use proper time. In terms of quaternions the norm of the quaternion plays the role of coordinate time. These time values are not used in their absolute versions. Thus, only time intervals are used.

The quaternionic nabla obeys some other pure mathematical relations:

⟨𝞩 × 𝞩, 𝟁⟩=0

𝞩 × (𝞩 × 𝟁) = 𝞩⟨𝞩,𝟁 ⟩ − ⟨𝞩,𝞩⟩ 𝟁

(𝞩𝞩) ψ = (𝞩 × 𝞩) ψ − ⟨𝞩,𝞩⟩ ψ = (𝞩 × 𝞩) 𝟁 − ⟨𝞩,𝞩⟩ ψ = 𝞩⟨𝞩,𝟁 ⟩ − 2 ⟨𝞩,𝞩⟩ ψ + ⟨𝞩,𝞩⟩ ψᵣ

The term (𝞩 × 𝞩) ψ indicates the curvature of field ψ.

The term ⟨𝞩,𝞩⟩ ψ indicates the stress of the field ψ.

(𝞩 × 𝞩) ψ + ⟨𝞩,𝞩⟩ ψ = 𝞩⟨𝞩,𝟁 ⟩ − ⟨𝞩,𝞩⟩ ψᵣ

**Einstein's equations for general relativity use the curvature tensor and the stress tensor. It is shown above that some terms of the partial differential equations relate to terms in Einstein's equations.**

The advantage of writing the equations with nabla-based operators instead of tensors is that these PDEs are more compact and therefore more comprehensible. The disadvantage is that the quaternionic PDEs force you to work in a Euclidean space-progression structure instead of a spacetime structure with a Minkowski signature.

Personally, I consider the Euclidean structure an advantage, but the Minkowski signature is more in concordance with mainstream physics.
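The pure vector-calculus identities invoked above (divergence of a curl vanishes; curl of curl equals grad-div minus the vector Laplacian) can be checked symbolically; a sketch with sympy on an arbitrary sample field (the field psi below is an assumption chosen for illustration):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
V = (x, y, z)
psi = sp.Matrix([x**2 * y, x * sp.sin(z), y * z**2])  # sample vector field

grad = lambda f: sp.Matrix([sp.diff(f, v) for v in V])
div = lambda F: sum(sp.diff(F[i], V[i]) for i in range(3))
curl = lambda F: sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                            sp.diff(F[0], z) - sp.diff(F[2], x),
                            sp.diff(F[1], x) - sp.diff(F[0], y)])
vec_laplacian = lambda F: sp.Matrix([div(grad(F[i])) for i in range(3)])

# curl(curl psi) = grad(div psi) - laplacian(psi)
lhs = curl(curl(psi))
rhs = grad(div(psi)) - vec_laplacian(psi)
print(sp.simplify(lhs - rhs))        # zero vector
print(sp.simplify(div(curl(psi))))   # 0: divergence of a curl vanishes
```

These are exactly the relations written above as ⟨𝞩 × 𝞩, 𝟁⟩ = 0 and 𝞩 × (𝞩 × 𝟁) = 𝞩⟨𝞩,𝟁⟩ − ⟨𝞩,𝞩⟩𝟁, restricted here to the imaginary (vector) part.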

I have a triangle mesh. I calculate the normals of the triangles, then calculate the vertex normals and do some calculations on them, and I want to recover vertex coordinates from these modified vertex normals afterwards.
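Recovering coordinates from normals alone is underdetermined (normals are unchanged by translation and scaling along the surface), so in practice one solves a least-squares or Poisson-type reconstruction problem. The forward step the question starts from, face normals to vertex normals, can be sketched as follows (the area-weighted averaging convention is an assumption; other weightings exist):

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Vertex normals as the area-weighted average of incident face normals:
    each face adds its (unnormalized) cross-product normal, whose length is
    twice the triangle area, to its three vertices."""
    normals = np.zeros_like(vertices, dtype=float)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    face_n = np.cross(v1 - v0, v2 - v0)
    for i in range(3):
        np.add.at(normals, faces[:, i], face_n)
    length = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.clip(length, 1e-12, None)

# flat unit square split into two triangles: every vertex normal is (0, 0, 1)
verts = np.array([[0., 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
faces = np.array([[0, 1, 2], [0, 2, 3]])
print(vertex_normals(verts, faces))
```

For the inverse direction, keywords worth searching are "normal integration" and "surface reconstruction from normals".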

In this figure of concentric circles, the circumference of A is less than the circumference of B, and so on; the distance between neighbouring circles is the same, s.

circA < circB < circC < circD < circE

When XA finishes moving round circumference A, it moves (transits) to the next circle, B, to help XB complete moving round that circumference. When XA and XB complete the movement round circumference B, they make a transition to circumference C and help XC complete the movement round it, and so on.

The question is this: how can this be presented mathematically (arithmetically)?
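One possible arithmetic formalization, under assumptions the post does not fix (unit speed, and the k movers on circle k splitting its remaining arc equally): circle k has radius r1 + (k−1)s, hence circumference C_k = 2π(r1 + (k−1)s), and the time spent on circle k is C_k / k, giving a total time Σ_{k=1}^{n} C_k / k:

```python
import math

# Assumed model: circle k (k = 1..n) has circumference 2*pi*(r1 + (k-1)*s);
# by the time it is traversed, k movers share it equally at unit speed.
def total_time(r1, s, n):
    return sum(2 * math.pi * (r1 + (k - 1) * s) / k for k in range(1, n + 1))

print(total_time(1.0, 1.0, 5))
```

This is only one reading of the informal description; if the movers instead join partway through a circuit, the bookkeeping changes, but the same summation template applies.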

I am interested in prime number generation. Apart from the 2^p − 1 formula given so many years ago by the French mathematician (Mersenne), are there known formulas for determining the next prime?

Suppose $u(n)$ is the Lie algebra of the unitary group $U(n)$. Why can the dual vector space of $u(n)$ be identified with $\sqrt{-1}u(n)$?
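Regarding the next-prime question above: no practical closed-form "next prime" formula is known; in practice, libraries search forward from a starting point using fast primality tests. A small sympy illustration:

```python
from sympy import isprime, nextprime

# nextprime searches candidates above the argument with strong primality
# tests -- effective in practice, though not a closed-form formula.
p = nextprime(10**6)
print(p)  # 1000003, the first prime after one million
```

Known "prime formulas" (Mills' constant, Wilson-theorem constructions) exist but are computationally useless, which is why searching plus testing remains the standard approach.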

Can any mathematical expert look at my attachment? I have highlighted a few mathematical symbols. Can anyone tell me what those symbols signify and how to understand them?

As we know, an elliptic curve defined over Fq with a rational 2-torsion subgroup can be expressed in the special form (up to twists). Accordingly a natural question arises about the number of distinct (up to isomorphism) elliptic curves over Fq in the family.

Let HT denote the statement of Hindman's theorem. Within RCA₀ one can prove that:

1. HT implies ACA₀.

2. HT can be proved in ACA₀⁺.

An open question is the exact strength of Hindman's theorem: is HT equivalent to ACA₀⁺, or to ACA₀, or does it lie strictly between them?

Let f(x)+g(x) = h(x), where h(x) attains its minimum at the points (a1, a2, ..., ak). Under which conditions can we say that f(x) also attains its minimum at the points (a1, a2, ..., ak)?

Thanks in advance for your ideas, and please give any references.

Let $p(.)$ be an equivalent norm to the usual norm on $\ell_1$ such that

$$\limsup\limits_{n\to\infty} p(x_n+x)=\limsup\limits_{n\to\infty}p(x_n)+p(x)$$ for every $w^*-$null sequence $(x_n)$ and for all $x\in\ell_1,$ moreover, let $$\rho_{k}(x)=p(x)+\lambda\gamma_{k}\sum\limits_{n=k}^{\infty}|x_n|,$$ where, $(\gamma_{k})$ be any non-decreasing sequence in $(0,1)$ and $\lambda >0$. I'd like to prove for every $w^*-$null sequence $(x_n)$ and for all $x\in\ell_1,$

$\limsup\limits_{n\to\infty}\rho_k(x_n+x)=\limsup\limits_{n\to\infty}\rho_k(x_n)+\rho_k(x)$.

**My attempt is the following**

\begin{align}

\limsup\limits_{n\to\infty}\rho_k(x_n+x)

=\limsup\limits_{n\to\infty} p(x_n+x) +\limsup\limits_{n\to\infty}\lambda\gamma_{k}\sum\limits_{n=k}^{\infty}|x_n+x| \\

=\limsup\limits_{n\to\infty}p(x_n)+p(x) +\limsup\limits_{n\to\infty}\lambda\gamma_{k}\sum\limits_{n=k}^{\infty}|x_n+x|\\

\end{align}

From here I could not proceed with the proof; any ideas or hints would be greatly appreciated.

Thanks in advance

Every natural number n can be written as:

n = a_0 + a_1*(10)^1 + a_2*(10)^2 + ..., with each a_i between 0 and 9. Can we generate a new method for finding the divisors of n, apart from the well-known method of prime factorization? If so, we could also provide a new method to calculate the sum-of-divisors function.
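For comparison, the well-known baseline the question hopes to improve on can be made concrete. Below is a plain trial-division sketch (in Python, purely for illustration) of the sum-of-divisors function:

```python
def sigma(n: int) -> int:
    """Sum of all positive divisors of n, by trial division up to sqrt(n)."""
    total = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            total += d            # d is a divisor...
            if d != n // d:
                total += n // d   # ...and so is its cofactor n/d
        d += 1
    return total

print(sigma(12))  # 28, since 1 + 2 + 3 + 4 + 6 + 12 = 28
```

Any digit-based method would have to beat this (or full prime factorization) to count as an improvement.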

In some cases, learners find it easier to deal with decimal fractions than with proper and improper fractions. The complex structure of fractions makes adding or subtracting them seem harder, sometimes almost impossible.

e.g.

0.5 + 3.3 = 3.8

1/2 + 33/10 = 38/10
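The two computations above agree, which can be checked exactly with Python's `fractions` module (an illustration only, not part of the original question):

```python
from fractions import Fraction

a = Fraction(1, 2)    # 1/2, i.e. 0.5
b = Fraction(33, 10)  # 33/10, i.e. 3.3
s = a + b             # Fraction arithmetic reduces automatically

print(s)         # 19/5, the reduced form of 38/10
print(float(s))  # 3.8, matching the decimal computation
```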

A mathematical colleague and I are working on an article that uses pure mathematical analysis for equilibrium in equity crowdfunding. We were inspired by the model of consumer-product (brand) preference to build a mathematical model of investor-project preference on a crowdfunding platform. A common point between consumer-product and investor-project relations is that an agent has to choose among different options with optimal efficiency.

We recently presented a first draft at a conference. Non-mathematician researchers had difficulty following and understanding our paper. How can we make such mathematical reasoning more understandable for non-mathematicians? Do you know of any article we could use as a model? What is your advice?

By giving evidence or a reference: who first discovered the base of the natural logarithm, *e*?

A well-known result of B. M. Levitan and T. V. Avadhani asserts that the Riesz summability of order k of the eigenfunction expansion of f(P) from L^2(D) at a point P = P_o in D depends only on the behaviour of f(P) in a neighbourhood of P_o if k > (n-1)/2, i.e. it is a local property of f(P) at the considered point P_o if k > (n-1)/2. Is it possible to prove (applying Parseval's formula) the analogue of Avadhani's theorem for Avakumović's G-method of summability? A crucial step in the proof of this theorem is to find a function g that leads us to the core of Avakumović's summability, which is more complex than the core of Riesz's summability. Since it is difficult to write mathematical formulae here, please consider the attached file.

We define a factoriangular number Ft_n as the sum of a factorial and its corresponding triangular number, that is, Ft_n = n! + n(n+1)/2. If both n and m are natural numbers greater than or equal to 4, is there an Ft_n that is a divisor of Ft_m? Please also see the article provided in the link below, specifically Conjecture 2 on pp. 8-9.

My function is nonlinear with respect to a scalar \alpha.

However, the calculation of the objective function is very time-consuming, which makes the optimization very time-consuming as well. Also, I have to do it for 1/2 million voxels (the 3D equivalent of pixels). I plan to do it using “lsqnonlin” of MATLAB.

Rather than optimizing over all possible real values, I plan to search over 60 preselected values. My variable \alpha (or flip-angle error) **could be anything between 0 and 35%**, but I want to pass only **linearly spaced points as candidates (i.e. 0:005:0.35)**. In other words, I want lsqnonlin to choose a possible solution only from (0:005:0.35). Since I can pre-calculate the objective values for these candidates, it would be very fast. In other words, I need to restrict the search space. Here I am talking about a single voxel, though I perform lsqnonlin over multiple voxels, with the corresponding \alpha values mapped to a column vector.

I cannot do a plain grid search over the preselected values because I plan to perform spatial smoothing in 3D. Some guidance would be highly appreciated.
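The restricted-candidate idea itself (evaluate the objective only on the preselected grid and take the per-voxel argmin) can be sketched as follows. This is Python/NumPy rather than MATLAB, with a hypothetical stand-in `objective` since the real signal model is not given, and it deliberately ignores the 3D-smoothing coupling that rules out a pure grid search in the question:

```python
import numpy as np

def objective(alpha, measured):
    """Hypothetical stand-in for the expensive per-voxel objective:
    squared residual between a placeholder signal model and the data."""
    predicted = np.cos(alpha)  # placeholder model, NOT the real one
    return (predicted - measured) ** 2

candidates = np.linspace(0.0, 0.35, 60)  # preselected linearly spaced alphas

def fit_voxels(measurements):
    """For every voxel, pick the candidate alpha minimizing the objective."""
    # Broadcast: rows = candidates, columns = voxels; evaluate all at once.
    costs = objective(candidates[:, None], measurements[None, :])
    return candidates[np.argmin(costs, axis=0)]

# Synthetic "measurements" generated from known alphas 0.0, 0.1, 0.3.
data = np.cos(np.array([0.0, 0.1, 0.3]))
print(fit_voxels(data))  # nearest grid points to 0.0, 0.1, 0.3
```

Since the objective is precomputed only at the candidates, this is effectively the restricted search described; whether it can replace lsqnonlin depends on how the smoothing step couples neighbouring voxels.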

Regards, Dushyant

For example,

{2},

{3,5,7},

{11,13},

{17,19}, etc.

A second interesting question: if any such patterns terminate at some level, does the cardinality of such sets follow any pattern?

It is believed that there is a bijection between the Infinite Natural Number Set and the Infinite Rational Number Set, but the following simple story tells us that the Infinite Rational Number Set has far more elements than the Infinite Natural Number Set:

The elements of a tiny portion of rational numbers from the Infinite Rational Number Set (the subset: 0, 1, 1/2, 1/3, 1/4, 1/5, 1/6, …, 1/n, …) map onto and use up (bijectively) all the numbers in the Infinite Natural Number Set (0, 1, 2, 3, 4, 5, 6, …, n, …); so infinitely many rational numbers (at least 2, 3, 4, 5, 6, …, n, …) from the Infinite Rational Number Set are left out of the one-to-one element mapping between the Infinite Rational Number Set and the Infinite Natural Number Set (not the integer set) ------- the Infinite Rational Number Set has infinitely more elements than the Infinite Natural Number Set.

This is the truth of a one-to-one correspondence operation and its result between two infinite sets: the Infinite Rational Number Set and the Infinite Natural Number Set. This is a matter purely of the elements' quantity in the two infinite sets, and it can have nothing to do with the terms “**proper subset**, **CARDINAL NUMBER, DENUMERABLE or INDENUMERABLE**”. Can we have many different bijection operations (proofs) with different one-to-one corresponding results between two infinite sets? If we can, which operation and conclusion should people choose when facing two opposite results, and why?

Such a question needs to be thought about deeply: there are indeed all kinds of different infinite sets in mathematics, but what on earth makes infinite sets different?

There is only one answer: the unique elements contained in different infinite sets ------- the characteristics of their special properties, special conditions of existence, special forms, special relationships, as well as their very special quantitative meaning! However, studies have shown that, due to the lack of a whole “carriers’ theory” in the foundation of the present classical infinite theory, it is impossible for mathematicians to study and cognize those unique characteristics of elements operationally and theoretically in present classical set theory. So, it is impossible to carry out effectively the quantitative cognitions of the elements in various different infinite sets scientifically ------- hence a newly constructed Quantum Mathematics.

The article 《On the Quantitative Cognitions to “Infinite Things” (IX) ------- "The Infinite Carrier Gene”, "The Infinite Carrier Measure" And "Quantum Mathematics”》 has been uploaded onto RG, introducing the working ideas. https://www.researchgate.net/publication/344722827_On_the_Quantitative_Cognitions_to_Infinite_Things_IX_---------_The_Infinite_Carrier_Gene_The_Infinite_Carrier_Measure_And_Quantum_Mathematics

Dear RG friends:

In two weeks' time, I am all set to conduct a technical session on Analysis.

I plan to deliver a long lecture on "Fixed Point Theorems". Of course, the Banach fixed point theorem is useful for establishing the local existence and uniqueness of solutions of ODEs, and contraction mapping ideas are also useful for developing simple numerical methods for solving nonlinear equations. Are there any other interesting science / engineering applications?
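As a concrete classroom illustration of the contraction-mapping point, fixed-point iteration on x ↦ cos x (a contraction on [0, 1], since |−sin x| ≤ sin 1 < 1 there) converges to the unique solution of x = cos x:

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=1000):
    """Banach-style fixed-point iteration: x_{k+1} = g(x_k)."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

root = fixed_point(math.cos, 0.5)
print(root)  # ~0.739085, the unique solution of x = cos x
```

Picard iteration for ODEs and value iteration in dynamic programming rest on the same contraction argument.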

Kindly let me know! Thank you for the kind help.

With best wishes,

Sundar

Let q be an odd positive integer, and let N_q denote the number of integers a such that 0 < a < q/4 and gcd(a, q) = 1. How do I see that N_q is odd if and only if q is of the form p^k, with k a positive integer and p a prime congruent to 5 or 7 modulo 8?

As the title suggests, how do I see that for any n, the covering map S^{2n} → RP^{2n} induces 0 in integral homology and cohomology, except in dimension 0?
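The parity claim about N_q can be sanity-checked by brute force before attempting a proof; a small verification sketch (not a proof) over all odd q below 200:

```python
from math import gcd

def N(q):
    """Count integers a with 0 < a < q/4 and gcd(a, q) = 1."""
    return sum(1 for a in range(1, q) if 4 * a < q and gcd(a, q) == 1)

def is_special_prime_power(q):
    """True iff q = p**k with p prime and p congruent to 5 or 7 mod 8."""
    p = next(d for d in range(2, q + 1) if q % d == 0)  # smallest prime factor
    while q % p == 0:
        q //= p
    return q == 1 and p % 8 in (5, 7)

for q in range(3, 200, 2):
    assert (N(q) % 2 == 1) == is_special_prime_power(q), q
print("parity claim holds for all odd q < 200")
```

For example, N_13 = 3 (a = 1, 2, 3) is odd and 13 ≡ 5 (mod 8), while N_15 = 2 (a = 1, 2) is even and 15 is not a prime power.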

Say a definition is self-referential provided that it contains either an occurrence of the defined object or a set containing it. For instance,

Example 1) n := (n∈ℕ)⋀(n = n⁴)⋀(n > 0)

This is a definition of the positive integer 1, and it is self-referential because it contains occurrences of the defined object, denoted by n.

Example 2) Def := "The member of ℕ which is the smallest odd prime."

Def is a self-referential definition, because it contains an occurrence of the set ℕ, which contains the defined object.

Now, let us consider the following definition.

Def := "The set K of all non-self-referential definitions."

If Def is not a self-referential definition, then it belongs to K and hence is self-referential. By contrast, if Def is self-referential, it does not belong to K and therefore is non-self-referential. Can you resolve this paradox?

Take into account that non-self-referential definitions are widely used in math.

Two numbers a and b are elements of the set of real numbers but not of the set of rational numbers; they are irrational. Are there cases where a times b, or a divided by b, yields a member of the set of integers? How rare or commonplace is it for products of irrational numbers to yield rational values?
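Such cases certainly exist; for instance, with square roots of positive integers the product question reduces to perfect squares, since √m · √n = √(mn) is an integer exactly when mn is a perfect square (and similarly for quotients, √m / √n = √(m/n)). A small exact-arithmetic check, avoiding floating point:

```python
from math import isqrt

def surd_product_is_integer(m: int, n: int) -> bool:
    """Is sqrt(m) * sqrt(n) an integer, for positive integers m and n?"""
    r = isqrt(m * n)          # exact integer square root
    return r * r == m * n     # integer iff m*n is a perfect square

print(surd_product_is_integer(2, 8))   # True:  sqrt(2) * sqrt(8) = 4
print(surd_product_is_integer(2, 3))   # False: sqrt(6) is irrational
```

So √2 · √8 = 4 even though both factors are irrational.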

I was working on 2 papers on statistics when I recalled a study I’d read some time ago: “On ‘Rethinking Rigor in Calculus...,’ or Why We Don't Do Calculus on the Rational Numbers’”. The answer is obviously trivial, and the paper was really in response to another suggesting that we eliminate certain theorems and their proofs from elementary collegiate calculus courses. But I started to wonder (initially just as a thought exercise) whether one could “do calculus” on the rationals and if so could the benefits outweigh the restrictions? Measure theory already allows us to construct countably infinite sample spaces. However, many researchers who regularly use statistics haven’t even taken undergraduate probability courses, let alone courses on or that include rigorous probability. Also, even students like engineers who take several calculus courses frequently don’t really understand the real number line because they’ve never taken a course in real analysis.

The rationals are the only set we learn about early on that have so many of the properties the reals do, and in particular that of infinite density. So, for example, textbook examples of why integration isn’t appropriate for pdfs of countably infinite sets typically use examples like the binomial or Bernoulli distributions, but such examples are clearly discrete. Other objections to defining the rationals to be continuous include:

1) The irrational numbers were discovered over 2,000 years ago, and attempts to make calculus rigorous since then have (almost) always taken as desirable the inclusion of numbers like pi or sqrt(2). Yet we know from measure theory that the line between discrete and continuous can be fuzzy and that we can construct abstract probability spaces that handle both countable and uncountable sets.

2) We already have a perfectly good way to deal with countably infinite sets using measure theory (not to mention both discrete calculus and discretized calculus). But the majority of those who regularly use statistics and therefore probability aren’t familiar with measure theory.

The third and most important reason is actually the question I’m asking: nobody has bothered to rigorously define the rationals to be continuous to allow a more limited application of differential and integral calculi because there are so many applications which require the reals and (as noted) we already have superior ways for dealing with any arbitrary set.

Yet most of the reasons we can't, e.g., integrate over the rationals in the interval [0,1] have to do with the intuitive notion that it contains "gaps" where we know irrational numbers exist, even though the rationals are infinitely dense. It is, in fact, possible to construct functions that are continuous on the rationals and discontinuous on the reals. Moreover, we frequently use statistical methods that assume continuity even though the outcomes can't ever be irrational-valued. Further, the Riemann integral is defined in elementary calculus, and often elsewhere, via an integer-indexed and thus countable set of summed "terms" (i.e., a function that is Riemann integrable over the interval [a,b] is integrated by a summation from i=1 to infinity of f(x*_i)Δx; whatever values the function may take, by definition the terms/partitions are indexed by integer values of i). As for the gaps, work since Cantor in particular (e.g., the Cantor set) has demonstrated how the rationals "fill" the entire unit interval, such that one can, e.g., recursively remove infinitely many thirds from it summing to 1 yet be left with infinitely many remaining numbers. In addition to objections (mostly from philosophers) to whether even the reals are continuous, we know the real number line has "gaps" in some sense anyway; how many "gaps" depends on whether or not one thinks that, in addition to sqrt(-1), the number line should include hyperreals or other extensions of R1. Finally, in practice (or at least in application) we never deal with real numbers anyway (we can only approximate their values).

Another potential use is educational: students who take calculus (including multivariable calculus and differential equations) never gain an appreciable understanding of the reals because they never take courses in which these are constructed. Initial use of derivatives and integrals defined on the rationals and then the reals would at least make clear that there are extremely nuanced, conceptually difficult properties of the reals, even if these were never elucidated.

However, I’ve been sick recently and my head has been in a perpetual fog from cold medicines, so the time I have available to answer my own question is temporarily too short. I start thinking about e.g., the relevance of the differences between uncountable and countable sets, compact spaces and topological considerations, or that were we to assume there are no “gaps” where real numbers would be we'd encounter issues with e.g., least upper bounds, but I can't think clearly and I get nowhere: the medication induced fog won't clear. So I am trying to take the lazy, cowardly way out and ask somebody else to do my thinking for me rather than wait until I am not taking cough suppressants and similar meds.

I have come up with my own equation (I have no idea whether it is new) for pi: π = (√2/2) × n × √(1 − cos(dΘ)), where n is the number of triangles in the circle and dΘ is the angle of each triangle, which tends to 0. Now, say I put n = 1440; then dΘ = 360/1440 = 0.25, and putting this into the equation I get π = 3.141590118…. If I put n = 2880, then dΘ = 0.125 and I get π = 3.141591603…; if I put n = 5760, then dΘ = 0.0625 and I get π = 3.141592923…. We know π = 3.141592654…, but I can never really get n and dΘ to give exactly that answer. Can anyone come up with a good idea?
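For what it is worth, the formula is equivalent to the classical inscribed-polygon approximation n·sin(π/n), because √(1 − cos θ) = √2 · |sin(θ/2)|; that identity also explains why no finite n yields π exactly — only the limit n → ∞ does. A quick numerical check of the convergence:

```python
import math

def pi_approx(n: int) -> float:
    """(sqrt(2)/2) * n * sqrt(1 - cos(dTheta)) with dTheta = 360/n degrees."""
    d_theta = math.radians(360.0 / n)
    return (math.sqrt(2) / 2) * n * math.sqrt(1 - math.cos(d_theta))

for n in (1440, 2880, 5760):
    print(n, pi_approx(n), abs(math.pi - pi_approx(n)))
```

The error shrinks roughly like π³/(6n²), so doubling n cuts it by about a factor of 4.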

If a polynomial P(z) of degree n omits w in |z|<1, show that P(z)+(1-e^{ih})zP'(z)/n also omits w in |z|<1 for every real h. I know at least two proofs of this result; one follows by using Laguerre's theorem concerning the polar derivative of a polynomial. I want to find a direct proof of this result without using any known theorem.

If a_1, a_2, ..., a_n are given positive integers in strictly increasing order, what would be the best possible lower bound of |1 + z^{a_1} + z^{a_2} + ... + z^{a_n}| for |z| > 1?

A (single-variable) function is differentiable iff some other function is continuous. Can we get similar characterizations for (different kinds of) multifunctions? How can we express: a multifunction is X-differentiable iff some other (multi)function is continuous?

PARITY is about whether a unary predicate of a structure contains an even number of elements.

If a kind of logic can define PARITY, then there is a formula of this logic such that:

PARITY returns True on a structure iff this structure is a model of the formula.

We have known that logics with counting can easily define PARITY.

But what about logics without counting?

I am just wondering. For example, for the series 1+2+3+4+5+6+..., the general term is T_k = k, while the sum is S_n = n(n+1)/2. I do know there are rules to reach the summation in each case (for example, for the series of k^2, the summation is n(n+1)(2n+1)/6, etc.), so is there a more general way to convert the summation S_n to the term T_n, just like differentiation and integration in calculus?

Equation: $(e^{iaX}f)(x)=f(ax)$, where $X$ may be an unbounded operator and $a \in \mathbb{R}_{>0}$. I have found something but I am not convinced. The important point: the operator $X$ is not in terms of $a$, and the Hilbert space is $L^2(\mathbb{R}_{>0}, dx/x)$. Thank you in advance.
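Regarding the S_n-to-T_n question above: the discrete analogue of differentiation is the finite (backward) difference T_n = S_n − S_{n−1} (with S_0 = 0), which recovers the general term from any partial-sum formula; a minimal sketch:

```python
def term_from_partial_sum(S, n: int) -> int:
    """Recover T_n from a partial-sum function S via T_n = S(n) - S(n-1)."""
    return S(n) - S(n - 1)

S_linear  = lambda n: n * (n + 1) // 2                # 1 + 2 + ... + n
S_squares = lambda n: n * (n + 1) * (2 * n + 1) // 6  # 1^2 + 2^2 + ... + n^2

print(term_from_partial_sum(S_linear, 10))   # 10
print(term_from_partial_sum(S_squares, 10))  # 100
```

Unlike the calculus analogy, this is exact for every n ≥ 1, with no limit needed.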