Science topic

# Algebraic Geometry - Science topic

Explore the latest questions and answers in Algebraic Geometry, and find Algebraic Geometry experts.

Questions related to Algebraic Geometry

A fascinating question in theoretical physics is whether it is possible to extend Einstein's ideas beyond gravitation to all aspects of physics. The energy-momentum tensor is usually defined extrinsically over the space-time manifold. But could it rather be derived from the geometry alone? Likewise, our local subjective notion of time is given by a local orientation which need not be globally consistent, as in Gödel's famous model.

It has been proposed that space-time may have a foam- or sponge-like fine-grained structure (possibly involving extra dimensions) which explains energy, matter and the other fundamental forces in a Kaluza-Klein style. That is, "microlocally" the topology of the space-time manifold is highly complex, and there may even be a direct relationship between mass, energy and cohomology complexes in an appropriate derived category. At this fine scale there may even be non-local wormholes that connect distant regions of space-time and explain quantum entanglement.

But why not consider the universe as a Thom-Mather stratified space (one can think of this as a smooth version of analytic spaces or algebraic varieties) rather than a manifold? In this case "singularities" would be "natural" structures, not pathologies as in black holes. It is difficult not to think of matter (or localised energy) as corresponding to a singular region of this stratified space. Has this approach been considered in the literature?

Consider the powerful central role of differential equations in physics and applied mathematics.

In the theory of ordinary differential equations and in dynamical systems we generally consider smooth or C^k class solutions. In partial differential equations we consider far more general solutions, involving distributions and Sobolev spaces.

I was wondering: what are the best examples or arguments that show that restriction to the analytic case is insufficient?

What if we only consider ODEs with analytic coefficients and only consider analytic solutions, and likewise for PDEs? Here by "analytic" I mean real maps which can be extended to holomorphic ones. How would this affect the practical use of differential equations in physics and science in general? Is there an example of a differential equation arising in physics (excluding quantum theory!) which only has C^k or smooth solutions and no analytic ones?

It seems we could not even have an analytic version of the theory of distributions, as there could be no test functions: there are no non-zero analytic functions with compact support.

Is Newtonian physics analytic? Is Newton's law of gravitation only the first term in a Laurent expansion? Can we add terms to obtain a better fit to experimental data without going relativistic?

Maybe we can consider that the smooth category is used as a convenient approximation to the analytic category. The smooth category allows perfect locality. For instance, we can consider that a gravitational field dies off outside a finite radius.

Cosmologists usually consider space-time to be a manifold (although with possible "singularities"). Why a manifold rather than the adequate smooth analogue of an analytic space?

Space = regular points, Matter and Energy = singular points ?

Hi, I am not sure if I am understanding this correctly.

In the attached file, I do not understand how the red-marked values were obtained.

Shouldn't the projections of the unit vector's value be like what I wrote in purple?

Please give an explanation.


Attached is extracted from Electromagnetic Field Theory Fundamentals / Bhag Guru (Page 25)

The methods of zeta function regularization and Ramanujan summation assign the series a value of −1/12, which is expressed by a famous formula:

1 + 2 + 3 + 4 + 5 + 6 + . . . and so on to infinity is equal to . . . −1/12.

Is it correct?

Can you suggest some work related to this formula? Applications?
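For numerical experimentation, one can evaluate the analytic continuation of the zeta function directly and see the value −1/12 appear at s = −1; a minimal sketch in Python using Hasse's globally convergent series (the function name and the number of terms are illustrative choices, not a standard API):

```python
from math import comb

def zeta(s, terms=40):
    """Hasse's globally convergent series for the Riemann zeta function (s != 1)."""
    total = 0.0
    for n in range(terms):
        # n-th alternating binomial (forward-difference) sum
        inner = sum((-1)**k * comb(n, k) * (k + 1)**(-s) for k in range(n + 1))
        total += inner / 2**(n + 1)
    return total / (1 - 2**(1 - s))

print(zeta(-1))  # ≈ -1/12 = -0.0833...
```

For s = −1 the inner forward differences vanish exactly for n ≥ 2, so the series terminates and returns −1/12; for s > 1 it agrees with the ordinary Dirichlet series, e.g. zeta(2) ≈ π²/6.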

What are the current trends in Commutative Algebra and its interactions with Algebraic Geometry? What are the important articles that one should read for starting to research in this area?

I know that 3 vectors x, y, z in **R**^n, where the angles between them are 120°, are coplanar.

I am interested in studying the curves on a projective surface in P^3 and came across Mumford's work "Lectures on Curves on an Algebraic Surface". Hence, I would like to explore more of the same. Specifically, the linear equivalence of curves on the surface is interesting, so any direction for this would be helpful.

The pitch values in music and the guitar tablature are connected by a *set–group adjunction* of tangent-cotangent bundles. *Tuning* g is the tangent gradient to the flow of pitch on the guitar; it determines the directional derivative at every point in tablature. *Intonation* f is cotangent; it connects every point on the guitar to a pitch. Tuning g: set → group, which might be 0 5 5 5 4 5 (standard tuning), is a left adjoint pullback vector used by the guitarist as an algorithm to construct tablature by the principle of least action. The right adjoint f: group → set, respectively 0 5 10 15 19 24, is a *forgetful* vector transforming fret-number vectors to the codomain pitch-number vectors by intonation at a specific pitch level. When the tablature is played, the frequency spectrum observed seems to forget the tablature group, but it can be proven that an efficient Kolmogorov *algorithm for learning the tuning* exists.

The symmetry of (0 5 5 5 4 5) and (0 5 10 15 19 24) is obvious. The second vector is just the cumulative sum of the first. The first vector gives the intervals between strings; it points in the direction of steepest pitch ascent. The second vector gives the pitch values of the open strings; when added to the fret vector, 0 5 10 15 19 24 gives the pitch vector.
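The arithmetic relation between the two vectors described above (the second is the cumulative sum of the first, and adding a fret vector to it gives the pitch vector) can be sketched in a few lines of Python; the G-major fret vector below is an illustrative example, not taken from the question:

```python
from itertools import accumulate

tuning = [0, 5, 5, 5, 4, 5]              # intervals between adjacent open strings
open_pitches = list(accumulate(tuning))  # cumulative sum: [0, 5, 10, 15, 19, 24]

frets = [3, 2, 0, 0, 0, 3]               # a G-major chord in tablature (assumed example)
pitches = [o + f for o, f in zip(open_pitches, frets)]
print(open_pitches)  # [0, 5, 10, 15, 19, 24]
print(pitches)       # [3, 7, 10, 15, 19, 27]
```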

The tablature is pitch-free and the music is tablature-free. These vectors form a Jacobian matrix on the transformation.

I want to know if a mathematician can see the tangent-cotangent relation of these two vectors. If not, then what is required to convince?

Does it help to know that addition and multiplication are the same? That the vectors are open subsets of the octave intervals? Do I need to prove a partition exists, or is it obvious?

Is the tensor notation clear?

Is there any mathematician out there that can say something useful about tablature?

I want to know about an algorithm or formula to find an asymptote from coordinates obtained from machine learning. The ML will always give points closer and closer to the asymptote if I run it, but it won't ever reach the asymptote value. The normal methods were created for humans, like taking a limit as a quantity tends to zero, or graphing; there is no obvious algorithmic way for a computer to compute an asymptote. If anybody knows about this, please give me a direction on this topic.
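One common computational approach, assuming a horizontal asymptote approached roughly like 1/x, is to fit the model y = a + b/x by least squares and read off the intercept a as the asymptote estimate; a minimal stdlib-only sketch (the model choice and the function name are assumptions, not a standard algorithm):

```python
def estimate_horizontal_asymptote(xs, ys):
    """Least-squares fit of y = a + b/x; the intercept a estimates the asymptote."""
    us = [1.0 / x for x in xs]               # substitute u = 1/x, then fit y = a + b*u
    n = len(xs)
    mu, my = sum(us) / n, sum(ys) / n
    b = (sum((u - mu) * (y - my) for u, y in zip(us, ys))
         / sum((u - mu) ** 2 for u in us))
    return my - b * mu                       # intercept a

# points sampled from y = 3 + 5/x approach the asymptote y = 3
pts_x = [1, 2, 4, 8, 16, 32]
pts_y = [3 + 5 / x for x in pts_x]
print(estimate_horizontal_asymptote(pts_x, pts_y))  # ≈ 3.0
```

If the approach rate is unknown, one can try several decay models (1/x, 1/x², e^−x) and keep the best-fitting one.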

I have two points in an (x,y) coordinate system; the only information I have is the distance of these points from the origin and the distance between the two points. I need to find the coordinates of these two points. As these points form a triangle with the origin, I use the properties of triangles to find these coordinates. Using the side lengths I can find the three interior angles, but I still have not been able to find the coordinates. The attached pictures can help to visualize the problem.
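Note that distances alone determine the configuration only up to a rotation and reflection about the origin, so one point can be placed on the x-axis by convention; the other then follows from the law of cosines. A minimal sketch with a hypothetical function name:

```python
from math import acos, cos, sin, hypot

def place_points(r1, r2, d):
    """Place A and B so |OA| = r1, |OB| = r2, |AB| = d.
    The solution is unique up to rotation/reflection about the origin O."""
    # angle AOB from the law of cosines: d^2 = r1^2 + r2^2 - 2*r1*r2*cos(theta)
    theta = acos((r1**2 + r2**2 - d**2) / (2 * r1 * r2))
    A = (r1, 0.0)                       # put A on the positive x-axis by convention
    B = (r2 * cos(theta), r2 * sin(theta))
    return A, B

A, B = place_points(5, 5, 6)
print(A, B)  # |OA| = |OB| = 5 and |AB| = 6 by construction
```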

The function f(x,y) also contains other constants.

How can one find the arrival angles αb, βb?

We know the values of two sides b, c of the triangle and the angles αa, βa between them. The angles α and β represent the azimuth and elevation angles. I do not understand how to apply the law of sines and cosines (or any other method) to this 3D problem.

(Kindly think through your own idea before looking at the details file.)

**1.** How to find the distance between P0 and P1 in this 3-dimensional ellipsoid?

**2.** Is either of the methods 1 or 2 correct?

**3.** How to find the value of D from the equation of the ellipsoid? In the case of a 2D ellipse, the distance c from the center to each focus satisfies c^2 = a^2 - b^2, where a and b are the semi-major and semi-minor axes; the distance D between the two foci is then 2c.

Given a 3D ellipsoid with semi-major axis a = 5 (along the x axis) and semi-minor axes b = 4, c = 3 (along the y and z axes), how can we find the distance D between the two focal points (P0 and P2)? Also, how to find the distance from focal point P0 to P1? And will the distance from P0 to P1 plus P1 to P2 be equal to 2a, as in an ellipse?
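For the 2D fact invoked in question 3, a short numerical check: with semi-axes 5 and 4 (here taking the x-y section of the ellipsoid; the name c below is the focal distance, not the third semi-axis), the center-to-focus distance is sqrt(a² − b²) and any point on the ellipse has distance sum 2a to the foci:

```python
from math import sqrt, hypot, cos, sin

a, b = 5.0, 4.0
c = sqrt(a**2 - b**2)              # center-to-focus distance; the foci are 2c apart
F1, F2 = (-c, 0.0), (c, 0.0)

# any point P on the ellipse satisfies |PF1| + |PF2| = 2a
t = 0.7
P = (a * cos(t), b * sin(t))
total = (hypot(P[0] - F1[0], P[1] - F1[1])
         + hypot(P[0] - F2[0], P[1] - F2[1]))
print(c, total)                    # c = 3.0, total ≈ 2a = 10
```

For a triaxial 3D ellipsoid there is no single pair of foci with this property, which is part of why the question is subtle.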

Suppose an element of the even subalgebra of the geometric algebra over 3D is written in two ways: a + b_1 B_1 + b_2 B_2 + b_3 B_3 and exp(arccos(a) B). I want to differentiate it, say, by b_1. It looks like differentiating the geometric sum and the exponential gives very different results. Where is the possible error?

Mathematics has always been one of the most active fields for researchers, but most of the attention goes to one or a few subjects at a time, for several years or decades. I'd like to know: what are the most active research areas in mathematics today?

**Algebraic geometry** studies zeros of multivariate polynomials. Modern algebraic geometry is based on the use of abstract algebraic techniques for solving geometrical problems about these sets of zeros. Representatives: Riemann, A. Grothendieck. [From wiki]

**Geometry of numbers** studies convex bodies and integer vectors in n-dimensional space. The geometry of numbers was initiated by Hermann Minkowski (1910). [From wiki]

Let H1 and H2 be two convex hulls defined by sets of points P1 and P2.

Let H be the convex hull defined by the set of points {p1 + p2 with p1 in P1 and p2 in P2}.

Is H the Minkowski sum of the two convex hulls H1 and H2?

(The Minkowski sum of two convex hulls H1, H2 = the convex hull of {a + b with a in H1 and b in H2}.)

Or at least, is it true in 3D?
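For small 2D instances the equality conv(P1 ⊕ P2) = H1 ⊕ H2 can be checked numerically; a sketch using Andrew's monotone chain hull (the two point sets are arbitrary illustrations):

```python
from itertools import product

def cross(o, a, b):
    """Cross product of OA and OB; > 0 for a left turn."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def chain(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out
    lower, upper = chain(pts), chain(reversed(pts))
    return lower[:-1] + upper[:-1]

P1 = [(0, 0), (2, 0), (0, 2), (1, 1)]
P2 = [(0, 0), (1, 0), (0, 1)]
sums_all   = [(a[0]+b[0], a[1]+b[1]) for a, b in product(P1, P2)]
sums_hulls = [(a[0]+b[0], a[1]+b[1]) for a, b in product(hull(P1), hull(P2))]
print(hull(sums_all) == hull(sums_hulls))  # True
```

The equality holds in any dimension, since taking convex hulls commutes with Minkowski sums of point sets; the check above merely illustrates it.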

Please, I would like your help in getting an e-copy of the book entitled "The theory of fixed point classes" by Tsai-Han Kiang.

Best regards,

M.S. Abdullahi

I am a graduate student in Mathematics, interested in Algebraic Geometry, in particular questions on moduli spaces. Now, to start thinking about some problem for research, what kind of questions may we ask?

Are there any papers that would be very helpful?

How should one start thinking about it?

Looking forward to help and suggestions.

I found problems posted on the internet, like compactifying moduli spaces and motivic structures.

But I am not sure about those problems for initial research.

It is easy to verify that the product of two linear forms gives a quadratic form; my question is about the converse: does it hold for any quadratic form?

Given a bivariate homogeneous polynomial p(x,y), how can we verify whether all the roots (even complex roots) are distinct, or whether there are multiple roots?

Is there a way to do it without computing all the roots, and without the factorized form of the polynomial?

Thanks in advance.
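One standard way to test this without finding any roots is to check whether gcd(p, p') is non-constant (equivalently, whether the discriminant vanishes). A stdlib-only sketch for the dehomogenized polynomial p(x, 1), using exact rational arithmetic (a repeated factor of y alone would show up as a drop in degree and must be checked separately):

```python
from fractions import Fraction

def strip(p):
    """Drop leading zero coefficients (keep at least one entry)."""
    i = 0
    while i < len(p) - 1 and p[i] == 0:
        i += 1
    return p[i:]

def polymod(p, q):
    """Remainder of p divided by q (coefficient lists, highest degree first)."""
    p, q = strip(p), strip(q)
    while len(p) >= len(q) and any(p):
        f = p[0] / q[0]
        padded = q + [Fraction(0)] * (len(p) - len(q))
        p = strip([a - f * b for a, b in zip(p, padded)][1:])
    return p

def has_multiple_root(coeffs):
    """True iff gcd(p, p') is non-constant, i.e. p has a repeated complex root."""
    p = [Fraction(c) for c in coeffs]
    n = len(p) - 1
    q = [c * (n - i) for i, c in enumerate(p[:-1])] or [Fraction(0)]  # p'
    while any(q):                       # Euclidean algorithm for the gcd
        p, q = q, polymod(p, q)
    return len(strip(p)) > 1

print(has_multiple_root([1, -2, 1]),    # (x-1)^2      -> True
      has_multiple_root([1, 0, -1]))    # (x-1)(x+1)   -> False
```

Equivalently one can test whether the resultant Res(p, p') is zero; computer algebra systems expose this directly as the discriminant.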

Who was the scientist or researcher who used the term "topology" for the first time? And when was it, specifically?

Let (X,0) be an irreducible plane curve singularity and let n: (C,0) \to (X,0) be the normalization morphism, with C = the complex line. It is known that the delta-invariant of (X,0) is the dimension of the quotient $O_{(C,0)}/n^*(O_{(X,0)})$.

Some special cases suggest that the delta-invariant of (X,0) is also the dimension of the quotient $\Omega^1_{(C,0)}/n^*(\Omega^1_{(X,0)})$, where $\Omega^1$ denotes the corresponding sheaves of 1-forms. Do you have a reference or a simple proof for this claim?

The Fibonacci numbers can be obtained through the recurrence

f_k = f_{k-1} + f_{k-2} (with f_1 = 1 and f_2 = 1)

This is one type of a Lucas sequence, and can be written in matrix form as

(f_{k+2},f_{k+1})=({1,1},{1,0})(f_{k+1},f_k)

(the left-hand-side and the rightmost product of the right-hand-side product are column vectors).

The matrix ({1,1},{1,0}) is called the Fibonacci matrix.

Now, I would like to be able to write similar expressions for other Lucas sequences, such as

a_n=a_{n-1}+a_{n-2}+a_{n-3}, a_1=1, a_2=1, a_3=1

and also for rational recursions, such as

a_n = a_{n-1} + a_{n-2}*(1+b/d), a_1 = 1, a_2 = 1, with b and d natural numbers different from zero, and b and d fixed for all time.

I would sincerely appreciate, and be thankful for, any concrete reference where the process for obtaining such matrices is explained.
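For any linear recurrence with constant coefficients, the analogue of the Fibonacci matrix is the companion matrix of the recurrence; a minimal sketch for the tribonacci example above (the rational recursion fits the same pattern with coefficient list [1, 1 + b/d], e.g. using Fractions):

```python
def companion(coeffs):
    """Companion matrix for a_n = c1*a_{n-1} + ... + ck*a_{n-k}."""
    k = len(coeffs)
    return [list(coeffs)] + [[1 if j == i else 0 for j in range(k)]
                             for i in range(k - 1)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# tribonacci: a_n = a_{n-1} + a_{n-2} + a_{n-3}, with a_1 = a_2 = a_3 = 1
M = companion([1, 1, 1])
state = [1, 1, 1]              # (a_3, a_2, a_1)
for _ in range(4):
    state = matvec(M, state)   # one step: (a_{k+1}, a_k, a_{k-1})
print(state[0])                # a_7 = 17
```

The sequence runs 1, 1, 1, 3, 5, 9, 17, so the printed value is a_7. Powers of the companion matrix give closed-form jumps, exactly as with the Fibonacci matrix.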

- Let x = (x_1, x_2, ..., x_n) be standard coordinates of a Cartesian n-space and y = (y_1, y_2, ..., y_n) another one, where y_i = a_{ij} x_j, i, j = 1, 2, ..., n.
- Denote by Hess(z(x)) the Hessian of a real-valued function z = z(x_1, x_2, ..., x_n).
- We obtain that

- (1) Hess(z(x)) = (A^T) . Hess(z(y)) . A,

- where A = (a_{ij}), A^T is the transpose of A, and "." is matrix multiplication.
- The question is: is the equality in (1) new? Or is it well known?
- Many thanks in advance for your interest and comments...
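The equality (1) can be checked numerically for a quadratic z and an invertible A; a finite-difference sketch (the test function, point, and matrix are arbitrary choices):

```python
def hess_fd(f, x, h=1e-4):
    """Central-difference Hessian of f at the point x."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            def shifted(si, sj):
                p = list(x)
                p[i] += si * h
                p[j] += sj * h
                return f(p)
            H[i][j] = (shifted(1, 1) - shifted(1, -1)
                       - shifted(-1, 1) + shifted(-1, -1)) / (4 * h * h)
    return H

A = [[1.0, 2.0], [0.0, 1.0]]
def z(y):                 # Hess(z) in y-coordinates is [[2, 3], [3, 4]]
    return y[0]**2 + 3*y[0]*y[1] + 2*y[1]**2
def w(x):                 # w(x) = z(Ax), i.e. z in x-coordinates
    y = [sum(a*b for a, b in zip(row, x)) for row in A]
    return z(y)

print(hess_fd(w, [0.3, -0.7]))  # ≈ A^T . [[2,3],[3,4]] . A = [[2,7],[7,24]]
```

This is the chain rule for second derivatives under a linear change of coordinates, so it holds for any smooth z.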

I have a triangle mesh. I calculate the normals of the triangles, then calculate the vertex normals, perform some calculations on them, and want to recover the vertex coordinates from these vertex normals after the calculations.

In Riemannian geometry, we know that every 2D manifold is locally conformally flat thanks to the local existence of isothermal coordinates; what can we say about surfaces for which these coordinates exist globally?

Are they "Riemann surfaces of parabolic type" (-> uniformisation thm) ? The reason would be that they are conformally equivalent to the complex plane C.

And can they have negative curvature ?

Consider a parameter space R^n, comprising points a = [a_1, a_2, ..., a_n]' ∈ R^n. Suppose we've got two constrained sets within the space.

The first is linearly constrained: f(a_1, a_2, ..., a_n) = 0, with f() being a linear function. So it's a linear subspace.

The second is nonlinearly constrained: g(a_1, a_2, ..., a_n) = 0, with g() being a nonlinear function including only polynomial operations, i.e. addition and multiplication. So it's a nonlinear manifold.

If the dimensions of the two sets are identical, then the intersection must be a null set of Lebesgue measure 0, comprising only countably many crossovers with lesser dimension.

Am I right? How to prove it?

The string is a real closed field minus finitely many points.

If pitch is a continuous function defined on the string interval [0, 1] then there must be a fixed point in the interval.

Each mode of vibration is represented by an n-tuple that is a subset of **R**^n, so that the waves are intervals between nodal points in projective space.

The null set is just the endpoints, no string, which are the points (-1, 0) and (1, 0) represented by **S**^0.

The string is connected like a circle missing a single point, which can be used to glue the circle into the origin of **RP**^3 - 0.

There is a curve-lifting equation that proves the string at rest and in the perturbed state match point for point, which proves the force acting upon the string is orthonormal to the string axis. Without a force in the direction of the string axis there can be no traveling wave! The kinetic and potential energy go to zero at the endpoints.

The string assumes the same shape regardless of how it is struck, and clearly the fundamental has the lowest frequency and therefore the lowest energy. Why would several energy levels co-exist? Why wouldn't higher modes degenerate to lower modes if they could co-exist? Clearly the modes are singletons that are all one step away from the fundamental. Isn't it obvious that the string is given a subspace topology?

The string has an atomic structure that is the union of wave and not waves, and so on.

Now my question is: if the sound wave that radiates from the string is a purely algebraic object that is a function of one continuous variable, pitch (frequency), and the wave is a polyhedron with n + 1 vertices, n edges, and 1 face in **R**^2, then why isn't it clear already that the string is a semialgebraic ring, and not just a frequency transducer?

We have a graph (the string with nodes and wave), a tuning function f and an intonation function g. What else is needed here? Why isn't it clear that each n-tuple is a different *system* and the n-tuples cannot just add? Pitch cannot be divided into 0 and 1 without an algorithm. Clearly the string is partitioned into a finite number of points and intervals in the real closed field. Isn't that enough, by itself, to make a new theoretic model for the string that makes sense?

It is just astonishing that people believe in things like traveling waves reflecting to make standing waves (clearly the boundary condition for this (1, 0, 1) cannot exist); or that the string can just be divided into smaller and smaller fundamentals that all co-exist in the string as independent modes.

Nodes and waves cannot co-exist. Period. If evidence shows they can, the evidence needs to be re-examined. The monochord proves that a string can have only one mode at a time, according to a fixed point on the string, and also that higher modes exist only when the string is driven in a higher energy state. But each fulcrum point is a different system.

Notice that the monochord string has the property that it recognizes when [0, 1] is equal to a simple multiple of the string fundamental. That is a boundary condition for harmonic motion.

*Recognition* is a property of a finite state machine. The string recognizes its finite modes. Finite state machines have only one state at a time.

I cannot understand why such a fallacy as classic string theory is allowed to persist in science. Such a profound illusion that no one gets this!

hello,

I have a vector (x,y,z) and a certain number q of vectors (x1,y1,z1), (x2,y2,z2), ...

I want to define a metric to know which vector is the nearest to (x,y,z).

for example

if my vector is (9,9,9) and my candidate vectors are (3,3,0), (0,3,6) and (9,9,9)

so the nearest vector is (9,9,9).

Any idea please?

Cordially
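If "nearest" is taken in the ordinary Euclidean sense, the search is a one-liner; a minimal sketch (depending on the application, another metric such as the angle between vectors, i.e. cosine similarity, may suit better):

```python
from math import dist  # Euclidean distance, Python 3.8+

def nearest(target, candidates):
    """Return the candidate vector closest to target in Euclidean distance."""
    return min(candidates, key=lambda v: dist(v, target))

print(nearest((9, 9, 9), [(3, 3, 0), (0, 3, 6), (9, 9, 9)]))  # (9, 9, 9)
```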

Dear all,

How can we obtain the **analytical solution** (in the complex n-space C^n) to the following **2nd-order** system of polynomial equations with **n variables**?

a_{11}x_1x_1+a_{12}x_1x_2+...+a_{1n}x_1x_n=b_1

a_{21}x_2x_1+a_{22}x_2x_2+...+a_{2n}x_2x_n=b_2

.

a_{k1}x_kx_1+a_{k2}x_kx_2+...+a_{kn}x_kx_n=b_k

.

a_{n1}x_nx_1+a_{n2}x_nx_2+...+a_{nn}x_nx_n=b_n

where a_{ij} and b_k, i,j,k=1,2,...n are constant complex values, and x_i, i=1,2,...n are unknowns.

Kind regards,

Chao

I need a geometrical interpretation of the Runge-Kutta-Nyström method for solving 2nd-order differential equations. Can anybody help me in this regard?
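The geometric picture is easiest to see by rewriting x'' = f(t, x) as the first-order system x' = v, v' = f and sampling slopes at intermediate points, as classical RK4 does; Runge-Kutta-Nyström methods refine this by exploiting the special second-order structure, but the slope-sampling geometry is the same. A minimal sketch of the reduction (not an RKN tableau):

```python
from math import pi

def rk4_step(f, t, x, v, h):
    """One classical RK4 step for x'' = f(t, x), via the system x' = v, v' = f."""
    k1x, k1v = v,            f(t, x)
    k2x, k2v = v + h/2*k1v,  f(t + h/2, x + h/2*k1x)   # slope at the midpoint
    k3x, k3v = v + h/2*k2v,  f(t + h/2, x + h/2*k2x)   # re-sampled midpoint slope
    k4x, k4v = v + h*k3v,    f(t + h,   x + h*k3x)     # slope at the endpoint
    x += h/6 * (k1x + 2*k2x + 2*k3x + k4x)             # weighted average of slopes
    v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
    return x, v

# harmonic oscillator x'' = -x with x(0) = 1, v(0) = 0; exact solution x(t) = cos(t)
x, v, t, h = 1.0, 0.0, 0.0, pi / 1000
for _ in range(1000):
    x, v = rk4_step(lambda t, x: -x, t, x, v, h)
    t += h
print(x)  # ≈ cos(pi) = -1
```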

The phrase "continuous variation" appears several times in Titchmarsh's books and is repeated verbatim by those who cite his work. But nowhere can I find this phrase defined. The earliest reference I have found (1939) occurs on page 132 of his book "The Theory of Functions". From the context it seems to describe a particular way of defining the value of a cut function. Can anyone provide a (citable) reference to a formal definition of this phrase, or, better yet, an explanation of its meaning?

For example, if I have a grain with blue color in the IPF map of my sample, obtained in the x direction, it means that the [111] direction of this grain is parallel to the x direction, according to the IPF triangle. For grains having yellow, orange or other colors inside the triangle, I want to say that [111] of these grains has a .... degree difference relative to the x direction. How can I obtain the value of this angle, with an open-source EBSD software, online calculators or a mathematical operation?
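Assuming the grain orientation is given as Bunge Euler angles (the convention most EBSD packages export), one can express the sample x direction in crystal coordinates and take the smallest angle to the eight <111> directions. A hedged sketch (function names are mine, and a full treatment would apply all 24 cubic symmetry operators; dedicated tools such as MTEX do this for you):

```python
from itertools import product
from math import acos, cos, degrees, sin, sqrt

def bunge_g(phi1, Phi, phi2):
    """Orientation matrix g (sample -> crystal) for Bunge Euler angles (radians)."""
    c1, s1 = cos(phi1), sin(phi1)
    C, S = cos(Phi), sin(Phi)
    c2, s2 = cos(phi2), sin(phi2)
    return [[ c1*c2 - s1*s2*C,  s1*c2 + c1*s2*C, s2*S],
            [-c1*s2 - s1*c2*C, -s1*s2 + c1*c2*C, c2*S],
            [ s1*S,            -c1*S,            C   ]]

def angle_x_to_111(phi1, Phi, phi2):
    """Smallest angle (degrees) between the sample x axis, expressed in
    crystal coordinates, and the eight <111> directions."""
    g = bunge_g(phi1, Phi, phi2)
    x_c = (g[0][0], g[1][0], g[2][0])     # crystal coords of sample x = g.(1,0,0)
    best = 180.0
    for s in product((1.0, -1.0), repeat=3):
        dot = sum(v * w for v, w in zip(x_c, s)) / sqrt(3)
        best = min(best, degrees(acos(max(-1.0, min(1.0, dot)))))
    return best

print(round(angle_x_to_111(0.0, 0.0, 0.0), 2))  # cube orientation: ≈ 54.74°
```

A blue grain would return an angle near 0°; the cube-oriented example returns the familiar 54.74° between [100] and [111].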

I know the following result:

Suppose C and D are nonempty disjoint convex sets. The hyperplane {x | a^T x = b} separates these sets provided a = d − c, b = (∥d∥^2 − ∥c∥^2)/2, where c and d are the points lying in C and D, respectively, that minimize the distance between the sets.

I wonder whether there is another result that ensures a better separation than this?

Thank you!

In the question given above, H is the mean curvature of the immersed real projective space in the Euclidean m-space.

Please can someone give me a reference on the distribution of Pisot numbers on the real line.

In imaginary quadratic fields we have:

* ELL(O_K) := {elliptic curves E/C with End(E) ≅ O_K}/{isomorphism over C} ≅ {lattices L with End(L) ≅ O_K}/{homothety} ≅ the ideal class group CL(K)
* #CL(K) = #ELL(O_K)

This notation is from the paper "A Summary of the CM Theory of Elliptic Curves" by Jayce Getz.

In the image I show what I figured out so far and where I'm stuck... I would be more than happy if anyone could help me with this. Thanks! :-)

Hello RG people,

It's been too long since I last solved such an equation and I'm sure a high school student would be able to solve it easily. However, I'm stuck and can't figure out the coordinates of a point *C*. I know the following things:

- the coordinates of *A* and *B*
- if *A* and *B* are on a straight line *f*, then *B* and *C* are on a straight line *g*
- *f* and *g* are perpendicular
- the Euclidean distance of *A* to *B* is double the Euclidean distance of *B* to *C*, which is defined as *e*
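From the constraints listed above, C lies on the perpendicular to AB through B, at distance e = |AB|/2, which leaves exactly two candidates (one on each side of the line AB); a minimal sketch with hypothetical names:

```python
from math import hypot

def find_C(A, B):
    """C is on the line through B perpendicular to AB, with |BC| = |AB| / 2.
    Returns both solutions, one on each side of line AB."""
    dx, dy = B[0] - A[0], B[1] - A[1]
    n = hypot(dx, dy)                  # |AB|
    px, py = -dy / n, dx / n           # unit vector perpendicular to AB
    e = n / 2                          # |BC| = |AB| / 2
    return ((B[0] + px * e, B[1] + py * e),
            (B[0] - px * e, B[1] - py * e))

C1, C2 = find_C((0, 0), (4, 0))
print(C1, C2)  # (4.0, 2.0) and (4.0, -2.0)
```

Extra information (e.g. which side of f the point C lies on) is needed to pick one of the two.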

An n_k ctheorem (configurational theorem) is a set of n points and n hyperplanes with k points on each hyperplane and k hyperplanes through each point, all embedded in (k-1)-dimensional space. (The type of space could be e.g. a projective (or affine) space over a general commutative field (type (0)), over a general possibly non-commutative field (type (1)), or over a general field of prime characteristic p (type (p)).) If the existence of n-1 of the hyperplanes implies the existence of the n-th hyperplane, then it is called a "ctheorem".

Those known are:

Desargues 10_3 (type (1)) discovered about 1650 CE

Pappus 9_3 (type (0)) discovered about 300 CE

Moebius 8_4 (type (0)) discovered 1828 by A. F. Möbius

Glynn 8_4 (type (0)) discovered 2010 by D.G. Glynn (Theorems of points and planes ...)

Glynn 9_4 (type (1)) discovered 2010 by D.G. Glynn (same paper)

Fano 7_3 (type (2)), known to geometers in the late 19th century (the matroid dual is a 7_4 ctheorem of type (2) also). It could be called "anti-Fano", since Fano's axiom proscribed it in the geometry. It is also called PG(2,2), the projective geometry of dimension 2 over the finite field GF(2).

Note that the matroid dual of an n_k ctheorem is also a ctheorem n_{n-k} (if type (0) or (p)), so Pappus gives a ctheorem in 5-d space. The two 8_4's (they are unique) are self-matroid-dual. (Sometimes the matroid dual is a bit degenerate, as in the cases 10_3 and 9_4.)

Is there any expert working in the area of computational geometry?

Is it possible to give an elementary proof of the aforementioned result? This was proved by Robert Penner (in "A construction of pseudo-Anosov homeomorphisms", Robert C. Penner, Trans. Amer. Math. Soc., 310(1):179-197, 1988) using the techniques of measured foliations.

A hypersurface is called a Dupin hypersurface if the multiplicities of its principal curvatures are constant and moreover each principal curvature is constant along its principal directions.

A compact submanifold M lying in a hypersphere S^m of a Euclidean (m+1)-space E^{m+1} is called mass-symmetric if the center of mass of M is the center of the hypersphere.

Dear all,

I have a question about the existence of real solutions to a sixth-order polynomial system with two variables (i.e., f(x,y)=0 and g(x,y)=0, where each equation is a sixth-order polynomial equation). Is there a criterion to determine the existence of real solutions? I know this is a problem in Algebraic Geometry :)

Kind regards,

Chao

Attack angle and sideslip angle have mathematical definitions based on arcsin and arctan trigonometric functions (e.g., in J. N. Nielsen's Missile Aerodynamics). How does one choose between one definition and the other? Can someone provide a formal reference or proof on that? (textbook or paper)

One knows that a collection of k < n points on the Veronese curve with multiplicities m_1, ..., m_k (m_1 + ... + m_k = n+1) gives a secant projective space to the Veronese curve, and one can compute the dual space of this secant: it is the intersection of the (m_i - 1)-osculating hyperplanes of the curve at the relevant points. My question is: is it possible that any projective subspace of any dimension in CP^n can be the dual of some secant to the Veronese curve?

Suppose given an m-sequence M over a field F_q of size q (a prime power) and of order s. In other words: there is a primitive polynomial f(x) of degree s over F_q serving as the characteristic polynomial for a recurrently defined sequence. If alpha is a primitive root of f(x), then M is essentially the (trace of the) sequence of integer powers of alpha (which may be reduced modulo f). I am thinking of rather large-sized sequences (with s many hundreds or even thousands).

Given r < s and given r linearly independent powers of alpha with exponents i_1 < i_2 < ... < i_r (there is an abundance of them with successive distances i_{k+1} - i_k < s), I am looking for information on how dependent powers of alpha are distributed. Assuming that r is much smaller than s, and knowing that there are "only" q^r vectors in an r-dimensional linear subspace, dependent vectors are likely to be far apart in M. What is known about this?

I find the case q=2 most interesting, but quite often results on binary m-sequences extend to general powers of a prime.
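A small illustration of the setup: the recurrence defined by the primitive polynomial x^4 + x + 1 over F_2 produces an m-sequence of maximal period 2^4 − 1 = 15 (a toy value of s, far below the hundreds or thousands mentioned above):

```python
def m_sequence(n):
    """First n bits of the m-sequence with characteristic polynomial
    x^4 + x + 1 over F_2, i.e. s[k] = s[k-3] XOR s[k-4]."""
    s = [1, 0, 0, 0]            # any nonzero initial fill works
    while len(s) < n:
        s.append(s[-3] ^ s[-4])
    return s[:n]

seq = m_sequence(30)
print(seq[:15])                 # one full period
print(seq[:15] == seq[15:30])   # True: maximal period 2^4 - 1 = 15
```

As expected for an m-sequence, each period contains 2^{s-1} = 8 ones and 7 zeros.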

I have found a (maybe the) faithful and full representation of quaternions in the ordinary real three-dimensional Euclidean space.

A non-constant quaternion is the roto-dilation usually associated to it, endowed with a verse ("clockwise" or "counter-clockwise") for the underlying rotation.

A negative constant -c is the dilation associated to the positive constant c, endowed with the rotation of 2π of the space, independent of the axis and of the verse of the rotation.

A positive constant c is simply the dilation having ratio c, endowed, if you like, with the identity rotation.

I'm asking if this simple faithful and full interpretation of quaternions, in the usual real three-dimensional Euclidean space, is new or if it is well known.

Suppose I have two matrices A=[3 1;1 4] and B=[5 -2;-2 4], where A and B represent covariance ellipses in 2D. Now I want to combine them in 4D, that is, C=[A 0;0 B]. C is a four-dimensional ellipsoid, but the cross-correlation between A and B is zero. I want to rotate A and B such that they are highly correlated.

I have the following questions:

1) How can I rotate the A and B (either in 4D or 2D separately) such that they have maximum cross correlation with each others?

2) Is there any 4D rotation (either Euler or quaternion) matrix in explicit form? (I couldn't find any.)

Kindly guide me.
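On question 2: every 4D rotation can be written as a product of plane (Givens) rotations in coordinate planes, including "double rotations" with no fixed axis (4D rotations can also be parametrized by a pair of unit quaternions). A minimal sketch verifying orthogonality, with arbitrary angles:

```python
from math import cos, sin, pi

def givens4(i, j, theta):
    """4x4 rotation by theta in the (i, j) coordinate plane."""
    R = [[float(r == c) for c in range(4)] for r in range(4)]
    R[i][i] = R[j][j] = cos(theta)
    R[i][j] = -sin(theta)
    R[j][i] = sin(theta)
    return R

def matmul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

# a double rotation: simultaneous turns in the (0,1) and (2,3) planes
R = matmul(givens4(0, 1, pi/6), givens4(2, 3, pi/4))
RtR = matmul([list(col) for col in zip(*R)], R)   # R^T R
print(all(abs(RtR[r][c] - (r == c)) < 1e-12
          for r in range(4) for c in range(4)))   # True: R is orthogonal
```

Applying such an R to C as R C R^T introduces nonzero off-diagonal blocks, i.e. cross-correlation between the two 2D parts; maximizing it is then an optimization over the rotation angles.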

There is a twin prime conjecture that for any given integer N>0,

there is a pair of twin primes p, p+2 such that p > N.

What about triplets of primes p, p+2, p+4? Although there is a result that

there is an integer A such that for any integer N>0, there are primes p, p+B, p+C

such that 1. p>N and 2. B<C<A,

this could not really be progress toward the triplet primes

conjecture, since any triple of integers p, p+2, p+4 has one of them divisible by 3.

Therefore, the triplet prime conjecture as stated above is wrong.

There could be a modified triplet prime conjecture that for any integer N,

there is a triplet of primes p, p+B, p+C such that 1. p > N and 2. B < C = 6.

The examples are: A. lower twin triplets: (5, 7, 11); (11, 13, 17); (17, 19, 23); (41, 43, 47);

(101, 103, 107); (191, 193, 197).

B. upper twin triplets: (7, 11, 13); (13, 17, 19); (37, 41, 43); (67, 71, 73); (97, 101, 103);

(103, 107, 109); (193, 197, 199).

Similarly, for the quadruplets, we have the twin twins p, p+2, p+6, p+8:

(5, 7, 11, 13); (11, 13, 17, 19); (101, 103, 107, 109); (191, 193, 197, 199); (821, 823, 827, 829).

My question is: Could we find all this kind of quadruplets or prove that the quadruple conjecture is true?

Of course we expect that these kinds of multiplets with minimal length are rare, just as human multiplets are.

Notice that there is an eight-let (3, 5, 7, 11, 13, 17, 19, 23) and a six-let (97, 101, 103, 107, 109, 113).

Notice that we could not have a six-let (p, p+2, p+6, p+8, p+12, p+14).
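The quadruplets of the form p, p+2, p+6, p+8 can at least be enumerated by brute force (proving there are infinitely many is, of course, open); a minimal sketch, with trial division being fine at this scale:

```python
def is_prime(n):
    """Simple trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# all prime quadruplets p, p+2, p+6, p+8 with p < 2000
quads = [p for p in range(2, 2000) if all(is_prime(p + d) for d in (0, 2, 6, 8))]
print(quads)  # [5, 11, 101, 191, 821, 1481, 1871]
```

The first entries agree with the examples listed above (5, 11, 101, 191, 821, ...).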

I consider an application a "true" one if it does not come as a reformulation of an optimization problem. I already know about applications of the second-order and positive semidefinite cone-complementarity problems which are reformulations of optimization problems (for example related to Nash equilibrium). I also know about true practical applications where the cone is either the nonnegative orthant or the direct product of the nonnegative orthant with a Euclidean space. However, I don't consider the latter cones essentially different from the nonnegative orthant. I am mostly interested in practical applications, but I am also interested in possible applications of cone-complementarity to another field of mathematics. I would be grateful if you could point me to any papers, books, links or other materials on this topic.

Hello,

I am wondering whether the area enclosed by an ellipse drawn around data points on an x - y graph where both axes are % (or in my case involving stable isotopes, ‰) takes a unit or is dimensionless.

From what I have seen, most researchers report these areas sans unit. However, I came across a thesis recently with ‰^2, which started me thinking: why not?

I am sure someone knows and can explain the answer to this for me. I would greatly appreciate any thoughts! And thanks in advance.

One would like a good definition of etale cohomology for non-commutative rings A with corresponding Chern characters from higher algebraic K-theory (Quillen type) of A. In particular, one would like a non-commutative analogue of Soule's definition of etale cohomology for rings of integers in a number field, with Chern characters from the K-theory of such rings. A possibly accessible setting is to define such a theory for maximal orders in semi-simple algebras over number fields and then extend this to arbitrary orders in semi-simple algebras over number fields. The goal in this case is to be able to understand such theories for non-commutative integral group-rings, i.e. group-rings of finite non-abelian groups over integers in number fields.

REMARKS: Geometrically, Soule's construction translates into etale cohomology of affine and related schemes and so the envisaged construction should translate into etale cohomology for a suitably defined 'non-commutative' scheme.

where p is a prime number greater than or equal to 3, and n is a natural number between 2+p and 2p.

I'm working on a project; I would like to know if there are references about algorithms we can use to test whether a given ideal is trivial in a given ring.

For example, rule 90 in cellular automata can produce the Sierpinski triangle, which can also be made algorithmically by removing triangles, or by applying fractal formulas, like the examples in the XaoS fractal software. So, theoretically, is it possible to produce any fractal that we can produce in one system in another one? And the answer should lead to saying how we can do it.
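A minimal sketch of rule 90, whose space-time diagram reproduces the Sierpinski triangle (the width and step count are arbitrary):

```python
def rule90(width=33, steps=16):
    """Rule 90: each cell becomes the XOR of its two neighbours (periodic ends)."""
    row = [0] * width
    row[width // 2] = 1                  # single seed cell
    rows = [row]
    for _ in range(steps - 1):
        row = [row[i - 1] ^ row[(i + 1) % width] for i in range(width)]
        rows.append(row)
    return rows

for r in rule90():
    print("".join("#" if c else "." for c in r))
```

Row n of the output has a live cell exactly where the binomial coefficient C(n, k) is odd, which is the same Pascal-triangle-mod-2 description underlying the triangle-removal construction.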

Some Riemannian manifolds can be expressed as a product manifold. Recently, I have read two articles about space-times. In both articles, the authors prove that a Riemannian manifold \bar{M}^n can be expressed as a product of the form I×M^{n−1}. Both authors use similar techniques, namely integrable distributions, in this decomposition. Really, I do not understand this technique, but it would be enough for me to know a characterization of the Riemannian manifolds which can be expressed as a product manifold M^1×M^2.

Q1: Does this characterization exist? (If yes, a reference is required.)

Q2: What conditions and proof hints could one think of to characterize these manifolds?

Let *E* in **P**^3 be a real elliptic normal curve with two non-null-homotopic components. Is there a parametrization (**R**/**Z**) x (**Z**/2**Z**) -> *E* such that any four points on *E* are coplanar precisely when their corresponding parameters sum to zero? As an example one could take *E* to be the complete intersection of the quadrics *XY + ZW = 0* and *-X^2 + Y^2 - 2Z^2 + ZW + 2W^2 = 0*.

For more details (and better formatting), see the corresponding question on MathOverflow: http://mathoverflow.net/questions/197848/ .

Part of Bernstein's theorem (in algebraic geometry, about polynomial equations) says that for n polynomial equations in n variables, if the coefficients are generic, the number of solutions must be finite and equal to the mixed volume.

But here is a counterexample. (Sorry, it seems I cannot use LaTeX symbols in this field, so please excuse the ugliness.)

Consider a set of equations

sum_i {h_{i,j}*u_i*v_i} = 0,

where the summation runs over i=1 to N, and j runs from 1 to 2N. The h_{i,j} are the generic coefficients; u_i and v_i are variables. So we have 2N variables and 2N equations.

The solution is: u_i = a_i for i=1,...,N; u_i = 0 for i=N+1,...,2N;

v_i = 0 for i=1,...,N; v_i = b_i for i=N+1,...,2N, where the a_i and b_i can take any value. Therefore, we have an infinite number of solutions.

This seems to violate Bernstein's theorem.

Anyone have an explanation?

Mesh has 4.2 million cells. Geometry is a cylinder of 122mm diameter, 200mm length.

Suppose I have an affine algebraic variety, and I suspect that a certain element of its coordinate ring can be represented as a product of two elements having some nice form. Are there computer algorithms for such factoring?

What happens geometrically when one multiplies a matrix A by the inverse of a matrix B?

Jacobson's Lemma for the MP-inverse (Moore-Penrose inverse) is not right.

I found in the book of Kashiwara-Schapira a precise description of this construction, but I want to know some illustrative applications, or maybe a clearer motivation. Thanks a lot!

Given an algebraic system based on polarities on the sphere. A pair of opposite points and their equator constitute a basic element. (Add the equator to Riemann's unification of opposite points in elliptic geometry.) Two elements determine a resulting element of the same set. This is a partial binary operation with two axioms: ab = ba; (ab)(ac) = a. I call any set of this type a projective sphere. (Cf. Baer's finite projective planes and Devidé's plane pre-projective geometries.)

From these axioms a number of important properties can be deduced. For example, if the set has at least two elements a and b, then xx cannot be properly defined for the whole set, because (ab)(ab) = a = (ba)(ba) = b, a contradiction. This means that in the general case xx must remain undefined, as with the case of division by zero in fields. However, if a smooth curve is given on the sphere, or an oval in a finite set, then the xx operation CAN partially be defined for the elements of the curve or of the oval, as the tangent at the given point.

Example: given the oval of the four reflexive (self-conjugate) elements in a 13-element finite sphere, the derivative consists of the same four elements. Another example: given the basic elements on the sphere with homogeneous coordinates, take the circle with center (1,0,0) and radius pi/4, given by elements (1,√(1-c^2),c); its derivative is the curve given by elements (-1,√(1-c^2),c).

In this interpretation, the derivative does not represent the number indicating the slope of a straight line, but a set of the same type of geometric objects out of which the original curve is made. Also, this gives that every smooth curve evokes a geometry of its own, defined specifically for the given curve.

Something on simplicial surfaces, and on regular and mean valence on simplicial surfaces.