Solving Some Problems and Elimination in Systems of Polynomial Equations
1. Introduction
Polynomials are essential tools in problems such as the solvability of equations, constructibility, and Fermat’s last theorem [1].
This article is devoted to solving some problems and elimination in systems of polynomial equations. Let us give some examples of such problems: solving a system of polynomial equations, finding its projection onto different planes, and finding the Cartesian equation of a curve (or a surface) given by parametric equations.
The algebraic structures of rings and ideals play a large role here and lead to algebraic geometry proper. We will see how to use the Bezout identity in ℤ and in k[x] for these problems, then move on to the resultant, and then discuss Gröbner bases. In passing, we will see some geometry theorems that can be proved algebraically, although the methods of geometry, and in particular of projective geometry, are by far the most beautiful.
Remark
The gcd of two or more integers is the largest integer that divides each of these numbers without leaving a remainder.
Example 1.1
For 12 and 18:
The divisors of 12 are: 1, 2, 3, 4, 6, 12;
The divisors of 18 are: 1, 2, 3, 6, 9, 18;
The common divisors are: 1, 2, 3, 6.
Thus gcd(12, 18) = 6.
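The computation above can be sketched with Euclid’s algorithm, which avoids listing divisors (a minimal Python sketch, since MAPLE is not assumed here):

```python
def gcd(a, b):
    # Euclid's algorithm: replace (a, b) by (b, a mod b) until b = 0
    while b:
        a, b = b, a % b
    return abs(a)

print(gcd(12, 18))  # 6
```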
Definition
In factor analysis, the principal components are the linear combinations of variables that best explain the variation in the data.
These components are often used to reduce the dimensionality of a dataset while preserving as much information as possible.
Example 1.2
Suppose we have a dataset with three variables: X1, X2 and X3.
Factor analysis can reveal that:
The first principal component (EP1) accounts for 70% of the total variance and is influenced mainly by X1 and X2.
The second principal component (EP2) could then account for 20% and be more related to X3.
This makes it possible to represent the data with only two principal components, simplifying the analysis while retaining the essence of the information.
In short:
Bezout rings are of great importance in both algebra and geometry, mainly due to their structure and special properties.
2. Algebraic Complements
Let A be an integral unitary commutative ring. An element of A that is invertible for multiplication is called a unit of A.
Definition 2.1
We say that an ideal
of A is prime if and only if A/I is an integral ring. In other words, if
, then a or b belongs to I.
Definition 2.2
An element a of A is said to be irreducible if it is not a unit and if it cannot be written in the form a = bc with b and c non-units.
If f1, …, fs are elements of A, we denote (f1, …, fs) or ⟨f1, …, fs⟩ the ideal of A generated by the fi. The fi are called a generating system, or a basis, of the ideal I.
Proposition
If I = (a) is a prime principal ideal, then a is irreducible.
Indeed, if a is not irreducible, we can write it in the form bc with b and c not units; we then have bc ∈ (a) while b ∉ (a) and c ∉ (a), contradicting primality.
In ℤ, the converse is true: if a is irreducible, the ideal of ℤ generated by a is prime. It is true more generally in factorial rings (irreducible elements are then said to be prime) but false in general.
Definition 2.3
Let A be an integral ring. A is said to be a factorial ring if any element of A can be written as the product of a unit and irreducible elements, and this in an essentially unique way: if a = u·p1⋯pr = v·q1⋯qs with u and v units and the pi and qj irreducible, then r = s and there exists a bijection σ of {1, …, r} onto itself such that qσ(i) = ui·pi with ui a unit.
Theorem 2.1
If A is a factorial ring, the ring A[x] of polynomials in x with coefficients in A is factorial. Thus, if A is a field or a principal ring, the ring A[x1, …, xn] is a factorial ring.
In a factorial ring, we can define the g.c.d. of two elements (defined up to a unit) and the notion of elements prime to each other.
When k is a field, we have Euclidean division in k[x] and we can calculate the g.c.d. of two polynomials by Euclid’s algorithm [2].
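Euclid’s algorithm for polynomials can be sketched as follows (a minimal Python illustration using exact rational arithmetic; MAPLE’s gcd command does this directly):

```python
from fractions import Fraction

def poly_divmod(num, den):
    # Euclidean division of polynomials over Q; coefficients are given
    # in descending degree order, e.g. [1, 0, -1] means x^2 - 1
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    if len(num) < len(den):
        return [], num
    quot = [Fraction(0)] * (len(num) - len(den) + 1)
    rem = num[:]
    for i in range(len(quot)):
        coef = rem[i] / den[0]
        quot[i] = coef
        for j, d in enumerate(den):
            rem[i + j] -= coef * d
    rem = rem[len(quot):]
    while rem and rem[0] == 0:   # strip leading zeros of the remainder
        rem.pop(0)
    return quot, rem

def poly_gcd(p, q):
    # Euclid's algorithm: replace (p, q) by (q, remainder) until q = 0
    while q:
        _, r = poly_divmod(p, q)
        p, q = q, r
    p = [Fraction(c) for c in p]
    return [c / p[0] for c in p]  # normalise to a monic polynomial

# gcd(x^2 - 1, x^2 - 3x + 2) = x - 1
print(poly_gcd([1, 0, -1], [1, -3, 2]))
```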
Theorem 2.2 (Bezout’s Theorem)
If k is a field and P and Q are two polynomials of k[x], then P and Q are prime to each other if and only if there are two polynomials U and V such that UP + VQ = 1.
The conclusion of this theorem is false in k[x, y]. On the other hand, we can embed k[x, y] into k(y)[x], where k(y) is the field of fractions of k[y]. Note that the MAPLE commands reflect the fact that the g.c.d. exists in k[x, y] (the gcd(P, Q) command does not make x or y play any particular role), but that the use of Euclid’s algorithm requires specifying a variable: gcdex(P, Q, x, "u", "v").
Let us give an example of a non-factorial ring. The ring ℤ[i√5] is not factorial because 6 = 2·3 = (1 + i√5)(1 − i√5), and the elements 1 + i√5 and 2, as well as 1 − i√5 and 3, do not differ by a unit. Thus, the ideal (2) is not prime because (1 + i√5)(1 − i√5) = 6 ∈ (2) while 1 + i√5 ∉ (2) and 1 − i√5 ∉ (2). On the other hand, 2 is irreducible, because it cannot be written as the product of two non-unit elements of ℤ[i√5] (we would then have 2 = zz′ with z and z′ non-units and 4 = |z|²|z′|²; writing z = a + ib√5, this would require, for example, |z|² = a² + 5b² = 2, which is not possible).
3. Properties of Bezout Rings
A Bezout ring is an integral ring in which any two nonzero elements have a g.c.d. that can be expressed as a linear combination of these elements.
Properties
1) Existence of the g.c.d.: for any pair of elements a and b, there are elements x and y such that ax + by = gcd(a, b).
2) Integrity: a Bezout ring is an integral ring, which means that it has no zero divisors.
3) Relationship to ideals: finitely generated ideals of a Bezout ring are principal, meaning they can be written in the form (a) for a single element a.
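Property 1) can be illustrated on ℤ with the extended Euclidean algorithm, which produces the coefficients x and y together with the g.c.d. (a Python sketch):

```python
def extended_gcd(a, b):
    # returns (g, x, y) with a*x + b*y == g == gcd(a, b)
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

g, u, v = extended_gcd(12, 18)
print(g, u, v)  # 6 -1 1, since 12*(-1) + 18*1 = 6
```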
Relevance in algebra
Bezout rings are fundamental in the study of Diophantine equations, polynomials and modules. Their structure facilitates the solution of linear equations and their analysis.
For example, in ℤ (the integers), each element can be decomposed into prime factors, which is essential for decomposition theorems.
Algebraic applications
Solving equations: methods using gcds are used to solve systems of equations.
Factorization theorem: by understanding how to decompose polynomials, we can better manipulate and solve equations.
Connection to geometry
Bezout rings also have geometric applications, especially in the context of algebraic geometry:
Algebraic surfaces: for surfaces defined by polynomials, the intersections of these surfaces can often be analyzed through the ideals generated by the polynomials. This provides a nice connection with projective spaces, where solutions can be interpreted geometrically.
Bezout’s theorem: it states that for two projective varieties in a projective space, the number of points of intersection of these varieties is given by the product of their degrees, counted with multiplicities. This perfectly illustrates the link between the algebraic properties of a Bezout ring and their geometric implications.
Geometric applications
Characterization of curves: the behavior of curves in projective spaces can be studied from their equations in Bezout rings.
Intersection analysis: through algebraic considerations, one can predict and understand how different geometric shapes interact with each other.
Conclusion: Bezout rings are central structures in algebra that, through their properties, directly influence geometric analysis in the framework of algebraic geometry. Their use makes it possible to navigate between pure algebra and geometric applications, facilitating the passage from one to the other.
The connection between geometry theorems and the algebraic framework
The connection between these theorems and algebra is fundamental in algebraic geometry, a discipline that establishes deep links between geometry and algebra. Here are some key points illustrating this relationship:
1) Algebraic varieties
Definition: an algebraic variety is the set of solutions of a system of polynomial equations. Each variety can be described by ideals in a ring of polynomials.
Theorem: theorems such as Hilbert’s theorem, which links the ideal of polynomials to the points of the variety, reveal the geometric properties of the polynomials.
2) Correspondence between ideals and subsets
Ideals and points: each ideal in a ring of polynomials can correspond to a variety, creating a duality between algebraic objects (ideals) and geometric objects (varieties).
The Nullstellensatz: this theorem establishes a key connection by stating that ideals define sets of points and, conversely, that sets of points can be expressed by ideals.
3) Functions and morphisms
Morphisms: functions defined on algebraic varieties can be interpreted as morphisms between varieties, linking their algebraic structures.
Correspondence theorems: theorems such as Zariski’s theorem provide correspondences between morphisms of algebraic spaces and their geometric properties.
4. Manipulation of Polynomials
The commands for manipulating polynomials are, among others: collect, expand, sort, normal, coeff.
Example
Given a polynomial P, calculate the g.c.d. of P and P′, then use the gcdex command. Similarly, take another polynomial P and factorize it (along the way, re-expand and check that the result is ordered by descending monomials). For each factor, perform the irreducibility test, then substitute a = 1 and start again. Thus, MAPLE factors this last polynomial over the field of rational fractions in a, i.e. by considering a as an “indeterminate”. And this factorization is different from that of its image in ℚ[x] under the evaluation homomorphism that sends a to 1.
5. Identity of Bezout
The Bezout identity is a result of arithmetic which says that the g.c.d. of two integers a and b can be expressed in the form au + bv with u and v integers.
Theorem
Let A and B be two polynomials of 𝕂[X]. Then A and B are prime to each other if and only if there are two polynomials U and V such that AU + BV = 1 [3].
More generally, Bezout’s identity characterizes two elements prime to each other in a principal ring. A ring that verifies the property of the theorem is called a Bezout ring.
Example
1) Calculate the g.c.d. of the two given polynomials P and Q.
2) Calculate polynomials U and V such that UP + VQ = 1. Using the result obtained, find an integer n and polynomials U1 and V1 with coefficients in ℤ such that U1P + V1Q = n, and such that n, the content of U1 and the content of V1 are prime to each other as a whole. In a ring with a theory of the g.c.d. (e.g. a factorial ring), the content of a polynomial with coefficients in A is the g.c.d. of its coefficients (see, in MAPLE, the content and primpart commands). The integer n has the following property: if p is a prime number, p divides n if and only if the polynomials P and Q are not prime to each other in 𝔽p[x].
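The content and primitive part mentioned above can be sketched for integer polynomials as follows (a Python illustration; MAPLE’s content and primpart commands do this directly):

```python
from math import gcd
from functools import reduce

def content(poly):
    # content: the gcd of the integer coefficients (cf. MAPLE's content)
    return reduce(gcd, (abs(c) for c in poly))

def primpart(poly):
    # primitive part: the polynomial divided by its content
    c = content(poly)
    return [coef // c for coef in poly]

p = [6, -12, 18]      # 6x^2 - 12x + 18, coefficients in descending order
print(content(p))     # 6
print(primpart(p))    # [1, -2, 3]
```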
6. Resultant
The MAPLE command to calculate the resultant of two polynomials is resultant.
Let A be an integral ring. If n is an integer, we denote En the A-module of polynomials of degree < n. It is therefore a free A-module of rank n. We can define the resultant in one of the following ways:
1) For any pair of polynomials (P, Q) of A[x], there exists a single element Res(P, Q) of A verifying:
a) Res(P, Q) = (−1)^(deg P · deg Q) Res(Q, P);
b) If 0 < deg P ≤ deg Q, if R is the remainder of the Euclidean division of Q by P and if a is the dominant coefficient of P, then Res(P, Q) = a^(deg Q − deg R) Res(P, R);
c) If P = a is a constant, Res(a, Q) = a^(deg Q) (in particular, Res(a, b) = 1 if a and b are nonzero constants).
2) The resultant of P and Q is the determinant of the linear map (U, V) ↦ UP + VQ of E(deg Q) × E(deg P) into E(deg P + deg Q) in the bases (1, x, …, x^(deg Q − 1)), (1, x, …, x^(deg P − 1)) and (1, x, …, x^(deg P + deg Q − 1)).
Example 6.1
For P and Q whose degrees sum to 9, for instance, it is the determinant of a matrix of order 9, called the Sylvester matrix (or of its transpose). Definition 1 closely follows Euclid’s algorithm; it gives an algorithm to calculate the resultant and, at the same time, the polynomials of the identity of Proposition 6.1 below; it is quite easy to show that definition 2 verifies the properties of definition 1.
Proposition 6.1
If P and Q are two polynomials of degree > 0, there are polynomials U and V in A[x] with deg U < deg Q and deg V < deg P such that UP + VQ = Res(P, Q).
Proof 6.1
If M is a matrix of order r with coefficients in A, we have the relation MN = NM = det(M)·Ir, where N is the transpose of the comatrix of M. This implies in particular that for any element v of A^r, det(M)·v belongs to the image of A^r by M. In particular, here, the polynomial Res(P, Q)·1 belongs to the image of the linear map (U, V) ↦ UP + VQ, which is the statement of the proposition.
Proposition 6.2
If A is a factorial ring, and if P and Q are of degree > 0, P and Q have a common factor of degree ≥ 1 if and only if Res(P, Q) = 0.
Proof 6.2
We start by proving that P and Q have a common factor of degree ≥ 1 if and only if there are polynomials U and V in A[x], not both zero, such that deg U < deg Q, deg V < deg P and UP + VQ = 0. Let us show the sufficient condition. We have UP = −VQ.
Since A is factorial, any irreducible factor of P divides VQ. Since the degree of V is strictly lower than that of P, one of the irreducible factors of P necessarily divides Q.
It is then easy to see that the existence of U and V is equivalent to the nullity of the determinant of the Sylvester matrix. We leave the converse to the reader.
Corollary
Let k be an algebraically closed field and P and Q two polynomials of k[x] of degree > 0. Then Res(P, Q) = 0 if and only if P and Q have a common root [4].
In the case of polynomials of ℤ[x], these results applied to 𝔽p = ℤ/pℤ for p a prime number imply the following proposition:
Proposition 6.3
When P and Q are in ℤ[x] and p divides neither of their dominant coefficients, p divides Res(P, Q) if and only if P and Q have a common factor in 𝔽p[x].
In the case of several variables, we have the following results:
Theorem
Let P and Q be polynomials of k[x1, …, xn] of degree ≥ 1 in x1. Then Res_x1(P, Q) = 0 if and only if P and Q have a common factor in k[x1, …, xn] which is of degree ≥ 1 in x1.
Extension theorem
Suppose that k is algebraically closed. Let P and Q be two polynomials of k[x1, …, xn] of degree ≥ 1 in x1 and let p (resp. q) be the dominant coefficient of P (resp. Q) as a polynomial in x1. Let c = (c2, …, cn) be such that Res_x1(P, Q)(c) = 0. We also assume p(c) ≠ 0 or q(c) ≠ 0. Then there exists c1 such that P(c1, c) = 0 and Q(c1, c) = 0 [5].
General definition
If I is an ideal of k[x1, …, xn], we call the affine variety defined by I, denoted V(I), the set of common zeros of all the elements of I or, which amounts to the same thing, of a generating system of I:
V(I) = {a ∈ kⁿ : f(a) = 0 for all f ∈ I}.
Definition 6.1
If S is a set of points of kⁿ, we define I(S) as the ideal of polynomials vanishing on S; V(I(S)) is then the smallest affine variety containing S.
The problems of elimination and extension can now be posed in a more general way.
Definition 6.3
Let I be an ideal of k[x1, …, xn]. The elimination ideal of I with respect to x1 is the ideal I1 = I ∩ k[x2, …, xn] of k[x2, …, xn].
To eliminate is a way to “triangularize” the system of polynomial equations in order to solve it. Starting from an ideal I of k[x1, …, xn], we eliminate x1 in I and thus obtain an ideal I1 of k[x2, …, xn], then eliminate x2 in I1 and obtain an ideal I2 of k[x3, …, xn], and so on. This is exactly what we do when we triangulate a linear system of equations to solve it. This way of posing the problem implies an order on the xi.
NB. It is very important to give oneself an order on the monomials of k[x1, …, xn].
To calculate V(I), we can therefore calculate V(In−1) and, if it is non-empty, go back up: find, for each partial solution already obtained, whether there exists a value of the next coordinate extending it to a zero of the next elimination ideal, and start again. It is an extension problem.
By the way, the simplest problem of elimination is the following:
Solve the system x + y = s, xy = p, where p and s are constants. Calculate the resultant of the two polynomials x + y − s and xy − p with respect to x. We find y² − sy + p.
Thus, if (x, y) is a solution of the system, y necessarily satisfies the equation y² − sy + p = 0.
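This computation can be checked numerically; the sketch below hard-codes the 2×2 Sylvester determinant of x + (y − s) and xy − p, and uses the hypothetical values s = 5, p = 6 (so the solutions are {2, 3}); note that the determinant equals the resultant only up to sign:

```python
def res_x(y, s, p):
    # resultant in x of P = x + (y - s) and Q = y*x - p:
    # the determinant of the 2x2 Sylvester matrix [[1, y - s], [y, -p]]
    return 1 * (-p) - (y - s) * y

s, p = 5, 6                     # hypothetical values: x + y = 5, xy = 6
for y in range(-3, 7):
    # the determinant equals -(y^2 - s*y + p) identically
    assert res_x(y, s, p) == -(y * y - s * y + p)

roots = [y for y in range(-3, 7) if res_x(y, s, p) == 0]
print(roots)  # [2, 3]
```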
We will now look at other situations where this elimination problem occurs naturally.
7. Parametric Equations
Let S be the set of points (x1, …, xn) of kⁿ given by parametric equations:
xi = fi(t1, …, tm), i = 1, …, n, (1)
where the fi are polynomials in the parameters t1, …, tm. Let
I = (x1 − f1, …, xn − fn) (2)
be the corresponding ideal of k[t1, …, tm, x1, …, xn]. Let Im be the elimination ideal of I with respect to t1, …, tm, i.e. Im = I ∩ k[x1, …, xn]. It is shown that
V(Im) is the smallest affine variety containing S. (3)
Thus, if the ideal Im admits a basis (g1, …, gr), a system of Cartesian equations for V(Im) is given by
g1(x1, …, xn) = ⋯ = gr(x1, …, xn) = 0. (4)
Calculating the difference between S and V(Im) is an extension problem (we have, of course, S ⊂ V(Im)).
If we replace the polynomials fi with rational fractions fi/hi, the ideals to be considered are J = (h1x1 − f1, …, hnxn − fn, 1 − h·y) and J ∩ k[x1, …, xn], where h = h1⋯hn and y is a new variable (the last condition allows the zeros of the denominators hi to be eliminated).
8. Constrained Extrema
Another example where the elimination problem naturally occurs is that of constrained extrema. Let us give an example:
Consider the unit sphere of ℝ³ and let f be a polynomial function. We want to find the extrema of f on the sphere. Let us put g(x, y, z) = x² + y² + z² − 1. The Lagrange method says to find them among the points (x, y, z) such that g(x, y, z) = 0 and such that there exists λ with ∇f(x, y, z) = λ∇g(x, y, z). We then obtain 4 polynomial equations in the unknowns (x, y, z, λ). Do it and find the extrema of f on the sphere.
9. Two Geometric Problems
Most geometric problems actually involve polynomial equations and can be translated into the language of polynomial ideals. We will give a few examples of a different nature without pretending to make a general theory.
Pappus’ theorem
Let (A1, A2, A3) and (B1, B2, B3) be two distinct families of aligned points. We denote by Cij the intersection of the lines (AiBj) and (AjBi) (with the convention that i < j, so Cij = Cji). Then the three points C12, C13 and C23 are aligned.
Let us express the points of the first line (resp. of the second) as barycenters of two fixed points of that line, each with its own parameter. We can choose the coordinates of these fixed points appropriately.
We will give two algebraic proofs of this theorem (although the geometrical ones are much prettier). In any case, don’t forget to use the normal command.
1) First method: explicitly calculate the coordinates of the points Cij and directly verify that they are aligned: we can write intermediate procedures calculating the equation of the line passing through two points, the intersection of two lines, and testing whether three points are aligned.
2) Second method: write an aligned procedure that takes as arguments three lists P, Q, R of two elements (corresponding to three points of the plane) and returns the polynomial condition for these points to be aligned; then write the polynomial expressions reflecting the fact that Cij belongs to the line (AiBj), then to the line (AjBi). This gives us six polynomials. Write the polynomial condition g reflecting the fact that the points C12, C13 and C23 are aligned. Calculate a Gröbner basis G of the ideal generated by these six polynomials with a suitable monomial order, and verify that g belongs to this ideal using normalf.
The equation of the union of two lines is of the form (a1x + b1y + c1)(a2x + b2y + c2) = 0, so it is an equation in x and y of total degree 2. Thus, the set formed by two straight lines in a plane is a degenerate conic: a curve whose equation is a polynomial C in x and y of total degree 2, with C not irreducible.
Pascal’s theorem generalizes Pappus’ theorem to a non-degenerate conic, i.e. having an irreducible equation of degree 2.
Pascal’s theorem
Let A1, A2, A3 and B1, B2, B3 be six distinct points of a non-degenerate conic. As before, we denote by Cij the intersection of the lines (AiBj) and (AjBi). Then the three points C12, C13 and C23 are aligned. (We always make the same convention as for Pappus’ theorem.)
To prove this theorem, we give ourselves five points of the plane and choose the coordinate system appropriately. A sixth point is taken arbitrarily on one of the lines, in the form of the barycenter of two of the given points with parameters t and 1 − t. Two of the lines intersect in a first point; two others intersect in Q. Determine the x and y coordinates of Q as functions of t and of the parameters (these are rational fractions with integer coefficients in these variables). This gives parametric equations x = x(t) and y = y(t). Eliminate t using the resultant of the numerators of x − x(t) and y − y(t) with respect to t. We deduce that the point Q runs through a conic.
10. A Look Back at Elimination and Gröbner Bases
Gröbner bases and the algorithms for calculating them were introduced around 1965 by Bruno Buchberger and have been intensively developed since. The full force of these techniques, which as we can see are very recent, is highlighted by the development of computer algebra.
We will give some very rudimentary notions about Gröbner bases. An excellent reference is [Cox, Little, O’Shea]. If α = (α1, …, αn) ∈ ℕⁿ, we set x^α = x1^α1 ⋯ xn^αn.
Definition 10.1
A monomial order on k[x1, …, xn] is a total order relation on ℕⁿ, or equivalently on the monomials x^α, such that:
1) If α > β and if γ ∈ ℕⁿ, then α + γ > β + γ;
2) Any non-empty subset of ℕⁿ has a smallest element.
We write x^α > x^β if and only if α > β.
Example
1) Lexicographic order: α > β if and only if the first non-zero coordinate of α − β is positive. Thus, for instance, x > y² and x²y > xy³.
2) Graded inverse lexicographic order: α > β if and only if |α| > |β|, or |α| = |β| and the last non-zero coordinate of α − β is strictly negative. Here, |α| is the sum of the coordinates of α. Thus, for instance, y³ > x² and x²y > xy².
3) Lexdeg order: this order depends on two lists of variables, say (x1, …, xk) and (xk+1, …, xn). Monomials containing only variables of the first list, or only variables of the second, are compared using the graded inverse lexicographic order; then x^α > x^β if and only if the part of x^α in the first list is larger than that of x^β, or these parts are equal and the part of x^α in the second list is larger. Thus, a monomial containing a variable of the first list is larger than a monomial containing only variables of the second: it is an elimination order.
Check that these are indeed monomial orders. There are other possible orders. The three orders listed are available in MAPLE as plex, tdeg, and lexdeg.
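The first two orders can be sketched as sort keys (a Python illustration; the sample monomials are chosen here for the example):

```python
def lex_key(alpha):
    # lexicographic order: compare exponent vectors componentwise
    return alpha

def grevlex_key(alpha):
    # graded inverse lexicographic order: first compare total degrees;
    # on a tie, the larger monomial is the one whose LAST differing
    # exponent is smaller, hence the reversed, negated vector below
    return (sum(alpha), tuple(-a for a in reversed(alpha)))

# exponent vectors of monomials in (x, y, z): x*y^2, x^3, z^4, x*y*z
monomials = [(1, 2, 0), (3, 0, 0), (0, 0, 4), (1, 1, 1)]
print(sorted(monomials, key=lex_key, reverse=True))
print(sorted(monomials, key=grevlex_key, reverse=True))
```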
Once an order has been chosen, we can speak of the dominant coefficient, the dominant monomial and the dominant term LT(f) of a polynomial f: the MAPLE leadmon command gives a list formed by the dominant coefficient and the dominant monomial; the dominant term is then obtained by multiplying the two, using convert(…, "*").
Definition 10.2
Let I be an ideal of k[x1, …, xn]. We denote LT(I) the ideal of k[x1, …, xn] generated by the dominant terms of the elements of I.
Definition 10.3
A finite subset G = {g1, …, gs} of an ideal I is called a Gröbner basis of I if LT(I) = (LT(g1), …, LT(gs)).
Theorem 10.1
Any Gröbner basis G = {g1, …, gs} of an ideal I relative to a monomial order is a basis of I, i.e. I = (g1, …, gs). Every ideal I admits a Gröbner basis.
Definition 10.4
Let I be an ideal of k[x1, …, xn]. The k-th elimination ideal Ik of I is the ideal Ik = I ∩ k[xk+1, …, xn] of k[xk+1, …, xn].
Elimination theorem
Let I be an ideal of k[x1, …, xn] and G a Gröbner basis of I relative to the lexicographic order x1 > x2 > ⋯ > xn. Then G ∩ k[xk+1, …, xn] is a basis of the ideal Ik [6].
Extension theorem
We assume that k is algebraically closed. Let I = (f1, …, fs) and let I1 be the first elimination ideal of I. Let ci be the dominant coefficient in x1 of fi. If (a2, …, an) is a partial solution in V(I1) that does not belong to V(c1, …, cs), then there exists a1 such that (a1, a2, …, an) ∈ V(I).
Recall that V(I) is the set of points (a1, …, an) of kⁿ such that f(a1, …, an) = 0 for all f ∈ I [7].
To construct a Gröbner basis of an ideal, and to verify that a basis is a Gröbner basis, we use Buchberger’s algorithm.
Gröbner bases are also a way to test whether a polynomial is in an ideal. The MAPLE command is normalf.
Let us explain the principle: to do this, we need to introduce a generalization of Euclidean division to k[x1, …, xn] relative to the chosen order.
Theorem 10.2
Let f1, …, fs be polynomials of k[x1, …, xn] equipped with a monomial order. Any polynomial f can be written as
f = a1f1 + ⋯ + asfs + r, (5)
where r is a linear combination of monomials not divisible by any of the dominant terms of the fi [8].
Proposition
Let G = {g1, …, gs} be a Gröbner basis of an ideal I and let f ∈ k[x1, …, xn]. There is a unique r verifying:
1) None of the monomials of r is divisible by one of the LT(gi);
2) There exists g ∈ I such that f = g + r.
We say that r is the remainder of the division of f by G.
Corollary
Let G be a Gröbner basis of an ideal I and f ∈ k[x1, …, xn]. Then f belongs to I if and only if the remainder of the division of f by G is zero.
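The corollary is easiest to see in one variable, where every ideal is principal and {g} is already a Gröbner basis of (g); here is a Python sketch of the membership test by division:

```python
from fractions import Fraction

def poly_divmod(num, den):
    # Euclidean division over Q; coefficients in descending degree order
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    if len(num) < len(den):
        return [], num
    quot = [Fraction(0)] * (len(num) - len(den) + 1)
    rem = num[:]
    for i in range(len(quot)):
        coef = rem[i] / den[0]
        quot[i] = coef
        for j, d in enumerate(den):
            rem[i + j] -= coef * d
    rem = rem[len(quot):]
    while rem and rem[0] == 0:
        rem.pop(0)
    return quot, rem

def in_ideal(f, g):
    # In k[x] every ideal is principal, so {g} is a Groebner basis of
    # (g): f belongs to (g) iff the remainder of the division is zero.
    _, r = poly_divmod(f, g)
    return not r

print(in_ideal([1, 0, -1], [1, -1]))  # x^2 - 1 in (x - 1): True
print(in_ideal([1, 0, 1], [1, -1]))   # x^2 + 1 in (x - 1): False
```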
11. Conclusions
Gröbner bases are also a way to test whether a polynomial is in an ideal; the MAPLE command is normalf. We have given some very rudimentary notions relating to Gröbner bases and explained the principle: for this, one needs to introduce a generalization of Euclidean division to k[x1, …, xn] relative to the chosen order.