How much of this math is deep work, and how much of it is the conjuring of obscure objects that haven't had much study, followed by proving trivial things about them?
I think Deligne's Theorem is a poster child for the power of Modern Algebraic Geometry. Andrew Wiles's proof of Fermat's Last Theorem might also be relevant.
Generally speaking, studying what the solution sets of polynomial equations are "like" is quite fundamental to a lot of mathematics. Doing this in a "deep" way can lead to a reimagining of much of modern mathematics: https://rawgit.com/iblech/internal-methods/master/notes.pdf
[edit]
For instance, lots of people use a straightforward generalisation of number systems called rings. But ring theory is quite abstract. Modern Algebraic Geometry shows that, at least in the case of commutative rings, these are merely rings of functions on a space called the ring's spectrum. You can visualise a ring's spectrum, unlike the ring itself. Many properties of a ring are just properties of its spectrum. This is a significant conceptual leap in the understanding of objects that had been studied since the 1800s without much geometric insight.
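As a toy illustration of my own (not from the comment above): for the ring Z/nZ, the points of the spectrum are the prime ideals, and these correspond exactly to the prime divisors of n. So the "space" attached to Z/12Z has just two points, one for the prime 2 and one for the prime 3.

```python
def spec_zn(n):
    """Prime divisors of n, i.e. the points of Spec(Z/nZ)."""
    points, p = set(), 2
    while p * p <= n:
        if n % p == 0:
            points.add(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        points.add(n)
    return points

print(spec_zn(12))  # {2, 3}: Spec(Z/12Z) has two points
```

This is of course only the very simplest case; the point is that properties of the ring (here, its prime factor structure) become properties of a finite set of "points".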
Modern Algebraic Geometry is indeed highly abstract, but generally the conjuring of obscure objects is with a specific goal in mind, for example
- consolidation of many types of results into a 'simple' theoretical framework; I suppose this originates with Noether and reaches its apotheosis in Bourbaki's tracts.
- embedding of 'classical' objects (solutions to polynomial equations) inside a larger 'category' (schemes), where certain mysterious relations observed in the classical world (the Weil Conjectures) have a more 'natural' interpretation (a fixed-point theorem) and light the way to a proof that would otherwise have been beyond reach.
Algebraic geometry has many deep and powerful ideas. It started out by looking at the space described by the zeros of a polynomial, e.g.
y^2 - x^3 - x = 0, x^2 + y^2 + 1 = 0
But actually you do not need to talk about the underlying space directly. If you want to talk about a space, all you really need are the possible functions on that space. If you want to do geometry, you only need the algebra of functions on the space: in the examples above, just the polynomials themselves, rather than having to, say, explicitly solve for the points. You can use this big idea in a lot of other areas of mathematics and physics.
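A minimal sketch of this: the polynomial y^2 - x^3 - x *is* the data of the curve, and testing whether a point lies on the curve is just evaluating the function and checking for zero.

```python
# The defining polynomial acts as a membership test for the space:
# a point (x, y) lies on the curve exactly when y^2 - x^3 - x = 0.

def on_curve(x, y, tol=1e-9):
    return abs(y**2 - x**3 - x) < tol

assert on_curve(0.0, 0.0)        # the origin lies on the curve
assert on_curve(2.0, 10**0.5)    # y = sqrt(10), since 2^3 + 2 = 10
assert not on_curve(1.0, 1.0)    # 1 - 1 - 1 = -1, not zero
```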
The whole approach of Grothendieck (the “rising sea” analogy) is to conquer deep theorems by first conjuring obscure (but simple) objects and prove simple properties of them, again and again, until finally the deep theorem is a result of many simple ones. In this project (and Grothendieck’s program) the deep result is the Weil conjectures.
I once tried to learn some math from first principles in order to solve a symbolic dice problem. I had a question deleted from mathoverflow (probably because I did a lousy job wording the open-ended question of what the heck the mathematical terminology even was for the problem I was solving), but it led me down the rabbit hole of abstract algebra and into universal algebra. I had discovered a simple dice problem whose only mathematical representation was a "commutative magma", and I found reading the surrounding material fascinating. The idea behind universal algebra is boiling mathematics down to a set of arbitrary objects, abstract and arbitrary operation(s), and classifications and rules built up based on what you define/allow the rules to be.
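As a hypothetical illustration (this is my own example, not the poster's actual dice problem): an operation on die faces like "average the two faces and round" is commutative but not associative, so the faces with that operation form a commutative magma and nothing stronger.

```python
# Hypothetical example of a commutative magma on die faces
# (not the poster's actual problem): average-and-round is
# symmetric in its arguments but fails associativity.

def avg(a, b):
    return round((a + b) / 2)

faces = range(1, 7)
assert all(avg(a, b) == avg(b, a) for a in faces for b in faces)  # commutative
assert avg(avg(1, 2), 4) != avg(1, avg(2, 4))                     # not associative
```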
It’s surprisingly simple at a conceptual level, but I rapidly stopped researching because the entire field seems to assume a PhD level of math knowledge and terminology for something I probably could have grasped in high school.
Math needs more people to try and build a chain of understanding “up from the ground level” instead of arbitrary starting points based on assumptions regarding prior learning and educational pipelines/universities.
I think you may run into an issue with the fundamental nature of math in answering that question, because the path to one deep theorem is usually made up of many obscure objects linked through trivial steps.
To what extent is this knowledge reducible to a form that someone can learn it quickly and get something done with it? Algebraic Geometry is just one field of mathematics, no?
Related question: How do you use this resource?
[edit] To make it clear: It's a wonderful thing that this exists.
Modern Algebraic Geometry is a field that encompasses and somewhat unifies so many different areas of mathematics that it is extremely difficult to distill down quickly. So it really depends on what your current mathematical background is. If you only know some Abstract Algebra (ring and field theory in particular) and Topology at the undergrad level, it will take a long time for you to contribute to research mathematics, but you could probably get a feel for the subject after a couple of years (shorter if you're a graduate student focusing on it full-time). If you don't know anything about those two subjects, it's a very long slog. If you know differential geometry, category theory, and complex geometry in addition to the above, you could probably pick it up relatively quickly.
The definitions and machinery make sense if you have enough background, but the why and how we got here is often very unclear.
If you don't care about schemes, stacks, and the current modern viewpoint of Algebraic Geometry, it's not hard to get a decent understanding of algebraic varieties, which were the original motivation for the field. The undergrad book Ideals, Varieties, and Algorithms by Cox, Little, and O'Shea does a great job of introducing the subject, and it has a really cool project on computing Groebner bases for polynomial equations in sine and cosine to describe the configuration space of different types of robotic arms.
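To give a flavour of that kind of computation (a sketch using sympy, not the book's own code): treat cos(theta) and sin(theta) as ring variables c, s subject to c^2 + s^2 = 1, and use a lex-order Groebner basis to eliminate them from the forward kinematics of a length-2 arm.

```python
# Sketch: eliminate the trig variables c = cos(theta), s = sin(theta)
# from the kinematics px = 2c, py = 2s, recovering the constraint
# px^2 + py^2 = 4 on reachable positions.
from sympy import symbols, groebner, reduced

c, s, px, py = symbols('c s px py')
G = groebner([2*c - px, 2*s - py, c**2 + s**2 - 1],
             c, s, px, py, order='lex')  # lex order eliminates c, s first

# px^2 + py^2 - 4 lies in the ideal, so it reduces to 0 mod the basis:
_, r = reduced(px**2 + py**2 - 4, list(G), c, s, px, py, order='lex')
assert r == 0
```

The surviving relation in px, py alone is exactly the configuration-space description the book's robotics project is after.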
I'm trying to learn more modern algebra (including algebraic geometry). The modern stuff is of interest to me, but it feels overwhelming.
I get that affine schemes are somehow the "geometric" dual of a commutative ring. A motivating example is Spec(R[X,Y]/(X^2 + Y^2 - 1)), which is simple enough as a ring (if you remove the "Spec"), but as an affine scheme it is a circle. The slash is almost acting like a subset formation operation. I know enough category theory to see that the reason why the slash is acting that way is because equalisers are the dual construction to coequalisers; the slash (ring quotienting) is a coequaliser, and in the category of affine schemes it becomes an equaliser, and equalisers on "spaces" are supposed to form subsets somehow. Another example is the ring R[X]/(X^2), sometimes called the dual numbers, whose affine scheme (or Spec) is a lot like an infinitely small line segment. The fact that the affine scheme behaves like an infinitely small region of space is dual to the algebraic fact that the dual numbers are a local ring.
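The "infinitely small line segment" intuition for R[X]/(X^2) can be made computational. Since eps^2 = 0, arithmetic with a + b*eps forces f(a + eps) = f(a) + f'(a)*eps, which is forward-mode automatic differentiation. A minimal sketch:

```python
# Minimal sketch: arithmetic in R[X]/(X^2) computes derivatives,
# because eps^2 = 0 kills every term beyond first order.

class Dual:
    def __init__(self, a, b):
        self.a, self.b = a, b              # the element a + b*eps
    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)
    def __mul__(self, other):              # (a + b*eps)(c + d*eps), eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

x = Dual(2, 1)             # the point 2, displaced by an infinitesimal
y = x * x * x + x          # f(x) = x^3 + x
print(y.a, y.b)            # 10 13: f(2) = 10 and f'(2) = 3*4 + 1 = 13
```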
Finally, I have a vague understanding that a scheme is the result of gluing some affine schemes together. Sheaf stuff is involved.
Anyway, the above summarises my understanding of schemes. I don't know any differential geometry as such. I rely a lot on naive, and sometimes not wholly rigorous intuition. I have no idea how you compute with this, especially given how elaborate the definitions are.
[edit]
I'm hoping this might present a shortcut for someone like me: https://www.ingo-blechschmidt.eu/research.html It's especially promising because the computations look more familiar to me.
There are some old tutorials that show how to do concrete calculations with these objects with Macaulay. They were an immense help to me when learning algebraic geometry, because they showed me how all this sheaf blah blah blah connected to actual concrete polynomials.
There's more in the book "Computations in algebraic geometry with Macaulay 2," which is free online. See also Schenck's "Computational Algebraic Geometry."
edit: Also, topos theory is definitely not the way to go here. Getting your hands dirty with algebraic curves is. See e.g. Griffiths' Introduction to Algebraic Curves.
The dual numbers, in my opinion, are a lot easier to understand with some background in differential geometry. In differential geometry, if I_x is the ideal of smooth functions vanishing at the point x in the ring of smooth functions, then I_x / I_x^2 is a real vector space called the cotangent space (its elements are given the suggestive dx moniker; the “infinitely short line” you mentioned above is just a modern interpretation of the infinitesimal from calculus), and the dual of this vector space is the space of tangent vectors at this point. Another way to think about R[X,Y]/(X^2) is that it drops all the non-linear terms in X to create a flat (co)tangent space.
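One way to see the "drop the non-linear terms" point concretely (a sketch using sympy): computing modulo X^2 is literally truncating a Taylor series after the linear term.

```python
# Sketch: working in R[X]/(X^2) is the same as keeping only the
# constant and linear terms of a Taylor expansion, i.e. f(0) + f'(0)*x.
from sympy import symbols, sin, exp

x = symbols('x')
f = sin(x) * exp(x)
linear = f.series(x, 0, 2).removeO()  # expand to order 2, drop O(x^2)
print(linear)                          # x, since f(0) = 0 and f'(0) = 1
```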
Also, the original motivation for sheaves was to create a way to deal with multi-valued complex functions. The complex log function is multi-valued, so in intro complex analysis it’s studied locally by choosing a branch of the range where it’s single-valued. Thus it’s impossible to “do differential geometry” by talking about a global ring of analytic functions. But you can talk about the “local ring of analytic functions” at a point and a specific branch, and glue these locally ringed spaces together to get global insight.
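The multi-valuedness is easy to see numerically: exp has period 2*pi*i, so -1 has infinitely many logarithms, and a "branch" is a choice among them. Python's cmath.log picks the principal branch, with imaginary part in (-pi, pi].

```python
# Sketch: -1 has many logarithms, differing by multiples of 2*pi*i;
# cmath.log can only ever report the principal one.
import cmath, math

z = complex(-1, 0)
principal = cmath.log(z)              # pi*i
another = principal + 2j * math.pi    # 3*pi*i is also a logarithm of -1
assert abs(cmath.exp(principal) - z) < 1e-12
assert abs(cmath.exp(another) - z) < 1e-12   # exp cannot tell the branches apart
# log always lands back on the principal branch, whatever branch we started on:
assert -math.pi < cmath.log(cmath.exp(another)).imag <= math.pi
```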
Yeah, you seem to be on a pretty good track here. Most people who attempt to learn this usually have more mathematical experience, so some things seem more natural to them than to you, e.g.:
> Finally, I have a vague understanding that a scheme is the result of gluing some affine schemes together. Sheaf stuff is involved.
For most mathematicians, this step is actually rather clear, because the intuition here is that "you glue schemes from affine schemes same way you glue (topological/differential) manifolds from pieces that look like R^n", and people studying mathematics typically have extensive experience with manifolds before they encounter schemes.
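To make the manifold analogy concrete (my own sketch, not from the comment): the unit circle is covered by two stereographic charts, projecting from the north and south poles, and on the overlap the transition map is just v = 1/u. Gluing affine schemes follows the same pattern.

```python
# Sketch: two coordinate charts on the unit circle, glued along
# their overlap by the transition map v = 1/u.
import math

def north_chart(theta):   # defined away from the north pole (0, 1)
    x, y = math.cos(theta), math.sin(theta)
    return x / (1 - y)

def south_chart(theta):   # defined away from the south pole (0, -1)
    x, y = math.cos(theta), math.sin(theta)
    return x / (1 + y)

for theta in (0.3, 1.0, 2.5, 4.0):        # points in the overlap
    u, v = north_chart(theta), south_chart(theta)
    assert abs(v - 1 / u) < 1e-12         # transition map: v = 1/u
```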
> and in the category of affine schemes it becomes an equaliser, and equalisers on "spaces" are supposed to form subsets somehow.
The intuition here is something like this: let's assume that R is algebraically closed, for example let R = C, the field of complex numbers. Then C[x, y] is the ring of polynomial functions defined on the complex space C^2, and C[x, y]/(x^2 + y^2 - 1) is the ring of polynomial functions defined on the subspace V = { x^2 + y^2 - 1 = 0 }: if you have two functions f, g \in C[x, y] such that f(z) = g(z) for all z in V, then the function h = f - g must be zero on all of V, so (by Hilbert's Nullstellensatz) h must be in the ideal (x^2 + y^2 - 1). So two elements f, g of C[x, y] restrict to the same function on V = { x^2 + y^2 - 1 = 0 } precisely when their difference is in the ideal I = (x^2 + y^2 - 1), and the ring of functions on V is exactly the quotient ring C[x, y]/I (or, as you call it, an equaliser, which is correct, but geometers never call it that).
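A concrete check of that argument (a sketch with sympy): f = x^2 and g = 1 - y^2 are different polynomials, but they agree at every point of the circle, and indeed their difference reduces to 0 modulo the ideal (x^2 + y^2 - 1).

```python
# Sketch: two polynomials define the same function on the circle
# exactly when their difference lies in the ideal (x^2 + y^2 - 1).
from sympy import symbols, reduced

x, y = symbols('x y')
f, g = x**2, 1 - y**2
_, remainder = reduced(f - g, [x**2 + y**2 - 1], x, y)
assert remainder == 0   # f and g restrict to the same function on V
```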
> I have no idea how you compute with this, especially given how elaborate the definitions are.
Let me give you an example that I found very illuminating, which just so happen to talk about the scheme Spec(R[X,Y]/(X^2 + Y^2 - 1)) you mentioned: see section "6.5.8. More examples of rational maps." in http://math.stanford.edu/~vakil/216blog/FOAGnov1817public.pd...
It’s not at all reducible. This is very complex material, requiring extensive mathematical knowledge and maturity. It’s only barely accessible to the average person with a BS in mathematics.
Stacks, like EGA before it, is a wonderful reference but a terrible textbook. I think even the authors would agree with this! Fortunately there are many other books from which to learn algebraic geometry, after which Stacks will start to make a lot more sense.
It's used in the sense of 'a wiki project that tries to act like an encyclopedia', unlike, for example, the wiki wiki web. A similar kind of wiki is scholarpedia.org.
(via https://news.ycombinator.com/item?id=11054838, but nothing else there)