That's fine, but then in the 'infinite vertices' case we require an associated numbering scheme and an ordering over the vertices, which in turn requires a bijection from whatever set you're working with into an index set (⊆ ℕ), etc., when all of this could just be abstracted away. In general, I don't think this is any cleaner than the few axioms defined above, whose structure we can attack with the tools of semirings.
Overall, I think both approaches have their uses: matrices in the realm of computation, and the abstract algebra where symbolic manipulation is straightforward. In particular, I'm interested here since the axioms can be written as a language, and operations can be done over finite sets, which may yield some nice computability results over this 'language of graphs.'
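To make the 'language of graphs' idea concrete, here's a minimal sketch in Python: terms built from four constructors and interpreted into finite (vertex, edge) sets. The constructor names (empty, vertex, overlay, connect) and the set-pair representation are my own assumptions following the usual axiomatisation of such algebras, not anything fixed by this thread.

```python
# A minimal sketch of a graph algebra interpreted over finite sets.
# A graph is a pair (vertices, edges), both frozensets.

def empty():
    return (frozenset(), frozenset())

def vertex(v):
    return (frozenset({v}), frozenset())

def overlay(g, h):
    # The '+' of the algebra: union the vertices and the edges.
    return (g[0] | h[0], g[1] | h[1])

def connect(g, h):
    # The '*' of the algebra: overlay, plus an edge from every
    # vertex of g to every vertex of h.
    return (g[0] | h[0],
            g[1] | h[1] | frozenset((a, b) for a in g[0] for b in h[0]))

# One semiring-flavoured axiom, checked on a finite instance:
# connect distributes over overlay.
a, b, c = vertex('a'), vertex('b'), vertex('c')
assert connect(a, overlay(b, c)) == overlay(connect(a, b), connect(a, c))
```

Since every interpretation here is a finite-set computation, checking an axiom on concrete terms is decidable, which is the kind of computability angle I mean.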
LolWolf, thank you very much for 'defending' the algebra much better than I could possibly have defended it myself :-)
Just to add: I don't understand why we should compare the algebra and the matrix encoding suggested above. The latter is just one possible implementation (a model) of the former. It's a good, useful model. But the point of having an algebra is that we can abstract away the details of any particular implementation and instead focus on the laws that all such implementations satisfy. Sometimes these laws can be interesting by themselves (at least they are interesting to me).
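To illustrate the 'one model among many' point, here's a hedged sketch of the adjacency-matrix model in Python: the same sort of constructors, now interpreted into (vertex list, 0/1 matrix) pairs. The representation and the canonical sorted numbering are my own choices for illustration; notably, the numbering is exactly the bookkeeping the algebra itself never has to mention.

```python
# One concrete model of such an algebra: the adjacency-matrix encoding.
# A graph is (sorted vertex list, 0/1 matrix); overlay is entrywise OR,
# and connect additionally wires every left vertex to every right vertex.

def vertex(v):
    return ([v], [[0]])

def _union(g, h):
    vs = sorted(set(g[0]) | set(h[0]))   # the numbering this model forces on us
    idx = {v: i for i, v in enumerate(vs)}
    m = [[0] * len(vs) for _ in vs]
    for ws, mat in (g, h):               # re-embed both matrices
        for i, u in enumerate(ws):
            for j, w in enumerate(ws):
                m[idx[u]][idx[w]] |= mat[i][j]
    return vs, idx, m

def overlay(g, h):
    vs, _, m = _union(g, h)
    return (vs, m)

def connect(g, h):
    vs, idx, m = _union(g, h)
    for u in g[0]:
        for w in h[0]:
            m[idx[u]][idx[w]] = 1
    return (vs, m)

# The model satisfies the algebra's laws, so law-based reasoning carries
# over, e.g. distributivity of connect over overlay:
a, b, c = vertex('a'), vertex('b'), vertex('c')
assert connect(a, overlay(b, c)) == overlay(connect(a, b), connect(a, c))
```

Any other representation satisfying the same laws (edge sets, adjacency lists, sparse matrices) would serve equally well as a model, which is the whole point.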
I agree: I don't really see why there's such an obsession with linear-algebra-based graph theory. Yes, it's cool, and yes, it has interesting results, but it's not everything; we've got tools for semigroups and general algebras that may be useful here, along with natural generalizations which may emerge that aren't obvious in the linear-algebraic picture.
Almost every time I've implemented a graph algorithm professionally, it's been using matrix operations because they came with a GPU implementation and smashed the performance of a naive implementation.
But from a design and theory standpoint, I prefer to work independent of the representation as much as possible, and only swap in the matrix language as an implementation of a type in already-working code. (Sometimes this isn't quite possible and you need to, e.g., reorder operations to make the calculation efficient in your representation.)
So I would say that linear algebra is a big deal for implementations, if not as much of one theoretically. (Mostly because my computer has really fast linear-algebra chips/code, and not so much support for random graph traversal.)
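As a small, hedged example of that implementation angle: single-source reachability written as repeated matrix-vector products, the kind of formulation that ports directly to GPU linear algebra. NumPy stands in here for whatever accelerated backend you'd actually use, and the 4-vertex graph is made up for illustration.

```python
import numpy as np

# Adjacency matrix of a toy graph: A[i][j] == 1 means an edge i -> j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0],
              [0, 0, 0, 0]])

reached = np.array([1, 0, 0, 0])          # indicator vector: start at vertex 0
while True:
    # One relaxation step: reached @ A marks all out-neighbours at once.
    nxt = np.minimum(reached + reached @ A, 1)
    if np.array_equal(nxt, reached):      # fixed point: nothing new reached
        break
    reached = nxt

assert reached.tolist() == [1, 1, 1, 1]   # every vertex is reachable from 0
```

The loop only ever adds vertices to `reached`, so on a finite graph it hits a fixed point quickly, and each iteration is a single dense (or sparse) matrix product that hardware is very good at.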
Sure, I agree; in this case I refer to "tools" as theoretical tools rather than computational ones.
There's no debate on the merits of computational tractability by converting problems to linear algebra ones; I was speaking from a purely graph-theoretic standpoint.
Thanks LolWolf! You mentioned a few interesting ideas about monotonicity/fixed-point theorems and computability/hardness, and I'd be very curious to know if you come up with anything on these fronts -- please drop me an email if you do!