
For your "I find it hard to believe that you wrote", you might read my:

"I'm sure that there is a lot of good material in that book, especially on core computer science topics."

If you look carefully, then you will find that as my first sentence.

I'm "aware" is about me, which is not appropriate, instead of about the book, which is appropriate, but, as explained early in the book, nearly no course covers the whole book.

But I was kind to the book: It made an even worse mess out of probability, and that material was farther from the back cover. Also see another post in this thread saying that, in general, the proofs in the book are poorly written.

"For those skimming the book, or perhaps more commonly, keeping it as a reference, it's a great compendium of algorithms references, much like a slightly more rigorous, paper version of Wikipedia." I can believe that, but it doesn't conflict with what I wrote. Indeed, I found and downloaded a PDF of the book for just the purpose you mentioned and that I mentioned in my first sentence: The book likely has some core computer science stuff, that is, not just poorly done, hijacked applied math that apparently I know much better than the authors, that I may be able to use as I finish the software for my server farm.

Gads, it looks like they want to use the approximation approach of starting with a minimum spanning tree to solve the traveling salesman problem in the plane. The idea also works in any Euclidean R^n. It's a CUTE idea. I first saw it in a paper of Karp, who easily has some nice proofs of some astounding properties. Sadly, in the book I saw some references for the idea, but none from Karp, and I didn't see mention of some of the really nice properties.

There's a chance of a biggie point there: For all the teeth grinding over NP-completeness, that problem is NP-complete but surprisingly easy to solve with surprisingly good accuracy; so, the suggestion is that there is a crack in the armor of the NP-complete difficulty. That suggestion is actually more important than the problem itself, which in practice people are not struggling with. Yes, the traveling salesman problem remains important, say, for production scheduling, but there the problem is not in a Euclidean R^n.
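
For the curious, here is roughly that minimum-spanning-tree idea as a quick Python sketch of my own (not from the book or from Karp's paper); it assumes the distances obey the triangle inequality, as Euclidean distances do:

    import math

    def mst_tsp_tour(points):
        # Approximate metric TSP: build a minimum spanning tree (Prim), walk it
        # in preorder, and skip repeats.  For metric instances the tour is at
        # most twice the length of the optimal tour.
        n = len(points)
        dist = lambda a, b: math.dist(points[a], points[b])

        # Prim's algorithm for the minimum spanning tree, rooted at vertex 0.
        parent = {}
        best = {v: (dist(0, v), 0) for v in range(1, n)}
        while best:
            v = min(best, key=lambda u: best[u][0])
            parent[v] = best.pop(v)[1]
            for u in best:
                d = dist(v, u)
                if d < best[u][0]:
                    best[u] = (d, v)

        # Preorder walk of the tree ("shortcutting" the doubled tree).
        children = {v: [] for v in range(n)}
        for v, p in parent.items():
            children[p].append(v)
        tour, stack = [], [0]
        while stack:
            v = stack.pop()
            tour.append(v)
            stack.extend(reversed(children[v]))
        tour.append(0)  # close the cycle
        return tour, sum(dist(a, b) for a, b in zip(tour, tour[1:]))

(The reason for the factor-of-2 guarantee: walking around the doubled MST and shortcutting repeated vertices costs at most twice the MST weight, and the MST weighs no more than the optimal tour.)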

Some computer science is nice and useful. Hopefully there has been some such since I put down Knuth's TAOCP! So, maybe in some issues of handling deadlocks, transactions, race conditions, reliability via parallelism, and more, there will be something useful in the book it would take me more than 90 seconds to reinvent!

Still I'm correct: They should clean up their act in their math. We should get better from MIT.



>So, maybe in some issues of handling deadlocks, transactions, race conditions, reliability via parallelism, and more, there will be something useful in the book it would take me more than 90 seconds to reinvent!

I thought you were serious with all your writing, but after reading this bit, I can only conclude that you're either trolling or delusional.


That remark is a bit facetious.


A bit full of yourself, perhaps? Do you have anything substantive to back up that sort of preening? (I ask seriously.)


Seriously? It's not about me. It's about that book. For my remarks, you will have to look at them and not at me.

I nailed them on their description that the simplex algorithm was like Gaussian elimination for systems of linear inequalities.

No. The simplex algorithm selects elementary row operations to improve the value of the objective function.

Gaussian elimination also uses elementary row operations, but that is about the end of the direct connection. In particular, how Gaussian elimination selects elementary row operations is quite different from how simplex selects them.
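
To make the contrast concrete, here is a quick sketch of Gaussian elimination with partial pivoting (my sketch, not the book's): the pivot in each column is chosen purely by magnitude, for numerical stability, with no objective function anywhere in sight.

    import numpy as np

    def gaussian_eliminate(A, b):
        # Solve A x = b by Gaussian elimination with partial pivoting.
        A = np.array(A, dtype=float)
        b = np.array(b, dtype=float)
        n = len(b)
        for k in range(n):
            # Pivot choice: the largest-magnitude entry in column k at or below row k.
            p = k + int(np.argmax(np.abs(A[k:, k])))
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
            # Elementary row operations to zero out the column below the pivot.
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        # Back substitution on the resulting upper-triangular system.
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

Simplex also applies elementary row operations, but there the pivot column is chosen by the sign of the reduced costs and the pivot row by the ratio test, that is, by what improves the objective while keeping feasibility.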

For more detail, I explained the role of artificial variables where we just 'glue on' an identity matrix. So, since we just glue on an identity matrix, we never, directly in simplex, go through anything like Gaussian elimination to GET an identity matrix. Yes, at times, for numerical reasons, one should 'reinvert' the basis in simplex, and here Gaussian elimination, etc., might be used, but such 'reinversion' is not really simplex.
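
A small numpy sketch of that 'glue on' step, with names of my own invention, nothing from the book:

    import numpy as np

    def phase_one_tableau(A, b):
        # Append one artificial variable per equality row, so [A | I | b] has the
        # identity sitting there as an immediate starting basis.  Nothing like
        # Gaussian elimination is needed to GET that identity.
        A = np.array(A, dtype=float)
        b = np.array(b, dtype=float)
        m, n = A.shape
        flip = b < 0                  # flip rows with negative right-hand sides
        A[flip] *= -1.0               # so the artificial basic solution is feasible
        b[flip] *= -1.0
        T = np.hstack([A, np.eye(m), b.reshape(-1, 1)])
        basis = list(range(n, n + m))                 # the artificial columns
        phase_one_cost = np.concatenate([np.zeros(n), np.ones(m)])  # minimize their sum
        return T, basis, phase_one_cost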

The authors mentioned Gaussian elimination when they should have mentioned elementary row operations. Next, what is special about simplex is improving the objective function, not working with linear inequalities. Indeed, simplex doesn't work with linear inequalities; instead, we just use 'slack' and 'surplus' variables to get a system of linear equalities (with non-negative variables) before simplex even gets the problem.
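
And a sketch of the slack/surplus step, again with made-up names and a toy convention for the row senses; the point is just that simplex only ever sees equalities in non-negative variables:

    import numpy as np

    def to_equalities(A, b, senses):
        # Turn each row 'a.x <= b' into 'a.x + s = b' (slack) and each row
        # 'a.x >= b' into 'a.x - s = b' (surplus), with s >= 0; '=' rows pass through.
        A = np.array(A, dtype=float)
        b = np.array(b, dtype=float)
        m, _ = A.shape
        new_cols = []
        for i, sense in enumerate(senses):
            if sense in ('<=', '>='):
                col = np.zeros((m, 1))
                col[i, 0] = 1.0 if sense == '<=' else -1.0
                new_cols.append(col)
        return (np.hstack([A] + new_cols) if new_cols else A), b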

The authors were being sloppy. I called them out on it. It was appropriate that I do so.

If you believe that my remarks about simplex are in error, then by all means make your claims.

But the issue is not me; it's the book.


>I nailed them on their description that the simplex algorithm was like Gaussian elimination for systems of linear inequalities.

I found your additional input on the topic helpful, but I'm having trouble picking out more helpful insights about LP from all the other unrelated discussion.


The core of the simplex algorithm, and the key to its considerable power, is a cute application of elementary row operations selected to improve the value of the objective function. That's the 'algebraic' part. There is a cute, and significant, geometric part: The collection of feasible solutions is an intersection of finitely many closed half spaces and, thus, is closed and convex. The set has finitely many extreme points. At each iteration, the simplex algorithm starts at an extreme point. The elementary row operation causes the algorithm to try to move along an edge that improves the value of the objective function. In case of 'degeneracy', the algorithm can execute algebraically but stand still geometrically, but there are techniques to do the right things in this case.

So, the core really isn't Gaussian elimination for systems of linear inequalities.
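
To make 'selected to improve the value of the objective function' concrete, here is a minimal tableau sketch of my own for the easy case, maximize c.x subject to A x <= b, x >= 0, with b >= 0, so the slack columns hand us the starting basis for free. It uses the most-negative reduced-cost rule and ignores the anti-cycling techniques mentioned above.

    import numpy as np

    def simplex_max(c, A, b):
        # Maximize c.x subject to A x <= b, x >= 0, assuming b >= 0.
        m, n = A.shape
        # Tableau [A | I | b] with the objective row [-c | 0 | 0] at the bottom.
        T = np.zeros((m + 1, n + m + 1))
        T[:m, :n] = A
        T[:m, n:n + m] = np.eye(m)
        T[:m, -1] = b
        T[-1, :n] = -c
        basis = list(range(n, n + m))          # the slack variables start in the basis
        while True:
            j = int(np.argmin(T[-1, :-1]))     # entering column: most negative reduced cost
            if T[-1, j] >= -1e-12:
                break                          # no improving direction: current point is optimal
            ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-12 else np.inf for i in range(m)]
            i = int(np.argmin(ratios))         # leaving row: minimum ratio test keeps feasibility
            if ratios[i] == np.inf:
                raise ValueError("objective is unbounded")
            T[i, :] /= T[i, j]                 # the elementary row operations: pivot on (i, j)
            for r in range(m + 1):
                if r != i:
                    T[r, :] -= T[r, j] * T[i, :]
            basis[i] = j
        x = np.zeros(n + m)
        x[basis] = T[:m, -1]
        return x[:n], T[-1, -1]                # optimal point and optimal objective value

    # Example: maximize 3x1 + 5x2 with x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18
    x, z = simplex_max(np.array([3.0, 5.0]),
                       np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]),
                       np.array([4.0, 12.0, 18.0]))
    # expect x = [2, 6], z = 36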

For more, LP and the simplex algorithm are surprisingly powerful: Part of the reason is that, for a huge range of optimization problems, taking a local linear approximation and then attacking it with simplex is often the best technique known. There are also surprising connections with combinatorial optimization and some classic NP-complete problems. What LP and simplex do on networks is also surprising and powerful; e.g., there simplex takes on a special form based on spanning trees.
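
As a tiny illustration of the network remark, here is a min-cost flow posed directly as an LP and handed to scipy.optimize.linprog; the graph, costs, and capacities are invented for the example, and linprog's default solver is not necessarily simplex, let alone the spanning-tree (network simplex) specialization.

    from scipy.optimize import linprog

    # Send 5 units from s to t at minimum cost.  Edge order: s->a, s->b, a->t, b->t, a->b.
    cost   = [1, 2, 3, 1, 1]
    bounds = [(0, 4)] * 5                 # capacity 4 on every edge
    # Flow conservation (out minus in) at s, a, b; the equation at t is redundant and dropped.
    A_eq = [[ 1,  1,  0,  0,  0],         # s:  x_sa + x_sb          =  5
            [-1,  0,  1,  0,  1],         # a: -x_sa + x_at + x_ab   =  0
            [ 0, -1,  0,  1, -1]]         # b: -x_sb + x_bt - x_ab   =  0
    b_eq = [5, 0, 0]

    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(res.x, res.fun)                 # optimal flows; total cost 16 for this data

A real network code would exploit the fact that a basis here corresponds to a spanning tree of the graph, which is what makes the network form of simplex so fast.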



