Right, when I was doing a math Ph.D. my specialization was numerical optimization. One of the first classes was numerical linear algebra. Of course you shouldn't be inverting a matrix if you don't have to. But this is basic stuff you can read in any numerical linear algebra textbook.
I don't think an apology was requested in there, just an FYI for would-be readers to get more discussion. It's been almost a year; that's plenty of time between submissions.
"Numerical Recipes" by Teukolsky et al. is a good, accessible, and pedagogical introduction to numerical methods of all sorts, including matrix operations. His course covers some areas the book doesn't, especially how to recognize and deal with numerical instabilities. If you go looking deeper in the literature, you'll find that a lot of what's in NR is obsolete.
It covers various topics, from basic linear algebra (like solving systems with LR factorization) to solving (partial) differential equations with linear algebra. It also tells you where each method is applicable (like, "you can use that one if this holds, but that other method works almost everywhere you could need it, except that it is much more expensive").
PS
The Russian translation of that book quickly became a rarity.
Yup, that was a day one lesson in numerical linear algebra.
Another lesson: at times you will find that you are forming 'inner products', that is, for some positive integer n and for, say, single precision, floating point a(i), b(i) for i = 1, 2, ..., n, computing
a(1) x b(1) + a(2) x b(2) + ... + a(n) x b(n)
Well, do that computation in double precision. Convert the result back to single precision if necessary, but if at all possible do 'double precision inner product accumulation'.
Why? The first rule of numerical analysis is to avoid the difference of two large numbers whose difference is small.
Why? Because maybe 12345.6 and 12345.7 each have 6 decimal digits of precision, but they differ by 0.1, which has only 1 significant digit of precision and, thus, is not very accurate and can feed inaccuracy into the subsequent calculations.
So, when accumulating an inner product, a product a(1) x b(1) is essentially already a double precision number, so KEEP THOSE DIGITS, especially since the additions may be the small differences of large numbers. So, by doing double precision inner product accumulation, you have a chance of getting full single precision accuracy in the result.
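A quick numpy sketch of the effect (the array size and seed are mine, just for illustration): accumulate the same float32 inner product once in float32 and once in float64, and compare against a float64 reference.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
a = rng.standard_normal(n).astype(np.float32)
b = rng.standard_normal(n).astype(np.float32)

# Accumulate in float32: rounding error compounds with each addition.
s32 = np.float32(0.0)
for x, y in zip(a, b):
    s32 += x * y

# Accumulate in float64, round back to float32 only once at the end.
s64 = np.float64(0.0)
for x, y in zip(a, b):
    s64 += np.float64(x) * np.float64(y)

# Reference computed entirely in double precision.
ref = np.dot(a.astype(np.float64), b.astype(np.float64))
print(abs(s32 - ref), abs(np.float32(s64) - ref))
```

The double-accumulated sum, even after rounding back to single precision, ends up much closer to the reference than the float32-accumulated one.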
Next, when you have a solution x^1 to Ax = b, substitute to get Ax^1 = b^1, form c^1 = b - b^1 as an 'error' (the residual), and then solve Ay^1 = c^1 and correct for the error by setting x^2 = x^1 + y^1. So, if y^1 is accurate, then you get
Ax^2 = A(x^1 + y^1) = Ax^1 + Ay^1 = b^1 + c^1 = b^1 + b - b^1 = b
Of course, one could do several steps of this iteration.
This process is called 'iterative improvement'.
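In numpy terms, one refinement step might look like this (a sketch, with names of my choosing; I use a float32 solve to stand in for a low-accuracy factorization, and compute the residual in float64):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)

# Low-accuracy solve: single precision, mimicking a cheap factorization.
A32 = A.astype(np.float32)
x1 = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

# Residual c1 = b - A x1, computed in DOUBLE precision -- that's the key.
c1 = b - A @ x1

# Correction: solve A y1 = c1 with the same cheap solver, then update.
y1 = np.linalg.solve(A32, c1.astype(np.float32)).astype(np.float64)
x2 = x1 + y1

# Compare both iterates against a full double precision solve.
x_true = np.linalg.solve(A, b)
print(np.linalg.norm(x1 - x_true), np.linalg.norm(x2 - x_true))
```

One step of refinement already shrinks the error substantially; the trick is that the residual must be formed in higher precision than the solve, otherwise there is nothing accurate to correct with.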
This and more are in
George E. Forsythe and Cleve B. Moler, 'Computer Solution of Linear Algebraic Systems', Prentice-Hall, Englewood Cliffs, 1967.
E.g., pay close attention to 'partial pivoting'.
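A classic illustration of why pivoting matters (the 2x2 example is a textbook standard; the variable names are mine): eliminating on a tiny pivot without a row exchange wipes out the answer even in float64.

```python
import numpy as np

# Exact solution is x ~ (1, 1), but the (1,1) pivot is tiny.
A = np.array([[1e-20, 1.0],
              [1.0,   1.0]])
b = np.array([1.0, 2.0])

# Gaussian elimination with NO pivoting:
m = A[1, 0] / A[0, 0]          # huge multiplier, 1e20
a22 = A[1, 1] - m * A[0, 1]    # 1 - 1e20 rounds to -1e20: info lost
b2 = b[1] - m * b[0]           # 2 - 1e20 also rounds to -1e20
x2 = b2 / a22                  # 1.0
x1 = (b[0] - A[0, 1] * x2) / A[0, 0]   # 0.0 -- badly wrong, should be ~1

# With partial pivoting (what np.linalg.solve / LAPACK does), the rows
# are exchanged first and the answer comes out fine.
x = np.linalg.solve(A, b)
print([x1, x2], x)
```

Same arithmetic, same precision; the only difference is the row exchange.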
Beware of the fact that Ax = b may have a solution but not a unique one. If that is the case in your problem, then it is hopeless to look for A^(-1), yet a solution might still be easy to find.
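For instance (a small numpy sketch), np.linalg.lstsq happily returns one particular solution, the minimum-norm one, of a consistent singular system where no inverse exists:

```python
import numpy as np

# A is singular (rank 1), so A^(-1) does not exist, yet A x = b is
# consistent and has infinitely many solutions x = (t, 2 - t).
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
b = np.array([2.0, 2.0])

# lstsq (SVD-based) returns the minimum-norm solution among them.
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x, rank)   # x is [1, 1], rank is 1
```

np.linalg.solve would raise on this A; the least-squares route sidesteps the inverse entirely.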
Note that if there is a solution, then you can solve the system exactly. Notice that each of the numbers is in 'floating point' and, thus, is just an integer times a power of 2 (or 10, whatever; assume 2). Thus, in Ax = b you can multiply A and b by an appropriate power of 2 and then have a problem where all the given data are integers. Then, for some positive integer m, pick m single precision prime numbers, and for each of these primes, using only single precision integer arithmetic, solve the system in the integers modulo that prime, finding multiplicative inverses via the Euclidean greatest common divisor algorithm. Then take the resulting m solutions and use the Chinese remainder theorem to construct the full multiple precision rational solution at the end. So you have solved the system exactly using only single precision integer arithmetic. See
Morris Newman, "Solving Equations Exactly", 'Journal of Research of the National Bureau of Standards -- B. Mathematics and Mathematical Physics', volume 71B, number 4, October-December 1967, pages 171-179.
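The scheme above can be sketched in a few lines of Python (my own toy version, not Newman's code: it assumes the system happens to have an integer solution and skips the rational-reconstruction step the general method needs; pow(a, -1, p), available since Python 3.8, computes the modular inverse via the Euclidean algorithm):

```python
from functools import reduce

def solve_mod_p(A, b, p):
    """Gauss-Jordan elimination over Z/pZ; returns x with A x = b (mod p).
    Assumes det(A) is nonzero mod p (in practice, skip primes where it is)."""
    n = len(A)
    M = [[A[i][j] % p for j in range(n)] + [b[i] % p] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, p)          # modular inverse via Euclid
        M[col] = [v * inv % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(M[r][j] - f * M[col][j]) % p for j in range(n + 1)]
    return [M[i][n] for i in range(n)]

def crt(residues, primes):
    """Combine x mod p_i into x mod (p_1 * ... * p_m)."""
    N = reduce(lambda u, v: u * v, primes)
    x = 0
    for r, p in zip(residues, primes):
        Np = N // p
        x += r * Np * pow(Np, -1, p)
    return x % N

# 2x + 3y = 8, 5x + 7y = 19  -->  exact integer solution x = 1, y = 2.
A = [[2, 3], [5, 7]]
b = [8, 19]
primes = [10007, 10009, 10037]
sols = [solve_mod_p(A, b, p) for p in primes]
N = reduce(lambda u, v: u * v, primes)
x = [crt([s[i] for s in sols], primes) for i in range(2)]
# Map residues in [0, N) back to signed integers.
x = [v if v <= N // 2 else v - N for v in x]
print(x)
```

Every per-prime solve stays in small-integer arithmetic; only the final CRT combination touches big integers. For a general rational solution you would additionally reconstruct numerators and denominators from the CRT residue, as in Newman's paper.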