In my experience, there are usually enough differences in the downstream applications that it can be difficult to adapt these solvers to different use cases. These libraries often make subtle choices about the way data is laid out that can be hard to reconcile. I agree that not being able to read the author's mind can put you at a big disadvantage.
The libraries that are more generically useful are the lower-level ones, like the linear algebra libraries: BLAS, LAPACK, SuiteSparse, etc. I think the key to their success is, at least in part, a dead-simple common data format. A dense matrix is just a bunch of floats packed in memory... if that's your common format, you can do a lot.
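To make the "bunch of floats packed in memory" point concrete, here's a minimal sketch in numpy (my choice of illustration, not anything from the original comment): the matrix really is just a contiguous buffer of doubles plus shape metadata, which is exactly why any library that speaks this format can interoperate.

```python
import numpy as np

# A 2x3 dense matrix: six float64 values packed contiguously, row-major.
A = np.arange(6, dtype=np.float64).reshape(2, 3)
assert A.flags["C_CONTIGUOUS"]

# The "common format" is nothing but the raw bytes plus shape/dtype info.
raw = A.tobytes()          # 6 doubles = 48 bytes, no header, no structure
B = np.frombuffer(raw, dtype=np.float64).reshape(2, 3)
```

Any BLAS/LAPACK wrapper can accept `B` as-is, because reconstructing the matrix needs nothing beyond the buffer and its shape.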
Also, relative to other domains in scientific computing, there tend not to be too many ways of doing something in linear algebra. Gaussian elimination with optional full or partial pivoting is basically good enough. QR factorization, rank-revealing or not.
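Both of those workhorses are a one-liner through scipy's LAPACK wrappers; a quick sketch (scipy is my choice here, the original comment doesn't name it):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, qr

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)

# Gaussian elimination with partial pivoting (LAPACK's getrf/getrs).
lu, piv = lu_factor(A)
x = lu_solve((lu, piv), b)

# Column-pivoted ("rank-revealing") QR: A[:, P] == Q @ R.
Q, R, P = qr(A, pivoting=True)
```

The point is that these two calls cover the vast majority of dense solves; there just aren't many competing algorithms to choose between.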
On the other hand, even "simple" things like function approximation get complicated quickly. There is no one-size-fits-all solution. Do you want an L2 or a minimax approximation? Or would you rather interpolate the function? What polynomial basis do you want to use? What degree of polynomial? Piecewise? Splines? Oh yeah, there are rationals, too...
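Just to show how quickly the choices multiply, here are three of those options side by side for approximating cos on [-1, 1] (a hypothetical example of mine, assuming numpy/scipy):

```python
import numpy as np
from scipy.interpolate import CubicSpline

f = np.cos
x = np.linspace(-1.0, 1.0, 50)

# Option 1: degree-4 least-squares (L2) fit in the monomial basis.
p_ls = np.polynomial.polynomial.Polynomial.fit(x, f(x), 4)

# Option 2: degree-4 fit in the Chebyshev basis (close to minimax).
p_cheb = np.polynomial.chebyshev.Chebyshev.fit(x, f(x), 4)

# Option 3: don't fit at all -- interpolate with a cubic spline.
spline = CubicSpline(x, f(x))
```

All three are reasonable, all three give different answers, and that's before you've even considered piecewise polynomials of other orders or rational approximation.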
Solving linear or nonlinear equations brings in many different iterative solvers. Maybe you need preconditioning?
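As one concrete instance of that combinatorial explosion: here's restarted GMRES with an incomplete-LU preconditioner on a 1-D Poisson matrix (my illustrative choices; swap in CG, BiCGStab, Jacobi, multigrid... and you get a different solver with different trade-offs each time):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

n = 100
# Standard 1-D Poisson (tridiagonal) test matrix.
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Incomplete-LU preconditioner, applied via a LinearOperator.
ilu = spilu(A.tocsc())
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = gmres(A, b, M=M)   # info == 0 means it converged
```

Every one of those choices (iterative method, preconditioner, stopping tolerance) is problem-dependent, which is exactly why a general library interface gets hairy.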
Solving PDEs combines all this complication and difficulty and then throws geometry into the mix, which is even more subtle and difficult to get right than all the rest.
I think in a lot of cases writing your own stuff just means you can sidestep all this. Writing a library that is general and handles all this stuff elegantly and efficiently is really hard. Using one can be just as hard.
Edit: I think my point about the common data format also partly explains the success of "scientific Python" and MATLAB, with Julia struggling to gain a foothold. In MATLAB, everything is a matrix; in scientific Python, numpy's ndarray is the common data format. I've seen people waste a lot of time shuffling data back and forth between different Julia libraries' idiosyncratic data formats.
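A trivial sketch of what "common data format" buys you in practice (libraries chosen for illustration): the same plain ndarray flows through unrelated packages with zero conversion code.

```python
import numpy as np
from scipy.fft import fft
from scipy.linalg import norm

x = np.linspace(0.0, 1.0, 8)   # one plain ndarray...
X = fft(x)                     # ...accepted by scipy.fft as-is
r = norm(x)                    # ...and by scipy.linalg, pandas, matplotlib, etc.
```

No adapters, no copies, no "convert my matrix type to your matrix type" glue -- which is the time sink I've watched people fall into elsewhere.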