28th January 07:36
Solving simple 1-D Schroedinger equation numerically
Yes I did. But since I found that the eigenfunctions above roughly the 30th to
50th make no sense anyway, I see that it's better to calculate just a few of
them instead of the whole bunch. Determining the eigenvalues alone via matrix
decomposition is not that bad, but computing all the eigenvectors as well makes it slow.
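For what it's worth, here is a minimal sketch of that "few eigenpairs" approach in Python/SciPy rather than Octave, assuming a harmonic-oscillator test potential V(x) = x^2/2 with hbar = m = 1 (the posts don't say which potential was actually used):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Harmonic oscillator V(x) = x^2/2 in units hbar = m = 1 (my choice of
# test potential; the original post does not say which one was used).
N = 500                        # grid points
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

# Second-order finite differences give a symmetric tridiagonal Hamiltonian:
# diagonal 1/dx^2 + V(x), off-diagonal -1/(2 dx^2).
diag = 1.0 / dx**2 + 0.5 * x**2
offdiag = np.full(N - 1, -0.5 / dx**2)

# Ask LAPACK for only the lowest k eigenpairs instead of all N.
k = 5
E, psi = eigh_tridiagonal(diag, offdiag, select='i', select_range=(0, k - 1))
print(E)   # close to the exact 0.5, 1.5, 2.5, 3.5, 4.5
```

Requesting only an index range of eigenpairs is exactly the "just a few instead of the whole bunch" idea; the eigenvector computation is then proportional to k, not N.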
It agrees closely, although the Numerical Recipes method doesn't use
Cholesky but a different scheme.
I tested it in Octave, and the timing is pretty impressive compared to the
NR routine in Pascal:
grid  500:  2.8 sec
grid  900: 16.2 sec
grid 1000: 21.7 sec
grid 2000:  174 sec
NR-Pascal at grid size 500: 12 sec.
Timed on an Athlon-XP2700 (2.17 GHz actual clock).
It closely fits O(N^3) behaviour, as expected.
I couldn't test the ARPACK thing as I don't have it installed (yet). But as
you determine only a constant number of eigenvalues/vectors, it should scale
much better than the full O(N^3) decomposition.
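In case it helps, SciPy wraps ARPACK as scipy.sparse.linalg.eigsh; a sketch for the same assumed harmonic-oscillator test problem (again my assumption, not from the thread):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh   # ARPACK wrapper in SciPy

# Same assumed model problem: harmonic oscillator, hbar = m = 1.
N = 2000
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

main = 1.0 / dx**2 + 0.5 * x**2
off = np.full(N - 1, -0.5 / dx**2)
H = sp.diags([off, main, off], offsets=[-1, 0, 1], format='csc')

# ARPACK iterates with sparse matrix-vector products, so a fixed number k
# of eigenpairs avoids the O(N^3) cost of a full dense decomposition.
# Shift-invert around sigma = 0 converges quickly to the lowest states.
E, psi = eigsh(H, k=5, sigma=0.0, which='LM')
print(np.sort(E))
```

The shift-invert mode factorizes H once and then only needs triangular solves, which is why a constant number of eigenpairs stays cheap even at grid 2000.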
So I think it is good to use this program, as it has the advantage of
precompiled and highly optimized routines. Plotting is easy too.
In the data analysis project I had large 2D and 3D arrays which were
processed in loops... precisely the weakness you describe. Maybe I should
have vectorised more operations, but I was a bit lazy...
Might be interesting to take a look at. Another poster pointed out that the
errors scale as O((dx)^2), which agrees closely with the results.
This means the accuracy increases linearly with invested CPU cycles (with
the O(N^2) method): the error shrinks as O((dx)^2) = O(1/N^2) while the cost
grows as O(N^2), so the error is inversely proportional to the work, as long
as you have enough RAM.
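That O((dx)^2) scaling is easy to check numerically: halving dx should quarter the error. A sketch, once more assuming the harmonic oscillator (exact ground-state energy 1/2) as the test problem:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def ground_state_energy(N):
    """Lowest eigenvalue of the second-order FD harmonic oscillator."""
    x = np.linspace(-10.0, 10.0, N)
    dx = x[1] - x[0]
    diag = 1.0 / dx**2 + 0.5 * x**2
    off = np.full(N - 1, -0.5 / dx**2)
    E = eigh_tridiagonal(diag, off, eigvals_only=True,
                         select='i', select_range=(0, 0))
    return E[0]

exact = 0.5
err_coarse = abs(ground_state_energy(250) - exact)
err_fine = abs(ground_state_energy(500) - exact)
print(err_coarse / err_fine)   # about 4: halving dx quarters the error
```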
It might be the case that an n-th order approximation has errors scaling as
O((dx)^n), which would be quite an improvement.
I suppose one eventually gets to a 'spectral method' as someone mentioned
(where one uses very many points to approximate the derivatives).
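As an illustration of the higher-order idea, here is a sketch of a fourth-order five-point stencil (my own example, not from the thread); halving dx should now cut the error by roughly a factor of 16:

```python
import numpy as np
from scipy.linalg import eig_banded

def e0_fourth_order(N):
    """Ground-state energy using a fourth-order stencil for psi''."""
    x = np.linspace(-10.0, 10.0, N)
    dx = x[1] - x[0]
    # psi''[i] ~ (-psi[i-2] + 16 psi[i-1] - 30 psi[i] + 16 psi[i+1]
    #             - psi[i+2]) / (12 dx^2)
    # With the -1/2 kinetic prefactor the pentadiagonal bands become:
    ab = np.zeros((3, N))                    # upper band storage for eig_banded
    ab[0, 2:] = 1.0 / (24.0 * dx**2)         # second superdiagonal
    ab[1, 1:] = -2.0 / (3.0 * dx**2)         # first superdiagonal
    ab[2, :] = 1.25 / dx**2 + 0.5 * x**2     # diagonal: kinetic + potential
    E = eig_banded(ab, lower=False, eigvals_only=True,
                   select='i', select_range=(0, 0))
    return E[0]

err_coarse4 = abs(e0_fourth_order(250) - 0.5)
err_fine4 = abs(e0_fourth_order(500) - 0.5)
print(err_coarse4 / err_fine4)   # about 16: errors now scale as O((dx)^4)
```

The matrix is still banded (bandwidth 2 instead of 1), so the cost barely changes while the accuracy per grid point improves enormously; pushing the stencil width further is exactly the road towards a spectral method.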