A certain computer algorithm used to solve very complicated differential equations uses an iterative method. That is, the algorithm solves the problem the first time very approximately, and then uses that first solution to help it solve the problem a second time just a little bit better, and then uses that second solution to help it solve the problem a third time just a little bit better, and so on. Unfortunately, each iteration (each new problem solved by using the previous solution) takes a progressively longer amount of time. In fact, the amount of time it takes to process the k-th iteration is given by T(k) = 1.2^k + 1 seconds.

The maximum error in the computer's solution after k iterations is given by Error(k) = 2k^(-2). Approximately how long (in hours) will it take the computer to process enough iterations to reduce the maximum error to below 0.0001?

Requiring the error to drop below the tolerance:

2/k^2 < 0.0001
k^2 > 20000
k > sqrt(20000) ≈ 141.42
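
Since k counts whole iterations, the first iteration that actually meets the tolerance is k = 142. A quick numerical check, as a minimal Python sketch (the function name is just illustrative):

```python
# Maximum error after k iterations: Error(k) = 2 / k^2
def max_error(k):
    return 2.0 / k**2

print(max_error(141))  # ~1.006e-4, still above the 0.0001 tolerance
print(max_error(142))  # ~9.919e-5, below the tolerance
```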
The total time for that many iterations, approximated by the integral, is
∫[0, 141.42] (1.2^k + 1) dk = [1.2^k / ln(1.2) + k] evaluated from 0 to 141.42 ≈ 8.64916×10^11 seconds,
or about 2.4×10^8 hours (roughly 27408 years).
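
The closed-form evaluation and the unit conversions can be reproduced with a short Python sketch; the discrete-sum cross-check at the end assumes the iterations run from k = 1 to k = 142:

```python
import math

# Integral of 1.2**k + 1 from 0 to 141.42, using the
# antiderivative 1.2**k / ln(1.2) + k.
b = 141.42                      # ~ sqrt(20000), iterations needed
seconds = (1.2**b - 1) / math.log(1.2) + b

hours = seconds / 3600
years = seconds / (3600 * 24 * 365.25)
print(f"{seconds:.5e} s ~ {hours:.2e} hours ~ {years:.0f} years")
# ~8.649e11 s, ~2.4e8 hours, ~27408 years

# Cross-check: the exact discrete sum over iterations k = 1..142
total = sum(1.2**k + 1 for k in range(1, 143))
print(f"exact sum: {total:.3e} s")  # ~1.05e12 s, same order of magnitude
```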