It is often desirable to solve mathematical problems as limits of simpler problems. However, are such techniques always guaranteed to work? For instance, the problem of finding roots of polynomials of degree four or higher by starting from an initial guess and iterating was only resolved in the 1980s (Newton's method is not guaranteed to converge): Doyle and McMullen showed that this is possible only if one allows multiple independent limits to be taken, not just one. They called such structures "towers of algorithms". In this talk I will apply this idea to other problems (such as computational quantum mechanics, inverse problems, and spectral analysis), show that towers of algorithms are a necessary tool, and introduce the Solvability Complexity Index. An important consequence is that the solutions of some problems can never be obtained as a limit of finite-dimensional approximations (and hence such problems can never be solved numerically). If time permits, I will mention connections with analogous notions in logic and theoretical computer science. Joint work with Anders Hansen (Cambridge), Olavi Nevanlinna (Aalto), and Markus Seidel (Zwickau).
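
To make the remark about Newton's method concrete, here is a minimal Python sketch (the cubic p(x) = x^3 - 2x + 2 and the starting point x0 = 0 are standard illustrative choices, not examples taken from the talk) in which the iteration falls into a 2-cycle and never reaches the real root:

def newton_step(f, df, x):
    # One Newton iteration: x - f(x)/f'(x).
    return x - f(x) / df(x)

# p(x) = x^3 - 2x + 2 has a real root near x = -1.7693,
# but Newton's method started at x0 = 0 never finds it.
p  = lambda x: x**3 - 2*x + 2
dp = lambda x: 3*x**2 - 2

x = 0.0
for i in range(6):
    x = newton_step(p, dp, x)
    print(i, x)
# Prints 1.0, 0.0, 1.0, 0.0, ...: the iterates are trapped in a
# 2-cycle, so no single limit of this sequence yields the root.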
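
As a rough indication of what "multiple independent limits" means, the following schematic (a sketch in the spirit of the definitions used in this line of work, not a verbatim statement from the talk) shows a tower of height k and the associated index:

% A tower of algorithms of height k for a problem A computes a
% family of approximations Gamma_{n_k,...,n_1}(A) and recovers the
% solution through k successive limits:
\[
  \Gamma(A) \;=\; \lim_{n_k \to \infty} \cdots \lim_{n_1 \to \infty}
  \Gamma_{n_k, \dots, n_1}(A).
\]
% The Solvability Complexity Index of the problem is the smallest
% height k for which such a tower exists; SCI = 1 corresponds to the
% classical case of a single limit of finite-dimensional
% approximations, and the Doyle-McMullen result cited above says that
% iterative root-finding for polynomials of degree four or higher
% requires k > 1.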