I see science, or more precisely the method by which science proceeds (the scientific method), as an algorithm which asks progressively more complex questions, and subsequently receives answers of proportional complexity. Roughly, the science (theory) algorithm goes something like this:
Here we take the chain "(large)-(large with branches)-(large with leaves on some branches)-..." as an analogue for a chain of concepts of increasing complexity.
Science: Is the object General Sherman (the largest tree in the world) larger (taller) than 10 metres?
Nature: Yes.
Science: Does General Sherman have branches (defined in one way or another)?
Nature: Yes.
Science: Do some of General Sherman's branches have feathers (defined in one way or another)?
Nature: No.
Science: Damn! Back to the drawing board.
Science: Do some of General Sherman's branches have leaves (defined in one way or another)?
Nature: Yes.
Possibly ad infinitum.
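The exchange above can be sketched as a toy program. Everything here is hypothetical illustration (the property names, the crude model of General Sherman as a dict, the hypothesis list): science proposes hypotheses of increasing complexity, nature answers yes or no, and a refuted hypothesis sends science back to the drawing board while the accepted ones accumulate into the current theory.

```python
def nature_answers(hypothesis, the_object):
    """Nature simply reports whether the hypothesis holds of the object."""
    return hypothesis(the_object)

def do_science(hypotheses, the_object):
    """Test hypotheses of increasing complexity; keep the confirmed ones."""
    theory = []
    for description, hypothesis in hypotheses:
        if nature_answers(hypothesis, the_object):
            theory.append(description)  # accepted: the theory grows
        # else: refuted -- back to the drawing board, try the next refinement
    return theory

# General Sherman, crudely modelled as a dict of observed properties
# (height of roughly 83.8 m; the other fields are toy booleans).
general_sherman = {"height_m": 83.8, "branches": True,
                   "feathers": False, "leaves": True}

hypotheses = [
    ("taller than 10 m",        lambda o: o["height_m"] > 10),
    ("has branches",            lambda o: o["branches"]),
    ("branches have feathers",  lambda o: o["feathers"]),  # will be refuted
    ("branches have leaves",    lambda o: o["leaves"]),
]

print(do_science(hypotheses, general_sherman))
# ['taller than 10 m', 'has branches', 'branches have leaves']
```

Note that each accepted hypothesis is strictly more complex than the last, so the accumulated theory grows in step with the complexity of the questions asked.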
A corollary of this model is that our theories, evolving in discrete steps, each superseding the previous one, may grow to arbitrarily large complexity as they accommodate phenomena of gradually increasing complexity.
It is a theorem of Algorithmic Information Theory that the complexity of certain strings/objects/concepts puts a lower bound on the complexity of the theory/program that generates them. Thus one cannot hope for a complete theory of everything (TOE) without harbouring some assumptions about the complexity of the universe and the phenomena within it. It would certainly be presumptuous to harbour such assumptions, since as far as I know the question of whether the universe (or multiverse) is finite or infinite, and hence what its complexity is, remains open.
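The lower bound can be made tangible with a crude experiment. Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives an upper bound on it: the compressed size is the length of one particular "theory" (the instruction "decompress this") that regenerates the string. A sketch, using Python's `zlib` as the stand-in compressor:

```python
import random
import zlib

random.seed(0)

# A highly patterned string: a very short theory ("repeat 'ab'") generates it.
regular = b"ab" * 5000

# A typical random string of the same length: almost certainly incompressible,
# so no theory much shorter than the string itself can generate it.
irregular = bytes(random.randrange(256) for _ in range(10000))

print(len(regular), len(zlib.compress(regular)))      # 10000 -> a few dozen bytes
print(len(irregular), len(zlib.compress(irregular)))  # 10000 -> still ~10000 bytes
```

The random string forces any generating program, and hence any complete theory of it, to be about as long as the string itself; if the universe contains phenomena of that character, the theory that accounts for them cannot be short.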
For it may be the case that our theories can only approach a TOE as a limit, but necessarily never reach it.
The above ruminations have been primarily inspired by the work of Gregory Chaitin.