The parallel numerical solution of time-dependent partial differential equations (PDEs) has long been a focus of the high-performance computing community. To effectively utilize the large number of available processors in modern computing clusters, the classical approach is to apply a divide-and-conquer strategy, such as a domain decomposition method, to solve the spatial problem at each time step across many processors. For highly refined models, however, many time steps must be taken to maintain accuracy and/or stability, so the time-stepping process often becomes the bottleneck. Consequently, parallelization in the time direction has become an active research topic in the last two decades; there is even an annual workshop dedicated to this topic, with its 9th edition held in December 2020.
In this talk, I will present two recent algorithms for introducing parallelism in time. In both cases, one decomposes the time horizon into sub-intervals and solves the sub-interval problems in parallel. Communication between sub-intervals is handled by a coarse, inexpensive time integrator. Our first algorithm, which is designed for solving initial value problems, can be shown to be equivalent to the well-known and very effective Multigrid Reduction in Time (MGRIT) method; by studying its convergence properties, we obtain a new, more general convergence analysis for MGRIT as well. The second algorithm, called ParaOpt, is designed for solving optimal control problems. We derive a criterion for convergence when the underlying model is a linear diffusive problem, and we show that the method is scalable, i.e., its convergence is independent of the number of sub-intervals in the decomposition. Finally, we present numerical examples to illustrate the effectiveness of our approach.
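To make the decompose-and-correct idea concrete, the sketch below shows a generic two-level, Parareal-type iteration of the kind alluded to above: a cheap coarse integrator propagates information sequentially between sub-intervals, while the expensive fine solves on each sub-interval are independent and could run in parallel. This is a minimal illustration under assumed simplifications (a scalar linear ODE stands in for the discretized diffusive PDE, and forward Euler is used for both propagators); it is not the specific MGRIT-equivalent or ParaOpt algorithm presented in the talk.

```python
import numpy as np

# Illustrative model problem: u'(t) = lam * u(t), u(0) = u0,
# standing in for a linear diffusive PDE after spatial discretization.
lam = -1.0
T, N = 2.0, 10          # time horizon and number of sub-intervals
u0 = 1.0
dt = T / N

def fine(u, dt, steps=100):
    """Accurate but expensive propagator over one sub-interval (many small steps)."""
    h = dt / steps
    for _ in range(steps):
        u = u + h * lam * u
    return u

def coarse(u, dt):
    """Cheap propagator used to communicate information between sub-intervals."""
    return u + dt * lam * u     # a single forward Euler step

# Initial guess from the coarse integrator alone (sequential, but cheap).
U = np.empty(N + 1)
U[0] = u0
for n in range(N):
    U[n + 1] = coarse(U[n], dt)

# Parareal-type correction iterations: the fine solves over the sub-intervals
# are mutually independent, so this loop is the part that parallelizes in time.
for k in range(5):
    F = np.array([fine(U[n], dt) for n in range(N)])    # parallelizable across n
    U_new = np.empty_like(U)
    U_new[0] = u0
    for n in range(N):                                   # cheap sequential update
        U_new[n + 1] = coarse(U_new[n], dt) + F[n] - coarse(U[n], dt)
    U = U_new

print("time-parallel solution at T:", U[-1])
print("exact solution at T:        ", u0 * np.exp(lam * T))
```

For this linear diffusive test case the iteration converges in a handful of sweeps, mirroring the kind of sub-interval-independent convergence behaviour discussed for ParaOpt, though the actual criteria derived in the talk apply to the optimal control setting rather than to this toy example.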