We present an algorithm for the solution of risk-averse optimization problems. The setting is sufficiently general to encompass both finite-dimensional and PDE-constrained stochastic optimization problems. Owing to the nonsmoothness of many popular risk measures and the nonconvexity of the objective functions, both the numerical approximation and the numerical solution of such problems pose major computational challenges. The proposed algorithm addresses these issues in part by making use of the favorable dual properties of coherent risk measures. The algorithm itself is motivated by the classical method of multipliers and exploits recent results on the epigraphical regularization of risk measures. Consequently, it requires only the solution of a sequence of smooth subproblems, which can be handled by derivative-based methods. We prove convergence of the algorithm in the fully continuous setting and conclude with several numerical examples, in which the algorithm outperforms a popular bundle-trust method and a direct smoothing-plus-continuation approach.