The tactic of 'relaxation' has often been used in one guise or another to cope with mathematical programs that have a large number of constraints, some or all of which may be available only implicitly. By 'relaxation' one means the solution of a given problem via a sequence of smaller problems, relaxed in that some of the inequality constraints are temporarily ignored. Relaxation has been used primarily in the context of linear programming, but in this paper a version valid for a general class of concave programs is examined. Constraints are both dropped and added in passing from one relaxed problem to the next. A specialization to the completely linear case is shown to be equivalent to Lemke's Dual Method. This result permits some pertinent inferences to be drawn from the extensive computational experience available for the (primal) Simplex Method. Other matters pertaining to computational efficacy are discussed. An interpretation of relaxation in terms of the dual in the nonlinear case is also established: the optimal multipliers generated by successive relaxed problems comprise a sequence of improving feasible solutions to the minimax dual. Interpreted in this way, relaxation is seen to correspond to just the opposite tactic, called 'restriction', applied to the dual problem. Restriction is an equally interesting and useful tactic in its own right, and its main features are outlined.
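The relaxation loop described above (solve a smaller problem with some constraints ignored, add violated constraints, drop slack ones, re-solve) can be sketched on a toy concave program. This is an illustration only, not the paper's method: the objective, bounds, and helper names below are invented, and the closed-form relaxed solve stands in for a real subproblem solver.

```python
def solve_relaxed(active_bounds):
    """Maximize the toy concave objective -(x - 3)**2 subject only to
    the currently active upper bounds x <= a.

    The unconstrained maximizer is x = 3; since the objective is
    concave, the constrained maximizer is min(3, tightest active bound).
    """
    if not active_bounds:
        return 3.0
    return min(3.0, min(active_bounds))

def relax(all_bounds):
    """Relaxation loop: only a working subset of constraints is kept."""
    active = set()
    while True:
        x = solve_relaxed(active)
        # Drop constraints that are slack (not binding) at the current
        # solution -- constraints are dropped as well as added.
        active = {a for a in active if x >= a}
        # Check the full constraint set for violations at x.
        violated = [a for a in all_bounds if x > a]
        if not violated:
            return x  # feasible for the full problem, hence optimal
        # Add the most violated constraint and re-solve.
        active.add(min(violated))

print(relax([1.0, 2.5, 5.0, 10.0]))  # prints 1.0
```

The relaxed problems stay small because only binding or newly violated constraints are carried along; termination follows here because each iteration either finishes or adds a constraint from the finite pool.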