Gradient-based methods are the most widely used algorithms for convex optimization problems due to their simplicity and reliability. With an appropriate step size, gradient methods perform well in the absence of external factors such as noise and precision error. In the presence of uncertainty, however, their shortcomings become apparent: convergence slows or, in severe cases, fails altogether, depending on the noise level and the step size. This paper addresses the problem of achieving optimal performance from gradient methods in the presence of uncertainty, using the gradient descent algorithm as a case study. The robust convergence of gradient-based convex optimization methods is recast as an equivalent stabilization problem in a dynamical-systems framework. The paper proposes a novel characterization of optimization algorithms by adopting the biased integral quadratic constraint (BIQC) framework, which is generally used for robust analysis of dynamical systems. Using this approach, an algorithmic procedure is devised to compute an upper bound on the step size that guarantees robust convergence toward an equilibrium point even in the presence of uncertainty. © 2020 EUCA.
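As a minimal sketch of the setting the abstract describes (not the paper's BIQC procedure), the example below runs gradient descent on a simple quadratic objective with bounded additive gradient noise. The objective, noise model, and step sizes are illustrative assumptions; they show how a step size below the classical stability bound keeps the iterates in a neighborhood of the optimum, while too large a step size causes divergence.

```python
import random

def noisy_gradient_descent(grad, x0, step, noise_level, iters, seed=0):
    """Gradient descent where each gradient evaluation is corrupted by
    bounded additive noise, uniform in [-noise_level, noise_level].
    (Illustrative uncertainty model, not the one used in the paper.)"""
    rng = random.Random(seed)
    x = x0
    for _ in range(iters):
        g = grad(x) + rng.uniform(-noise_level, noise_level)
        x = x - step * g
    return x

# Minimize f(x) = x^2, whose gradient is 2x and minimizer is 0.
grad = lambda x: 2.0 * x

# Classical noiseless bound for this objective: step < 2/L with L = 2,
# i.e. step < 1. A small step converges to a noise-sized neighborhood
# of the optimum; a step beyond the bound makes the iterates blow up.
x_small = noisy_gradient_descent(grad, x0=5.0, step=0.1, noise_level=0.01, iters=200)
x_large = noisy_gradient_descent(grad, x0=5.0, step=1.5, noise_level=0.01, iters=200)
print(abs(x_small) < 0.1)   # settles near the optimum
print(abs(x_large) > 1e3)   # diverges
```

Under noise, the effective admissible step size shrinks relative to the noiseless case, which is exactly the kind of robust step-size upper bound the paper's procedure computes.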