Effective hyper-parameter tuning is essential to guarantee the performance that neural networks have come to be known for. In this work, a principled approach to choosing the learning rate of shallow feedforward neural networks is proposed. We associate the learning rate with the gradient Lipschitz constant of the objective minimized during training. An upper bound on this constant is derived, and a search algorithm that exploits the bound and always produces non-divergent training traces is proposed. Simulations show that the proposed search significantly outperforms existing tuning methods such as Tree-structured Parzen Estimators (TPE). The proposed method is applied to three existing applications: a) channel estimation in OFDM systems, b) prediction of exchange rates, and c) offset estimation in OFDM receivers, and it is shown to pick better learning rates than existing methods using the same or less compute. © 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
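The link between the learning rate and the gradient Lipschitz constant can be illustrated with a standard textbook fact (this is a generic sketch, not the paper's algorithm or bound): for an L-smooth objective, gradient descent with step size at most 1/L cannot diverge. On a quadratic, L is simply the largest eigenvalue of the Hessian:

```python
import numpy as np

# Generic illustration (not the paper's method): on the quadratic
# f(w) = 0.5 * w^T A w, the gradient grad f(w) = A w is Lipschitz with
# constant L = largest eigenvalue of A, and step size eta = 1/L yields
# a non-divergent, monotonically decreasing loss trace.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + np.eye(5)           # symmetric positive definite Hessian
L = np.linalg.eigvalsh(A)[-1]     # gradient Lipschitz constant
eta = 1.0 / L                     # learning rate derived from L

w = rng.standard_normal(5)
f = lambda w: 0.5 * w @ A @ w
losses = [f(w)]
for _ in range(200):
    w = w - eta * (A @ w)         # gradient step
    losses.append(f(w))

# The trace never diverges: each step contracts every eigencomponent.
assert all(b <= a for a, b in zip(losses, losses[1:]))
print(f"final loss: {losses[-1]:.3e}")
```

For neural network objectives the Hessian eigenvalues are not available in closed form, which is why the paper instead derives an upper bound on the constant and searches over rates below the corresponding threshold.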