Since Turing proposed the first test of intelligence, several modifications have been suggested with the aim of making his proposal more realistic and applicable in the search for artificial intelligence. In the modern context, it turns out that some of these definitions of intelligence, and the corresponding tests, merely measure computational power. Furthermore, within the framework of the original Turing test, a system must engage in a certain amount of deceit to prove itself intelligent, which can have serious security implications for future human societies. In this article, we propose a unified framework for developing intelligence tests that addresses these important ethical and practical issues. Our proposed framework has several important consequences. First, it suggests that it is not possible to construct a single, context-independent intelligence test. Second, any measure of intelligence must have access to the process by which the system under consideration solves a problem, not merely to the final solution. Finally, it requires an intelligent agent to be evolutionary in nature, with the flexibility to explore new algorithms on its own. © 2017, Springer-Verlag London Ltd.