Managing the actions of several agents performing tasks that require coordination and cooperation poses significant research challenges. One such challenge is synchronizing the agents' view of the system so that they take the 'right' actions. In this paper, we propose an algorithm called MRCPG (Mobile Robot Coordination Point Gathering Algorithm) for coordinating the actions of a team of mobile robots. The aim is to gather the robots at a particular location in a 2-dimensional plane that is determined during execution. The robots are deployed at random positions in the plane and achieve the goal by communicating periodically. In addition, we impose a Reinforcement Learning framework in which the robots learn a Shared Value Function (SVF) from the scalar rewards they receive. The SVF is used to select the best possible action in each state until at least one robot reaches the goal; a Reach-distance heuristic then directs the remaining robots to the goal. The algorithm was analyzed through simulations with up to 5 robots. The analysis indicates that communicating robots completed the task significantly faster than robots acting independently, measured by the path length of the first robot to reach the goal. We also observed that increasing the team size amplifies the effect of communication and hastens task completion. © 2008 IEEE.
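The abstract's two-phase scheme — greedy action selection over a learned Shared Value Function until one robot reaches the goal, then a distance-based heuristic for the rest — can be sketched as follows. This is only an illustrative sketch under assumed details the abstract does not give: a grid world for the 2-D plane, a TD(0) update for the SVF, an assumed reward scheme (+10 at the goal, -0.1 per step), and squared Euclidean distance as the Reach-distance heuristic. None of these specifics should be read as the paper's actual formulation.

```python
import random

# Grid moves standing in for the robots' action set (an assumption;
# the paper does not specify the action space).
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]


def greedy_action(svf, pos):
    """Pick the move whose successor cell has the highest shared value,
    breaking ties randomly. Unvisited cells default to value 0."""
    best = max(svf.get((pos[0] + dx, pos[1] + dy), 0.0) for dx, dy in ACTIONS)
    ties = [(dx, dy) for dx, dy in ACTIONS
            if svf.get((pos[0] + dx, pos[1] + dy), 0.0) == best]
    return random.choice(ties)


def gather(positions, goal, steps=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Move all robots toward `goal`, sharing one value function.

    Phase 1: epsilon-greedy action selection over the SVF, updated by a
    TD(0) rule from scalar rewards (both assumed details).
    Phase 2: once any robot reaches the goal, the remaining robots follow
    a distance-reducing heuristic standing in for Reach-distance.
    """
    svf = {}                       # shared value function over grid cells
    reached = False                # True once at least one robot arrives
    paths = [0] * len(positions)   # per-robot path lengths (the metric)
    for _ in range(steps):
        for i, pos in enumerate(positions):
            if pos == goal:
                reached = True
                continue
            if reached:
                # Heuristic phase: step that most reduces squared
                # distance to the now-known goal location.
                dx, dy = min(ACTIONS,
                             key=lambda a: (pos[0] + a[0] - goal[0]) ** 2
                                         + (pos[1] + a[1] - goal[1]) ** 2)
            elif random.random() < eps:
                dx, dy = random.choice(ACTIONS)   # exploration
            else:
                dx, dy = greedy_action(svf, pos)  # exploit the SVF
            nxt = (pos[0] + dx, pos[1] + dy)
            reward = 10.0 if nxt == goal else -0.1  # assumed rewards
            # TD(0) update of the shared value function.
            svf[pos] = svf.get(pos, 0.0) + alpha * (
                reward + gamma * svf.get(nxt, 0.0) - svf.get(pos, 0.0))
            positions[i] = nxt
            paths[i] += 1
        if all(p == goal for p in positions):
            break
    return positions, paths
```

Because the value function is shared, a reward earned by any robot updates the estimates every robot consults, which is one way the periodic communication in the abstract could speed up the team relative to independent learners.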