With the massive increase in the amount of data generated each day, we need automated tools to track the evolution of the web and to quantify global measures such as the PageRank of web pages. Search engines periodically crawl the web to build web graphs, which store information about the structure of the web. This is an expensive and error-prone process. Central to this problem is the notion of graph similarity (between two graphs separated in time), which validates how well search engines capture content from the web and the quality of the search results they produce. In this paper, we identify two types of anomalies that occur during crawling and propose two novel similarity measures based on vertex neighbourhoods, which overcome these anomalies. Extensive experimentation on real-world datasets shows significant improvement over state-of-the-art signature-similarity-based methods. © 2016 IEEE.
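To illustrate the general idea (the abstract does not specify the paper's actual measures, so the function below is a hypothetical sketch, not the proposed method): a simple vertex-neighbourhood similarity between two crawl snapshots can be computed as the mean Jaccard similarity of each vertex's neighbour set.

```python
# Illustrative sketch only -- a generic vertex-neighbourhood similarity
# between two graph snapshots, not the measures proposed in the paper.

def neighbourhood_similarity(g1, g2):
    """g1, g2: dicts mapping a vertex to its set of out-neighbours.

    Returns the mean Jaccard similarity of per-vertex neighbourhoods,
    taken over the union of the two vertex sets.
    """
    vertices = set(g1) | set(g2)
    if not vertices:
        return 1.0  # two empty graphs are identical
    total = 0.0
    for v in vertices:
        n1 = g1.get(v, set())
        n2 = g2.get(v, set())
        union = n1 | n2
        # Identical (possibly empty) neighbourhoods contribute 1.
        total += len(n1 & n2) / len(union) if union else 1.0
    return total / len(vertices)

# Two snapshots of a tiny web graph; the second crawl misses edge b -> c,
# mimicking a crawling anomaly.
crawl1 = {"a": {"b", "c"}, "b": {"c"}, "c": set()}
crawl2 = {"a": {"b", "c"}, "b": set(), "c": set()}
print(neighbourhood_similarity(crawl1, crawl2))  # -> 0.666... (2/3)
```

A measure of this kind is sensitive to exactly the kind of local structural change (missed or spurious edges) that crawling errors introduce, which motivates neighbourhood-based comparisons over whole-graph signatures.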