In , a novel approach was proposed for automatically segmenting and transcribing a continuous speech signal without the use of manually annotated speech corpora. In this approach, the continuous speech signal is first automatically segmented into syllable-like units, and similar syllable segments are grouped together using an unsupervised, incremental clustering technique. A separate model is generated for each cluster of syllable segments, and labels are assigned to the clusters. These syllable models are then used for recognition/transcription. Although the results in  are quite promising, the clustering technique suffers from several problems: (i) the presence of silence segments at the beginning and end of syllable boundaries, (ii) fragmentation of syllables, (iii) merging of syllables, and (iv) poor initialization of syllable models. In this paper we specifically address these issues and make several refinements to the baseline system, resulting in a significant performance improvement of 8% over the baseline system described in .
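To make the idea of unsupervised, incremental clustering concrete, the following is a minimal sketch of one common variant: each incoming segment (represented here as a feature vector) joins the nearest existing cluster if it is within a distance threshold, and otherwise seeds a new cluster. The Euclidean distance, the running-mean centroid update, and the threshold value are illustrative assumptions, not the exact procedure used in the cited work, which operates on syllable segments with acoustic models.

```python
import numpy as np

def incremental_cluster(segments, threshold=1.0):
    """Incrementally group feature vectors into clusters.

    Each segment is assigned to the nearest cluster centroid if that
    distance is within `threshold`; otherwise it starts a new cluster.
    (Illustrative sketch: metric, update rule, and threshold are assumptions.)
    """
    centroids = []   # one mean feature vector per cluster
    clusters = []    # list of member segments per cluster
    for seg in segments:
        seg = np.asarray(seg, dtype=float)
        if centroids:
            dists = [np.linalg.norm(seg - c) for c in centroids]
            best = int(np.argmin(dists))
            if dists[best] <= threshold:
                clusters[best].append(seg)
                # update the centroid as the running mean of its members
                centroids[best] = np.mean(clusters[best], axis=0)
                continue
        # no sufficiently close cluster: seed a new one
        centroids.append(seg.copy())
        clusters.append([seg])
    return centroids, clusters
```

Because clusters are created on the fly, the number of syllable classes need not be fixed in advance, which is what makes this style of clustering suitable when no annotated corpus is available.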