Word2vec is an iteration-based method for creating word embeddings for a given corpus. A word embedding represents a word as a real-valued vector that captures the context of its neighboring words. Word2vec offers two models for learning embeddings: the continuous bag of words (CBOW) model and the skip-gram model. In this article we discuss the mathematics behind the CBOW model. The Python code for each section is also given.
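As a minimal sketch of the idea (the toy vocabulary and names here are hypothetical, not from the article), a word embedding is simply a row of a learned matrix, indexed by the word:

```python
import numpy as np

# Hypothetical toy vocabulary; in practice it is built from the corpus.
vocab = ["the", "cat", "sat", "on", "mat"]
word_to_index = {w: i for i, w in enumerate(vocab)}

# Embedding matrix: one row per vocabulary word, each row a real-valued
# vector. The entries here are random; word2vec training adjusts them so
# that words appearing in similar contexts end up with similar vectors.
embedding_dim = 3
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((len(vocab), embedding_dim))

def embed(word):
    """Look up the embedding vector for a word."""
    return embeddings[word_to_index[word]]

print(embed("cat"))  # a length-3 real-valued vector
```

The CBOW model discussed below is one way to learn the entries of such a matrix from raw text.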