The human visual system (HVS) is naturally attracted to salient regions that stand out in the foreground of a scene. For a machine, however, automatically detecting salient regions is a challenging problem. Recently, a generative adversarial model, Saliency GAN (SalGAN), was proposed to generate the saliency map of an input image, with a discriminator judging whether a given saliency map is ground truth or generated; the generator is guided by a content loss and an adversarial loss. However, the generated saliency maps tend to be overly smooth and lack fine details. We propose an improved generator, iSalGAN (improvised saliency GAN), that integrates both low-level and high-level features to produce finer saliency maps. iSalGAN is guided by a combination of multiple content losses and an adversarial loss. Our model is trained on the MSRA10K dataset and tested on the ECSSD and DUT-OMRON datasets. Qualitative and quantitative evaluations show that our model outperforms state-of-the-art methods. Code will be made publicly available.
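The abstract does not specify the exact form of the combined objective, so the following is only a minimal sketch of what "multiple content losses plus an adversarial loss" could look like; the particular content terms (pixel-wise binary cross-entropy and L1), the weights, and the function names are all illustrative assumptions, not the paper's actual formulation.

```python
import math

def bce(pred, target, eps=1e-7):
    # Pixel-wise binary cross-entropy between predicted and ground-truth
    # saliency values (both assumed to lie in [0, 1]).
    return -sum(t * math.log(max(p, eps)) + (1 - t) * math.log(max(1 - p, eps))
                for p, t in zip(pred, target)) / len(pred)

def generator_loss(pred, target, d_out, w_bce=1.0, w_l1=0.5, alpha=0.005):
    # Hypothetical iSalGAN-style objective: a weighted sum of two content
    # terms (BCE and L1 are stand-ins for the paper's unspecified losses)
    # plus the standard adversarial term -log D(generated map).
    l1 = sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)
    content = w_bce * bce(pred, target) + w_l1 * l1
    adversarial = -math.log(max(d_out, 1e-7))
    return content + alpha * adversarial
```

A generator trained this way is pulled toward the ground-truth map by the content terms while the small adversarial weight encourages maps the discriminator cannot distinguish from real ones.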