Deep Neural Networks (DNNs) are highly effective in demanding applications such as computer vision, natural language processing, and speech recognition. However, these networks are vulnerable to adversarial attacks that infuse perturbations into the input data which are imperceptible to the human eye. In this paper, we propose a novel decision-based targeted adversarial attack algorithm that exposes the vulnerability of the underlying DNN when deployed on a resource-constrained edge device. Experimental results show that the proposed model generates a single perturbed image 4 seconds faster on average than the state-of-the-art RED-Attack, while consuming 15% less time over the entire dataset.
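
To illustrate the decision-based (hard-label) setting the abstract refers to, the following is a minimal, generic sketch of a targeted attack that queries only the model's output label, in the spirit of boundary-attack-style methods. It is not the proposed algorithm nor RED-Attack; the interface `predict_label` and all parameters are hypothetical stand-ins introduced for illustration.

```python
# Generic sketch of a decision-based targeted attack using only hard-label
# queries. NOT the paper's algorithm; `predict_label` is a hypothetical
# stand-in for the victim model's decision interface.
import numpy as np


def targeted_decision_attack(predict_label, x_orig, x_start_target,
                             target_label, steps=50, tol=1e-3):
    """Binary-search along the line between the original image and a
    starting image already classified as `target_label`, keeping the
    candidate on the target side of the decision boundary."""
    lo, hi = 0.0, 1.0  # interpolation weight toward x_start_target
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        candidate = (1.0 - mid) * x_orig + mid * x_start_target
        if predict_label(candidate) == target_label:
            hi = mid      # still adversarial: move closer to the original
        else:
            lo = mid      # lost the target label: back off
        if hi - lo < tol:
            break
    return (1.0 - hi) * x_orig + hi * x_start_target


# Toy usage with a dummy hard-label classifier standing in for a real DNN.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dummy_model = lambda x: int(x.mean() > 0.5)        # labels 0 / 1
    x_orig = rng.uniform(0.0, 0.4, size=(32, 32, 3))   # classified as 0
    x_start = rng.uniform(0.6, 1.0, size=(32, 32, 3))  # classified as 1
    x_adv = targeted_decision_attack(dummy_model, x_orig, x_start, target_label=1)
    print("adversarial label:", dummy_model(x_adv),
          "L2 distance:", np.linalg.norm(x_adv - x_orig))
```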