Meta-learning uses meta-knowledge extracted from data to enable models to perform well on previously unseen data. Typically, this meta-knowledge is acquired from randomly sampled batches of tasks, and a critical assumption in meta-learning is that all tasks in a batch contribute equally to the meta-knowledge. However, this assumption does not always hold. In this study, we explore the impact of weighting the tasks in a batch according to their contribution to the meta-knowledge. We achieve this by introducing a learnable "task attention module" that can be integrated into any episodic training pipeline. We demonstrate that our approach improves the quality of the learned meta-knowledge on standard meta-learning benchmarks such as miniImageNet, FC100, and tieredImageNet, as well as on noisy and cross-domain few-shot benchmarks. Additionally, we conduct a comprehensive analysis of the proposed task attention module to gain insight into how it operates.
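The abstract does not specify the module's architecture, so the following is only a minimal sketch of the general idea: a small learnable network scores each task in a meta-batch and the resulting softmax weights replace the usual uniform average of per-task losses. All names (`TaskAttention`, `task_embeddings`, `per_task_losses`) and the two-layer MLP scorer are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class TaskAttention(nn.Module):
    """Hypothetical sketch: scores each task in a meta-batch and
    returns normalized weights for its loss contribution."""

    def __init__(self, embed_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Assumed scorer architecture: a small two-layer MLP.
        self.scorer = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, task_embeddings: torch.Tensor) -> torch.Tensor:
        # task_embeddings: (num_tasks, embed_dim), e.g. mean support-set
        # features of each task in the batch (an illustrative choice).
        scores = self.scorer(task_embeddings).squeeze(-1)  # (num_tasks,)
        return torch.softmax(scores, dim=0)                # weights sum to 1


# Illustrative use inside an episodic training step:
#   per_task_losses: (num_tasks,) tensor of query-set losses, one per task
#   task_reprs:      (num_tasks, embed_dim) task embeddings
#
#   attention = TaskAttention(embed_dim=640)
#   weights = attention(task_reprs)
#   meta_loss = (weights * per_task_losses).sum()  # replaces the uniform mean
#   meta_loss.backward()  # gradients flow to both the model and the module
```

Because the weights are produced by a differentiable network trained end to end with the meta-objective, the module can, in principle, learn to down-weight uninformative or noisy tasks, which is consistent with the reported gains on noisy few-shot benchmarks.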