To understand how accurate yet opaque black-box models work, it is necessary to explain their internal workings in terms of human-interpretable image sub-regions known as concepts. Such explanations provide insight into how these models capture the sharing of concepts across related classes, a phenomenon frequently observed in the real world. With this objective in mind, the proposed work leverages an incremental Non-negative Matrix Factorization technique to extract shared concepts in a memory-efficient manner, thereby reflecting how concepts are shared across classes. The relevance of the extracted concepts to prediction, as well as the extent to which each concept encodes primitive image attributes such as color, texture, and shape, is estimated after training the concept extractor. This approach reduces training overhead and simplifies the explanation pipeline, enabling the elucidation of the various concepts - some genuine, some spurious - by which different black-box architectures trained on the ImageNet dataset group and distinguish related classes.
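A minimal sketch of the incremental, memory-efficient factorization idea described above, not the authors' implementation: it assumes a torchvision ResNet-50 as the black box and uses scikit-learn's MiniBatchNMF, whose partial_fit updates the concept basis one batch at a time; the choice of ten concepts and the layer used are illustrative assumptions.

```python
# Sketch only: incremental NMF over non-negative CNN activations to obtain
# concept directions, in the spirit of the abstract. Assumptions: ResNet-50
# backbone, last convolutional block as the explained layer, 10 concepts.
import torch
import torchvision.models as models
from sklearn.decomposition import MiniBatchNMF

n_concepts = 10  # hypothetical number of shared concepts
nmf = MiniBatchNMF(n_components=n_concepts, init="random", random_state=0)

backbone = models.resnet50(weights="IMAGENET1K_V2")
backbone.eval()
# Keep everything up to the final conv block so activations stay spatial.
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])

def concept_partial_fit(image_batch):
    """Update the concept basis with one batch of ImageNet-normalized images.

    image_batch: float tensor of shape (B, 3, 224, 224).
    Returns the current concept directions of shape (n_concepts, C).
    """
    with torch.no_grad():
        acts = feature_extractor(image_batch)  # (B, C, H, W), ReLU => non-negative
    B, C, H, W = acts.shape
    # Treat every spatial position as one C-dimensional activation sample.
    acts = acts.permute(0, 2, 3, 1).reshape(B * H * W, C).cpu().numpy()
    nmf.partial_fit(acts)  # incremental update; no need to hold all data in memory
    return nmf.components_
```

Because partial_fit only sees one batch of spatial activations at a time, the factorization never materializes the full activation matrix, which is what makes the extraction memory-efficient; concept relevance and attribute (color, texture, shape) probing would then be estimated post hoc on the learned basis.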