We discuss interpretability and explainability of machine learning models. We introduce a universal interpretability index, J, to quantify and monitor the interpretability of a general-purpose model, which can be static or evolve incrementally from a data stream. The models can be transparent classifiers, predictors, or controllers operating on partitions or granules of the data space, e.g., rule-based models, trees, probabilistic clustering models, modular or granular neural networks. Additionally, black-box models can be monitored after the derivation of a global or local surrogate model produced by a model-agnostic explainer. The index does not depend on the type of algorithm that creates or updates the model, i.e., supervised, unsupervised, semi-supervised, or reinforcement learning. While loss or error-related indices, validity measures, processing time, and closed-loop criteria have been used to evaluate model effectiveness across different fields, a general interpretability index in consonance with explainable AI does not exist. The index J is computed straightforwardly. It reflects the principle of justifiable granularity by taking into account balanced volumes, the number of partitions and dependent parameters, and the number of features per partition. The index advocates that a concise model founded on balanced partitions offers a higher level of interpretability. It facilitates comparisons between models, can be used as a term in loss functions or embedded into learning procedures to encourage balanced volumes, and ultimately supports human-centered decision making. Computational experiments show the versatility of the index in a biomedical prediction problem based on speech data and in image classification.
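The abstract does not give the closed-form definition of J, so the sketch below is only an illustrative toy combining the ingredients it names: a balance term over partition volumes and a conciseness penalty on the number of partitions, features, and dependent parameters. The function name, the entropy-based balance measure, and the way the terms are combined are assumptions for illustration, not the paper's formula.

```python
import numpy as np

def toy_interpretability_index(volumes, n_features_per_partition, n_params_per_partition):
    """Illustrative (hypothetical) interpretability score.

    Rewards few partitions, few features and parameters per partition,
    and balanced partition volumes. This is NOT the paper's J index,
    only a sketch of the ingredients listed in the abstract.
    """
    volumes = np.asarray(volumes, dtype=float)
    k = len(volumes)  # number of partitions (granules)

    # Volume balance: normalized entropy of the volume distribution,
    # 1.0 when all partitions have equal volume, near 0 when one dominates.
    p = volumes / volumes.sum()
    balance = -(p * np.log(p + 1e-12)).sum() / np.log(k) if k > 1 else 1.0

    # Conciseness: penalize many partitions, features, and parameters.
    complexity = k * (np.mean(n_features_per_partition) + np.mean(n_params_per_partition))

    return balance / (1.0 + complexity)

# A concise 3-rule model with balanced volumes scores higher than a
# 6-rule model with one dominant partition and more parameters per rule.
print(toy_interpretability_index([0.30, 0.35, 0.35], [2, 2, 2], [3, 3, 3]))
print(toy_interpretability_index([0.70, 0.10, 0.05, 0.05, 0.05, 0.05], [4] * 6, [6] * 6))
```

The usage example mirrors the abstract's claim that a concise model founded on balanced partitions is assigned a higher level of interpretability; the specific numbers are arbitrary.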