A basic requirement for test and survey items is that they can detect variance in a latent variable. To do so, item scales must discriminate between respondents and show a systematic, clear, and sufficiently strong relationship with the underlying construct.
One way to examine the variability of an item and express it as a single value is to compute the relative information content. The relative information content (also called relative entropy) is a dispersion measure for nominally scaled variables, but it can also be calculated at higher scale levels. It is defined as the observed entropy of the response distribution divided by the maximum possible entropy, so it ranges from 0 (all respondents choose the same category) to 1 (responses are uniformly distributed across all categories).
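As a minimal sketch of this computation (the linked article works in R; the function name `relative_entropy` and the sample responses here are illustrative), the measure is the Shannon entropy of the observed response frequencies divided by `log2(k)` for `k` response categories:

```python
import math
from collections import Counter

def relative_entropy(responses, k=None):
    """Relative information content (relative entropy) of an item.

    Computes H / H_max, where H = -sum(p_i * log2(p_i)) over the
    observed response categories and H_max = log2(k). If k is not
    given, the number of observed categories is used; for rating
    scales, pass the number of possible scale points instead.
    """
    counts = Counter(responses)
    n = len(responses)
    if k is None:
        k = len(counts)  # fall back to observed categories
    if n == 0 or k < 2:
        return 0.0  # a constant item carries no information
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / math.log2(k)

# Example: responses to a 5-point Likert item
item = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5]
print(round(relative_entropy(item, k=5), 3))  # ~0.97 -> high dispersion
```

A value near 1 indicates that the item spreads respondents across the full range of categories, while a value near 0 flags an item that barely discriminates at all.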
For more information, read this [article][1].
[1]: https://www.linkedin.com/pulse/item-analysis-how-calculate-relative-entropy-r-jakob-tiebel/?published=t