A basic requirement for test and survey items is that they are able to detect variance with respect to a latent variable. To do this, item scales must discriminate between test subjects and must have a systematic, clear, and sufficiently strong relationship with the underlying construct. One way to examine the variability of an item and express it in a single value is to compute the relative information content. The relative information content (also called relative entropy) is a dispersion measure for variables scaled at least at the nominal level, but it can also be calculated at higher scale levels. For more information, read this [article][1].

[1]: https://www.linkedin.com/pulse/item-analysis-how-calculate-relative-entropy-r-jakob-tiebel/?published=t
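To make the computation concrete, here is a minimal sketch in Python (the linked article walks through the calculation in R). It assumes the standard definition of relative information content as entropy normalized by its maximum, H_rel = −Σ p_i · ln(p_i) / ln(k), where p_i are the relative frequencies of the k response categories; the function name and its defaults are illustrative, not taken from the article.

```python
import math
from collections import Counter

def relative_information_content(responses, n_categories=None):
    """Relative information content (relative entropy) of an item.

    H_rel = H / H_max, with H = -sum(p_i * ln(p_i)) over the observed
    response categories and H_max = ln(k). Values range from 0 (no
    dispersion: every respondent gives the same answer) to 1 (responses
    spread evenly over all k categories).

    By default, k is the number of observed categories; pass n_categories
    to normalize against the full response scale instead (an assumption
    of this sketch, not prescribed by the source).
    """
    counts = Counter(responses)
    n = len(responses)
    k = n_categories if n_categories is not None else len(counts)
    if k < 2:
        return 0.0  # a single category carries no information
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(k)

# Example: a 5-point Likert item answered by ten respondents
item = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5]
print(round(relative_information_content(item), 3))  # ≈ 0.967
```

A value close to 1, as in this example, indicates that the item spreads respondents across its categories and can therefore detect variance; a value near 0 flags an item on which nearly everyone answers alike.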