**Title:** Relating mental health apps' MARS ratings to app-level metrics

**Collaborators:** John Bunyi, Benjamin Kaveladze, Veronica Ramirez, Akash Wasil, and Stephen Schueller

**Background & Rationale:** Mobile apps differ widely in popularity and user engagement for various reasons, including marketing, user experience design, and the roles they play in users' lives. Some mental health apps are extremely popular, while many more have almost no users despite adhering to evidence-based therapeutic and design principles. An important question for researchers and developers is whether the quality of mental health apps predicts their success in a real-world marketplace. Baumel & Kane (2018) found that some expert-rated qualities of digital health app design predict real-world user engagement. Similarly, Wang, Markert & Sasangohar (2021) observed a strong positive correlation between the 16 most popular mental health apps' MARS mean scores and their app store ratings; however, app popularity (as measured by downloads) was not correlated with the MARS mean. Here we build on this previous work by exploring the link between the Mobile App Rating Scale (MARS) – a tool for evaluating mobile health app quality along dimensions of engagement, functionality, aesthetics, and information quality – and app-level metrics of popularity and engagement among mental health apps.

**Design & Materials:** We have MARS rating data for 91 apps, collected from February 2020 through February 2021. The variables are as follows: MARS Mean; Subjective Impact; Perceived Impact; the subscales (Engagement, Functionality, Aesthetics, and Information) that, averaged together, comprise the MARS Mean; and rater site (UQueensland vs. One Mind PsyberGuide). Further, Apptopia (an online service that aggregates data from mobile apps) provided the research team with data on 73 of the 91 apps for which we have MARS data. Among these 73 apps, 68 have data for at least one day of the month in which the MARS review was completed, and 56 have Apptopia data for every day of that month. These data cover daily and monthly active users, revenue, user retention, downloads, average revenue per user, average session length (seconds), and average app store rating (on a 0–5 scale) across the Apple App Store and/or Google Play store. Apptopia also provided each app's average app store rating (across the Apple App Store and Google Play store) and user retention (1, 2, 3, 4, 5, 6, 7, 14, and 30 days after downloading), each of which has only one data point, corresponding to February 8, 2021. For 90 of the 91 apps, we have data from the Apple App Store or Google Play store on app price, availability of in-app purchases, minimum iOS or Android version, download size, and age requirement; these data were collected on 3/2/2021. For 47 of the 91 apps, we have data from One Mind PsyberGuide on each app's types of treatment (e.g., mindfulness and CBT) and targeted conditions (e.g., mood disorders and PTSD). Finally, for 44 of the 91 apps, we have data from One Mind PsyberGuide on credibility, a metric that "combines information about research, development, purpose, and popularity. This measure aims to give users an idea of how credible a digital tool is, i.e. how likely it is that it will work."

**Data availability:** In the OSF files we provide a dataset with all the data except the Apptopia data (due to Apptopia data-sharing rules). We also provide a dataset of the Apptopia variables with the app names removed and the apps presented in random order. For the Apptopia variables with daily values, we present only the average value for the month in which the MARS rating was collected, as sketched below.
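To make the month-averaging step concrete, here is a minimal sketch in Python/pandas, not the project's actual pipeline; the file names and columns (`app_id`, `date`, `downloads`, `revenue`, `dau`, `mau`, `sessions`, `total_time`) are hypothetical stand-ins for the Apptopia export.

```python
import pandas as pd

# Hypothetical input files and column names; the real Apptopia export may differ.
daily = pd.read_csv("apptopia_daily.csv", parse_dates=["date"])
reviews = pd.read_csv("mars_reviews.csv", parse_dates=["review_date"])

# Keep only each app's daily rows from the calendar month in which its
# MARS review was completed.
daily["month"] = daily["date"].dt.to_period("M")
reviews["month"] = reviews["review_date"].dt.to_period("M")
merged = daily.merge(reviews[["app_id", "month"]], on=["app_id", "month"])

# Retain only apps with data for every day of the review month
# (exclusion criterion 1, described below).
full = merged.groupby("app_id").filter(
    lambda g: g["date"].nunique() == g["date"].dt.days_in_month.iloc[0]
)

# Average each daily metric over that month, yielding one row per app.
metrics = ["downloads", "revenue", "dau", "mau", "sessions", "total_time"]
monthly_means = full.groupby("app_id")[metrics].mean().reset_index()
```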
**Exclusion Criteria:**

1. From analyses that include Apptopia data, we will exclude apps that lack Apptopia data for any day of the month in which the app's MARS review was conducted, as missing days may bias monthly averages upward.
2. For several small apps, Apptopia reports zero sessions or zero total time spent in the app even though daily active users (DAU) are above zero. We will therefore exclude apps with DAU > 0 but sessions or total time equal to 0 from analyses that include sessions or total time.

**Hypotheses and Planned Analyses:**

1. Larger-scale apps, in terms of app-level revenue, monthly active users, and downloads, will tend to have higher MARS Means. **Analysis:** We will calculate 3 separate Kendall rank correlation coefficients between the MARS Mean and mean revenue, monthly active users, and downloads (each averaged across every day in the month the MARS review was completed); see the sketch following this section.
2. The MARS's Engagement, Functionality, and Aesthetics subscales will predict user retention more strongly than the Information subscale. **Analysis:** We will calculate 12 separate Kendall rank correlation coefficients between each MARS subscale (Engagement, Functionality, Aesthetics, and Information) and retention, measured as the percentage of users who opened the app 1 day, 7 days, and 30 days after downloading it.
3. Google Play store and Apple App Store ratings will be correlated with the MARS Mean. **Analysis:** We will calculate a Kendall rank correlation coefficient between the MARS Mean and app store ratings, averaging ratings across the Google Play store and Apple App Store when data are available from both.

*Exploratory analyses:* We calculated Kendall rank correlation coefficients between app downloads and each subscale of the MARS (Engagement, Functionality, Aesthetics, and Information).
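As an illustration of exclusion criterion 2 and the planned Kendall rank correlations for hypothesis 1, here is a minimal sketch continuing from `monthly_means` in the sketch above; the `mars` DataFrame, its file name, and its columns are hypothetical, and this is not the preregistered analysis script.

```python
import pandas as pd
from scipy.stats import kendalltau

# Hypothetical file with one MARS rating per app.
mars = pd.read_csv("mars_ratings.csv")

# Exclusion criterion 2: drop apps that Apptopia marks as having zero
# sessions or zero total time despite nonzero daily active users.
bad = (monthly_means["dau"] > 0) & (
    (monthly_means["sessions"] == 0) | (monthly_means["total_time"] == 0)
)
valid = monthly_means[~bad]

# Join the monthly Apptopia averages with the MARS ratings.
data = valid.merge(mars[["app_id", "mars_mean"]], on="app_id")

# Hypothesis 1: Kendall's tau between the MARS Mean and each scale metric.
for metric in ["revenue", "mau", "downloads"]:
    tau, p = kendalltau(data["mars_mean"], data[metric])
    print(f"MARS Mean vs {metric}: tau = {tau:.3f}, p = {p:.3f}")
```

The same `kendalltau` call would apply unchanged to the retention correlations in hypothesis 2 and the averaged app store ratings in hypothesis 3.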