Heart vs. Mind: A Digital Cross-Sectional Analysis of Smartphone Apps for Cardiovascular and Mental Health Care (Preprint)
Abstract

Background: Digital health apps have become more common in patient care; however, regulatory and clinical frameworks for evaluating their key features, effectiveness, and outcomes are lacking.

Objective: This study aims to add cardiovascular apps to the MINDApps rating platform and to broadly compare their efficacy, in the context of security and data collection, with matched apps for psychiatric conditions.

Methods: A total of 96 commercially available cardiovascular and mental health apps (48 cardiovascular, 48 mental health) were sampled. Using the MINDApps framework, key features for apps in each disease category were quantified. Data were tabulated, and the percentage of apps having specific features was compared between disease categories.

Results:

Reviewer 1

Abstract

1. Objective: Could you simplify, because the aim is not clear? For example, "structured evaluation framework" can be simplified.

RESPONSE: Thank you for the informative feedback; the objective has been simplified in the manuscript.

2. Methodology: It should be revised because there is a lack of information.

RESPONSE: We would like to thank the reviewer for their helpful feedback; the methodology has been updated to better describe the methods used.

3. "This brief report explores expanding the MIND framework to commercially available smartphone apps for cardiovascular conditions. We sought to adapt the MIND framework to a selection of cardiovascular apps and compare these cross-sectional findings to a subset of mental health apps." The objective of your study is not clear. Could you reformulate?

RESPONSE: The objective previously detailed has been updated to better reflect the objective of the study: "This brief report explores the current app landscape of cardiovascular apps available on the market by expanding the MIND framework.
We sought to adapt the MIND framework through the addition of CV-related question fields, select a sample of cardiovascular apps, and compare these cross-sectional findings to a matched subset of previously rated mental health apps."

Methods

4. Can you report your data following the STROBE guidelines and the corresponding checklist in a supplementary file?

RESPONSE: We would like to thank the reviewer for their recommendation to report the data following the STROBE guidelines; however, the STROBE guidelines do not apply to this data set. We adapted the MIND framework to work with cardiovascular apps and then added cardiovascular apps to it to meet our objective, so this is not a traditional epidemiological study to which STROBE would apply.

5. Can you specify your inclusion/exclusion criteria: language of apps? Free accessibility?

RESPONSE: The inclusion/exclusion criteria have been updated in the methodology section: "Apps were eligible for inclusion if they were updated within the prior 3 months and available in English. They also had to be patient facing in their purpose. We excluded apps that required referral from a health care provider. Cardiology apps were then matched to a subset of mental health apps from the MIND database for comparison purposes."

6. "The MIND framework is the largest publicly available database of mental health apps.[8] The framework consists of 105 objective questions (yes/no) based on the American Psychiatric Association's app evaluation model and one free-write section for reviewers to add any additional information.[9]" Can you specify the meaning of "MIND"? To be more comprehensible for readers, can you delete "framework" in the first sentence? In the second sentence, could you replace "framework" with "index"?

RESPONSE: Thank you for the feedback; the changes have been made in the paper to improve clarity for the reader. MIND stands for the Mobile Health Index and Navigation Database.

7.
"Apps are rated by reviewers who have gone through interrater reliability training, then reviewed by senior reviewers to ensure accuracy." Can you reformulate? How were the evaluators trained? How was inter-rater reliability assessed? Can you clarify the role of the senior reviewers?

RESPONSE: This is a great point to convey to the reader. The training is an extensive process that is detailed in the citation Lagan 2020; it involves standardized training and assessment, with certification granted only after reaching a set level of interrater reliability.

8. Could you specify the data analysis performed?

RESPONSE: The percentage of total apps having each variable was calculated, and Fisher's exact test was performed.

9. Why didn't you use the MIND database app list?

RESPONSE: We were unable to use the MIND database app list because the cardiology apps assessed were not part of the original database. The framework was adjusted to better assess cardiology apps, and these apps were then rated using the framework and analyzed.

10. Could you give details regarding the questionnaire? Number of sections? Aims of sections?

RESPONSE: The details regarding the questionnaire have been added to the methods section, in the form of an added Appendix B, to improve clarity.

11. Could you specify the sample size?

RESPONSE: The study sample consisted of 48 cardiovascular apps and 48 psychiatry apps that were used as the comparison group. This has been added to the methods section.

12. Could you give information regarding raters (profession, age, sex, etc.)?

RESPONSE: App evaluators include psychologists, health professionals, academics, and end users: any interested individual can undergo the comprehensive training process to become a rater. App raters undergo a three-hour training that involves an online information module and a practice rating of two apps, from which initial reliability is calculated. Only potential raters who exceed a kappa score of 0.7 with the reference rating are accepted as raters.
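As a hedged illustration of the acceptance rule described in this response, Cohen's kappa between a candidate rater's practice ratings and the reference ratings can be computed as below. This is a minimal sketch with illustrative yes/no answers, not the actual MIND training data or code:

```python
from typing import Sequence

def cohen_kappa(rater_a: Sequence[str], rater_b: Sequence[str]) -> float:
    """Cohen's kappa for two raters answering the same items."""
    n = len(rater_a)
    # Observed agreement: fraction of items on which the raters match.
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal frequencies,
    # summed over all answer categories.
    categories = set(rater_a) | set(rater_b)
    p_e = sum(
        (list(rater_a).count(c) / n) * (list(rater_b).count(c) / n)
        for c in categories
    )
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

# Hypothetical yes/no answers to 8 of the 105 framework questions.
reference = ["yes", "yes", "yes", "yes", "no", "no", "no", "no"]
candidate = ["yes", "yes", "yes", "yes", "no", "no", "no", "yes"]

kappa = cohen_kappa(reference, candidate)
accepted = kappa >= 0.7  # the acceptance threshold cited in the response
print(f"kappa = {kappa:.2f}, accepted = {accepted}")  # kappa = 0.75, accepted = True
```

With 7 of 8 answers matching and balanced marginals, chance agreement is 0.5, so kappa is (0.875 - 0.5) / 0.5 = 0.75, just above the 0.7 cutoff.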
More details can be found in the citation Lagan 2020.

13. MIND score: How was the score of one app evaluated? What is the highest score? Could you specify the score corresponding to an interesting app?

RESPONSE: MIND takes the approach that a simple score is not a useful metric, as what matters to one patient may not matter to another. By having 105 questions, MIND invites users to search for what matters to them and to filter out apps of less interest. This is why MIND lives on mindapps.org, where anyone in the world can access, search, filter, and find apps today. This includes the cardiovascular apps found in the paper.

Results

14. Can you add a flowchart?

RESPONSE: We would like to thank the reviewer for this suggestion; however, we do not think that a flowchart would help clarify the results of this study.

15. Characteristics of participants should be added.

RESPONSE: This is not applicable in the case of this study, as there are no participants. The many apps we assessed are the focus of our results.

16. Characteristics of the included apps should be added in a supplementary file.

RESPONSE: This has been added in a supplementary file.

17. There are few results, and they are very difficult to understand. You should add some sentences analyzing the score by section to determine the improvements to be made.

RESPONSE: The results section has been edited to better analyze and explain the results.

18. The results for each app should be presented in a supplementary file.

RESPONSE: The results for every application assessed can be found on the MINDApps website at mindapps.org.

Discussion

19. Why did you choose the MIND questionnaire over other questionnaires specific to mHealth apps (MARS, uMARS, etc.)?

RESPONSE: The MIND questionnaire was chosen over the MARS/uMARS questionnaires because the MIND framework can be personalized to patient care.
The MARS/uMARS frameworks give the user one final score for an app, whereas the MIND framework can be filtered based on the 105 objective questions, allowing a patient to tailor which characteristics they need in an app.

20. Your discussion mainly focused on data security, but no score was highlighted in the results section. Why did you focus on this section in the discussion?

RESPONSE: The results and discussion sections have been edited to better discuss this aspect of the study: "Most apps (cardiovascular: 42 out of 48, 88%; mental health: 43 out of 48, 90%) did have privacy policies, yet 30 out of 48 (63%) cardiovascular apps and 28 out of 48 (58%) mental health apps shared patient health information with a third party such as ad agencies."

21. Could you explain why you compared apps for cardiovascular care with those for mental health?

RESPONSE: Cardiovascular care apps were chosen in this case due to our interest and background in cardiology-related conditions and applications. Cardiovascular care apps are also a large part of the current application ecosystem in terms of fitness, blood pressure, and medication tracking apps. Given that mental health apps are the most common and most used health apps, they make a natural comparison group for assessing, at a minimum, what is the same or different across these apps.

22. Could you compare your results with other published results?

RESPONSE: There is no way to compare the cardiovascular apps, as this is the first time a framework has specifically looked at cardiovascular apps using MINDApps; the psychiatry apps, however, can be compared, and this comparison has been added to the discussion section.

23. Please add a limitations section.

RESPONSE: A limitations section has been added.

Conclusion

24. Your conclusion is not supported by results clearly presented in this article.

RESPONSE: The conclusion has been updated to better reflect the results presented in the article.

Minor comments
---------------------

25.
"Survey  data": Delete one space between the two words.

RESPONSE: The space has been deleted.

Reviewer 2

1. It is not clear how the cardiovascular apps were selected and analyzed according to the MIND framework. To my understanding, MIND lists mental health apps and conducts a structured assessment of these apps using 105 questions. Was the sample of cardiovascular apps also subjected to this kind of assessment, and if so, how was it ensured that the inter-annotator agreement was high compared to the annotations reported by MIND?

RESPONSE: We would like to thank the reviewer for their questions; this is a great point to clarify in the manuscript. The sample of cardiovascular apps was subjected to the same assessment as the psychiatry apps, as well as to the added cardiovascular-relevant questions in Figure 1. Inter-annotator reliability was ensured through the same process outlined in Lagan, 2020. These details have been added to the manuscript for further clarification.

2. The title and abstract state the goal of providing a detailed analysis and comparison of cardiovascular and mental health apps. The results section only very broadly covers a comparison, and Table 1 only encompasses some differences between the two groups without stating any possible reasons for these differences or explaining more details about the identified differences. More detail on this part of the analysis would be highly beneficial for the manuscript.

RESPONSE: Thank you for your comment; this is a great point to address in the manuscript. These details have been added to the manuscript.

3. It is stated that cardiovascular apps were matched to mental health apps. However, I could not find any details on this matching procedure in the methods section of the paper. Was there a specific reason why cardiovascular apps were chosen as the comparison group?
RESPONSE: Cardiovascular apps were matched based on their platform availability (Android only, iOS only, Android + iOS, Android + iOS + Web) and by cost (totally free vs. some form of payment). To closely mimic the functionality and intention of cardiology apps (blood pressure/medication trackers), only apps offering a mood or symptom tracker as their main feature were compared.

4. In the Abstract, it says "mental heal apps." Are "mental health apps" meant by that?

RESPONSE: Yes, this has been corrected. Thank you for catching our error.

5. Is there a reason why apps with a referral by a physician were excluded? (line 88)

RESPONSE: The study aimed to determine the landscape of apps widely available to the general population. The aim was to mimic a patient who is looking on the app store for an easy-to-use and available app for their medical condition.

6. What were the main reasons for the exclusion of apps? (line 104)

RESPONSE: Application inclusion/exclusion criteria have been added to the methods section.

7. It is stated that the apps "did not leverage new technologies of data collection" (line 123). Does this refer to the usage of wearable data? Was any longitudinal analysis conducted? I could imagine that this aspect changes over time.

RESPONSE: This is a great question. This is in reference to wearable data. No longitudinal analysis was conducted; however, apps were assessed in the "Data Collection" category to determine whether they can collect wearable data. Most cardiovascular apps (n = 39, 81.3%) collected user-inputted data, and only 5 (10.4%) had the ability to collect step count data. No apps had the ability to automatically collect data such as blood pressure or heart rate; these data points had to be manually entered.

8. The conclusion does not mention the comparison between the two groups, although this was stated as the main objective of the paper.
RESPONSE: The conclusion has been updated to better address the main objectives of the paper; thank you for your feedback.

9. To my understanding, the MINDapps were analyzed using the APA framework, not the MIND framework (this is what I gathered from their webpage).

RESPONSE: The MIND framework is a database that builds on the APA framework to be more accessible and easier for public use, with 105 objective questions allowing the user to filter based on the specific components of an app they want. Most of these 105 questions are not specific to the APA model, as questions about privacy, engagement, efficacy, etc. are universal across psychiatry and cardiology apps.

10. It would be beneficial to include a list of all apps included in the analysis in the manuscript or Appendix.

RESPONSE: The list of all apps can be found on the MINDApps platform and in the supplementary file. All apps are also posted on mindApps.org for the public to filter and explore today.

Conclusions: The current app marketplace rests on a limited clinical foundation, introduces numerous privacy and security concerns, and only lightly implements unique data collection features to increase ease of access. Cardiovascular apps were found to use more biodata feedback features than psychology apps; however, both raised security concerns. The MIND platform allows users to better understand which apps to avoid or adopt when beginning care plans.
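The data-analysis responses above name Fisher's exact test for the between-group comparisons. As a hedged, stdlib-only sketch (not the authors' actual analysis code), the two-sided test can be run on the privacy-policy counts quoted in the responses (42/48 cardiovascular vs. 43/48 mental health apps):

```python
from math import comb

def fisher_exact_two_sided(a: int, b: int, c: int, d: int) -> float:
    """Two-sided Fisher's exact test p-value for the 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables with the same margins that are
    no more likely than the observed table.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def prob(k: int) -> float:
        # Hypergeometric probability of k counts in cell (1, 1).
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)

    p_obs = prob(a)
    k_min = max(0, row1 - (n - col1))
    k_max = min(row1, col1)
    # Small tolerance guards against floating-point ties.
    return sum(prob(k) for k in range(k_min, k_max + 1)
               if prob(k) <= p_obs * (1 + 1e-9))

# Counts quoted in the responses above: apps with a privacy policy.
#                          has policy, no policy
p = fisher_exact_two_sided(42, 6,     # cardiovascular apps (42/48)
                           43, 5)     # mental health apps (43/48)
print(f"p = {p:.3f}")  # well above 0.05: no significant difference
```

With proportions this similar (88% vs. 90%), the test unsurprisingly finds no significant difference; the same function applies to the third-party data-sharing counts (30/48 vs. 28/48).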
External IDs: doi:10.2196/63642