The CONFIDENT study protocol: a randomized controlled trial … – BMC Public Health

Recruitment
Our recruitment strategy (see Table 1) will be guided by our LTCW stakeholders, including our NAHCA and East Carolina University (ECU) collaborators. Our stakeholders initially recommended online recruitment given its potential reach and practicality with the continued circulation of the COVID-19 virus. We will also recruit via a convenience sample of LTC settings identified by our study collaborators and stakeholders. Recruitment messaging (i.e., emails, social media ads, posters, table tents, and business cards) will include links to a study recruitment website and/or study eligibility screening questions. The recruitment website will include brief, plain-language study information in both video and written formats, and will contain links to access the study screening questions.
Consent and enrollment
Those interested in joining the study will first be screened for eligibility at the beginning of their baseline survey. If they meet all eligibility criteria, they will proceed to a study information sheet providing similar information to a traditional consent form (available on request). To improve accessibility and understanding of the study information, we developed an animated video version of the information sheet. Participants will be able to choose if they want to watch the video, read the information sheet, or both. The content will be identical. Participants will then indicate that they understand the information they just read or watched, and consent via an electronic checkbox to proceed to the remainder of the baseline survey.
Baseline data collection
In addition to screening and consent, the baseline survey will collect participants’ LTCW verification information, contact details, preferred method of contact (email or text), how they heard about the study, participant characteristics, contextual information, and baseline study outcome data (see ‘Outcomes’).
Verification process
Because of the online nature of the trial, incentives provided, and early instances of fraudulent activity (see progress of research in Additional File 6), we will implement several strategies to prevent, detect, and respond to fraudulent study enrollment. The strategies we will adopt were informed by a prior review of methods [48] and recommendations [49, 50] on this topic. We also engaged our stakeholders in this development process to ensure a range of strategies feasible and acceptable to LTCWs.
First, in the baseline survey, we will incorporate a Qualtrics reCAPTCHA bot detection filter [48, 49, 51], cookie-based settings that prevent multiple submissions from the same web browser [48, 49, 52], and a message discouraging duplicate survey completion [48, 49]. Second, we will collect information from participants that will allow us to verify their identity and confirm that they have worked in an LTC setting in the past two years.
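As a rough illustration of the duplicate-submission checks described above, the sketch below flags enrollments whose normalized email address or phone number repeats an earlier submission. The function names and record fields are hypothetical and do not reflect the study's actual Qualtrics or Salesforce configuration.

```python
# Hypothetical sketch of duplicate-enrollment detection (not the study's
# actual implementation): normalize contact fields, then flag repeats.
def normalize(value: str) -> str:
    """Lowercase and strip non-alphanumerics so trivial variants match."""
    return "".join(ch for ch in value.lower() if ch.isalnum())

def flag_duplicates(submissions):
    """Return indices of submissions whose email or phone repeats an earlier one."""
    seen, flagged = set(), []
    for i, sub in enumerate(submissions):
        keys = {("email", normalize(sub["email"])),
                ("phone", normalize(sub["phone"]))}
        if keys & seen:
            flagged.append(i)
        seen |= keys
    return flagged

subs = [
    {"email": "a@x.org", "phone": "555-0101"},
    {"email": "b@x.org", "phone": "555-0102"},
    {"email": "A@x.org ", "phone": "555-0199"},  # same email, different phone
]
print(flag_duplicates(subs))  # → [2]
```

In practice, flagged records would feed the follow-up steps described above (reminder messages, phone or Zoom verification) rather than being rejected automatically.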
After providing consent, participants will be asked to provide the name of their current or former LTC workplace, the type of LTC setting, their role, and the length of time that they have worked in LTC. Participants will then choose one of four different options for LTCW verification, as outlined in Table 2. We will use the information provided in Table 2 and triangulate it with other information provided by participants (e.g., workplace information) to verify LTCW status. Reminder messages (text and email) and, ultimately, phone conversations and Zoom calls [48, 49] will be used to gather further information from participants who fail to provide the requested LTCW verification information, or whose provided information requires clarification.
We will also employ TransUnion’s TLOxp verification service (www.tlo.com), to confirm the identity of participants recruited online [50]. TLOxp aggregates publicly available databases and records to authenticate and verify identity information. We will develop a standardized process for identity verification using participant name, zip code, age range, cell phone number, and email address.
Participants who do not pass a verification check (LTCW status and/or identity) will be given the opportunity to provide additional information. If they do not provide verifiable information, they will be sent a message stating they have been unenrolled from the study [49]. Due to a delay in implementing TLOxp identity checks (see Additional File 6), some of the identity checks will be retrospective (i.e., post study completion). Those who cannot be verified will be removed from the study sample prior to statistical analyses (see Additional File 4).
Participants recruited in person will be presented with the same verification survey questions. However, because in-person recruitment makes fraudulent enrollment far less likely for this subpopulation, we will primarily rely on their reported workplace name to confirm their legitimacy as LTCWs. Identity checks will also not be performed on this group.
Intervention delivery
Primary interventions and control
Immediately following trial arm allocation, participants will be presented with a brief description of their assigned trial arm. They will be told it is important they engage in their relevant trial arm activity within the next three weeks. Upon submitting their baseline survey, participants will be automatically redirected to another website based on their assigned arm. Participants in the dialogue-based webinar arm will be able to register for an upcoming webinar at a pre-specified date and time via a separate Qualtrics survey and Zoom registration page. Those in the social media website arm will be able to create user accounts and access the site immediately. Those in the enhanced usual practice arm will have direct access to the public CDC website.
Participants will receive study enrollment confirmation messages immediately after joining and several additional reminders pertaining to their assigned trial arm over the following two weeks (see Additional File 3). We will use a combination of email, text, and phone calls for these reminders, with most messages sent to the preferred method of contact (text or email) that the participant selected in their baseline survey.
Refresher interventions and control
Participants will be given access to their respective refresher intervention or information via email one week before receiving their T2 Survey invitation (see Fig. 1 and Additional File 3). They will receive one reminder email two days later featuring the same content.
Follow-up data collection
Participants will be invited to complete three follow-up surveys. The surveys will be sent three weeks post-baseline survey/randomization (T1), three months post-baseline survey (T2), and six months post-baseline survey (T3). Participants will be sent a pre-reminder before each survey invitation is sent, and up to four reminders afterwards for surveys not completed. We will use a combination of email, text, and phone calls for these reminders, with most messages sent to participants’ preferred method of contact. If a follow-up survey is not completed, participants will still be sent subsequent surveys.
We will not collect data on the reasons people decline consent, drop out before randomization, withdraw from the study, or do not complete one or more follow-up surveys. We will however seek information on reasons people did not engage with primary interventions and refreshers (see ‘Outcomes, Process evaluation’ and ‘Aim 3’).
Retaining participants
We will maximize retention by: (1) using short-form questions and measures when possible; (2) using automated reminders and multiple methods of communication; and (3) compensating participants with a $30 gift card for each survey completed. Brueton et al. identified monetary incentives as an effective way of improving participant retention [53].
Outcomes
We will collect primary and secondary outcomes across four online surveys that we tested with our LTCW partners. Sample surveys are available on request. We will also collect online activity data from both interventions. Timepoints for primary and secondary outcome assessment are specified in Additional File 4.
Primary outcome measure
Our primary outcome is COVID-19 vaccine confidence, which will be assessed using an adapted version of the Vaccine Confidence Index (VCI) [54]. The VCI measures confidence in vaccine importance, safety, and efficacy. Each item will be rated on a 5-point scale ranging from 1 (‘strongly disagree’) to 5 (‘strongly agree’). Participants are considered ‘confident’ (score of ‘1’) if they respond ‘agree’ or ‘strongly agree’ on all three items. This scoring approach was determined via panel survey data, in which we explored optimal thresholds that predicted vaccine uptake.
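Based on the scoring rule described above (a participant is 'confident' only if all three VCI items are rated 'agree' or 'strongly agree'), a minimal sketch of the binary classification might look like the following; the function name is ours, not from the protocol.

```python
# Illustrative sketch of the adapted VCI binary scoring rule described in
# the protocol: items rated 1-5, 'confident' only if all items are >= 4.
def vci_confident(importance: int, safety: int, efficacy: int) -> int:
    """Return 1 ('confident') if all three VCI items are 4 ('agree')
    or 5 ('strongly agree'), else 0."""
    return int(all(score >= 4 for score in (importance, safety, efficacy)))

print(vci_confident(5, 4, 4))  # → 1
print(vci_confident(5, 4, 3))  # → 0 (one item below 'agree')
```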
Secondary outcome measures
COVID-19 vaccine uptake and intent. We will assess uptake of the COVID-19 vaccines (any dose, initial series completion, and booster completion) using four questions that we developed. For those who report not being vaccinated or boosted, intent to get a COVID-19 vaccine (initial series or booster) will be assessed using two questions broadly adapted from prior work [55]. All participants will also be asked, using a single question, whether they would get regular COVID-19 vaccines in the future if recommended.
Likelihood of recommending (promoting) COVID-19 vaccination. Adapted Net Promoter Score (NPS) questions [56] will assess the likelihood that participants would recommend (1) COVID-19 vaccination to others who are unvaccinated, and (2) COVID-19 booster vaccination to a coworker. Similar questions have been recommended previously [57, 58]. We will adopt the traditional scoring approach that categorizes respondents as promoters, passives, or detractors.
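Under the traditional NPS scoring that the protocol adopts, respondents scoring 9-10 on the 0-10 scale are promoters, 7-8 are passives, and 0-6 are detractors. A brief sketch (function names are ours):

```python
# Traditional Net Promoter Score banding and aggregate score.
def nps_category(score: int) -> str:
    """Band a 0-10 likelihood-to-recommend rating."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

def net_promoter_score(scores):
    """Aggregate NPS = % promoters minus % detractors."""
    cats = [nps_category(s) for s in scores]
    return 100 * (cats.count("promoter") - cats.count("detractor")) / len(cats)

# 2 promoters (10, 9), 1 passive (8), 2 detractors (6, 3) → NPS of 0.0
print(net_promoter_score([10, 9, 8, 6, 3]))  # → 0.0
```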
Feeling informed about the COVID-19 vaccines. We will assess the degree to which participants feel informed about the COVID-19 vaccines (have enough information and understand that information) using two questions that we developed. Our operational definition of feeling informed was influenced by the Decision Self-Efficacy Scale [59].
Identification of COVID-19 vaccine information and misinformation. We will assess identification of COVID-19 vaccine-related information and misinformation using four questions that showed low rates of correct identification in our preliminary pilot work; we developed two of these questions and adapted the other two from prior work [42].
Trust in COVID-19 information from different sources. We will assess trust in COVID-19 information provided by different people and organizations using three items broadly adapted from prior work [60].
Other data collected
Participant characteristics. We will assess participant characteristics using a combination of existing, adapted, and self-developed questions for age, gender [61, 62], zip code, educational attainment [63], race and ethnicity [63], health insurance [63, 64], health literacy [65, 66], religiosity [67], LTCW role, LTC setting type, duration of experience in LTC, extent of others’ influence in COVID-19 vaccination, and baseline vaccination status.
Contextual factors. We will identify contextual factors that may contribute to COVID-19 vaccine intentions and decisions, including personal COVID-19 and vaccine experiences and participation in other COVID-19 vaccine research, using a single question that we developed.
External factors. Outside of the study surveys and throughout the trial, we will monitor external factors that may impact participants’ views and actions towards the COVID-19 vaccines. This may include policy and vaccine mandate changes for LTCWs and changes in the nature of the pandemic, among other things.
Intervention engagement. We will monitor the extent to which participants engage with their assigned primary and refresher intervention content. We will collect online activity data (i.e., social media website user history, webinar attendance records, email click rates) and participant self-reported engagement data via surveys. We will prioritize the use of online activity data to minimize potential measurement error [68]. However, survey questions will be used where online activity data is not available or is incomplete; for example, to determine engagement with the webinar refresher recording (adapted from [68]) and enhanced usual practice information. Data on engagement will be used to inform secondary trial analyses, as well as Aim 3.
Process evaluation. We will conduct a process evaluation as a component of Aim 3 to inform implementation and sustainability activities. Process evaluation questions will be administered in all follow-up surveys (T1-T3). Acceptability of the interventions and control arm will be determined via adapted NPS questions [56] (i.e., likelihood of recommending to a coworker). Similar approaches have previously been used for evaluating SDM interventions [68,69,70,71]. We will also assess how new the information was that participants were exposed to, the comprehension of and trust in the information (informed by [72]), the degree to which they felt listened to and respected by those running the interventions (informed by [73]), and reasons for not engaging with the primary or refresher interventions or control arm (adapted from [71]).
Sample size and power calculation
Using historical VCI data [74] and current vaccination rates [75], a binary classifier for vaccine confidence was identified based on VCI responses across all three questions, with 'confidence' defined as responding 'agree' or 'strongly agree.' This binary classifier was significantly correlated with rates of vaccination (p = 0.0001). The sample size of 1,800 LTCWs (600 per arm) provides 80% power to detect an 8% difference in the rate of 'confident' participants in each of the three different pairwise comparisons of study arms at a family-wise type I error rate of 0.05. The sample size is sufficient to retain 80% power to detect a 10% difference (assuming outcomes are randomly distributed across retained and lost participants) after 40% attrition.
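The general shape of such a two-proportion sample-size calculation can be sketched with a standard normal approximation, applying a Bonferroni-style per-comparison alpha (0.05/3) for the three pairwise tests. The 50% baseline confidence rate below is our assumption for illustration only; the protocol's figure of 600 per arm was derived from historical VCI data, so this sketch will not reproduce it exactly.

```python
import math
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05 / 3, power=0.80, one_sided=True):
    """Normal-approximation sample size per arm for comparing two proportions,
    using a per-comparison alpha of 0.05/3 for three pairwise tests."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - (alpha if one_sided else alpha / 2))
    z_b = z.inv_cdf(power)
    pbar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * pbar * (1 - pbar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Assumed 50% baseline confidence rate and an 8-point improvement;
# these inputs are illustrative, not the study's actual planning values.
print(n_per_arm(0.50, 0.58))
```

Small changes in the assumed baseline rate move the required sample size substantially, which is why the protocol anchors its calculation in historical VCI data.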
Statistical analysis
All analyses pertaining to study Aims 1 and 2 will be conducted on an intention-to-treat (ITT) basis (i.e., the arm participants were assigned to) and as-treated basis (i.e., whether they engaged with their assigned intervention or control). A detailed data analysis methodology, including planned statistical tests, timepoints for outcome assessment, and treatment of missing data, is included in Additional File 4.
For Aim 1, hypothesis 1.1, we will conduct one-tailed tests (superiority analysis) to compare the impact of each of the two intervention arms against the enhanced usual practice arm on primary and secondary outcomes. For Aim 1, hypothesis 1.2, we will conduct a two-tailed test (equivalence analysis) of primary and secondary outcomes between the two intervention arms. While we hypothesize that the dialogue-based webinar intervention (Arm 1) will be superior to the social media website intervention (Arm 2), a result of superiority, inferiority, or no distinguishable difference will each be informative.
For Aim 2, hypothesis 2.1, our mediation analyses seek to identify the relationships between the interventions, mediator variables, and primary and secondary outcomes. We are interested in whether the interventions operate through the mediator rather than directly affecting the outcome. If the results for Aim 1 are non-significant, we will determine whether this reflects a null effect of the intervention on the mediator or a null effect of the mediator on the outcome. We will determine mediation strength and mechanism generalizability by comparing effects across subgroups.
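The product-of-coefficients logic behind such a mediation analysis can be sketched on simulated data: the a-path (intervention → mediator) multiplied by the b-path (mediator → outcome, adjusting for the intervention) estimates the indirect effect. All effect sizes and variable names below are invented for illustration; the study's actual models are specified in Additional File 4.

```python
# Illustrative product-of-coefficients mediation estimate on simulated data.
import random

def slope(x, y):
    """Simple linear regression slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def ols2(y, x1, x2):
    """Coefficients (b0, b1, b2) for y ~ 1 + x1 + x2 via normal equations."""
    n = len(y)
    X = [[1.0, a, b] for a, b in zip(x1, x2)]
    XtX = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(3)]
           for r in range(3)]
    Xty = [sum(X[i][r] * y[i] for i in range(n)) for r in range(3)]
    for col in range(3):                      # Gauss-Jordan elimination
        piv = XtX[col][col]
        for r in range(3):
            if r != col:
                f = XtX[r][col] / piv
                XtX[r] = [XtX[r][c] - f * XtX[col][c] for c in range(3)]
                Xty[r] -= f * Xty[col]
    return [Xty[r] / XtX[r][r] for r in range(3)]

random.seed(1)
n = 500
treat = [random.randint(0, 1) for _ in range(n)]
mediator = [0.8 * t + random.gauss(0, 1) for t in treat]        # a-path = 0.8
outcome = [0.5 * m + 0.1 * t + random.gauss(0, 1)
           for t, m in zip(treat, mediator)]                    # b-path = 0.5

a = slope(treat, mediator)                  # intervention -> mediator
b = ols2(outcome, mediator, treat)[1]       # mediator -> outcome, given treatment
print(f"indirect effect a*b ≈ {a * b:.2f}")  # true value is 0.8 * 0.5 = 0.40
```

Comparing the estimated a- and b-paths makes the decomposition described above concrete: a near-zero a-path points to a null effect of the intervention on the mediator, while a near-zero b-path points to a null effect of the mediator on the outcome.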
For Aim 2, our exploratory moderation analyses (also referred to as heterogeneity of treatment effects (HTE) analyses) seek to understand whether certain participant characteristics and beliefs influence the relationship between the interventions and vaccine confidence, as well as other secondary outcomes. We will explore moderators including (but not limited to) vaccination status, religious beliefs, age, race, ethnicity, perceived influence of others, and personal experiences with COVID-19.
Because of the size, scope, and complexity of this study, exploratory analyses of relationships within the data will be conducted to identify factors to inform future analysis (see Additional File 4). Exploratory, hypothesis-free analyses will be performed using data clustering to analyze participants’ demographic, geographic, or temporal links, which may define statistically unlikely outcome groupings.
Aim 3
We will examine the implementation and sustainability potential of the dialogue-based webinar and social media website interventions using an implementation mapping approach informed by relevant domains of the Consolidated Framework for Implementation Research (CFIR) [36, 76, 77] (see Fig. 2 and Additional File 5 for Aim 3 planned activities). Aim 3 will also include a separate but related process evaluation component.
We will interview a purposive sample of stakeholders in the planning, trial, and sustainability phases of the study, including LTCW partners, LTC leaders, trial investigators, partner organizations, and participants in the two intervention arms (n = up to 100 total based on data saturation within predetermined subgroups). These interviews will explore the delivery characteristics, contexts, and processes needed to sustain LTCWs’ use of the interventions. Trial participants will receive a $30 gift card for participating in an interview. We will also collect online activity data from both interventions, field notes, and observations of the adaptations made to the interventions during the trial using FRAME (expanded framework for reporting adaptations and modifications to evidence-based interventions) [78]. After completion of the RCT, intervention adaptations will be made (also tracked via FRAME) based on trial results and informed by Aim 3 interviews with key stakeholders. In the sustainability phase, we will open access to adapted interventions to all LTCWs outside the study participant sample and identify engagement rates via available online activity data. The Aim 3 process evaluation encompasses fidelity measurement, dose, reach, and reactions to the interventions and enhanced usual practice information (see ‘Outcomes, Process evaluation’).
Data management
Oversight of trial data will be the responsibility of the Program Director, Trial Manager, and Data Manager. Prior to trial recruitment commencing, we will work with a third-party vendor to implement a customized Salesforce platform for participant management throughout the trial. This will include setting up integrations with Qualtrics, Zoom, Mogli (text message vendor), and ActiveCampaign (email distribution program). Using Salesforce will facilitate the automated distribution of emails and text messages to participants, and the continued management of participants at the group and individual level, throughout the trial.
Any identifying data exported from Qualtrics, Salesforce, the social media website, Zoom, or ActiveCampaign will be stored on computers, external hard drives, and cloud-based platforms that are password-protected. Access to all software and platforms where identifying data are stored or handled will only be given to research team members involved in data collection, management, and/or analysis and reporting, on a need-to-know basis.
Trial management and monitoring
Monitoring enrollment
Trial enrollment will be monitored weekly using Salesforce. The number of people screened out as ineligible, and the corresponding reasons, will also be reviewed. We will periodically monitor certain demographic characteristics (gender, age, race/ethnicity, education) of our study sample to determine whether they are representative of the national LTCW population [19]. Where possible, we will focus subsequent enrollment on underrepresented groups.
Adherence to protocol
We will train all relevant co-investigators, research staff, intervention facilitators, and other stakeholders in delivering the interventions and facilitating recruitment, verification, and retention according to the procedures outlined in the protocol. The protocol will be available to all team members and key stakeholders. Standard Operating Procedures (SOPs) will also document detailed tasks to be routinely performed by research staff to enhance fidelity and consistency of study functions.
Activity within each intervention will be monitored by research staff to ensure fidelity of intervention delivery and participants’ adherence to community standards. Fidelity observation grids will be completed weekly. Feedback on protocol adherence or nonadherence will be shared with relevant parties involved in intervention delivery on a regular basis.
Trial management
Dartmouth College will be responsible for centralized trial management and general oversight. The Dartmouth research team will maintain all aspects of the trial and will work closely with each collaborator (NAHCA, ECU, IHI) to coordinate all trial activities. The Center for Program Design and Evaluation (CPDE), a research service center at Dartmouth, will conduct the process evaluation as part of Aim 3. The research team will meet with CPDE on a regular basis to ensure coordination, access to Aim 3 interviews and necessary intervention data and information, and optimal timing of process evaluation activities.
Advisory groups
Three advisory groups will be assembled and will meet regularly throughout the study. A Stakeholder Advisory Group (SAG) will review progress towards study goals and objectives and offer feedback and guidance throughout. The SAG will consist of ten LTCW partners and other key stakeholders (members from advocacy, policy, and LTC organizations) and will meet on a quarterly basis. A Trial Steering Group (TSG) consisting of co-investigators, four core LTCW partners, and expert consultants in vaccine confidence, will monitor trial progress, offer advice, and make final decisions on pending study questions. The TSG will also meet quarterly. The group of ten LTCW partners will also meet bimonthly to discuss study progress and to offer feedback and advice on study materials, plans, and progress.
Data Safety and Monitoring Board
A Data Safety and Monitoring Board (DSMB) will be convened to provide additional oversight of the trial and to assess data safety, and will serve in an advisory role in making recommendations to the study staff. The DSMB will consist of four members (inclusive of the chair), including experts in or representatives from the areas of data safety monitoring, statistics, and clinical trial methodology. The DSMB will operate independently from the study funder and research team. It will meet three times during the course of the trial. The DSMB will review the protocol, data collected, and the performance of study operations and other relevant issues.
Trial activity will also be monitored by members of the study team on an ongoing basis. We believe the likelihood of serious adverse events in the trial to be extremely low, as there are no invasive procedures related to the interventions. All potential adverse events or unanticipated events/problems will be reported to the Principal Investigators (PIs). All events will be discussed by the PIs and relevant members of the study team. Any event deemed to be reportable will be submitted to Dartmouth Committee for the Protection of Human Subjects (CPHS), and if necessary, the DSMB will convene urgently to review the event in question and advise the PIs on any risk mitigation plans.
Ethics and dissemination
This study was approved by the Dartmouth CPHS [STUDY00032340]. Protocol modifications will be reported to the Dartmouth CPHS, DSMB, and/or the study funder, where relevant and required.
Once all study data are collected and analyzed, we will develop a pictorial lay summary of the research aims, methods, and key findings. The summary and the final results publication will be sent to interested participants. In addition, we will engage in broader dissemination efforts targeting community stakeholder organizations (e.g., press releases, newsletters, social media outreach). Determination of authorship on all results publications will adhere to the International Committee of Medical Journal Editors authorship criteria.
We will also prepare a full data package of anonymized RCT data. This data package will be maintained for at least seven years. As per funder requirements, we will make the package available to a data repository (designated by the funder) to facilitate data sharing with the broader scientific community.
Progress of research
Planning and executing a trial during the COVID-19 pandemic has presented unique challenges to the research process, requiring several unanticipated protocol adaptations as the pandemic evolved. We have also experienced increasingly sophisticated instances of fraudulent enrollment activity, necessitating continued adaptations to our verification processes. We have detailed these changes and their justifications in Additional File 6.