This study was approved by the institutional review board (IRB) of the VA St. Louis Health Care System; because of the observational and retrospective nature of the study, the IRB granted a waiver of informed consent (protocol number 1606333). Participants were not compensated.
Data from the US Department of Veterans Affairs’ electronic healthcare databases were used in this study. The Veterans Health Administration (VHA) is a branch of the US Department of Veterans Affairs; the VHA operates the largest nationally integrated healthcare system in the US, comprising 1255 healthcare facilities (including 170 VA Medical Centers and 1074 outpatient sites). Veterans who enroll in the VHA gain access to a comprehensive medical benefit package that includes preventive and health maintenance care, outpatient and inpatient hospital care, prescriptions, mental health care, home health care, primary care, specialty care, geriatric and extended care, and medical equipment and prosthetics.
A flow chart describing cohort construction is provided in Supplementary Fig. 7.
Overall, we built a cohort of people with a positive SARS-CoV-2 test who survived the first 30 days after the date of the positive test and compared them with a contemporary control group and, separately, with a historical control group, to which similar cohort selection criteria (including survival of the first 30 days of follow-up) were applied.
Veterans who used the VHA in 2019 (n = 6,244,069) with a positive COVID-19 test between March 1st, 2020 and January 15th, 2021 were enrolled into the COVID-19 cohort (n = 169,476). To ensure only post-acute COVID-19 outcomes were examined, we excluded participants who died within 30 days of receiving a positive COVID-19 test result, yielding a cohort of 154,068 participants. The date of the first COVID-19 positive test was set as the start of follow-up, denoted by T0; the end of follow-up was set to be the first occurrence of death or January 15th, 2022.
We then built two control groups: a contemporary control group of people who lived during the same enrollment period as those in the COVID-19 group and, separately, a historical control group drawn from a pre-pandemic era.
The contemporary control cohort initially consisted of veterans who used the VHA in 2019 (n = 6,244,069). Those who were alive on March 1st, 2020 (n = 5,963,205) and not already in the COVID-19 cohort were enrolled into the contemporary control cohort (n = 5,809,137). To ensure a similar distribution of follow-up between the COVID-19 and contemporary control groups, the start of follow-up for participants in the contemporary control group was randomly assigned following the same distribution as the dates on which participants in the COVID-19 group received their first positive test result. Of the 5,660,999 participants alive at the beginning of follow-up, 5,638,795 were alive 30 days after the beginning of follow-up and were selected as the contemporary control cohort. Follow-up concluded on the first occurrence of death or January 15th, 2022.
We also built a historical control group consisting of 6,463,487 individuals who used the VHA in 2017. Of those alive on March 1st, 2018 (n = 6,152,185), 6,009,794 participants who were not already part of the COVID-19 cohort were enrolled into the historical control group. T0 was randomly assigned in the historical group using the same follow-up distribution as the COVID-19 group minus 2 years (730 days). In total, of the 5,876,880 participants who were alive at T0, 5,859,621 were alive 30 days after T0 and were selected into the historical control group. Follow-up concluded on the first occurrence of death or January 15th, 2020.
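The random T0 assignment described above (matching the follow-up distribution of the COVID-19 group, shifted back 730 days for the historical cohort) can be sketched as follows. This is a hedged illustration, not the study's code; the dates, cohort sizes, and function name `assign_control_t0` are hypothetical.

```python
import random
from datetime import date, timedelta

# Synthetic placeholder dates standing in for the empirical distribution of
# first-positive-test dates in the COVID-19 group.
covid_t0_dates = [date(2020, 3, 1) + timedelta(days=d) for d in (5, 40, 40, 90, 200)]

def assign_control_t0(n_controls, covid_t0, offset_days=0):
    """Sample a T0 for each control participant, with replacement, from the
    COVID-19 group's T0 distribution; optionally shift back by offset_days
    (730 for the historical cohort)."""
    offset = timedelta(days=offset_days)
    return [random.choice(covid_t0) - offset for _ in range(n_controls)]

contemporary_t0 = assign_control_t0(4, covid_t0_dates)
historical_t0 = assign_control_t0(4, covid_t0_dates, offset_days=730)
```

Sampling with replacement from the observed T0 dates reproduces the COVID-19 group's follow-up distribution in each control group by construction.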
Electronic health records from the VA Corporate Data Warehouse (CDW) were used in this study. Patient demographic information was obtained from the CDW Patient domain. Outpatient and inpatient clinical information were obtained from the CDW Outpatient Encounters and CDW Inpatient Encounters domains, respectively. Medication prescriptions and fillings were obtained from the CDW Outpatient Pharmacy and CDW Bar Code Medication Administration domains. Laboratory test data were collected from the CDW Laboratory Results domain, and the COVID-19 Shared Data Resource domain provided information relevant to COVID-19. Additionally, we used the Area Deprivation Index (ADI), defined as a summary measure of income, education, employment, and housing, as a composite variable of contextual factors present at each participant's residential location45.
Pre-specified outcomes were selected based on our prior work on the systematic characterization of Long COVID1,17,22 and from evidence in prior literature8,11,46,47,48,49,50. Each gastrointestinal outcome was defined based on corresponding International Classification of Diseases, 10th revision (ICD-10) diagnostic codes1,16,17,19,51 or on laboratory test results. Additionally, individual outcomes were aggregated into related composite outcomes (for example, the coagulation composite outcome consisted of abnormally elevated PT, PTT, and INR). Furthermore, we specified a composite of any gastrointestinal outcome as the first incident occurrence of any of the predefined gastrointestinal outcomes examined in this study (including those based on diagnostic codes or laboratory tests). Incident individual and composite gastrointestinal outcomes during the post-acute phase of COVID-19 were assessed during the follow-up period from 30 days after T0 until the end of follow-up in participants without any history of the outcome in the year prior to T0. In instances where the occurrence of an outcome may result from medication use (for example, PT, PTT, and INR), the incident outcome was ascertained in participants without a history of the related outcome and without exposure to medications that may affect it in the year prior to T0.
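The composite ascertainment described above, taking the first incident occurrence across component outcomes, can be illustrated with a small sketch. The function name and the component names and dates below are hypothetical, not the study's implementation.

```python
from datetime import date

def first_composite_occurrence(component_first_dates):
    """Return the earliest first-occurrence date across component outcomes,
    or None if no component outcome was ever observed."""
    observed = [d for d in component_first_dates.values() if d is not None]
    return min(observed) if observed else None

# Hypothetical participant: two component outcomes observed, one never observed.
example = {
    "GERD": date(2020, 9, 1),
    "acute pancreatitis": None,
    "elevated PT": date(2020, 7, 15),
}
```

Under the study's definition, this participant's composite gastrointestinal outcome date would be the earliest component date (here, the elevated PT on July 15).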
We utilized a two-pronged approach to covariate selection: (1) covariates were selected based on prior knowledge1,4,6,7,9,10,16,17,18,19,20,24,26,44,51,52; and (2) in recognition that our knowledge of COVID-19 is evolving, we also employed an algorithmic approach to identify covariates in data domains consisting of diagnoses, medications, and laboratory test results. Pre-defined and algorithmically selected covariates were used in modeling and were assessed in the year prior to T0.
Pre-defined covariates consisted of age, race (white, black, and other), sex, ADI, body mass index, smoking status (current, former, and never), and measures of healthcare utilization (number of outpatient encounters as well as long-term care utilization1,16,18). Additionally, several comorbidities including cancer, cardiovascular disease, chronic kidney disease, chronic lung disease, diabetes, and hypertension were used as pre-defined covariates. Laboratory and clinical values consisting of estimated glomerular filtration rate and systolic and diastolic blood pressure were also used as pre-defined covariates. Continuous variables were transformed into restricted cubic spline functions to account for possible non-linear relationships.
To supplement our pre-defined covariates, we utilized algorithmically selected covariates from high-dimensional data domains consisting of diagnoses, medications, and laboratory test results53. Data from patient encounter, prescription, and laboratory domains collected in the year prior to T0 were organized into 540 diagnostic groups, 543 medication types, and 62 laboratory test abnormalities. From these three domains (diagnoses, medications, and laboratory test results), we selected variables that occurred in at least 100 participants within each exposure group, in acknowledgment of the fact that exceedingly rare variables (those that occurred in fewer than 100 participants in these cohorts) may not substantially influence the examined associations. Univariate relative risks between each variable and exposure were estimated, and the 100 variables with the highest relative risks were selected for use in statistical analyses54. The algorithmic selection process described above was used to independently select high-dimensional covariates in each comparison (for example, the COVID-19 vs contemporary control and the COVID-19 vs historical control analyses to assess incident GERD).
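As a hedged illustration of this selection procedure, the frequency filter and the ranking by univariate relative risk might be sketched as below. The data, thresholds passed in the toy example, and function names are illustrative only; the study applied this logic to 1145 candidate variables with a minimum count of 100 and k = 100.

```python
def relative_risk(var, exposed):
    """Univariate relative risk of exposure, comparing participants with
    (var == 1) vs without (var == 0) the candidate variable."""
    with_var = [e for v, e in zip(var, exposed) if v == 1]
    without_var = [e for v, e in zip(var, exposed) if v == 0]
    return (sum(with_var) / len(with_var)) / (sum(without_var) / len(without_var))

def select_top_covariates(candidates, exposed, k=100, min_count=100):
    """Keep variables occurring in at least min_count participants within
    each exposure group, then return the k with the highest relative risks."""
    eligible = {
        name: var for name, var in candidates.items()
        if sum(v for v, e in zip(var, exposed) if e == 1) >= min_count
        and sum(v for v, e in zip(var, exposed) if e == 0) >= min_count
    }
    ranked = sorted(eligible, key=lambda n: relative_risk(eligible[n], exposed),
                    reverse=True)
    return ranked[:k]
```

Running the selection separately within each comparison's sub-cohort, as the study did, allows the high-dimensional covariate set to differ between the contemporary and historical control analyses.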
Baseline characteristics of the COVID-19, contemporary, and historical control groups were described, and the standardized mean differences between COVID-19 and contemporary control, and between COVID-19 and historical control were calculated.
To estimate the risk of each incident gastrointestinal outcome, we first constructed a sub-cohort of participants without a history of the outcome of interest (for example, the risk of incident GERD was estimated within a sub-cohort of participants without any history of GERD) in the year prior to cohort enrollment.
Within each sub-cohort, three logistic regressions were built to estimate the probabilities of belonging to the target population of VHA users in 2019 (equivalent to the combination of the COVID-19 group and the contemporary control group) for the COVID-19, contemporary control, and historical control groups. These probabilities were estimated based on the pre-defined and comparison-specific algorithmically selected high-dimensional variables and were used as the propensity score. The propensity score was then used to calculate the inverse probability weight (propensity score/(1 − propensity score)). To account for the influence of extreme weights and the sample size difference between comparison groups, we prespecified our analytic plan to truncate weights greater than 1000; no weights exceeded 1000, so no truncation was performed. Covariate balance was assessed by standardized mean differences after application of weighting.
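The weighting step reduces to a simple transformation of the estimated propensity score. The sketch below assumes the propensity scores have already been estimated upstream (by logistic regression on the pre-defined and algorithmically selected covariates); the function name and cap value shown are taken from the prespecified plan but the code itself is illustrative.

```python
def ipw_weight(propensity_score, cap=1000.0):
    """Inverse probability weight, ps / (1 - ps), truncated at the
    prespecified cap of 1000."""
    weight = propensity_score / (1.0 - propensity_score)
    return min(weight, cap)

# Example: three hypothetical propensity scores mapped to weights.
weights = [ipw_weight(p) for p in (0.2, 0.5, 0.9)]
```

A propensity score of 0.5 yields a weight of 1; scores approaching 1 produce large weights, which is why a truncation rule was prespecified even though it was never triggered.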
After application of inverse probability weighting, cause-specific hazard models, in which death was considered a competing risk, were used to estimate hazard ratios of incident gastrointestinal outcomes between the COVID-19 and contemporary control groups and between the COVID-19 and historical control groups. The survival probability at 1 year within each group was used to estimate the burden per 1000 participants at 1 year of follow-up in the COVID-19 and control groups; the difference in estimated burdens between the COVID-19 and control groups was used to compute the excess burden per 1000 participants at 1 year. Additionally, we conducted analyses in subgroups defined by age, race, sex, obesity, smoking, diabetes, cardiovascular disease, chronic kidney disease, hyperlipidemia, and hypertension.
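The burden calculation is a direct transformation of the 1-year survival probabilities. As a hedged sketch (the survival values below are illustrative, and the function names are hypothetical):

```python
def burden_per_1000(survival_1yr):
    """Burden per 1000 participants at 1 year: 1000 * (1 - S(1 year)),
    where S is the outcome-free survival probability in the weighted group."""
    return 1000.0 * (1.0 - survival_1yr)

def excess_burden(survival_covid, survival_control):
    """Excess burden per 1000: difference between COVID-19 and control burdens."""
    return burden_per_1000(survival_covid) - burden_per_1000(survival_control)

# Example: S = 0.95 in the COVID-19 group vs S = 0.97 in a control group
# gives burdens of roughly 50 and 30, and an excess burden of roughly 20,
# per 1000 participants at 1 year.
```

Reporting the excess burden on an absolute scale complements the hazard ratio, which conveys only relative risk.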
The association between COVID-19 and the risks of post-acute gastrointestinal outcomes was further examined by stratifying the COVID-19 cohort into mutually exclusive groups determined by each participant's care setting during the acute phase of COVID-19 (that is, whether participants were non-hospitalized, hospitalized, or admitted to the intensive care unit during the first 30 days of infection). The statistical approach outlined in the previous paragraph was used to estimate inverse probability weights for each care setting group. Cause-specific hazard models utilizing inverse probability weighting were applied, and hazard ratios, burdens, and excess burdens were calculated.
We conducted a comparative analysis of individuals hospitalized with COVID-19 vs those hospitalized with seasonal influenza. Admission to the hospital was ascertained in the first 30 days after a positive test result (for both COVID-19 and seasonal influenza). Comparisons were conducted using weighted cause-specific hazard models.
We further tested the robustness of our study design by conducting multiple sensitivity analyses: (1) we modified our covariate selection by increasing covariate inclusion to 300 high-dimensional variables (instead of the 100 high-dimensional variables used in the main analysis) when constructing the inverse probability weight; (2) we restricted covariate selection to pre-defined variables only when constructing the inverse probability weight (no algorithmically selected variables were used); and (3) we applied a doubly robust approach, in which associations were estimated by applying both covariate adjustment and the inverse probability weights to the survival models55.
We tested whether our approach would reproduce known associations by testing fatigue, considered a cardinal manifestation of Long COVID, as a positive outcome control. Additionally, we used the approach outlined by Lipsitch et al.56 to specify and test a set of negative outcome controls, for which no prior evidence supports the existence of a causal relationship between COVID-19 exposure and the specified outcomes. Lastly, we tested a pair of negative exposure controls: we hypothesized that exposure to the influenza vaccine on odd- vs even-numbered calendar days between March 1st, 2020 and January 15th, 2021 would not be associated with increased or decreased risks of the gastrointestinal outcomes examined in our analysis. Successful application of these negative outcome and negative exposure controls would reduce concern about the presence of spurious biases in study design, covariate selection, analytic approach, outcome ascertainment, residual confounding, and other sources of latent bias56.
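The negative exposure assignment amounts to splitting vaccine recipients by the parity of their vaccination day, a classification that should carry no causal information. A minimal sketch, with a hypothetical function name:

```python
from datetime import date

def parity_group(vaccination_date):
    """Assign the negative exposure group by whether the influenza
    vaccination fell on an odd- or even-numbered calendar day."""
    return "odd" if vaccination_date.day % 2 == 1 else "even"
```

Because day parity is effectively arbitrary, any apparent association it produces with the gastrointestinal outcomes would signal a flaw in the analytic pipeline rather than a real effect.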
Estimation of variance when applying weighting was achieved through robust sandwich variance estimators. For every analysis, evidence of statistical significance was considered present when the 95% confidence interval excluded unity. All analyses were conducted using SAS Enterprise Guide version 8.2 (SAS Institute), and visualization of results was accomplished using R version 4.0.4.
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.