Gaining Consent to Survey Respondents’ Partners: The Importance of Anchors’ Survey Experience in Self-administered Modes *

Abstract: Dyadic surveys aim to interview pairs of respondents, such as partners in a relationship. In dyadic surveys, it is often necessary to obtain the anchors' consent to contact their partners and invite them to a survey. If the survey is conducted in self-administered modes, no interviewer is present to improve the consent rate, for example, by providing convincing arguments and additional information. To overcome the challenges posed by self-administered modes for dyadic surveys and to improve consent rates, it is important to identify aspects that positively influence the likelihood of anchors giving consent to contact their partners. Ideally, these aspects are in the hands of the researchers, such as the survey design and aspects of the questionnaire. Thus, in this study, we analyzed the relationship between anchors' survey experience and their willingness to consent to surveying their partners in self-administered modes. Based on data from the German Family Demography Panel Study (FReDA), we found that the anchors' perceptions of the questionnaire as "interesting" or "too personal" were related to consent rates. These relationships were consistent across different survey modes and devices. Effects of other aspects of the questionnaire, such as "important for science" and "diverse", varied between modes and devices. We conclude with practical recommendations for survey research and an outlook for future research.


Introduction
Dyadic surveys aim to interview pairs of respondents (Barton et al. 2020). In many instances, dyadic surveys focus on partners in a relationship (spouse or intimate partner); other examples include friends (e.g., Chow et al. 2013) or kin relationships such as parents and children (e.g., Kalmijn/Liefbroer 2011). Depending on the sampling approach, different designs exist for inviting respondents to participate in a dyadic survey (Pasteels 2015). When relying on register-based samples, individuals are first sampled and invited to participate in a survey. These target persons, hereafter referred to as "anchors", are then asked for their consent to contact their partners. Only if the anchors' consent is obtained will the partners be contacted and invited to participate in the survey as well. Pasteels (2015) termed this approach a singular multi-actor design.
In face-to-face mode, interviewers are present to conduct the anchor interview. Interviewers can positively impact the anchors' survey experience (cf. West/Blom 2017) and, thus, possibly their willingness to consent to the additional interviewing of their partners or children. Ideally, the other household members are also present, so the interview can be conducted directly with a partner or child. If the person in question is not on-site or is preoccupied for other reasons, an appointment can be arranged. Alternatively, the interviewer can leave the invitation letter and the questionnaire for the partner at the anchor's home. In pairfam ("Panel Analysis of Intimate Relationships and Family Dynamics"), for example, this approach has been successfully implemented (Huinink et al. 2011).
In self-administered modes, when no interviewers are involved, the process of inviting additional persons to a dyadic survey is different. Anchors must first obtain their partners' agreement, for example, to being contacted by a survey institute. Anchors must also provide their partners' contact information (i.e., names and addresses), which is especially important if the anchor and their partner live in separate households. Only then can survey invitations be sent to partners living in the same household as the anchor or outside of it. In a self-administered survey, therefore, it is even more challenging to obtain the anchors' consent to survey their partners than in face-to-face studies, because there is no interviewer present to provide additional information about the benefits of the request and convince the anchors to agree to contact and invite their partners.
Surveying partners in self-administered dyadic surveys is a three-step process: First, an anchor person must provide consent to approach the partner with a survey invitation (consent); second, the anchor needs to provide valid contact information, so a partner can then be invited (invitation); and third, the partner must decide to participate in the survey (participation). The loss of partners is amplified across these three steps of selection (Starks et al. 2015). In our view, it is important to investigate each step separately, so one can disentangle and better understand the mechanisms at work. We argue that this will help us find ways to increase the number of partners who can successfully be surveyed in self-administered modes. In our study, we focus on the consent step. Having said that, we argue that further research into the other two steps is warranted to obtain a complete picture of the process of realizing a partner interview in dyadic surveys.
Although much research exists on respondents' survey consent (e.g., Jenkins et al. 2006; Singer 1993, 2003) or their consent to data linking with, for example, population registers, sensors, apps, or paradata (e.g., Kunz/Gummer 2020; Sakshaug et al. 2012; Sakshaug/Kreuter 2012), research is still lacking regarding the process of anchors consenting to interviews of related persons (e.g., partners, children, parents, friends) in self-administered modes. We consider it plausible that the decision about consent differs in a dyadic survey, because it is not about anchors agreeing to share their own data, but about allowing contact with another person and potentially burdening them with the task of participating in a survey. Unfortunately, previous research on consent in dyadic surveys has solely investigated face-to-face settings with a focus on the role of interviewers in these consent situations (Kalmijn/Liefbroer 2011; Schmiedeberg et al. 2016; Schröder et al. 2012; Schröder et al. 2016).
To overcome the challenges posed by self-administered modes for dyadic surveys and to improve anchors' consent to contact their partners, it is important to identify aspects of a survey that positively influence the likelihood of anchors giving consent. Ideally, these aspects are in the hands of the survey researchers, such as the survey design and aspects of the questionnaire. The existing gap in research on anchors' consent to contact their partners in self-administered dyadic surveys makes this a formidable challenge. Currently, researchers do not know which design aspects to focus their attention and resources on and which aspects to test in experiments. Experimental studies, especially when conducted with probability-based samples, can easily become laborious and costly.
In the present study, we analyze the relationship between anchors' survey experience and their willingness to consent to survey their partners in self-administered dyadic surveys. We argue that how anchors experienced the survey themselves is a critical factor in their consent to contact their partners for a survey as well. Thus, we assume that anchors who have had a positive experience participating in a survey are more likely to convince their partners to participate and give consent to invite them. A variety of aspects influence respondents' survey experience. Among those that are under the researchers' control are the content of a questionnaire (e.g., Silber et al. 2021), the inclusion of sensitive questions (e.g., Tourangeau/Yan 2007), or the length of a questionnaire (e.g., Galesic/Bosnjak 2009). Even if these factors are the same for all respondents, the individual perception is likely to vary between them (e.g., respondents will differ in whether they perceive a questionnaire as interesting) (e.g., Yan/Williams 2022). However, aspects that vary across respondents and can also impact the survey experience are the survey mode (de Leeuw/Berzelak 2016) and, in the case of the web mode, the device used to complete the questionnaire (Couper/Peterson 2017).
Self-administered general population surveys are often designed as mixed-mode surveys featuring web and paper modes (e.g., Luijkx et al. 2021; Wolf et al. 2021), so as not to introduce coverage error for parts of the population who have no internet access or do not want to participate via the web (Cornesse/Schaurer 2021). We follow the reasoning of previous studies on survey mode systems (e.g., Struminskaya et al. 2015), which argue that data collection processes differ between modes. From the respondents' perspective, this means that they experience the survey differently depending on the survey mode (i.e., whether they complete a web-based or paper-based questionnaire).
When participating via the web mode, respondents can use different devices, including desktop PCs, laptops, tablets, or smartphones (Gummer et al. 2023). Depending on the type of device, the questions may differ in their presentation (e.g., horizontal or vertical scale alignment) and navigation (e.g., data entry by mouse click or finger tap), which in turn may also influence the respondents' response behavior (Couper/Peterson 2017). To accommodate different device types and ensure a good survey experience, using a responsive questionnaire layout and adapting the layout to smaller screens is especially important (e.g., Antoun et al. 2017; Antoun et al. 2018; Tourangeau et al. 2013). But even then, we expect the survey experience to vary depending on the device used.
Considering the different survey modes and device types, there are different ways or "channels" through which respondents can answer a survey. In self-administered surveys, these include the web or paper mode. In addition, in the web mode, respondents can participate using a desktop, laptop, tablet, or smartphone.
It remains an open question how survey experience relates to obtaining anchors' consent to contact their partners in each channel.
Our study aims to better understand the relationship between the anchors' survey experience and their likelihood of providing consent to invite their partners in self-administered mixed-mode dyadic surveys, considering the multi-channel context of modern surveys. In our efforts, we focus on characteristics under the researcher's control that can be used to improve survey design and optimize consent rates. Our research questions are: 1. How does the anchors' survey experience affect the likelihood of providing consent to invite their partners to a survey?
2. Does the relationship between the anchors' survey experience and providing consent differ between web and paper mode?
3. Does the relationship between the anchors' survey experience and providing consent differ between devices in the web mode?
We relied on the German Family Demography Panel Study (FReDA), which is well-suited to answer our research questions. FReDA is designed as a multi-actor survey that includes anchors and their partners. It utilizes web and paper modes and asks respondents which device they used to complete the survey. In addition, the sample size is large enough to allow for subgroup analyses.
The remainder of this article is structured as follows: In the next section, we describe our data, the measures we used, and our analytical methods. After presenting our results, we close with concluding remarks, practical recommendations for survey research, and opportunities for future studies.

2 Data and method

Data
We drew on the German Family Demography Panel Study (FReDA). FReDA is a German research data infrastructure for family research (Hank et al. 2023; Schneider et al. 2021). FReDA covers demography, sociology, economics, and psychology topics, such as processes and transitions in couples' relationships, fertility and parenthood, economic situation, and attitudes. It also includes sensitive questions about respondents' health, sex life, and sexual orientation. Respondents are surveyed twice a year in self-administered mixed modes using web and paper questionnaires of 20-30 minutes. For the web mode, FReDA uses a responsive questionnaire layout that is optimized for small screens, which affects font and button sizes and question arrangement. In 2021, a new sample of panelists aged between 18 and 49 years was recruited for FReDA (FReDA-GGS sample). For this purpose, a probability-based sample was drawn from German municipalities' population registers. The recruitment survey (W1R) was fielded between the 7th of April and the 29th of June, 2021, and yielded an AAPOR RR2 of 34.92 percent (N=37,783). For W1R, a dedicated and short (10-minute) recruitment questionnaire was used. 26,725 respondents provided their panel consent in W1R to be re-contacted for the subsequent wave W1A, which was fielded between the 7th of July and the 22nd of September, 2021, using a questionnaire of 20-30 minutes. 22,048 respondents completed the W1A questionnaire, resulting in an AAPOR RR2 of 85.4 percent. In W1A, 16,857 anchors who reported currently having a partner were asked to provide consent to contact their partner. These anchors represent the sample cases on which our subsequent analyses are based. We used data release version 2.0.0 of FReDA, which included W1R and W1A data (Bujard et al. 2023).

Dependent variable
As dependent variable, we used the anchors' responses to the question on whether they allowed us to invite their partners to our survey and send them a questionnaire. We created a dummy variable indicating consent (0=no consent; 1=consent). The consent question was asked only of those anchors currently in a relationship. Those without partners were coded as missing.
If respondents did not answer the consent question, they were coded as missing as well, which was the case for 1.54 percent of the analysis sample. We conducted a missing data analysis to support this coding decision, using logistic regression analysis with missing consent information as the dependent variable and the variables we used in our later analyses as independent variables (see below). We found no statistically significant effects for any of the variables. As we did not find systematic differences between those respondents who answered the consent question and those who did not, we assumed that item nonresponse in the dependent variable did not impact our findings.

Independent variables
Survey experience. The anchors' survey experience was measured based on the self-report scale developed by Kaczmirek et al. (2014). The scale was asked at the end of the questionnaire using a battery of 6 items on whether anchors considered the questionnaire as "interesting", "diverse", "important for science", "long", "difficult", and "too personal". For each item, we created a variable ranging from 0 (not at all agree) to 4 (completely agree).
Survey mode. We created a dummy variable indicating whether anchors completed the survey in paper or web mode (0=paper; 1=web).
Device type. Relying on a self-reported question that was asked at the end of the web survey, we created a dummy variable indicating whether respondents used a device with a larger screen size (i.e., desktop, laptop, tablet, or others) or smaller screen size (i.e., smartphone) to complete the survey (0=no smartphone; 1=smartphone).
Control variables. We relied on a set of control variables that previous research had shown to affect participation decisions by potential respondents in panels or survey participation in general. We assumed that reasons that would discourage anchors from participating (again) in a survey might also be relevant to the anchors' consent decision. Our models covered various factors of participation (Watson/Wooden 2009), including the socio-demographic background of anchors such as gender (Behr et al. 2005; Lepkowski/Couper 2002), education (Behr et al. 2005; Wolf et al. 2021), and migration background (Voorpostel/Lipps 2011), the relevance of the survey topic (Gummer/Blumenstiel 2018; Lepkowski/Couper 2002), as well as how respondents were contacted and the infrastructure available to participate in the survey (Watson/Wooden 2009). In our models, we included gender (0=male; 1=female), age (18-25 (ref), 26-30, 31-40, 41+), education (low (ref), intermediate, high), satisfaction with relationship (rating scale, 0-10), urbanicity of the region of residency (rural (ref), small town, city), German language spoken at home (0=no; 1=yes), German citizenship (0=no; 1=yes), and cohabitation with a partner (0=no; 1=yes) as control variables.
Item nonresponse analysis. Table 1 details descriptive statistics for the independent variables (survey experience and control variables) that we included in our analyses. To investigate the sample selection process in our study, we show descriptive statistics for three samples: (i) all anchors with partners, (ii) after listwise deletion of all cases with missing values in the dependent variable (see above), and (iii) after listwise deletion of all cases with missing values in the independent variables. We did not include information on survey mode and device type here, as these are subject to extensive additional robustness analyses (see Section 2.3).
In total, 16,857 anchors reported having a partner and were asked for consent to contact their partners. Of these, 259 were coded as missing because they had not answered the consent question. In a further step, 782 cases were omitted from the analytical sample by listwise deletion due to missing values in the independent variables. In total, 10.08 percent of the sample was omitted due to item nonresponse. To gauge the potential impact of item nonresponse on our findings, we first compared mean values of all substantive variables between our analysis sample and the sample including all anchors with partners. For better comparability, we calculated Cohen's D as a measure of the strength of the differences. As we were not interested in the direction of these differences, we used absolute Cohen's D values. The values ranged from 0.002 to 0.32, and with the exception of the maximum, all remained within a range conventionally considered to indicate no or only small effects. Our missing data analyses thus indicated that listwise deletion had only a weak impact on the distributions of the independent variables we used, making it unlikely that item nonresponse in the independent variables biased our analyses.
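The sample comparison above rests on absolute Cohen's D values per variable. A minimal sketch of this computation, using made-up example values rather than FReDA data, could look as follows:

```python
import numpy as np

def abs_cohens_d(full, analysis):
    """Absolute Cohen's D between two samples for one variable,
    using the pooled standard deviation."""
    full, analysis = np.asarray(full, float), np.asarray(analysis, float)
    n1, n2 = len(full), len(analysis)
    pooled_sd = np.sqrt(((n1 - 1) * full.var(ddof=1) +
                         (n2 - 1) * analysis.var(ddof=1)) / (n1 + n2 - 2))
    return abs(full.mean() - analysis.mean()) / pooled_sd

# Hypothetical example: one 0-4 survey experience item, before and
# after listwise deletion (values are illustrative only)
full_sample = np.array([0, 1, 2, 2, 3, 4, 4, 1, 2, 3])
after_deletion = np.array([1, 2, 2, 3, 4, 4, 2, 3])
d = abs_cohens_d(full_sample, after_deletion)
```

By common convention, values below 0.2 are read as negligible and values between 0.2 and 0.5 as small effects, which is the benchmark applied in the comparison above.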

Models
We fitted logistic regressions with consent as the dependent variable to investigate our research questions. As independent variables, we included the six survey experience items and the control variables, whereby the latter will not be the subject of further interpretation. We computed individual models for each channel (i.e., survey modes and device types) to investigate the consent process in different channels. For instance, we computed separate models on consent for the paper mode and the web mode, as well as models for using a smartphone and not using a smartphone. For all models, we reported average marginal effects (AMEs). To further test the magnitude of the effects of survey experience on consent rates, we estimated predicted probabilities (with all other variables at their means) for those survey experience items for which we found significant effects in the models.
In addition to the separate models for each channel, we calculated two complete models relying on the full sample (i.e., not differentiating between modes) and all web participants (not differentiating between devices), respectively. The two models for survey modes and device types included the same variables as above and a dummy variable for mode or device, respectively.

Robustness checks
We conducted robustness checks in which we used weighting to account for self-selection into modes and devices. We were interested in whether differences in the consent obtained from anchors differed in total between modes and devices (i.e., a total effect). We were not primarily interested in identifying whether these differences stem from mode effects or self-selection into modes and devices.
Weighting for self-selection into survey modes. We utilized inverse probability weighting (IPW) to disentangle mode and self-selection effects. A recent illustration of IPW for the case of missing data is given by Little et al. (2022). The assumptions behind this approach have been discussed in prior research (Gummer/Roßmann 2019; Kreuter/Olson 2011; Little/Vartivarian 2005). To calculate the weights, we fitted a logistic regression with mode choice as the dependent variable and relevant independent variables. We selected variables that prior research (Gummer/Struminskaya 2020; Pforr/Dannwolf 2017) had used to predict mode choice or that were relevant to the survey invitation process in FReDA: gender, region of residency, education, internet use, and contact strategy. Mode choice propensities, based on the regression model, were then used as weighting factors. Respondents likely to participate in their selected mode were weighted down, and those unlikely to participate in their selected mode were weighted up.
As a robustness check of our analyses and to gain additional insights, we reran our regression models with consent as the dependent variable (see above) and used the weights to control for self-selection into survey modes.
Weighting for self-selection into devices. To disentangle device and self-selection effects, we again utilized IPW. As before, we specified a logistic regression with device use (0=no smartphone; 1=smartphone) as the dependent variable and included a set of relevant independent variables that prior research had used to predict device choice and mobile web accessibility. Again, as a robustness check, we reran our regression models with consent as the dependent variable (see above) and used the weights to control for self-selection into devices in the web mode.

Results
Overall, we found that 55.09 percent of the anchors who reported having a partner gave us consent to invite their partners to a survey. The consent rate significantly differed between modes, with 58.00 percent in the web mode and 38.36 percent in the paper mode (χ²(1)=326.33, p<.001). When turning to the different devices used in the web mode, we found smaller but still significant differences, with a consent rate of 60.28 percent for anchors completing the survey on a desktop/laptop/tablet/other device and 56.18 percent for those using a smartphone (χ²(1)=24.23, p<.001). Concerning our first research question on the effect of the anchors' survey experience on consent to invite their partners to a survey, Figure 1 details the results of our logistic regression models (see Table A1 in the Appendix for full regression models). Controlling for covariates, we found that selected aspects of the anchors' survey experience impacted whether consent was provided to invite their partners. In both the web and paper modes, we found substantial and statistically significant effects of whether anchors perceived the questionnaire as "interesting" or "too personal" on the likelihood of consent: the more interesting the anchors found the questionnaire, the more likely they were to give consent; the more the anchors perceived the questionnaire as too personal, the less likely they were to give consent. For the web mode but not the paper mode, we found an effect of how "important for science" and "long" the anchors perceived the questionnaire on the likelihood of providing consent: the more important anchors perceived the questionnaire, the higher the likelihood of consent. We found no significant effects of the other aspects of the questionnaire ("diverse" and "difficult") on the anchors' consent to invite their partners, neither in web nor paper mode. The effects we obtained for the full sample were similar to those we found for the web mode.
Consequently, we found a positive main effect for participating in the web mode for the complete model using the full sample.
As Figure 1, right side, illustrates, weighting for self-selection into modes yielded the same results as the unweighted models for the anchors' perception of the questionnaire as "interesting" and "too personal", as well as "important for science" in the web mode. The effect of how "long" the questionnaire was perceived in the web mode disappeared when weighting, whereas we found effects of the perceived diversity and length of the questionnaire in the paper mode. In the paper mode, when weighting, the effect of perceiving the questionnaire as "too personal" disappeared; weighting also improved the model fit there (unweighted Pseudo R²=0.08, weighted Pseudo R²=0.38). These findings highlight that the total effect of survey experience includes parts that stem from self-selection.
Turning back to the unweighted models that yield the total effects of survey experience, we investigated the magnitude of these effects on consent. Figure 2 illustrates that in both web and paper mode, the likelihood of consent could be changed by making the questionnaire more interesting and decreasing the respondents' feeling of intrusiveness (based on unweighted models). A change in the perception of the questionnaire as "interesting" from "not at all" to "completely" (i.e., minimum to maximum) would result in an increase in the consent probability of 21.41 percentage points in the web mode and 22.06 percentage points in the paper mode. Similarly, a change in the perception of the questionnaire as "too personal" from "completely" to "not at all" would increase the consent probability by 27.92 percentage points in the web mode and 22.06 percentage points in the paper mode. In the web mode only, a change in the perception of the questionnaire as "important for science" from "not at all" to "completely" would increase the consent probability by 7.62 percentage points, and changing the perception of the questionnaire as "long" from "not at all" to "completely" would increase the consent probability by 3.58 percentage points.
Concerning our second research question on differences between modes, for the aspects with higher magnitude ("interesting" and "too personal"), the AMEs and predicted probabilities were surprisingly similar, given that in our first descriptive analyses, the difference in consent rates between modes was 19.64 percentage points. The modes only differed concerning the effects of whether anchors perceived the survey as "important for science" and "long". For these, however, our analyses of the predicted probabilities showed a comparatively small impact on the consent rates. Thus, concerning our second research question on mode differences in the consent process, we can note that the relationship between survey experience and consent partly differed between modes, but the major effects remained similar.

Fig. 1: Average marginal effects of regressions on consent to contact partner by modes, unweighted (top) and weighted for self-selection into modes (bottom)

Turning to our third research question on whether different devices resulted in different survey experiences for anchors and, thus, influenced the relationship between the anchors' survey experience and consent, Figure 3 depicts our device-specific regression results (see Table A2 in the Appendix for full regression models). For both using a smartphone and not using a smartphone, we found effects of whether anchors perceived the survey as "interesting" and "too personal", thus replicating our previous findings. For the aspects "important for science" and "diverse", we found significant effects depending on the device used to complete the survey. The more important the anchors perceived the questionnaire for science, the more likely they were to consent when not completing the survey on a smartphone. In contrast, the more diverse the anchors perceived the questionnaire, the more likely they were to consent when using a smartphone but not when using a different device. These differences remained even after controlling for self-selection into devices using inverse probability weighting (Fig. 3, right side). When weighting, we found an effect of perceiving the questionnaire as "important for science" when using a smartphone.

Fig. 2: Predicted probabilities for survey experience aspects with significant effects in web and paper mode

As before, Figure 4 illustrates that consent rates can be changed by making the questionnaire more "interesting" and reducing the impression of a "too personal" questionnaire, irrespective of the device used to complete the survey. A change in the perception of the questionnaire as "interesting" from "not at all" to "completely" would result in an increase in the consent probability of 21.56 percentage points when using a smartphone and 21.86 percentage points when using another device. Similarly, a change in the perception of the questionnaire as "too personal" from "completely" to "not at all" would increase the consent probability by 25.87 percentage points when using a smartphone and 30.00 percentage points when not using a smartphone. Only when not utilizing a smartphone would a change in the perception of the questionnaire as "important for science" from "not at all" to "completely" increase the consent probability, by 8.33 percentage points. In contrast, when using a smartphone, a change in the perception of the questionnaire as "diverse" from "not at all" to "completely" would increase the consent probability by 8.16 percentage points.

Fig. 4: Predicted probabilities for survey experience aspects with significant effects when using a smartphone or no smartphone in the web mode
Note: Line = predicted probabilities, shaded area = 95% confidence intervals, based on unweighted models. Source: Own calculations based on FReDA W1R and W1A.

Concerning our third research question on device differences in the consent process, we can note that the relationship between survey experience and consent partly differed between devices, but the major effects remained similar.

Discussion and conclusion
With the present study, we sought to address the research gap on obtaining the anchors' consent to invite their partners to participate in a dyadic survey in self-administered mixed modes (web and paper). Specifically, we set out to investigate the importance of six dimensions of the anchors' survey experience (i.e., perceiving the questionnaire as diverse, interesting, important for science, long, difficult, or too personal) for obtaining their consent to invite their partners to a dyadic survey.
In conclusion, our study shows that a positive survey experience on the part of the anchors increases the likelihood that we obtain the anchors' consent to invite their partners, making dyadic surveys possible in the first place. Maximizing the consent rate is key to increasing the statistical power of dyadic analyses. In FReDA, 55.09 percent of those anchors with partners provided consent to invite their partners to a survey. Across different channels of completing the survey (i.e., different survey modes and device types), we found that whether anchors perceived the questionnaire as "interesting" or "too personal" had an impact on the consent rates. In addition, regarding the anchors' perception of the questionnaire as "important for science", "long", and "diverse", we found that it depended on the channel used whether these aspects of survey experience impacted consent rates. Regarding their magnitude and potential to change consent rates, our analyses yielded that whether anchors perceived the questionnaire as "interesting" or "too personal" had the greatest potential to affect consent.
Our findings have implications for survey research and practice. First, our focus on the survey experience of anchors was motivated by the fact that this experience can at least indirectly be influenced by survey design. It is thus in the hands of the researchers, for example, which survey modes are offered, whether a responsive questionnaire layout optimized for smartphones is used, what the main topics of the questionnaire are, how many questions are asked, and how many questions relate to sensitive topics. If researchers want to improve anchors' consent rates in self-administered mixed-mode surveys, we recommend choosing content that respondents find interesting and that relates to their background, regardless of which channels for participating in the survey are offered. In addition, researchers should refrain from asking sensitive questions in the questionnaire, or at least not too many of them. Cognitive pretesting methods might yield insights on which topics and questions to include or exclude to achieve this goal. In this regard, our study provides first insights that future research can utilize to allocate its resources effectively. Experimental studies are needed to identify and develop elements of survey design that improve respondents' survey experience and their perceptions of a survey as "interesting" and not "too personal". Our study suggests that changes along these dimensions will most likely improve anchors' consent rates to survey their partners in dyadic surveys.
Second, with our findings on the relationship between survey experience and consent, we point to an opportunity to employ an adaptive survey design (Schouten et al. 2018; Tourangeau et al. 2017) to increase consent rates among specific groups of respondents. Such groups might include anchors who are underrepresented in a survey and for whom low consent rates would mean that partnership data would be even more sparse in the final data set (e.g., persons with a migration background or low education). These groups could be specifically targeted with content they previously rated as interesting or with a limited amount of sensitive content to improve their survey experience. However, in this case, comparability issues due to question order effects and survey burden should be carefully considered. Further, as stated above, we recommend using experimental designs to expand on our findings and to investigate the feasibility of adaptive survey design techniques.
Third, following the previous reasoning, dyadic data might be biased because specific couples are underrepresented in a survey. Socio-demographic characteristics (e.g., education, income) cannot be changed; the survey experience of the anchor, however, can. Consequently, it is possible to increase the likelihood of obtaining anchors' consent to invite their partners and, thus, to gather complete couple data. Again, adaptive survey designs might target anchors at risk of not providing consent. Such measures could increase sample balance and thus reduce bias (Schouten et al. 2009; Schouten et al. 2016).
As always, our study is not without limitations that warrant further research. First, surveying the anchors' partners in dyadic surveys is at least a three-step process comprising consent, invitation, and participation. Our study focused on obtaining anchors' consent, as we see merit in investigating each step separately. This separation enabled us to focus on the role of the anchors' survey experience in consent. When widening the scope, other factors come into play that need to be considered. For instance, regarding the partners' participation process, the information included in the invitation letter and the design of the contact method and materials can be assumed to impact the participation decision. Yet, these factors are of no relevance to the anchors' consent decision, which happens before the partner is invited. Given the lack of research on the participation of partners in dyadic surveys, we argue that, as a starting point, it is useful to investigate each step separately to learn about the specific mechanisms at play. For future research, we see merit in studies that go beyond our limited perspective and consider the complete three-step process. Such research could investigate the selection processes occurring at each step, comparing each step's relevance for obtaining net partner cases and how these steps might be related (e.g., spill-over effects). In such studies, an overarching theoretical framework should be developed and tested that includes all steps and factors at the design, anchor, partner, and context levels. Again, we encourage exploratory studies to provide first insights before tackling this complex multi-step process with subsequent experimental studies.
Second, we focused on FReDA, a large-scale survey that allowed us to investigate in detail different channels of participation, including different survey modes and devices in the web mode, while controlling for a set of covariates. Nevertheless, replication in different countries is warranted to test the generalizability of our results. Furthermore, the target population of FReDA is relatively young (18 to 49 years), and consent processes might differ for older cohorts.
Third, utilizing FReDA enabled us to draw on a high-quality probability-based sample. Yet, we were unable to implement an experimental study on consent. Thus, our capability to make causal claims is limited. To cope with this challenge, we included control variables in our regression models, conducted missing data analyses, and performed extensive robustness checks for modes and devices. Nevertheless, we cannot rule out the possibility of unobserved heterogeneity impacting our findings. Thus, we call for experimental studies that expand on our findings and investigate the effects of specific survey design characteristics on anchors' consent rates to survey their partners. It would be an interesting addition to experimentally vary key features of a questionnaire (e.g., content, difficulty, length, number of sensitive questions) and of survey materials such as invitation letters (e.g., different reasons for participation, the credibility of mentioned survey providers) to study their relationship with providing consent. Our study provides first insights that can inform the hypotheses and experimental designs of these future studies. The latter seems especially important when experiments are implemented with a probability-based sample, which is expensive and limits the number of experimental groups that can be included. We investigated six survey experience dimensions; conducting a full-factorial experiment with six independent design variations is undoubtedly beyond the resources of many researchers. Our study suggests that design characteristics affecting how interesting and how sensitive respondents perceive a survey to be should be investigated first, thus providing insights that may help to limit the complexity of subsequent experimental research.
Fourth, as a robustness check, we used an inverse probability weighting (IPW) approach to disentangle self-selection into modes and devices from mode and device effects. As FReDA is a newly established panel survey and we drew on early waves, the information available to model both self-selection processes was relatively sparse. The success of IPW depends on these models. Thus, we encourage further research that revisits our research questions with more data, more elaborate models, or different analytical approaches.
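To illustrate the general logic of such an IPW robustness check (not the exact FReDA specification, whose covariates and models are described in the paper), the following minimal Python sketch with simulated data estimates each respondent's probability of self-selecting into a mode from covariates and then reweights the outcome model for consent by the inverse of that probability. All variable names and covariates here are hypothetical placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical covariates (e.g., standardized age and education)
# assumed to drive self-selection into the web mode.
X = rng.normal(size=(n, 2))

# Simulated mode choice: 1 = web, 0 = mail (placeholder mechanism).
p_web_true = 1.0 / (1.0 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
web = rng.binomial(1, p_web_true)

# Step 1: model self-selection into the web mode.
selection_model = LogisticRegression().fit(X, web)
p_hat = selection_model.predict_proba(X)[:, 1]

# Step 2: inverse probability weights. Web cases receive 1/p,
# mail cases 1/(1-p), so each mode group resembles the full sample.
weights = np.where(web == 1, 1.0 / p_hat, 1.0 / (1.0 - p_hat))

# Step 3: reweighted outcome model, e.g., consent regressed on
# covariates (placeholder outcome; real analyses would include the
# survey experience measures as predictors).
consent = rng.binomial(1, 0.55, size=n)
outcome_model = LogisticRegression().fit(X, consent, sample_weight=weights)
```

The key design point is that IPW only removes selection bias to the extent that the selection model captures the true self-selection process, which is exactly why sparse covariate information in early panel waves limits this check.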