Gaining Consent to Survey Respondents’ Partners: The Importance of Anchors’ Survey Experience in Self-administered Modes*

Tobias Gummer, Pablo Christmann, Tanja Kunz

Abstract: Dyadic surveys aim to interview pairs of respondents, such as partners in 
a relationship. In dyadic surveys, it is often necessary to obtain the anchors’ consent 
to contact their partners and invite them to a survey. If the survey is operated in 
self-administered modes, no interviewer is present to improve the consent rate, 
for example, by providing convincing arguments and additional information. To 
overcome the challenges posed by self-administered modes for dyadic surveys 
and to improve consent rates, it is important to identify aspects that positively 
influence the likelihood of anchors giving consent to contact their partners. Ideally,
these aspects are in the hands of the researchers, such as the survey design and 
aspects of the questionnaire. Thus, in this study, we analyzed the relationship 
between anchors’ survey experience and their willingness to consent to surveying 
their partners in self-administered modes. Based on data from the German Family 
Demography Panel Study (FReDA), we found that the anchors’ perceptions of the 
questionnaire as “interesting” or “too personal” were related to consent rates. These 
relationships were consistent across different survey modes and devices. Effects of 
other aspects of the questionnaire, such as “important for science” and “diverse” 
varied between modes and devices. We concluded with practical recommendations 
for survey research and an outlook for future research.

Keywords: Consent · Dyadic survey · Self-administered modes · Panel survey · 
Survey experience

Comparative Population Studies
Vol. 48 (2023): 281-306 (Date of release: 06.07.2023)

Federal Institute for Population Research 2023
URL: www.comparativepopulationstudies.de
DOI: https://doi.org/10.12765/CPoS-2023-12
URN: urn:nbn:de:bib-cpos-2023-12en8

* This article belongs to a special issue on “Family Research and Demographic Analysis – New 
Insights from the German Family Demography Panel Study (FReDA)”.




1 Introduction

Dyadic surveys aim to interview pairs of respondents (Barton et al. 2020). In many 
instances, dyadic surveys focus on partners in a relationship (spouse or intimate 
partner); other examples include friends (e.g., Chow et al. 2013) or kinships such 
as parents and children (e.g., Kalmijn/Liefbroer 2011). Depending on the sampling 
approach, different designs exist for how respondents are invited to participate in a
dyadic survey (Pasteels 2015). When relying on register-based samples, individuals 
are first sampled and invited to participate in a survey. These target persons,
hereafter referred to as “anchors”, are then asked for their consent to contact their 
partners. Only if the anchors’ consent is obtained are the partners contacted
and invited to participate in the survey as well. Pasteels (2015) termed this approach 
a singular multi-actor design. 

In face-to-face mode, interviewers are present to conduct the anchor interview. 
Interviewers can positively impact the anchors’ survey experience (cf. West/Blom 
2017) and, thus, possibly their willingness to consent to the additional interviewing 
of their partners or children (Schröder et al. 2016). At best, the other household 
members are also present, so the interview can be conducted directly with a 
partner or child. If the person in question is not on-site or is preoccupied for other 
reasons, an appointment can be arranged. Alternatively, the interviewer can leave 
the invitation letter and the questionnaire for the partner at the anchor’s home. 
In pairfam (“Panel Analysis of Intimate Relationships and Family Dynamics”), for 
example, this approach has been successfully implemented (Huinink et al. 2011).

In self-administered modes, when no interviewers are involved, the process 
of inviting additional persons to a dyadic survey is different. Anchors must first
obtain their partners’ consent, for example, their agreement to be contacted by
a survey institute. Anchors must also provide their partners’ contact information
(i.e., names and addresses), which is especially important if the anchor and their
partner live in separate households. Only then can survey invitations be sent to
partners living in the same or outside the anchors’ household. In the case of a self-
administered survey, therefore, it is even more challenging to obtain the anchors’ 
consent to survey their partners than in face-to-face studies, because there is no 
interviewer present to provide additional information about the benefits of the
request and convince the anchors to agree to contact and invite their partners. 

Surveying partners in self-administered dyadic surveys is a three-step process: 
First, an anchor person must provide consent to approach the partner with a survey 
invitation (consent); second, the anchor needs to provide valid contact information, 
so a partner can then be invited (invitation); and third, the partner must decide to 
participate in the survey (participation). The loss of partners is amplified across
these three steps of selection (Starks et al. 2015). In our view, it is important to 
investigate each step separately, so one can disentangle and better understand 
the mechanisms at work. We argue that this will help us find ways to increase the
number of partners who can successfully be surveyed in self-administered modes. 
In our study, we focus on the consent step. Having said that, we argue that further 




research into the other two steps is warranted to obtain a complete picture of the 
process of realizing a partner interview in dyadic surveys.

Although much research exists on respondents’ survey consent (e.g., Jenkins 
et al. 2006; Singer 1993, 2003) or their consent to data linking with, for example, 
population registers, sensors, apps, or paradata (e.g., Kunz/Gummer 2020; Sakshaug 
et al. 2012; Sakshaug/Kreuter 2012), research is still lacking regarding the process 
of anchors consenting to interviews of related persons (e.g., partners, children,
parents, friends) in self-administered modes. We consider it plausible that the decision
about consent differs in a dyadic survey because it is not about anchors
agreeing to share their own data, but allowing contact with another person and 
potentially burdening them with the task of participating in a survey. Unfortunately, 
previous research on consent in dyadic surveys has solely investigated face-to-
face settings with a focus on the role of interviewers in these consent situations 
(Kalmijn/Liefbroer 2011; Schmiedeberg et al. 2016; Schröder et al. 2012; Schröder 
et al. 2016).

To overcome the challenges posed by self-administered modes for dyadic 
surveys and to improve anchors’ consent to contact their partners, it is important to 
identify aspects of a survey that positively influence the likelihood of anchors giving
consent. Ideally, these aspects are in the hands of the survey researchers, such as 
the survey design and aspects of the questionnaire. The existing gap in research 
on anchors’ consent to contact their partners in self-administered dyadic surveys 
makes this a formidable challenge. Currently, researchers do not know which 
design aspects to focus their attention and resources on and which aspects to test 
in experiments. Experimental studies, especially when conducted with probability-
based samples, can easily become laborious and costly. 

In the present study, we analyze the relationship between anchors’ survey 
experience and their willingness to consent to survey their partners in self-
administered dyadic surveys. We argue that how anchors experienced the survey 
themselves is a critical factor in their consent to contact their partners for a survey 
as well. Thus, we assume that anchors who have had a positive experience 
participating in a survey are more likely to convince their partners to participate 
and give consent to invite them. A variety of aspects influence respondents’ survey
experience. Among those that are under the researchers’ control are the content of 
a questionnaire (e.g., Silber et al. 2021), the inclusion of sensitive questions (e.g., 
Tourangeau/Yan 2007), or the length of a questionnaire (e.g., Galesic/Bosnjak 2009). 
Even if these factors are the same for all respondents, the individual perception is 
likely to vary between them (e.g., respondents will differ in whether they perceive a 
questionnaire as interesting) (e.g., Yan/Williams 2022). However, aspects that vary 
across respondents and can also impact the survey experience are survey mode (de 
Leeuw/Berzelak 2016) and, in the case of web mode, the device used to complete 
the questionnaire (Couper/Peterson 2017).

Self-administered general population surveys are often designed as mixed-mode
surveys featuring web and paper modes (e.g., Luijkx et al. 2021; Wolf et al. 2021), so as not to
introduce a coverage error for parts of the population who have no internet access 
or do not want to participate via the web (Blom et al. 2017; Cornesse/Schaurer 




2021). We follow the reasoning of previous studies on survey mode systems (e.g., 
Struminskaya et al. 2015) which argue that data collection processes differ between 
modes. From the respondents’ perspective, this means that they experience the 
survey differently depending on the survey mode (i.e., whether they complete a 
web-based or paper-based questionnaire). 

When participating via the web mode, respondents can use different devices, 
including desktop PCs, laptops, tablets, or smartphones (Gummer et al. 2023). 
Depending on the type of device, the questions may differ in their presentation (e.g., 
horizontal or vertical scale alignment) and navigation (e.g., data entry by mouse click 
or finger tap), which in turn may also influence the respondents’ response behavior
(Couper/Peterson 2017). To accommodate different device types and ensure a good 
survey experience, using a responsive questionnaire layout and adapting the layout 
to smaller screens is especially important (e.g., Antoun et al. 2017; Antoun et al. 
2018; Tourangeau et al. 2013). But even then, we expect the survey experience to 
vary depending on the device used. 

Considering the different survey modes and device types, there are different 
ways or “channels” through which respondents can answer a survey. In self-
administered surveys, these include the web or paper mode. In addition, in the web 
mode, respondents can participate using a desktop, laptop, tablet, or smartphone. 
It remains an open question how survey experience relates to obtaining anchors’ 
consent to contact their partners in each channel.

Our study aims to better understand the relationship between the anchors’ survey
experience and their likelihood of providing consent to invite their partners in self-
administered mixed-mode dyadic surveys, considering the multi-channel context of 
modern surveys. In our efforts, we focus on characteristics under the researcher’s 
control that can be used to improve survey design and optimize consent rates. Our 
research questions are:

1. How does the anchors’ survey experience affect the likelihood of providing 
consent to invite their partners to a survey?

2. Does the relationship between the anchors’ survey experience and providing 
consent differ between web and paper mode?

3. Does the relationship between the anchors’ survey experience and providing 
consent differ between devices in the web mode?

We relied on the German Family Demography Panel Study (FReDA), which is 
well-suited to answer our research questions. FReDA is designed as a multi-actor 
survey that includes anchors and their partners. It utilizes web and paper modes 
and asks respondents which device they used to complete the survey. In addition, 
the sample size is large enough to allow for subgroup analyses.

The remainder of this article is structured as follows: In the next section, we 
describe our data, the measures we used, and our analytical methods. After 




presenting our results, we close with concluding remarks, practical recommendations 
for survey research, and opportunities for future studies. 

2 Data and method

2.1 Data

We drew on the German Family Demography Panel Study (FReDA). FReDA is 
a German research data infrastructure for family research (Hank et al. 2023; 
Schneider et al. 2021). FReDA covers demography, sociology, economics, and 
psychology topics, such as processes and transitions in couples’ relationships, 
fertility and parenthood, economic situation, and attitudes. It also includes sensitive 
questions about respondents’ health, sex life, and sexual orientation.  Respondents 
are surveyed twice a year in self-administered mixed modes using web and paper 
questionnaires of 20-30 minutes. For the web mode, FReDA uses a responsive
questionnaire layout that is optimized for small screens, adapting font and button
sizes as well as question arrangement.

In 2021, a new sample of panelists aged between 18 and 49 years was recruited 
for FReDA (FReDA-GGS sample). For this purpose, a probability-based sample was 
drawn from German municipalities’ population registers. The recruitment survey 
(W1R) was fielded between the 7th of April and the 29th of June, 2021, and yielded
an AAPOR RR2 of 34.92 percent (N=37,783). For W1R, a dedicated and short (10 
minutes) recruitment questionnaire was used. 26,725 respondents provided their 
panel consent in W1R to be re-contacted for the subsequent wave W1A, which was 
fielded between the 7th of July and the 22nd of September, 2021, using a questionnaire
of 20-30 minutes. 22,048 respondents completed the W1A questionnaire, resulting 
in an AAPOR RR2 of 85.4 percent. In W1A, 16,857 anchors who reported currently 
having a partner were asked to provide consent to contact their partner. These 
anchors constitute the sample on which our subsequent analyses are based.
We used data release version 2.0.0 of FReDA, which included W1R and W1A data
(Bujard et al. 2023).

2.2 Operationalization

Dependent variable

As the dependent variable, we used the anchors’ responses to the question of whether
they allowed us to invite their partners to our survey and send them a questionnaire. 
We created a dummy variable indicating consent (0=no consent; 1=consent). The 
consent question was asked only of those anchors currently in a relationship. Those 
without partners were coded as missing. 

If respondents did not answer the consent question, they were coded as missing 
as well, which was the case for 1.54 percent of the analysis sample. We conducted 
a missing data analysis to support this coding decision, using logistic regression




analysis with missing consent information as dependent variable and the variables 
we used in our later analyses as independent variables (see below). We found no 
statistically significant effects for any of the variables. As we did not find systematic
differences between those respondents who answered the consent question and 
those who did not, we assumed that item nonresponse in the dependent variable 
did not impact our findings.

Independent variables

Survey experience. The anchors’ survey experience was measured based on the 
self-report scale developed by Kaczmirek et al. (2014). The scale was asked at the end 
of the questionnaire using a battery of six items on whether anchors considered the
questionnaire as “interesting”, “diverse”, “important for science”, “long”, “difficult”,
and “too personal”. For each item, we created a variable ranging from 0 (not at all 
agree) to 4 (completely agree). 

Survey mode. We created a dummy variable indicating whether anchors 
completed the survey in paper or web mode (0=paper; 1=web).

Device type. Relying on a self-reported question that was asked at the end of 
the web survey, we created a dummy variable indicating whether respondents 
used a device with a larger screen size (i.e., desktop, laptop, tablet, or others) or 
smaller screen size (i.e., smartphone) to complete the survey (0=no smartphone; 
1=smartphone). 

Control variables. We relied on a set of control variables that previous research 
had shown to affect participation decisions by potential respondents in panels or 
survey participation in general. We assumed that reasons that would discourage 
anchors from participating (again) in a survey might also be relevant to the anchors’ 
consent decision. Our models covered various factors of participation (Watson/
Wooden 2009), including the socio-demographic background of anchors such as 
gender (Behr et al. 2005; Lepkowski/Couper 2002), education (Behr et al. 2005; Wolf 
et al. 2021), and migration background (Voorpostel/Lipps 2011), the relevance of 
the survey topic (Gummer/Blumenstiel 2018; Lepkowski/Couper 2002), as well as 
how respondents were contacted and the infrastructure available to participate in 
the survey (Watson/Wooden 2009). In our models, we included gender (0=male; 
1=female), age (18-25 (ref), 26-30, 31-40, 41+), education (low (ref), intermediate, 
high), satisfaction with relationship (rating scale, 0-10), urbanicity of the region of 
residency (rural (ref), small town, city), German language spoken at home (0=no; 
1=yes), German citizenship (0=no; 1=yes), and cohabitation with a partner (0=no; 
1=yes) as control variables. 

Item nonresponse analysis. Table 1 details descriptive statistics for the 
independent variables (survey experience and control variables) that we included 
in our analyses. To investigate the sample selection process in our study, we show 
descriptive statistics for three samples: (i) all anchors with partners, (ii) after listwise 
deletion of all cases with missing values in the dependent variable (see above), 
and (iii) after listwise deletion of all cases with missing values in the independent 




Tab. 1: Sample selection process and descriptive statistics of substantive variables

                             All anchors with partners     Omitting missings in dependent variable     Analysis sample
Variable                     Mean   Min.  Max.  N          Mean   Min.  Max.  N                        Mean   Min.  Max.  N        Cohen's D

Survey experience
... interesting              2.950  0     4     16,515     2.951  0     4     16,437                   2.955  0     4     15,845   0.143
... diverse                  2.983  0     4     16,513     2.983  0     4     16,436                   2.986  0     4     15,845   0.114
... important for science    2.886  0     4     16,315     2.887  0     4     16,239                   2.889  0     4     15,845   0.128
... long                     2.368  0     4     16,494     2.367  0     4     16,417                   2.357  0     4     15,845   0.244
... difficult                0.894  0     4     16,479     0.894  0     4     16,402                   0.888  0     4     15,845   0.204
... too personal             2.294  0     4     16,529     2.292  0     4     16,450                   2.280  0     4     15,845   0.268
female                       0.578  0     1     16,816     0.578  0     1     16,557                   0.574  0     1     15,845   0.162
Age
... 18-25                    0.145  0     1     16,850     0.145  0     1     16,591                   0.147  0     1     15,845   0.136
... 26-30                    0.157  0     1     16,850     0.157  0     1     16,591                   0.159  0     1     15,845   0.082
... 31-40                    0.400  0     1     16,850     0.400  0     1     16,591                   0.400  0     1     15,845   0.008
... 41+                      0.299  0     1     16,850     0.298  0     1     16,591                   0.295  0     1     15,845   0.161
Education
... low                      0.082  0     1     16,757     0.081  0     1     16,499                   0.079  0     1     15,845   0.193
... intermediate             0.377  0     1     16,757     0.376  0     1     16,499                   0.375  0     1     15,845   0.036
... high                     0.542  0     1     16,757     0.543  0     1     16,499                   0.546  0     1     15,845   0.141
relationship satisfaction    8.259  0     10    16,782     8.261  0     10    16,528                   8.264  0     10    15,845   0.043
Urbanicity
... rural                    0.229  0     1     16,857     0.229  0     1     16,598                   0.228  0     1     15,845   0.029
... smaller town             0.376  0     1     16,857     0.376  0     1     16,598                   0.376  0     1     15,845   0.001
... city                     0.395  0     1     16,857     0.395  0     1     16,598                   0.395  0     1     15,845   0.024
German at home               0.932  0     1     16,803     0.932  0     1     16,544                   0.936  0     1     15,845   0.319
no citizenship               0.058  0     1     16,816     0.058  0     1     16,557                   0.057  0     1     15,845   0.113
living together              0.833  0     1     16,841     0.833  0     1     16,582                   0.832  0     1     15,845   0.047

Note: Cohen’s D presented as absolute values.
Source: Own calculations based on FReDA W1R and W1A



variables. We did not include information on survey mode and device type here, 
as these are subject to extensive additional robustness analyses (see Section 2.3).

In total, 16,857 anchors reported having a partner and were asked for consent 
to contact their partners. Of these, 259 were coded as missing because they had 
not answered the consent question. In a further step, 782 cases were omitted 
from the analytical sample by listwise deletion due to missing values in the 
independent variables. In total, 10.08 percent of the sample was omitted due to item 
nonresponse. To gauge the potential impact of item nonresponse on our findings,
we first compared mean values of all substantive variables between our analysis
sample and the sample including all anchors with partners. For better comparability,
we calculated Cohen’s D as a measure of the strength of the differences. As we
were not interested in the direction of these differences, we used absolute Cohen’s D
values. These values ranged from 0.002 to 0.32; with the exception of the
maximum, all remained within a range that can be considered to indicate no or only
small effects. Our missing data analyses thus indicated that listwise deletion had
only a weak impact on the distributions of the independent variables we used in
our analyses, and item nonresponse for the independent variables is unlikely to
bias our results.
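The mean comparison behind these checks can be sketched in a few lines. The following is a minimal pure-Python illustration (not the authors’ actual code; the function name and sample values are hypothetical), using the pooled-standard-deviation form of Cohen’s D:

```python
import math

def cohens_d(group_a, group_b):
    """Absolute Cohen's D between two samples, using the pooled
    standard deviation (Bessel-corrected variances)."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return abs(mean_a - mean_b) / pooled_sd

# Hypothetical example: compare a variable in the full sample vs. the analysis sample.
full_sample = [1, 2, 3, 4, 5]
analysis_sample = [2, 3, 4, 5, 6]
print(round(cohens_d(full_sample, analysis_sample), 3))  # 0.632
```

Taking absolute values, as in Table 1, discards the direction of a difference and keeps only its magnitude.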

2.3 Method

Models

We fitted logistic regressions with consent as the dependent variable to investigate
our research questions. As independent variables, we included the six survey 
experience items and the control variables, whereby the latter will not be the 
subject of further interpretation. We computed individual models for each channel 
(i.e., survey modes and device types) to investigate the consent process in different 
channels. For instance, we computed separate models on consent for the paper 
mode and the web mode, as well as models for using a smartphone and not using 
a smartphone. For all models, we reported average marginal effects (AMEs). To 
further test the magnitude of the effects of survey experience on consent rates, we 
estimated predicted probabilities (with all other variables at their means) for those 
survey experience items for which we found significant effects in the models.
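For a continuous predictor in a logistic regression, the AME is the sample average of β_k·p_i·(1−p_i). A minimal sketch of this computation (illustrative only; the coefficients and data below are made up, and this is not the statistical software used by the authors):

```python
import math

def sigmoid(z):
    """Inverse logit link."""
    return 1.0 / (1.0 + math.exp(-z))

def ame_continuous(X, beta, k):
    """Average marginal effect of continuous predictor k in a logistic
    model: mean over observations of beta[k] * p_i * (1 - p_i)."""
    effects = []
    for row in X:
        p = sigmoid(sum(b * x for b, x in zip(beta, row)))
        effects.append(beta[k] * p * (1.0 - p))
    return sum(effects) / len(effects)

# Hypothetical coefficients: intercept and one survey-experience item.
beta = [0.0, 1.0]
X = [[1.0, 0.0], [1.0, 2.0]]  # column 0 is the constant
print(round(ame_continuous(X, beta, k=1), 3))
```

Averaging the observation-level effects, rather than evaluating the derivative at the mean of the covariates, is what distinguishes AMEs from marginal effects at the mean.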

In addition to the separate models for each channel, we calculated two complete 
models relying on the full sample (i.e., not differentiating between modes) and 
all web participants (not differentiating between devices), respectively. The two 
models for survey modes and device types included the same variables as above 
and a dummy variable for mode or device, respectively. 

Robustness checks

We conducted robustness checks in which we used weighting to account for self-
selection into modes and devices. We were interested in whether differences in 
the consent obtained from anchors differed in total between modes and devices 




(i.e., a total effect). We were not primarily interested in identifying whether these 
differences stem from mode effects or self-selection into modes and devices. 

Weighting for self-selection into survey modes. We utilized inverse probability 
weighting (IPW) to disentangle mode and self-selection effects. A recent illustration 
of IPW for the case of missing data is given by Little et al. (2022). Assumptions 
behind this approach have been discussed in prior research (Gummer/Roßmann 
2019; Kreuter/Olson 2011; Little/Vartivarian 2005). To calculate the weights, we 
fitted a logistic regression with mode choice as the dependent variable and relevant
independent variables. We selected variables that prior research (Gummer/
Struminskaya 2020; Pforr/Dannwolf 2017) had used to predict mode choice or 
that were relevant to the survey invitation process in FReDA: gender, region of 
residency, education, internet use, and contact strategy. Mode choice propensities, 
based on the regression model, were then used as weighting factors. Respondents 
likely to participate in their selected mode were weighted down, and those unlikely 
to participate in their selected mode were weighted up. 
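The weight construction described above can be sketched as follows; this is a minimal illustration of the IPW idea, not the authors’ implementation, under the assumption that each respondent is weighted by the inverse of the estimated propensity of the mode they actually chose (the function name and values are hypothetical):

```python
def ipw_weight(p_web, chose_web):
    """Inverse probability weight for a respondent's observed mode choice.

    p_web: model-based propensity of choosing the web mode.
    chose_web: True if the respondent actually participated via the web.
    Likely choices are weighted down; unlikely choices are weighted up.
    """
    p_chosen = p_web if chose_web else 1.0 - p_web
    return 1.0 / p_chosen

# A respondent very likely to choose web who did choose web gets a small weight;
# a respondent with the same propensity who chose paper gets a large one.
print(ipw_weight(0.8, True))            # 1.25
print(round(ipw_weight(0.8, False), 6))  # 5.0
```

In practice, the propensities p_web would come from the fitted mode-choice logistic regression described above.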

As a robustness check of our analyses and to gain additional insights, we reran 
our regression models with consent as the dependent variable (see above) and used 
the weights to control for self-selection into survey modes. 

Weighting for self-selection into devices. To disentangle the device and self-
selection effects, we again utilized IPW. As before, we specified a logistic regression
with device use (0=no smartphone; 1=smartphone) as the dependent variable and 
included a set of relevant independent variables that prior research had used to 
predict device choice and mobile web accessibility (de Bruijne/Wijnant 2013; Fuchs/
Busse 2009; Gummer et al. 2019; Toepoel/Lugtig 2014): gender, age, education, 
and urbanicity. Device choice propensities obtained from the model were used as 
weighting factors. As device choice only applied to respondents in the web mode, 
we combined the mode choice and device choice weights. 

Again, as a robustness check, we reran our regression models with consent as 
the dependent variable (see above) and used the weights to control for self-selection 
into devices in the web mode.

3 Results

Overall, we found that 55.09 percent of the anchors who reported having a partner 
gave us consent to invite their partners to a survey. The consent rate differed
significantly between modes, with 58.00 percent in the web mode and 38.36 percent in
the paper mode (χ²(1)=326.33, p<.001). Turning to the different devices used
in the web mode, we found smaller but still significant differences, with a consent
rate of 60.28 percent for anchors completing the survey on a desktop/laptop/tablet/
other device and 56.18 percent for those using a smartphone (χ²(1)=24.23, p<.001).
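Mode differences in consent rates of this kind can be tested with a Pearson chi-square test of independence on the 2×2 table of mode by consent. A minimal sketch with made-up cell counts (not FReDA’s data):

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic (1 df) for a 2x2 contingency table
    given as [[a, b], [c, d]], e.g. rows = mode, columns = consent."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Illustrative counts only (consent yes/no by mode):
print(round(chi_square_2x2([[10, 20], [20, 10]]), 3))  # 6.667
```

The resulting statistic is compared against the chi-square distribution with one degree of freedom to obtain the p-value.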

Concerning our first research question on the effect of the anchors’ survey
experience on consent to invite their partners to a survey, Figure 1 details the 
results of our logistic regression models (see Table A1 in the Appendix for full 
regression models). Controlling for covariates, we found that selected aspects of 




the anchors’ survey experience impacted whether consent was provided to invite 
their partners. In the web and paper mode, we found substantial and statistically 
significant effects of whether anchors perceived the questionnaire as “interesting”
or “too personal” on the likelihood of consent: the more interesting the anchors 
found the questionnaire, the more likely they were to give consent; the more the 
anchors perceived the questionnaire as too personal, the less likely they were to 
give consent. For the web mode but not the paper mode, we found effects of how
“important for science” and “long” the anchors perceived the questionnaire on the
likelihood of providing consent: the more important anchors perceived the
questionnaire to be, the higher the likelihood of consent. We found no significant effects
for the other aspects of the questionnaire (“diverse” and “difficult”) on the anchors’
consent to invite their partners – neither in web nor paper mode. The effects we 
obtained for the full sample were similar to those we found for the web-mode model.
Consequently, we found a positive main effect for participating in the web mode for 
the complete model using the full sample.

As Figure 1, right side, illustrates, weighting for self-selection into modes 
yielded the same results as without using weights for the anchors’ perception of the 
questionnaire as “interesting” and “too personal” as well as “important for science” 
in the web mode. The effects of how “long” the questionnaire was perceived in the 
web mode disappeared when weighting, whereas we found effects of perceived 
diversity and length of the questionnaire in the paper mode. Here, when weighting,
the effect of perceiving the questionnaire as “too personal” disappeared. In the
paper mode, weighting also improved the model fit (unweighted Pseudo R²=0.08,
weighted Pseudo R²=0.38). These findings highlight that the total effect of survey
experience includes parts that stem from self-selection.

Turning back to the unweighted models that yield the total effects of survey 
experience, we investigated the magnitude of these effects on consent. Figure 2 
illustrates that in web and paper mode, the likelihood of consent could be changed by 
making the questionnaire more interesting and decreasing the respondents’ feeling 
of intrusiveness (based on unweighted models). A change in the perception of the 
questionnaire as “interesting” from “not at all” to “completely” (i.e., minimum to 
maximum) would result in an increase in the consent probability by 21.41 percentage 
points in the web mode and 22.06 percentage points in the paper mode. Similarly, a 
change in the perception of the questionnaire as “too personal” from “completely” to 
“not at all” would increase the consent probability by 27.92 percentage points in the 
web mode and 22.06 percentage points in the paper mode. Only in the web mode, 
a change in the perception of the questionnaire as “important for science” from “not 
at all” to “completely” would increase the consent probability by 7.62 percentage 
points. Changing the perception of the questionnaire as “long” from “not at all” to 
“completely” would increase the consent probability by 3.58 percentage points.
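Such min-to-max changes in predicted probabilities can be computed by evaluating the fitted logistic model at the item’s minimum and maximum while holding all other variables at their means. A minimal sketch with hypothetical coefficients (not the estimates reported here):

```python
import math

def sigmoid(z):
    """Inverse logit link."""
    return 1.0 / (1.0 + math.exp(-z))

def min_max_change(beta, x_at_means, k, lo=0.0, hi=4.0):
    """Change in predicted consent probability when item k moves from its
    minimum to its maximum, holding all other variables at their means."""
    x_lo = list(x_at_means)
    x_hi = list(x_at_means)
    x_lo[k], x_hi[k] = lo, hi
    p = lambda x: sigmoid(sum(b * v for b, v in zip(beta, x)))
    return p(x_hi) - p(x_lo)

# Hypothetical coefficients: intercept and one experience item on a 0-4 scale.
beta = [-1.0, 0.5]
print(round(min_max_change(beta, [1.0, 2.0], k=1), 3))  # 0.462
```

Because the logit link is nonlinear, this difference in probabilities depends on where the other covariates are held, which is why the means are fixed explicitly.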

Concerning our second research question on differences between modes, for 
the aspects with higher magnitude (“interesting” and “too personal”), the AMEs and 
predicted probabilities were surprisingly similar, given that in our first descriptive
analyses, the difference in consent rates between modes was 19.64 percentage 
points. The modes only differed concerning effects of whether anchors perceived 




Fig. 1: Average marginal effects of regressions on consent to contact partner 
by modes, unweighted (top) and weighted for self-selection into modes 
(bottom)

[Figure: average marginal effects for the survey assessment items (interesting, diverse, important for science, long, difficult, too personal) and the control variables (female; age, ref. 18-25; education, ref. low; relationship satisfaction; municipality, ref. rural; german at home; no citizenship; living together; web mode), plotted separately for mail mode, web mode, and total; unweighted panel on top, weighted panel on the bottom.]

Source: Own calculations based on FReDA W1R and W1A



•    Tobias Gummer, Pablo Christmann, Tanja Kunz292

the survey as “important for science” and “long”. For these, however, our analyses 
of the predicted probabilities showed a comparatively small impact on the consent 
rates. Thus, concerning our second research question on mode differences in the 
consent process, we can note that the relationship between survey experience and 
consent partly differed between modes, but the major effects remained similar. 

Turning to our third research question on whether different devices resulted 
in different survey experiences for anchors and, thus, influenced the relationship 
between the anchors' survey experience and consent, Figure 3 depicts our device-
specific regression results (see Table A2 in the Appendix for full regression models). 
Both when using a smartphone and when not using a smartphone, we found effects 
of whether anchors perceived the survey as "interesting" and "too personal", thus 
replicating our previous findings. For the aspects "important for science" and 
"diverse", we found significant effects depending on the device used to complete 
the survey. The more important the anchors perceived the questionnaire to be for 
science, the more likely they were to consent when not completing the survey on a 
smartphone. In contrast, the more diverse the anchors perceived the questionnaire, 
the more likely they were to consent when using a smartphone but not when using 
a different device. These differences remained even after controlling for self-
selection into devices using weights (Fig. 3, bottom). When weighting, we found an 
effect of perceiving the questionnaire as "important for science" when using a 
smartphone.

Fig. 2: Predicted probabilities for survey experience aspects with significant 
effects in web and paper mode

[Figure: predicted consent probabilities, Pr(consent), plotted against the ratings 
0-4; web mode panels: interesting, important for science, long, too personal; paper 
mode panels: interesting, too personal.]

Note: Line = predicted probabilities, shaded area = 95% confidence intervals, based 
on unweighted models.
Source: Own calculations based on FReDA W1R and W1A

Fig. 3: Average marginal effects of regressions on consent to contact partner 
by devices used in the web mode, unweighted (top) and weighted for 
self-selection into devices (bottom)

[Figure: coefficient plots of average marginal effects for the survey assessment 
items and the control variables, shown for desktop/laptop, smartphone, and the 
total sample, in an unweighted and a weighted panel.]

Source: Own calculations based on FReDA W1R and W1A

As before, Figure 4 illustrates that consent rates can be changed by making the 
questionnaire more "interesting" and by reducing the impression of a "too personal" 
questionnaire, irrespective of the device used to complete the survey. A change in 
the perception of the questionnaire as "interesting" from "not at all" to "completely" 
would result in an increase in the consent probability by 21.56 percentage points 
when using a smartphone and 21.86 percentage points when using another 
device. Similarly, a change in the perception of the questionnaire as "too personal" 
from "completely" to "not at all" would increase the consent probability by 25.87 
percentage points when using a smartphone and 30.00 percentage points when 
not using a smartphone. Only when not using a smartphone would a change in the 
perception of the questionnaire as "important for science" from "not at all" to 
"completely" increase the consent probability, by 8.33 percentage points. Conversely, 
only when using a smartphone would a change in the perception of the questionnaire 
as "diverse" from "not at all" to "completely" increase the consent probability, 
by 8.16 percentage points.

Fig. 4: Predicted probabilities for survey experience aspects with significant 
effects when using a smartphone or no smartphone in the web mode

[Figure: predicted consent probabilities, Pr(consent), plotted against the ratings 
0-4; no-smartphone panels: interesting, important for science, too personal; 
smartphone panels: interesting, diverse, too personal.]

Note: Line = predicted probabilities, shaded area = 95% confidence intervals, based 
on unweighted models.
Source: Own calculations based on FReDA W1R and W1A




Concerning our third research question on device differences in the consent 
process, the relationship between survey experience and consent partly differed 
between devices, but the major effects remained similar.

4 Discussion and conclusion

With the present study, we sought to address the research gap on obtaining the 
anchors' consent to invite their partners to participate in a dyadic survey in self-
administered mixed modes (web and paper). Specifically, we set out to investigate 
the importance of six dimensions of the anchors' survey experience (i.e., perceiving 
the questionnaire as diverse, interesting, important for science, long, difficult, or 
too personal) for obtaining their consent to invite their partners to a dyadic survey. 
In conclusion, our study shows that a positive survey experience on the part of the 
anchors increases the likelihood that we obtain the anchors' consent to invite their 
partners, making dyadic surveys possible in the first place. Maximizing the consent 
rate is key to increasing the statistical power of dyadic analyses.

In FReDA, 55.09 percent of the anchors with partners provided consent to 
invite their partners to a survey. Across different channels of completing the survey 
(i.e., different survey modes and device types), we found that whether anchors 
perceived the questionnaire as "interesting" or "too personal" had an impact on the 
consent rates. In addition, regarding the anchors' perception of the questionnaire 
as "important for science", "long", and "diverse", whether these aspects of survey 
experience impacted consent rates depended on the channel used. Regarding their 
magnitude and potential to change consent rates, our analyses showed that whether 
anchors perceived the questionnaire as "interesting" or "too personal" had the 
greatest potential to affect consent.

Our findings have implications for survey research and practice. First, our focus 
on the survey experience of anchors was motivated by the fact that this experience 
can at least indirectly be influenced by survey design. Thus, it is in the hands of the 
researcher, for example, which survey modes are offered, whether a responsive 
questionnaire layout optimized for smartphones is used, what the main topics of 
the questionnaire are, how many questions are asked, and how many questions 
relate to sensitive topics. If researchers want to improve anchors' consent rates 
in self-administered mixed-mode surveys, we recommend choosing content that 
respondents find interesting and that relates to their background, regardless of 
which channels to participate in the survey are offered. In addition, researchers 
should refrain from asking sensitive questions in the questionnaire, or at least not 
too many of them. Cognitive pretesting methods might yield insights into which 
topics and questions to include or exclude to achieve this goal. In this regard, our 
study provides first insights that future research can utilize to effectively allocate 
its resources. Experimental studies are needed to identify and develop elements of 
survey design that improve respondents' survey experience and their perceptions 
of a survey as "interesting" and not "too personal". Our study suggests that changes 
along these dimensions will most likely improve anchors' consent rates to survey 
their partners in dyadic surveys.

Second, with our findings on the relationship between survey experience and 
consent, we point to an opportunity to employ an adaptive survey design (Schouten 
et al. 2018; Tourangeau et al. 2017) to increase consent rates among specific groups 
of respondents. Such groups might include anchors who are underrepresented in 
a survey and for whom low consent rates would mean that partnership data 
become even more sparse in the final data set (e.g., persons with a migration 
background or low education). These groups could be specifically targeted with 
content they previously rated as interesting or with a limited amount of sensitive 
content to improve their survey experience. However, in this case, comparability 
issues due to question order effects and survey burden should be carefully 
considered. Further, as stated above, we recommend using experimental designs 
to expand on our findings and to investigate the feasibility of adaptive survey design 
techniques.
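A minimal sketch of such a targeting rule, under the assumption that consent propensities have already been predicted for each anchor; the function name, cutoff share, and propensity values are hypothetical:

```python
import numpy as np

def adaptive_assignment(p_consent, share_targeted=0.25):
    """Flag the share of anchors with the lowest predicted consent propensity
    for an adapted (e.g., less burdensome or tailored) questionnaire."""
    p = np.asarray(p_consent, dtype=float)
    cutoff = np.quantile(p, share_targeted)
    return p <= cutoff

# Hypothetical predicted propensities for eight anchors
p_hat = np.array([0.81, 0.34, 0.62, 0.27, 0.55, 0.48, 0.90, 0.22])
targeted = adaptive_assignment(p_hat)
print(p_hat[targeted])
```

In practice, the propensities would come from a consent model estimated on earlier waves, and the targeted group would receive the adapted design in the next contact attempt.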

Third, following the previous reasoning, dyadic data might be biased 
because specific couples are underrepresented in a survey. Socio-demographic 
characteristics (e.g., education, income) cannot be changed; the survey experience 
of the anchor, however, can. Consequently, it is possible to increase the likelihood 
of obtaining anchors' consent to invite their partners and, thus, to gather complete 
couple data. Again, adaptive survey designs might target anchors at risk of not 
providing consent. Such measures could increase sample balance and thus reduce 
bias (Schouten et al. 2009; Schouten et al. 2016).

As always, our study is not without limitations that warrant further research. First, 
surveying the anchors' partners in dyadic surveys is at least a three-step process 
that includes consent, invitation, and participation. Our study focused on 
obtaining anchors' consent, as we see merit in investigating each step separately. 
This separation enabled us to focus on the role of the anchors' survey experience 
in consent. When widening the scope, other factors come into play that need to 
be considered. For instance, regarding the partners' participation process, the 
information included in the invitation letter and the design of the contact method 
and materials can be assumed to impact the participation decision. Yet, these 
factors are of no relevance to the anchors' consent decision, which happens 
before the partner is invited. Given the lack of research on the participation of partners 
in dyadic surveys, we argue that, as a starting point, it is useful to investigate 
each step to learn about the specific mechanisms at play. For future research, we 
see merit in studies that go beyond our limited perspective and consider the 
complete three-step process. Such research could investigate the selection processes 
happening at each step, comparing each step's relevance for obtaining net partner 
cases and how these steps might be related (e.g., spill-over effects). In such studies, 
an overarching theoretical framework should be developed and tested, including 
all steps and factors at the design, anchor, partner, and context levels. Again, we 
encourage exploratory studies to provide first insights before tackling this complex 
multi-step process with subsequent experimental studies.




Second, we focused on FReDA, a large-scale survey that allowed us to go into 
detail and investigate different channels of participation, including different 
survey modes and devices in the web mode, while controlling for a set of covariates. 
Nevertheless, replication in different countries is warranted to test the generalizability 
of our results. Furthermore, the target population of FReDA is relatively young (18 to 
49 years), and consent processes might differ for older cohorts.

Third, utilizing FReDA enabled us to draw on a high-quality probability-based 
sample. Yet, we were unable to implement an experimental study on consent. Thus, 
our capability to make causal claims is limited. To cope with this challenge, we included 
control variables in our regression models, conducted missing data analyses, and 
performed extensive robustness checks for modes and devices. Nevertheless, we 
cannot rule out the possibility of unobserved heterogeneity impacting our findings. 
Thus, we call for experimental studies that expand on our findings and investigate 
the effects of specific survey design characteristics on anchors' consent rates to 
survey their partners. It would be an interesting addition to experimentally vary 
key features of a questionnaire (e.g., content, difficulty, length, number of sensitive 
questions) and of survey materials such as invitation letters (e.g., different reasons 
for participation, the credibility of the mentioned survey providers) to study their 
relationship with providing consent. Our study provides first insights that can 
inform the hypotheses and experimental designs of these future studies. The latter 
seems especially important when implementing such experiments with a probability-
based sample, which is likely expensive and limits the number of experimental groups 
that can be included. We investigated six survey experience dimensions; conducting 
a full-factorial experiment with six independent design variations is undoubtedly 
beyond the means of many researchers. Our study suggests that design characteristics 
affecting how interesting and how sensitive respondents perceive a survey to be 
should be investigated first, thus providing insights that may help to limit the 
complexity of subsequent experimental research.

Fourth, we used an IPW approach to disentangle self-selection into modes and 
devices from mode and device effects as a robustness check. As FReDA is a newly 
established panel survey and we drew on its early waves, the information available 
to model both self-selection processes was relatively sparse. The success of IPW 
depends on these models. Thus, we encourage further research that revisits our 
research questions with more data, more elaborate models, or different analytical 
approaches.

Acknowledgements
This work was funded by the German Federal Ministry of Education and Research 
(BMBF) as part of FReDA (grant number 01UW2001B).




References

Antoun, Christopher; Couper, Mick P.; Conrad, Frederick G. 2017: Effects of Mobile 
versus PC Web on Survey Response Quality: A Crossover Experiment in a Probability 
Web Panel. In: Public Opinion Quarterly 81,1: 280-306. 
https://doi.org/10.1093/poq/nfw088

Antoun, Christopher et al. 2018: Design Heuristics for Effective Smartphone 
Questionnaires. In: Social Science Computer Review 36,5: 557-574. 
https://doi.org/10.1177/0894439317727072

Barton, Allen W. et al. 2020: “Will You Complete This Survey Too?” Differences Between 
Individual Versus Dyadic Samples in Relationship Research. In: Journal of Family 
Psychology 34,2: 196-203. https://doi.org/10.1037/fam0000583

Behr, Andreas; Bellgardt, Egon; Rendtel, Ulrich 2005: Extent and Determinants of Panel 
Attrition in the European Community Household Panel. In: European Sociological 
Review 21,5: 489-512. https://doi.org/10.1093/esr/jci037

Blom, Annelies G. et al. 2017: Does the Recruitment of Offline Households Increase the 
Sample Representativeness of Probability-Based Online Panels? Evidence From the 
German Internet Panel. In: Social Science Computer Review 35,4: 498-520. 
https://doi.org/10.1177/0894439316651584

Bujard, Martin et al. 2023: FReDA – The German Family Demography Panel Study, ZA7777 
Data File Version 2.0.0. Cologne: GESIS.

Chow, Chong Man; Ruhl, Holly; Buhrmester, Duane 2013: The mediating role of 
interpersonal competence between adolescents’ empathy and friendship quality: A 
dyadic approach. In: Journal of Adolescence 36,1: 191-200. 
https://doi.org/10.1016/j.adolescence.2012.10.004

Cornesse, Carina; Schaurer, Ines 2021: The Long-Term Impact of Different Offline 
Population Inclusion Strategies in Probability-Based Online Panels: Evidence From 
the German Internet Panel and the GESIS Panel. In: Social Science Computer Review 
39,4: 687-704. https://doi.org/10.1177/0894439320984131

Couper, Mick P.; Peterson, Gregg J. 2017: Why Do Web Surveys Take Longer on 
Smartphones? In: Social Science Computer Review 35,3: 357-377. 
https://doi.org/10.1177/0894439316629932

de Bruijne, Marika; Wijnant, Arnaud 2013: Comparing Survey Results Obtained via Mobile 
Devices and Computers: An Experiment With a Mobile Web Survey on a Heterogeneous 
Group of Mobile Devices Versus a Computer-Assisted Web Survey. In: Social Science 
Computer Review 31,4: 482-504. https://doi.org/10.1177/0894439313483976

de Leeuw, Edith; Berzelak, Nejc 2016: Survey Mode or Survey Modes? In: Wolf, Christof 
et al. (Eds.): The SAGE Handbook of Survey Methodology. London: SAGE: 142-156.

Fuchs, Marek; Busse, Britta 2009: The Coverage Bias of Mobile Web Surveys Across 
European Countries. In: International Journal of Internet Science 4,1: 21-33. 

Galesic, Mirta; Bosnjak, Michael 2009: Effects of Questionnaire Length on Participation 
and Indicators of Response Quality in a Web Survey. In: Public Opinion Quarterly 73,2: 
349-360. https://doi.org/10.1093/poq/nfp031

Gummer, Tobias; Blumenstiel, Jan Eric 2018: Experimental Evidence on Reducing 
Nonresponse Bias through Case Prioritization: The Allocation of Interviewers. In: Field 
Methods 30,2: 124-139. https://doi.org/10.1177/1525822X18757967




Gummer, Tobias et al. 2023: Is There a Growing Use of Mobile Devices in Web Surveys? 
Evidence from 128 Web Surveys in Germany. In: Quality & Quantity. 
https://doi.org/10.1007/s11135-022-01601-8

Gummer, Tobias; Quoß, Franziska; Roßmann, Joss 2019: Does increasing mobile device 
coverage reduce heterogeneity in completing web surveys on smartphones? In: Social 
Science Computer Review 37,3: 371-384. https://doi.org/10.1177/0894439318766836

Gummer, Tobias; Roßmann, Joss 2019: The effects of propensity score weighting 
on attrition biases in attitudinal, behavioral, and socio-demographic variables in a 
short-term web-based panel survey. In: International Journal of Social Research 
Methodology 22,1: 81-95. https://doi.org/10.1080/13645579.2018.1496052

Gummer, Tobias; Struminskaya, Bella 2020: Early and Late Participation during the 
Field Period: Response Timing in a Mixed-Mode Probability-Based Panel Survey. In: 
Sociological Methods & Research. https://doi.org/10.1177/0049124120914921

Hank, Karsten et al. 2023: A new data infrastructure for family research and demographic 
analysis: The German Family Demography Panel Study (FReDA). In: European 
Sociological Review [forthcoming].

Huinink, Johannes et al. 2011: Panel analysis of intimate relationships and family dynamics 
(pairfam): Conceptual framework and design. In: Zeitschrift für Familienforschung 
23,1: 77-101. https://doi.org/10.20377/jfr-235

Jenkins, Stephen P. et al. 2006: Patterns of consent: evidence from a general household 
survey. In: Journal of the Royal Statistical Society A (Statistics in Society) 169,4: 701-
722. https://doi.org/10.1111/j.1467-985X.2006.00417.x

Kaczmirek, Lars et al. 2014: GESIS Online Panel Pilot: multitopic introductory wave 
(survey 1). ZA5582 Data file Version 1.0.0. Cologne: GESIS Data Archive. 
https://doi.org/10.4232/1.11570

Kalmijn, Matthijs; Liefbroer, Aart C. 2011: Nonresponse of Secondary Respondents 
in Multi-Actor Surveys: Determinants, Consequences, and Possible Remedies. In: 
Journal of Family Issues 32,6: 735-766. https://doi.org/10.1177/0192513X10390184

Kreuter, Frauke; Olson, Kristen 2011: Multiple auxiliary variables in nonresponse 
adjustment. In: Sociological Methods & Research 40,2: 311-332. 
https://doi.org/10.1177/0049124111400042

Kunz, Tanja; Gummer, Tobias 2020: Understanding respondents’ attitudes toward web 
paradata use. In: Social Science Computer Review 38,6: 739-753. 
https://doi.org/10.1177/0894439319826904

Lepkowski, James M.; Couper, Mick P. 2002: Nonresponse in the Second Wave 
of Longitudinal Household Surveys. In: Groves, Robert M. et al. (Eds.): Survey 
Nonresponse. New York: Wiley: 259-272. 

Little, Roderick J. A.; Vartivarian, Sonya L. 2005: Does weighting for nonresponse 
increase the variance of survey means? In: Survey Methodology 31,2: 161-168. 

Little, Roderick J.; Carpenter, James R.; Lee, Katherine J. 2022: A Comparison of 
Three Popular Methods for Handling Missing Data: Complete-Case Analysis, Inverse 
Probability Weighting, and Multiple Imputation. In: Sociological Methods & Research. 
https://doi.org/10.1177/00491241221113873

Luijkx, Ruud et al. 2021: The European Values Study 2017: On the way to the future using 
mixed-modes. In: European Sociological Review 37,2: 330-346. 
https://doi.org/10.1093/esr/jcaa049

Pasteels, Inge 2015: How to weight survey data with a dyadic multi-actor design? In: 
Survey Methods: Insights from the Field. https://doi.org/10.13094/SMIF-2015-00007




Pforr, Klaus; Dannwolf, Tanja 2017: What do we lose with online-only surveys? Estimating 
the bias in selected political variables due to online mode restriction. In: Statistics, 
Politics and Policy 8,1: 105-120. https://doi.org/10.1515/spp-2016-0004

Sakshaug, Joseph W. et al. 2012: Linking Survey and Administrative Records: 
Mechanisms of Consent. In: Sociological Methods & Research 41,4: 535-569. 
https://doi.org/10.1177/0049124112460381

Sakshaug, Joseph W.; Kreuter, Frauke 2012: Assessing the Magnitude of Non-Consent 
Biases in Linked Survey and Administrative Data. In: Survey Research Methods 6,2: 
113-122. https://doi.org/10.18148/srm/2012.v6i2.5094

Schmiedeberg, Claudia; Castiglioni, Laura; Schröder, Jette 2016: Secondary Respondent 
Consent in the German Family Panel. In: Bulletin of Sociological Methodology 131,1: 
66-77. https://doi.org/10.1177/0759106316642707

Schneider, Norbert F. et al. 2021: Family Research and Demographic Analysis (FReDA): 
Evolution, Framework, Objectives, and Design of “The German Family Demography 
Panel Study”. In: Comparative Population Studies 46: 149-186. 
https://doi.org/10.12765/CPoS-2021-06

Schouten, Barry; Cobben, Fannie; Bethlehem, Jelke 2009: Indicators for the 
representativeness of survey response. In: Survey Methodology 35,1: 101-113. 

Schouten, Barry et al. 2016: Does more balanced survey response imply less non-
response bias? In: Journal of the Royal Statistical Society Series A 179,3: 727-748. 
https://doi.org/10.1111/rssa.12152 

Schouten, Barry; Peytchev, Andy; Wagner, James 2018: Adaptive Survey Design. Boca 
Raton: Routledge.

Schröder, Jette et al. 2012: The Influence of Relationship Quality on the Participation 
of Secondary Respondents: Results from the German Family Panel. In: Comparative 
Population Studies 37,3-4: 591-614. https://doi.org/10.12765/CPoS-2012-07

Schröder, Jette; Schmiedeberg, Claudia; Castiglioni, Laura 2016: The effect of 
interviewers’ motivation and attitudes on respondents’ consent to contact secondary 
respondents in a multi-actor design. In: Survey Methods: Insights from the Field. 
https://doi.org/10.13094/SMIF-2016-00005

Silber, Henning et al. 2021: The Effects of Question, Respondent, and Interviewer 
Characteristics on Two Types of Item Nonresponse. In: Journal of the Royal Statistical 
Society 184,3: 1052-1069. https://doi.org/10.1111/rssa.12703

Singer, Eleanor 1993: Informed Consent and Survey Response: A Summary of the 
Empirical Literature. In: Journal of Official Statistics 9,2: 361-375. 

Singer, Eleanor 2003: Exploring the Meaning of Consent: Participation in Research and 
Beliefs about Risks and Benefits. In: Journal of Official Statistics 19,3: 273-285. 

Starks, Tyrel J.; Millar, Brett M.; Parsons, Jeffrey T. 2015: Correlates of Individual Versus 
Joint Participation in Online Survey Research with Same-Sex Male Couples. In: AIDS 
and Behavior 19: 963-969. https://doi.org/10.1007/s10461-014-0962-1

Struminskaya, Bella; de Leeuw, Edith; Kaczmirek, Lars 2015: Mode System Effects in an 
Online Panel Study: Comparing a Probability-Based Online Panel with Two Face-to-
Face Reference Surveys. In: methods, data, analyses 9,1: 3-56. 
https://doi.org/10.12758/mda.2015.001

Toepoel, Vera; Lugtig, Peter 2014: What Happens if You Offer a Mobile Option to Your 
Web Panel? Evidence From a Probability-Based Panel of Internet Users. In: Social 
Science Computer Review 32,4: 544-560. https://doi.org/10.1177/0894439313510482




Tourangeau, Roger et al. 2017: Adaptive and responsive survey designs: a review and 
assessment. In: Journal of the Royal Statistical Society 180,1: 203-223. 
https://doi.org/10.1111/rssa.12186

Tourangeau, Roger; Conrad, Frederick G.; Couper, Mick P. 2013: The Science of Web 
Surveys. New York, NY: Oxford University Press.

Tourangeau, Roger; Yan, Ting 2007: Sensitive Questions in Surveys. In: Psychological 
Bulletin 133,5: 859-883. http://dx.doi.org/10.1037/0033-2909.133.5.859

Voorpostel, Marieke; Lipps, Oliver 2011: Attrition in the Swiss Household Panel: Is 
Change Associated with Drop-out? In: Journal of Official Statistics 27,2: 301-318. 

Watson, Nicole; Wooden, Mark 2009: Identifying Factors Affecting Longitudinal Survey 
Response. In: Lynn, Peter (Ed.): Methodology of Longitudinal Surveys. Chichester: 
Wiley: 157-182.

West, Brady T.; Blom, Annelies G. 2017: Explaining Interviewer Effects: A Research 
Synthesis. In: Journal of Survey Statistics and Methodology 5,2: 175-211. 
https://doi.org/10.1093/jssam/smw024

Wolf, Christof et al. 2021: Conducting General Social Surveys as Self-Administered 
Mixed-Mode Surveys. In: Public Opinion Quarterly 85,2: 623-648. 
https://doi.org/10.1093/poq/nfab039

Yan, Ting; Williams, Douglas 2022: Response Burden – Review and Conceptual 
Framework. In: Journal of Official Statistics 38,4: 939-961. 
https://doi.org/10.2478/jos-2022-0041

Date of submission: 27.10.2022  Date of acceptance: 14.04.202

Dr. Tobias Gummer, Dr. Pablo Christmann, Dr. Tanja Kunz. GESIS – Leibniz Institute 
for the Social Sciences. Mannheim, Germany. 
E-mail: tobias.gummer@gesis.org; pablo.christmann@gesis.org; tanja.kunz@gesis.org
URL: https://www.gesis.org/en/institute/staff/person/tobias.gummer

https://www.gesis.org/en/institute/staff/person/Pablo.Christmann
https://www.gesis.org/en/institute/staff/person/Tanja.Kunz




Appendix

Tab. A1: Logistic regressions on consent to contact partner by modes

                          web                       mail                      total
                          unweighted   weighted     unweighted   weighted     unweighted   weighted
Parameter                 coeff (se)   coeff (se)   coeff (se)   coeff (se)   coeff (se)   coeff (se)

Survey Experience
interesting               0.219***     0.240***     0.276**      1.461**      0.226***     0.300**
                          (0.032)      (0.033)      (0.084)      (0.477)      (0.030)      (0.111)
diverse                   0.050        0.049        -0.032       -0.632*      0.038        -0.025
                          (0.030)      (0.030)      (0.075)      (0.307)      (0.028)      (0.097)
important for science     0.078**      0.084**      0.020        -0.274       0.072**      0.069
                          (0.026)      (0.026)      (0.064)      (0.215)      (0.024)      (0.080)
long                      -0.037*      -0.027       -0.034       -0.724***    -0.039*      -0.147**
                          (0.018)      (0.018)      (0.043)      (0.174)      (0.017)      (0.056)
difficult                 0.012        0.007        -0.007       -0.078       0.011        0.088
                          (0.022)      (0.022)      (0.052)      (0.216)      (0.020)      (0.069)
too personal              -0.302***    -0.293***    -0.235***    -0.054       -0.291***    -0.228***
                          (0.019)      (0.019)      (0.046)      (0.170)      (0.017)      (0.059)
female                    -0.453***    -0.466***    -0.627***    1.472***     -0.473***    -0.423***
                          (0.038)      (0.038)      (0.100)      (0.407)      (0.035)      (0.121)
Age
18-25                     Ref.         Ref.         Ref.         Ref.         Ref.         Ref.
26-30                     -0.162*      -0.196**     -0.023       -4.881***    -0.147*      -0.564*
                          (0.072)      (0.072)      (0.187)      (0.926)      (0.067)      (0.232)
31-40                     -0.376***    -0.388***    -0.271       -1.647**     -0.366***    -0.616**
                          (0.065)      (0.065)      (0.163)      (0.539)      (0.060)      (0.199)
41+                       -0.825***    -0.826***    -0.746***    -3.256***    -0.818***    -1.262***
                          (0.067)      (0.068)      (0.170)      (0.559)      (0.063)      (0.211)
Education
low                       Ref.         Ref.         Ref.         Ref.         Ref.         Ref.
intermediate              -0.022       -0.079       -0.005       -2.069***    -0.036       -0.486*
                          (0.075)      (0.075)      (0.157)      (0.626)      (0.067)      (0.231)
high                      0.239**      0.171*       -0.072       -0.535       0.180**      -0.098
                          (0.073)      (0.074)      (0.160)      (0.634)      (0.067)      (0.227)
relationship satisfaction 0.085***     0.073***     0.046        -0.347**     0.079***     -0.033
                          (0.011)      (0.011)      (0.027)      (0.117)      (0.010)      (0.035)
Urbanicity
rural                     Ref.         Ref.         Ref.         Ref.         Ref.         Ref.
smaller town              0.006        -0.030       0.239*       -0.100       0.038        -0.177
                          (0.049)      (0.049)      (0.119)      (0.411)      (0.045)      (0.154)
city                      0.080        0.053        0.124        -1.698***    0.087        -0.300
                          (0.050)      (0.051)      (0.125)      (0.442)      (0.046)      (0.157)
German at home            0.025        0.046        0.346        0.746        0.052        0.154
                          (0.089)      (0.089)      (0.207)      (1.297)      (0.081)      (0.281)
no citizenship            0.205*       0.257**      -0.254       -3.632**     0.162        -0.420
                          (0.094)      (0.096)      (0.255)      (1.272)      (0.086)      (0.287)
living together           1.023***     1.004***     1.107***     1.631**      1.030***     1.074***
                          (0.057)      (0.058)      (0.154)      (0.530)      (0.053)      (0.191)
web mode                                                                      0.813***     1.175***
                                                                              (0.051)      (0.123)
Intercept                 -0.949***    -0.901***    -1.629***    2.840        -1.704***    -0.437
                          (0.189)      (0.191)      (0.469)      (2.096)      (0.179)      (0.612)

N                         13,716       13,709       2,129        2,127        15,845       15,836
Pseudo R²                 0.084        0.082        0.076        0.382        0.092        0.134

Note: coeff = regression coefficient, se = standard error, * p<0.05, ** p<0.01, *** p<0.001.
Source: Own calculations based on FReDA W1R and W1A



•    Tobias Gummer, Pablo Christmann, Tanja Kunz304

Tab. A2: Logistic regressions on consent to contact partner by device

Parameter                   pc unweighted      pc weighted        smartphone unweighted  smartphone weighted  total unweighted   total weighted
Survey Experience
interesting                 0.226*** (0.047)   0.277*** (0.048)   0.219*** (0.045)   0.216*** (0.046)   0.221*** (0.032)   0.244*** (0.033)
diverse                     0.020 (0.044)      -0.007 (0.044)     0.083* (0.041)     0.112** (0.042)    0.053 (0.030)      0.056 (0.030)
important for science       0.087* (0.039)     0.101* (0.040)     0.067 (0.035)      0.072* (0.035)     0.076** (0.026)    0.085** (0.026)
long                        -0.050 (0.027)     -0.045 (0.027)     -0.010 (0.025)     0.012 (0.026)      -0.030 (0.018)     -0.016 (0.018)
difficult                   0.039 (0.032)      0.042 (0.032)      -0.016 (0.029)     -0.021 (0.030)     0.009 (0.022)      0.008 (0.022)
too personal                -0.334*** (0.028)  -0.311*** (0.028)  -0.274*** (0.025)  -0.285*** (0.026)  -0.300*** (0.019)  -0.297*** (0.019)
female                      -0.362*** (0.056)  -0.379*** (0.057)  -0.498*** (0.052)  -0.506*** (0.052)  -0.435*** (0.038)  -0.445*** (0.038)
Age
18-25                       Ref.               Ref.               Ref.               Ref.               Ref.               Ref.
26-30                       -0.141 (0.114)     -0.143 (0.110)     -0.205* (0.093)    -0.260** (0.098)   -0.168* (0.072)    -0.199** (0.073)
31-40                       -0.379*** (0.104)  -0.350*** (0.100)  -0.411*** (0.084)  -0.478*** (0.088)  -0.388*** (0.065)  -0.416*** (0.066)
41+                         -0.869*** (0.105)  -0.764*** (0.103)  -0.857*** (0.090)  -0.919*** (0.092)  -0.855*** (0.068)  -0.841*** (0.069)
Education
low                         Ref.               Ref.               Ref.               Ref.               Ref.               Ref.
intermediate                0.191 (0.123)      -0.016 (0.114)     -0.157 (0.095)     -0.165 (0.101)     -0.032 (0.075)     -0.101 (0.076)
high                        0.396*** (0.120)   0.186 (0.112)      0.116 (0.094)      0.121 (0.099)      0.214** (0.074)    0.145 (0.074)
relationship satisfaction   0.102*** (0.016)   0.083*** (0.016)   0.068*** (0.015)   0.066*** (0.015)   0.085*** (0.011)   0.075*** (0.011)
Urbanicity
rural                       Ref.               Ref.               Ref.               Ref.               Ref.               Ref.
smaller town                0.023 (0.075)      -0.060 (0.073)     -0.004 (0.065)     -0.016 (0.067)     0.008 (0.049)      -0.038 (0.050)
city                        0.028 (0.075)      -0.048 (0.075)     0.113 (0.068)      0.111 (0.069)      0.078 (0.050)      0.038 (0.051)




Tab. A2: Continuation

Parameter                   pc unweighted      pc weighted        smartphone unweighted  smartphone weighted  total unweighted   total weighted
German at home              -0.042 (0.135)     0.054 (0.130)      0.081 (0.120)      0.025 (0.122)      0.029 (0.089)      0.038 (0.089)
no citizenship              0.130 (0.141)      0.225 (0.140)      0.269* (0.128)     0.256* (0.131)     0.201* (0.095)     0.232* (0.095)
living together             0.945*** (0.084)   0.848*** (0.085)   1.110*** (0.078)   1.141*** (0.080)   1.031*** (0.057)   0.996*** (0.058)
device                      –                  –                  –                  –                  -0.154*** (0.038)  -0.122** (0.037)
Intercept                   -0.944*** (0.281)  -0.806** (0.274)   -1.013*** (0.260)  -1.030*** (0.267)  -0.884*** (0.190)  -0.844*** (0.191)
N                           6,288              6,285              7,421              7,417              13,709             13,702
Pseudo R²                   0.089              0.081              0.081              0.085              0.085              0.082

Note: coeff = regression coefficient, se = standard error (in parentheses); * p<0.05, ** p<0.01, *** p<0.001.
Source: Own calculations based on FReDA W1R and W1A.



Published by
Federal Institute for Population Research 
(BiB)
65180 Wiesbaden / Germany

Managing Publisher
Dr. Nikola Sander

© 2023

Editor 
Prof. Frans Willekens

Managing Editor
Dr. Katrin Schiefer

Editorial Assistant
Beatriz Feiler-Fuchs
Wiebke Hamann

Layout
Beatriz Feiler-Fuchs

E-mail: cpos@bib.bund.de

Scientific Advisory Board
Kieron Barclay (Stockholm)
Karsten Hank (Cologne)
Ridhi Kashyap (Oxford)
Natalie Nitsche (Rostock)
Alyson van Raalte (Rostock)
Pia S. Schober (Tübingen)
Rainer Wehrhahn (Kiel)

Comparative Population Studies

www.comparativepopulationstudies.de

ISSN: 1869-8980 (Print) – 1869-8999 (Internet)

Board of Reviewers
Bruno Arpino (Barcelona)
Laura Bernardi (Lausanne)
Gabriele Doblhammer (Rostock)
Anette Eva Fasang (Berlin)
Michael Feldhaus (Oldenburg)
Alexia Fürnkranz-Prskawetz (Vienna)
Birgit Glorius (Chemnitz)
Fanny Janssen (Groningen)
Frank Kalter (Mannheim)
Stefanie Kley (Hamburg)
Bernhard Köppen (Koblenz)
Anne-Kristin Kuhnt (Rostock)
Hill Kulu (St Andrews)
Nadja Milewski (Wiesbaden)
Roland Rau (Rostock)
Thorsten Schneider (Leipzig)
Tomas Sobotka (Vienna)
Jeroen J. A. Spijker (Barcelona)
Heike Trappe (Rostock)
Helga de Valk (The Hague)
Sergi Vidal (Barcelona)
Michael Wagner (Cologne)