URN:NBN:fi:tsv-oa77626
DOI: 10.11143/fennia.77626
Generalization, epistemology and concrete: what can social
sciences learn from the common sense of engineers
SIMONE TULUMELLO
Tulumello, S. (2019) Generalization, epistemology and concrete: what can
social sciences learn from the common sense of engineers. Fennia 197(1)
121–131. https://doi.org/10.11143/fennia.77626
In this essay I debate critically, and somewhat playfully, some
assumptions and shortcomings of quantitative/positivist social
research, using a dash of common sense typical of engineers.
Civil engineers, in designing structures, particularly those made
of concrete, have to continuously consider the error embedded in the
limits of available systems of calculation, ending up adopting substantial
factors of safety as counter-measures. The study of the resistance of concrete
structures is a good metaphor for social research; and yet, quantitative/
positivist researchers, in their search for “falsifiable generalizations”, often
forget about the omnipresence of error, let alone adopt factors of
safety. In short, the common sense of engineers is useful to cast some
not-so-frequently-considered doubts over the capacity of quantitative
methods and positivist epistemologies to create generalizable social
science findings in face of uncertainty and the complexity of human
societies. By casting such doubts, I advocate for a more relaxed (but not
less rigorous) approach to social research and its complexity.
Keywords: epistemology, quantitative research, qualitative research,
research methods, peer-review, falsifiable generalizations
Simone Tulumello, Universidade de Lisboa, Instituto de Ciências Sociais, Urban
Transitions Hub, Av. Prof. A. Bettencourt 9, 1600-189, Lisbon, Portugal. E-mail:
simone.tulumello@ics.ulisboa.pt
This essay is more than the result of my own ideas and work. Not only does its final form result from a lively
and insightful conversation allowed by Fennia’s open review process, but that very conversation continues
within the text: as Guntram Herb, Jouni Häkli and Ossi Kotavaara generously agreed to publish their
comments alongside my essay, I had the opportunity to engage explicitly with their insights. This allowed me
to structure the essay as follows: the main text contains the main argument, which, for the sake of crafting a
text I wanted to be provocative and engaging, is quite straightforward and direct – at times indeed reductionist;
and I used footnotes quite heavily to provide nuance to the argument and engage in conversation with the
reviewers’ comments. As such, there are more ways to read this essay, with or without the footnotes, by itself
or jumping back and forth to the three comments. I am thus very grateful to Guntram, Jouni and Ossi; and to
Editor-in-Chief Kirsi Pauliina Kallio for offering the opportunity of the open review process and her support
throughout the editorial process. I am also grateful to Pedro Magalhães, who read an older version of this
essay and with whom I have discussed issues of epistemology several times. Though they may not be aware
of that, I was inspired by discussions with a number of fellow scholars and friends, including Andy Inch,
Andrea Pavoni, Eleonora Tulumello, Marco Allegra and Rui Costa Lopes. This notwithstanding, this essay’s
many shortcomings – and especially the insolence that may surface here and there – are my own responsibility.
© 2019 by the author. This open access article is licensed under
a Creative Commons Attribution 4.0 International License.
Prologue: a qualitative researcher’s Reviewer 2
As a researcher who has been mainly, indeed almost exclusively, employing qualitative methods and
case study research, I have lost count of the times a peer-reviewer has criticized my work on the
basis of the refrain “thou shalt not generalize from one case!” When it happens, that is, almost every
time I submit an article, my first reaction is to write the editors an angry response, something
along the lines of: “Dear Editors, please provide peer-reviewers with the actual competences to assess
case study research, that is, judge whether, in light of the assumptions of case study research, my
article is capable (or not) of building that kind of theory that (a fitting reviewer must know!) case study
can indeed produce.”1
But then, most often, once I have read the feedback and the editorial decision, I would take a few days
to let off steam before sitting down and deciding what to do. Eventually, if I was lucky enough to get a
Major Revision or Revise and Resubmit (that kind of review is never followed by a request for Minor
Revision), I would give up. I would expand the methodological section with more references on “how
to theorize from case study research”, temper the tone of the discussion and add a line in the
conclusions that sounds like: “Although the preliminary findings of this article need to be confirmed
by further research with wider panels of data, we can set out the following conclusions…” Although
they usually help clear peer-review and get the paper published (as my fellow precarious
researchers out there know, this is not a secondary matter), these changes signal an implicit abdication
to the (allegedly) superior role of generalization over theorization in knowledge production – and,
besides, they take space (the word limit…) that could be more profitably used to provide a better
description of the case or theoretical discussion.
One of the last times this happened, I shared my frustration on the Facebook page Reviewer 2 Must
Be Stopped, which is frequented by scholars from the most diverse backgrounds, prompting a lively
debate between those who were sympathetic with my frustration and those who would insist I cannot
generalize from one single case.2 One simple piece of evidence emerges from this and other, more
rigorous, debates (e.g. Flyvbjerg 2004, 285–286): while qualitative social scientists, and especially
those working with case studies,3 are often even too conscious of the assumptions behind, and the
shortcomings of, their epistemological approaches, quantitative/positivist4 researchers tend to
consider as “real science” only that which stems from their own epistemological assumptions – most often,
analyses over statistically significant samples or experiments said to produce “falsifiable
generalizations”.5 For instance, most methodological works about case study research are extremely,
if often excessively, cautious, as if their authors were expecting at any moment a Reviewer 2 to shout
“thou shalt not generalize from one case!”6 On the contrary, in articles grounded on a quantitative/
positivist paradigm, I have rarely found explicit discussions of the epistemological assumptions and
shortcomings of those methods: those assumptions go without saying, so to speak, and their
shortcomings are easily forgotten, let alone debated (but see Pepinsky 2016, for an exception).7
In this essay, I will debate, somewhat playfully, some of those assumptions and shortcomings, using
a dash of common sense typical of engineers – before embracing human geography and social
sciences, I took a master’s degree in civil engineering. The common sense of engineers, I will suggest, casts
some not-so-frequently-considered doubts over the capacity of quantitative methods and positivist
epistemologies to create generalizable social science findings. By casting such doubts, I hope to
contribute to a more relaxed (but not less rigorous) approach to social research and its complexity.
Some insolent remarks on quantitative research
Quantitative, positivist-oriented social research works, to put it bluntly, through the creation of
simplified models of social phenomena. By simplification I mean the process by which the researcher
would select a number of variables they consider sufficient and adequate to create a realistic model,
that is, a model capable of describing a given (social) phenomenon with acceptable accuracy. The more
variables, and the more nuanced the relations among them, the more reliable the model will be, but at
the cost of more work and computation. The perfect model, society itself, is made up of an infinite
number of variables, hence the need for infinite time to collect data and infinite time to process
findings – Werner Heisenberg’s uncertainty principle (published in 1927) made precisely this point
with regard to physics.8 The core of the “scientific” work quantitative/positivist researchers do, apart
from the mechanical work of collecting data and running the models, is deciding what variables, hence
what data, are to be used and how to create links among those variables. In so doing, the researchers
obviously influence the results on the grounds of their judgment – unfortunately, however, in the public
domain this is about as evident as Edgar Allan Poe’s purloined letter, hidden in plain sight. In Flyvbjerg’s (2006, 235) words:
The element of arbitrary subjectivism will be significant in the choice of categories and variables for
a quantitative or structural investigation, such as a structured questionnaire to be used across a
large sample of cases. And the probability is high that (1) this subjectivism survives without being
thoroughly corrected during the study and (2) that it may affect the results, quite simply because
the quantitative/structural researcher does not get as close to those under study as does the case-
study researcher and therefore is less likely to be corrected by the study objects “talking back”.
Put in other words, all social scientists employ judgment based on their epistemological and theoretical
assumptions. While qualitative researchers employ it mainly “downstream”, that is, when critically
interpreting their findings, quantitative/positivist scientists employ most of it “upstream”, that is, in
the design of the models.9 It is exactly this placement of judgment upstream of the production of findings
that creates the illusion that quantitative/positivist science is “objective”. This is the very first reason
to be skeptical of the alleged superior capacity of quantitative research to produce generalized theory,
when compared with qualitative research. To move a step further, let me now use a dash of engineering
common sense.
Studying civil engineering; and a ventured metaphor
To begin with, let me briefly outline the way civil engineers are trained in Italy. During the first couple
of years (of five, my degree was an integrated bachelor/master), the aspiring engineers would study
almost exclusively theoretical classes such as Mathematical Analysis, Geometry, Classical Physics. In
these classes, theoretical problems are always resolved through rigorous mathematical methods.
During these years, “error” means the same as “mistake”. During the third year, the aspiring engineers
would study Building Science (Scienza delle Costruzioni), which starts as a theoretical class too. The
students would learn the mathematics behind the equations that could, theoretically, solve any real
structural problem. But, one day midway through the course, the professor would say that,
unfortunately, there is no way to solve those equations exactly with analytical methods: they are too
complex. Some simplifications are then introduced that allow the equations to be solved, but at the
cost of introducing elements of uncertainty to the solution. For the first time, the aspiring engineers are
faced with the existence of the error. The following year, in the course Building Techniques (Tecnica
delle Costruzioni), the aspiring engineers would learn how to calculate a real structure, being introduced
to further problems with mathematics and practice: on the one hand, that even the simplified equations
they had learned in Building Science cannot be solved once they are applied to complex structures
typical of real life; and, on the other, that it is not even possible to know with absolute precision the load
real materials are capable of absorbing before breaking. Eventually, the engineers will be trained to use
the software they will employ in their profession, software that uses simplified models of structures.
This organization of the degree may well be disillusioning for some students; but I happened to enjoy
it, because it forces the aspiring engineers to face several crises of their previous knowledge, thus
stimulating them to keep searching for different ways to solve problems. Moreover, this process trains
engineers to preliminarily assess the resistance of structures at first sight: they learn how to look at
the preliminary design of a structure and rapidly assess whether it may be feasible, how it may be
feasible and – what else could be more important in a capitalist world? – how much it could cost. This
common sense helps engineers in several ways: while calculating a structure, they will start running
tentative models, which will not be too far from the final one; or, they would usually be capable of
warning the architects pretty early that the particular structure they have in mind will probably not be
feasible (or would cost much more than they are trying to sell to their clients) before spending time
running complex models. At the same time, and crucially for my argument, engineers are trained to
recognize where errors appear, and estimate their magnitude before calculating structures.
Because of the use of simplified models and the resulting errors, engineers would eventually apply
a factor of safety to their results, meaning that they will load the real structure with a fraction of the
maximum loads the model can resist. The factor of safety depends on both the geometry of the
structure (an estimation of the calculation error) and the material used. With regard to materials, the
factor of safety is smaller when using steel (in common structures, less than 10%), which is a
homogeneous and highly predictable material. The factor of safety becomes much bigger for concrete:
in common concrete structures it can go up to 50% – meaning that the structure will be designed to
resist 1.5 times the maximum real loads.
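To give a sense of the orders of magnitude involved – an illustration of mine, not part of the training story above, loosely based on the partial-factor format of European design codes – the design strength of a material is obtained by dividing its characteristic strength by a partial safety factor:

f_{cd} = f_{ck} / \gamma_c \quad (\gamma_c \approx 1.5 \text{ for concrete}), \qquad f_{yd} = f_{yk} / \gamma_s \quad (\gamma_s \approx 1.15 \text{ for steel})

The exact values depend on the code and the design situation, but the contrast is the point: the less homogeneous and predictable the material, the larger the share of its nominal strength the engineer agrees to “waste”.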
The factor of safety is particularly high for concrete because of its complexity. Concrete is a
composite, made of aggregate (sand, gravel, crushed stone) bonded together with fluid
cement. The variables that determine the strength of concrete are several: at the macro level, the
average size of aggregate grains, the quality of the cement, and the proportion of aggregate, cement
and water used; at the micro level, the ways the various grains and the cement are organized (the
distance between grains, the respective position of edges and surfaces).
The point I want to make here is that the study of the resistance of structures made up of concrete
works as a good metaphor for social research.10 Let’s say each grain represents an individual human
being, while the cement is the bond of relationships and affects among them, characterized by
complex mutual tensions. The whole structure, the society, is made up of pillars and beams (and
interactions thereof), that is, interacting sub-divisions of the society: families, groups, communities,
classes, races, genders, you name it. Qualitative case study research can be seen, from this perspective,
as the in-depth study of a small piece of concrete looking at the micro-variables. Quantitative social
science can be seen as the study of the resistance of the entire structure, or of a part of it (a beam, a
pillar, a node…) through macro-level variables (shape, estimated resistance of the concrete…).
The day I remembered my engineering training
Keeping this in mind, let me recall the day my rusty engineering common sense was awakened while I
was reading quantitative social findings. During the very same days I was discussing what is “real
science” on Reviewer 2 Must Be Stopped, I happened to read an article on crime and security in US
cities by Ellen, Lens and O’Regan (2012). The article has a very ambitious and important goal, that is,
testing the commonplace that housing voucher policies heighten crime. In the USA, housing vouchers
have been given to households previously living in public housing “projects”, where crime rates tend
to be high; and, the commonplace goes, the displacement and dispersion of those households will
displace and disperse crime, which will thus increase in the neighborhoods of destination. Indeed,
rich evidence exists of the correlation between settlement of households with vouchers and crime
increase.11 However, the causal relation has always been taken for granted, but never “scientifically”
verified, in public and even some academic debates – a process that accurately defines a
“commonplace”.12 Ellen and her colleagues thus decided to study those causal relations, using a
longitudinal analysis (1996–2008) and regression models over panel data at the census tract level
from 10 large US cities. Not only does the article cast serious doubts on the commonplace, but it finds
some evidence of the reverse causal story: the authors conclude that voucher holders may tend to
move to neighborhoods where crime is already increasing – there, rental prices may be lower for that
very reason and more landlords may then agree to rent in the voucher market.
While I was reading the article, I could feel my engineering common sense raising its eyebrows. Let
me be very clear: I am not criticizing the validity or the rigor of the article: it is well written,
rigorous in the use of methodology and convincing in its line of thought; and, though I have no specific
skills to judge the regressions, the article cleared peer-review in a good journal and I thus take for
granted that the quantitative work was well done. It was not the quality of the research but rather the underlying
epistemological assumptions about the capacity of these methods to generalize social science findings that
stimulated my engineering common sense – in short, let me be Reviewer 2, for once.
So, what is the problem? As every experienced researcher knows, collecting data about human
beings and their interactions is anything but a simple task. In order to collect perfect data and be
capable of picking the variables that best fit a model, the researchers should have access to every
possible piece of information about each individual in the population “sample”. Of course, collecting all
possible data on every individual would entail an immense amount of work, including detailed
ethnographies. And, paraphrasing what Werner Heisenberg’s uncertainty principle has taught us, the
better the ethnographic data collected while following the “objects” of study in their daily life, the more
the action of those “objects” is influenced.13 Back to our concrete, perfect data means knowing
without margin of error the size and shape of every grain and its exact position: to do this, we have no
other choice but, ultimately, to break it. Indeed, researchers accustomed to participant observation and
action research are well aware of their role in changing the very processes they are observing.
This is why quantitative/positivist researchers, whose aim is studying social phenomena without
influencing them, use above all aggregated data collected by other parties (statistical data) or data
provided by the “objects” of study themselves (surveys) – they construct a simplification of the
average composition of the concrete and look more widely at the structure, or maybe one specific
pillar or beam. This mediation creates error, because statistical data are always a simplification of
reality; and as for surveys, well, there really is no way to know the extent to which the respondent
is being sincere.
This is well evident in the article by Ellen and her colleagues (2012). The authors use administrative
data and admit they have faced important challenges.14 The authors employ several smart tactics to
deal with such challenges: using linear interpolations among available data, comparing models that
do and do not make use of problematic data, and comparing the whole model with a smaller one from which
cities with data problems have been removed. The point is, there is no exact way to measure the
error that the aggregate effect of such problems will produce. In fact, “error” is barely quantified in
this kind of study – with the exception of the statistical error of the regressions.
Let me stress that these problems with data are not specific to this particular article: every set of
statistical or aggregated data suffers from some kind of error for, as the principle of uncertainty tells us,
there is no way to collect perfect data about any given phenomenon without influencing it. I know
Reviewer 2 is ready to shout at me I am generalizing from one single case, but the harsh reality is there
is no quantitative/positivist study that does not suffer from some problem with data – if Reviewer 2 has
some doubts, they may want to pick a statistically significant sample of said articles and look into
them, one after the other. Moreover, and this is another common issue with this kind of article, Ellen
and her colleagues (2012) do not discuss whether, and with what accuracy, the 10 selected cities are
representative of the urban USA.
So, what does this mean for the main finding of the research, the value of the variable chosen to
test the causal story reverse to the commonplace? According to the linear regressions, the variable is
statistically significant in the three models used: 0.167 (p < 0.01), 0.157 (p < 0.05) and 0.160 (p < 0.05)
(Ellen et al. 2012, Table 4). This means that, according to the data available and their causal model, a
household with a voucher will be about 16% more likely to move to a census tract where crime increased
in the previous year. The engineer in me would ask, what is the margin of error of the model? Is it 0.05
(meaning that the likelihood in the real world would still be positive, between 11 and 21%)? Or is it 0.20
(meaning that the likelihood in the real world could be anything between 37% and slightly negative)? Looking at
the size of problems with data, my engineering common sense suggests, I am afraid, that the latter is
more likely than the former.
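To make explicit what such a margin of error would mean mechanically, here is a minimal sketch of mine – not the authors’ model – that computes an approximate 95% confidence interval around a coefficient; the coefficient is the one reported above, while the standard errors are hypothetical, chosen only for illustration, since the whole point is that the overall error (data plus model) is not known.

# A minimal sketch, not the authors' model: what a "margin of error" around a
# regression coefficient means. The coefficient is taken from the text (Ellen et
# al. 2012, Table 4); the standard errors are hypothetical, for illustration only.
def confidence_interval(beta, se, z=1.96):
    """Approximate 95% confidence interval for a regression coefficient."""
    return beta - z * se, beta + z * se

beta = 0.167                  # coefficient reported in the text
for se in (0.025, 0.10):      # a "small" and a "large" hypothetical error
    low, high = confidence_interval(beta, se)
    print(f"se = {se}: 95% CI = [{low:.3f}, {high:.3f}]")

The statistical machinery can only quantify the error internal to the regression; the error embedded in the data, and in the simplifications of the model, stays outside the interval.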
Against generalization?
Again, my goal here is not to falsify the robust findings that a rigorous piece of research has produced in light of its
epistemological assumptions. This is to say, in light of such assumptions, taking into consideration the risk
that voucher holders may end up moving to neighborhoods where crime is already increasing makes
a lot of sense, both conceptually and practically. These findings help better understand the effects of
vouchers (e.g. reconsidering “freedom of choice” rhetoric) and support more informed policy decisions
at the local level (e.g. in extremely unequal and segregated cities, vouchers not accompanied by
further policies may cause more problems than those they solve). As for the “generalization” of a
socio-spatial trend (“where do voucher holders go?”), however, we have no way to be sure what the
margin of error is and, frankly, I do not think there is a way to be. Does anyone know of any method
to systematically measure the error of quantitative/positivist social research (the error with data, the
error with the simplification made by the model, and the cumulative error of both)?
Remember, engineers use factors of safety precisely to account for the impossibility of determining
the error with accuracy. And, for a complex material like concrete, it is a very big factor, most often
too big. But it is written down in laws, because we all acknowledge that keeping houses from
crumbling down matters more than the waste of some construction material: legal protections of
this kind show the capacity, in the political and public sphere, to understand the limits of engineering
studies and research. Surprisingly, the political and public discourses seem to have much less
capacity to discern the limits of generalizations made in quantitative/positivist social research, which,
by the way, can produce more significant damage than a house crumbling down.15 The magnitude
of the damage caused by the economic crisis that started in 2007, together with the role that certain assumptions and
certain models used to “predict” how markets work played in creating the conditions for the crisis, is a
case in point.16
So what? Taking research cum grano salis
I could have stopped here, but this is exactly where Flyvbjerg (2001) would ask the “so what” question.
What is the point of my critique? Or, what does an engineering common sense suggest to social
scientists? On the one hand, it suggests that qualitative scientists be aware that the particular piece of
concrete they are studying may not represent the whole structure – and, as we all know, this is pretty
well accepted among reviewers, particularly Reviewer 2. On the other hand, it suggests that quantitative/
positivist scientists remember to always use very big factors of safety when interpreting their
findings, inasmuch as human societies are at least as complex as concrete – to be honest, much more
complex, if anything because concrete changes very slowly in time.17 Unfortunately, this latter
suggestion is barely heard, let alone listened to.
Now, I am not suggesting that Reviewer 2 shout out loud “thou shalt not generalize from panel data
findings!” – though, I am afraid, this is at least as valid a claim as “thou shalt not generalize from one
case!” What I am advocating is that it is high time we accept taking the generalizations of social
phenomena based on panels of actually-existing data cum grano salis, that is, with the same caution we
take theory produced through qualitative research. We need to learn that there is no such thing as
“real science” and “hard data”, as opposed to “high-quality journalism” (as several positivist scientists
still consider ethnography and qualitative research) – quite the opposite, that the alleged objectivity
of some methods is a clever way to conceal judgment within the process. On the contrary, social
research would benefit a lot from internalizing that our work is always contingent on some
assumptions, that is, from accepting the very irreducibility and different value of findings produced
through different methodological and epistemological lenses. Is not, after all, Reviewer 2 basically the
incapacity to accept such an irreducibility and the pretense of forcing one’s assumptions upon others’
research?18 Is not Reviewer 2 the incapacity to accept that generalization is not the hallmark of social
research, and that the production of theory – as opposed to laws – is as relevant an endeavor?
In conclusion, and beyond the politics of peer-review, are we ready to embrace the fact that
contradictions are inherent expressions of the complexity of the human (and non-human) world?19
And, well, let me conclude with a personal suggestion. Always discuss your research ideas and
methods with an engineer before running complex models or embarking on lengthy fieldwork: that
can help save a lot of time!
Epilogue: the shelter we have
(Social) science has been traditionally imagined as a gothic cathedral, a perfect construction that will
be completed and perfected the moment the keystone is put in its place. Of course, scientists
have always been aware that the process was complex, slow and painful; and that external events
may have forced them to reconstruct a part or even reconsider something in the foundations. But the
keystone has always been, and still seems to be for many, the ultimate goal, the ultimate answer –
“42!”, Douglas Adams would say.20
Then, this has been put into debate, from the uncertainty principle all the way to deconstruction,
post-structuralism, post-modernism and the like. In time, the building has been savaged and seems
now to be more of a messy structure made up of ruins, shining glass and shacks. There is the neoclassical
glass-and-steel skyscraper from which economists enjoy watching the real world follow different paths, while
sipping champagne, blaming state regulation, and accusing people and politicians of not abiding by
the rules of the system. There is the post-structuralist field of ruins of the relentless critique, where
the “so what” question echoes perennially. The construction seems to be now made up of parts that
do not interact or, worse, create mutual structural problems.21
But, after all, it is important to acknowledge the scientific construction for its fragility, not solidity;
for its continuous need for maintenance, refurbishment and restructuring. For, precarious as it is, it is the
only shelter we have.
Notes
1 Throughout the text, I distinguish purposefully between generalization and theory as two rather
different goals of social research and knowledge production (see especially the concluding section “So
what?”) – Jouni Häkli suggested I should pay special attention to this distinction.
2 Reviewer 2 Must Be Stopped is a quasi-ironic forum for sharing anger and discussing bad peer-
review. The discussion is publicly available here: www.facebook.com/groups/reviewer2/
permalink/10153956215715469/.
3 Here and afterwards, I use case study as the main point of reference when discussing qualitative
methodological approaches for two reasons: first, because it has recently assumed a central role in
qualitatively-oriented urban and geographic research; and, second, because it is the methodology I
have most experience with.
4 Let us not forget – as Ossi Kotavaara and Guntram Herb correctly pointed out – that not all quantitative
research seeks, from a positivist paradigm, to build global societal “laws”, hence the use of
“quantitative/positivist” throughout the text.
5 I am aware that the dichotomous opposition I adopt throughout the essay is a simplification of the
landscape of social research – and, at times, runs the risk of building two “caricatures”, as pointed out
by Jouni Häkli. For one, the dichotomy qualitative versus quantitative/positivist methodologies/
epistemologies is often fuzzy, as there is a growing “gray area” (a definition suggested by Häkli) made
up of experimentation with different approaches, the use of mixed methods and cross-fertilization
among long-separated methodological and epistemological “fronts”. In retrospect, Hanson (2008)
argues that the dichotomy is more “apparent than real” throughout the history of social research.
More recently, a group of scholars based at the Sciences Po Médialab (founded by Bruno Latour) has
developed an argument about the capacity of digital methods and big data analysis to overcome the
quantitative/qualitative divide, creating a more “continuous” sociology (Venturini et al. 2017). And yet,
the reality of actually-existing social research is characterized by fierce debate and contraposition
among different schools of thought, the consequences of which we have all, sooner or later, come to
experience, for instance when receiving a report by Reviewer 2.
Guntram Herb also spotted an imbalance in the quantitative/positivist versus qualitative dichotomy.
Indeed, methods and epistemologies are not straightforwardly and directly associated (see Bryman
1984, for a discussion). And yet, with regard to the argument I develop on the relationship between
methodologies/epistemologies and the production of social research, I see that, independently of
their epistemological orientation, qualitatively-oriented scholars more or less agree on the limits of
their own epistemological assumptions – and hence “qualitative research”, throughout the text, means
“qualitative research informed by a diverse set of epistemological assumptions”. At any rate, I agree
with Herb when he suggested that I owe the reader an open statement of my own approach, because this
informs my perspective on the issues at stake. I see my personal epistemological endeavor as the
search for critical theory, which I understand to have a twofold meaning: on the one hand, “a ruthless
criticism of everything existing, ruthless in two senses: the criticism must not be afraid of its own
conclusions, nor of conflict with the powers that be” (Marx 1978 [1844], 13; emphasis in the translation
quoted); and, on the other hand, a theory that seeks to foster, inform and support transformative
action (see Marcuse 2010).
6 Even Yin (1994), author of Case Study Research, now at its sixth revised edition and possibly the most
used reference in this field, seems to believe case study is a minor research method that, being
incapable of building “social science generalizations”, should mostly be used as a preliminary or
exploratory tool. Another example of this approach is an otherwise adorable article on persuasion in
case study research by Siggelkow (2007). Flyvbjerg (2004, 2006) is, to the best of my knowledge, the
scholar that has been most straightforward in advocating for, and fully exploiting, the potentialities of
case study research for theorization.
7 This may not be particularly the case for human geography, Ossi Kotavaara suggests. I agree, and
wish to speculate that this may be in part due to the relatively young history of human geography
when compared to disciplines such as sociology, anthropology or political science – all disciplines with
longer and fiercer epistemological/methodological debates, from which human geography could
learn. The main target of my reflection is what Kotavaara suggested terming the “quantitative
positivist paradigm or discourse” (a definition I fully embrace). What prompted me to write this essay
is that it seems to me that the quantitative/positivist paradigm is still pretty strong within the whole
body of social sciences – maybe not as an ideology but indeed as a “practice” of research and research
evaluation (see Flyvbjerg 2004, 285–286) – and in particular in disciplines like sociology, social
psychology or political science (e.g. Desch 2019).
8 According to the uncertainty principle, there is no way to know with absolute accuracy both the position and
the speed of a particle at a given moment, because the more careful the observation, the bigger the
impact on the particle’s trajectory. One of the implications is that forecasting the future trajectory of
a single particle – and by extension of any system – is quite simply impossible. It is quite surprising
to me how this principle, a basic tenet of natural sciences, is basically ignored in many strands of
social sciences, where it should be a truism. The ultimate version of the refusal to acknowledge
uncertainty in social sciences is epitomized in grand claims, by some advocates of big data, about the
“end of theory”: “Scientists no longer have to make educated guesses, construct hypotheses and
models, and test them with data-based experiments and examples. Instead, they can mine the
complete set of data for patterns that reveal effects, producing scientific conclusions without further
experimentation” (Prensky 2009; see Kitchin 2014, for a critical overview).
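For reference – my addition, not part of the original note – the canonical formal statement of the principle bounds the product of the uncertainties of position and momentum: \Delta x \, \Delta p \geq \hbar/2.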
9 Jouni Häkli correctly pointed out that subjective judgement is at play “upstream” in qualitatively-driven
research as well, as it shapes “conceptual, philosophical and ontological starting points” (in his words).
I perfectly agree, and this is why I use the term “mainly” in that sentence. Still, I have two responses:
one, that the concepts of positionality and reflexivity have been developed – in qualitatively-oriented
research – exactly to acknowledge and embrace the role of one’s own judgement (and even prejudice);
and, two, that my main point here is emphasizing the generalized lack of such an acknowledgement
in quantitative/positivist research.
10 As Jouni Häkli and Guntram Herb commented, this is a “deliberately mechanistic” (in Häkli’s words)
and, indeed, reductionist metaphor. In fact, my goal is not so much to use the study of concrete to
reflect on social research lato sensu, but rather to use it to focus on the differences among methodological
and epistemological approaches.
11 A caveat is necessary from a critical criminological standpoint (e.g. Sutherland & Cressey 1978;
Reiner 2016). One should always remember that crime statistics describe “reported crimes”, that is, those
crimes that are known to the police and the judiciary, which definitely do not correspond to the totality
of crime as a social phenomenon. Reported crime is heavily influenced by the likelihood that a
particular crime is reported, by reporting methodologies and by police priorities – certain crimes, for
instance drug crimes, are almost never reported and are thus registered only when actively enforced
by the police, which may prioritize this or that type of crime, this or that location for their activity.
More than that, crime itself is a socio-political construction, as many activities that cause harm are not
legally defined as crimes – think of the fact that “honor killing”, that is, killing a wife, is still legal in many
countries and was legal in many more just a few decades ago. As such, using crime statistics to
conclude that “crime is high in the projects” may be problematic in the first place. Moreover, one could
hypothesize (a possibility Ellen and her colleagues (2012) did not consider) that, since policing has
historically been particularly aggressive towards (poor, mostly Black or Latinx) people living in public
housing, the dispersion of those households may be followed by a “dispersion of policing” and, ceteris
paribus, contribute to the dispersion of (reported) crime. That said, I will nonetheless consider reported
crime as crime – like the article under analysis does – for the sake of the argument and because
incorporating those reflections would just add strength to the point I will make about the uncertainty
surrounding findings based on crime data.
12 In particular, Ellen and her colleagues (2012) were prompted by the conclusions of a journalistic
report on the case of Memphis published in The Atlantic (Rosin 2008).
13 For a brief presentation of the principle, see note 8 above.
14 A list of some of the simplifications and problems they point out (Ellen et al. 2012, 557–558 and
Appendix 1) follows: crime data are collected at the census tract scale for all cities but one, where they
are available at the “neighborhood” level (usually made up of two or three census tracts); voucher data
are missing in some cities during some periods of time (in some cities they are not available for the
large majority of years in the period of study); anomalies in voucher data are found and attributed to
geocoding problems (the census tract ID is missing, each year, in 8 to 20% of cases; in about 2% of tract/
years, values deviate sharply from preceding and following years); demographic data are available
only for 1990, 2000 and an average for 2005–2009.
15 The most evident example of this is the simplification and spectacularization of research findings by
mass media – epitomized by sentences like “scientists say…”, “according to [insert highly-ranked
university]…” While researchers cannot automatically be blamed for the simplifications and distortions
of science reporting, this trend has fed back into the way quantitative/positivist social research is
carried out, as researchers are increasingly pushed to produce novel, confirmatory, “groundbreaking”
findings in order to publish in top-ranking journals. One of the effects is the growing concern with
“p-hacking”, the use of various strategies (from data mining to unduly influencing data collection
techniques) to forcefully extract statistically significant findings from vast collections of data – see the
overview by Head and colleagues (2015) and the (in)famous case of the retraction of several articles
by food behavior scientist Wansink (Resnick & Belluz 2018).
16 In his work on austerity politics, Blyth (2013, 32 ff.) focuses on the role played by models – based on
neoclassical understandings of economics and used in the financial industry to measure the risk of
loss from investments – in justifying the decisions that led to the 2007 financial crash. According
to the metrics provided by such technologies, systemic crises of financial markets – those very crises
that have recurrently happened during the last century or so – should basically never occur.
17 Guntram Herb suggested that the metaphor is not fully applicable for two reasons: first, because
concrete has no agency or, at the very least, it has much less agency than humans have; and, second,
because the testing of a structure is essentially about true or false (“will the concrete hold?”), while
social research deals with less clear outcomes. Beyond reiterating that this metaphor is above all
useful to focus on differences among paradigms (see note 10), let me add a couple of reflections. First,
let us not forget that there are both positivist and post-positivist perspectives that would conceptualize
agency quite differently. For instance, radical structuralism would suggest that the individual human
being under a capitalist system is not really more free to act than an individual grain within a concrete
conglomerate; and Actor Network Theory would maybe contest that the point is measuring and
comparing the agency of concrete and human beings – and rather advocate considering agency as the
network of relationships among them. At any rate, second, it seems to me that applied quantitative/
positivist research prioritizes seeking “solutions” to social problems – that is, it often mimics the true/
false approach of structure testing. Here, I wish to add that the core endeavor of critical social research
(see note 5) is precisely that of questioning the definition of social problems as opposed to seeking
direct, (allegedly) neutral and technical, solutions to them – see Gusfield’s (1989) discussion of the
relationship between “political issues” and “social problems”, and my transposition of his argument to
the field of urban security (Tulumello 2017).
18 I believe virtually anyone has, at least once in their life, received (and given!) a review that suggested
rejection on the basis of comments that basically denied the very epistemological or ontological
assumptions of the manuscript under analysis. The common comment on the impossibility of
generalizing from one case often tells precisely of the incapacity, on the side of the reviewer, to
conceptualize that valid social research exists that has no interest in producing generalizations in the
first place.
19 One such attempt is the project of the school of transdisciplinarity led by Nicolescu (see Nicolescu
2010, for a summary), which posits that different “levels of reality” exist, and that every discipline is
incomplete because it has to remain within one of those levels. This breaks with classical rational
logic and its axioms of identity and non-contradiction (ibid., 29): the idea, at the core for instance of
classical physics, that “A is A” and “A cannot be not-A”. As quantum mechanics has shown that entities
exist that are at the same time A and not-A, the existence of different levels of reality explains how both
classical physics and quantum mechanics can be internally correct and rigorous despite the inherent
contradictions among their core assumptions and their findings.
20 Spoiler alert: I am referring to Adams’ epic sci-fi series The Hitchhiker’s Guide to the Galaxy, which
revolves around the search for the answer to the “Ultimate Question of Life, the Universe and
Everything” – which eventually turns out to be… “42”.
21 Another metaphor for scientific knowledge is Spencer’s (1863) “sphere” floating in a space of
ignorance. As the sphere grows, the surface of contact with ignorance also grows, allowing for both a
pessimistic and an optimistic interpretation: if the amount of knowledge is represented by the radius of
the sphere, this grows more slowly than the surface, meaning that the process will produce a relative
increase of ignorance; if knowledge is the volume of the sphere, then it grows faster than the surface,
and ignorance will relatively decrease in time. While Spencer’s sphere is a far more elegant metaphor
than my cathedral, it is quite typical of a positivist conception of knowledge as a homogeneous,
harmonic totality, which leaves scarce space for conflict, debate, contradiction and the messiness of
the real world.
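To spell out the geometry behind the two readings – a small sketch of mine, not in Spencer’s text – for a sphere of radius r:

S = 4\pi r^{2}, \qquad V = \tfrac{4}{3}\pi r^{3}

The surface of contact with ignorance thus grows as r^2: faster than the radius (knowledge as radius, relative ignorance increases) but more slowly than the volume (knowledge as volume, relative ignorance decreases).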
References
Blyth, M. (2013) Austerity: The History of a Dangerous Idea. Oxford University Press, Oxford.
Bryman, A. (1984) The debate about quantitative and qualitative research: a question of method or
epistemology? The British Journal of Sociology 35(1) 75–92. http://doi.org/10.2307/590553
Desch, M. C. (2019) How political science became irrelevant. The field turned its back on the Beltway.
The Chronicle of Higher Education 27.2.2019. http://www.chronicle.com/article/How-Political-Science-Became/245777/ 26.4.2019.
Ellen, I. G., Lens, M. C. & O’Regan, K. (2012) American murder mystery revisited: do housing voucher
households cause crime? Housing Policy Debate 22(4) 551–572. https://doi.org/10.1080/10511482.2012.697913
Flyvbjerg, B. (2001) Making Social Science Matter: Why Social Inquiry Fails and How it Can Succeed Again.
Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511810503
Flyvbjerg, B. (2004) Phronetic planning research: theoretical and methodological considerations.
Planning Theory and Practice 5(3) 283–306. https://doi.org/10.1080/1464935042000250195
Flyvbjerg, B. (2006) Five misunderstandings about case-study research. Qualitative Inquiry 12(2)
219–245. https://doi.org/10.1177/1077800405284363
Gusfield, J. R. (1989) Constructing the ownership of social problems: fun and profit in the welfare
state. Social Problems 36(5) 431–441. https://doi.org/10.2307/3096810
Hanson, B. (2008) Wither qualitative/quantitative?: grounds for methodological convergence. Quality
and Quantity 42(1) 97–111. https://doi.org/10.1007/s11135-006-9041-7
Head, M. L., Holman, L., Kahn, A. T. & Jennions, M. D. (2015) The extent and consequences of p-hacking
in science. PLoS Biology 13(3) e1002106. https://doi.org/10.1371/journal.pbio.1002106
Kitchin, R. (2014) Big data, new epistemologies and paradigm shifts. Big Data and Society 1(1).
https://doi.org/10.1177/2053951714528481
Marcuse, P. (2010) In defense of theory in practice. City: Analysis of Urban Trends, Culture, Theory,
Policy, Action 14(1–2) 4–12. https://doi.org/10.1080/13604810903529126
Marx, K. (1978 [1844]) For a ruthless criticism of everything existing. In Tucker, R. C. (ed.) The Marx-
Engels Reader, 12–15. 2nd ed. W.W. Norton & Company, New York.
Nicolescu, B. (2010) Methodology of transdisciplinarity – Levels of reality, logic of the included middle
and complexity. Transdisciplinary Journal of Engineering and Science 1(1) 19–38.
Pepinsky, T. (2016) Methods debates for humanists. Tom Pepinsky 17.06.2016. https://tompepinsky.com/2016/06/17/methods-debates-for-humanists/ 26.4.2019.
Prensky, M. (2009) H. sapiens digital: from digital immigrants and digital natives to digital wisdom.
Innovate. Journal of Online Education 5(3) article 1.
Reiner, R. (2016) Crime. The Mystery of the Common-sense Concept. Polity, Cambridge.
Resnick, B. & Belluz, J. (2018) A top Cornell food researcher has had 15 studies retracted. That’s a lot.
Vox 24.10.2018. http://www.vox.com/science-and-health/2018/9/19/17879102/brian-wansink-cornell-food-brand-lab-retractions-jama 26.4.2019.
Rosin, H. (2008) American murder mystery. The Atlantic July/August 2008. http://www.theatlantic.com/magazine/archive/2008/07/americanmurdermystery/306872/ 26.4.2019.
Spencer, H. (1863) First Principles. Williams and Norgate, London.
Siggelkow, N. (2007) Persuasion with case studies. Academy of Management Journal 50(1) 20–24.
https://doi.org/10.5465/amj.2007.24160882
Sutherland, E. & Cressey, D. R. (1978) Criminology. Lippincott, Philadelphia.
Tulumello, S. (2017) Toward a critical understanding of urban security within the institutional
practice of urban planning: the case of the Lisbon Metropolitan Area. Journal of Planning Education
and Research 37(4) 397–410. https://doi.org/10.1177/0739456X16664786
Venturini, T., Jacomy, M., Meunier, A. & Latour, B. (2017) An unexpected journey: a few lessons from
sciences po médialab’s experience. Big Data and Society 4(2). https://doi.org/10.1177/2053951717720949
Yin, R. (1994) Case Study Research. Design and Methods. Sage, Thousand Oaks.