Contrasting roles of measurement knowledge systems in confounding or creating sustainable change



William P. Fisher, Jr.1 

1 Research Institutes of Sweden, Gothenburg, Sweden; BEAR Center, University of California, Berkeley, USA; Living Capital Metrics LLC,  
  Sausalito, California 94965, USA  

 

 

Section: RESEARCH PAPER  

Keywords: modelling; measurement; complexity; sustainability 

Citation: William P. Fisher, Jr., Contrasting roles of measurement knowledge systems in confounding or creating sustainable change, Acta IMEKO, vol. 11, no. 4, article 7, December 2022, identifier: IMEKO-ACTA-11 (2022)-04-07

Section Editor: Eric Benoit, Université Savoie Mont Blanc, France  

Received July 9, 2022; In final form December 4, 2022; Published December 2022 

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, 
distribution, and reproduction in any medium, provided the original author and source are credited. 

Corresponding author: William P. Fisher, Jr., e-mail: wpfisherjr@livingcapitalmetrics.com  

 

ABSTRACT

Sustainable change initiatives are often short-circuited by failures in modelling. Unexamined assumptions about measurement and numbers push modelling into the background as a presupposition rarely articulated as an explicit operation. Even when models of system dynamics are planned components of a sustainable change effort, the key role of measurement is typically overlooked. The crux of the matter concerns the distinction between numeric counts and measured quantities. Mistaking the former for the latter confuses levels of complexity and fundamentally compromises communications. Reconceiving measurement as modelling multilevel distributed decision processes offers new alternatives aligned with historically successful efforts in creating sustainable change. Five conditions for successful sustainable change are contrasted from the perspectives of single-level vs multilevel modelling: vision, plans, skills, resources, and incentives. Omitting any one of these from efforts at creating change results, respectively, in confusion, treadmills, anxiety, frustration, and resistance. The shortcomings of typically implemented single-level approaches to measurement result in the widespread experience of these negative consequences. Results show that new potentials for creating sustainable change can be expected to follow from implementations of multilevel distributed decision processes that effectively counteract organizational amnesia by embedding new learning in an externally materialized knowledge infrastructure incorporating a shared cultural memory.

1. INTRODUCTION

A little-known but landmark article [1] defines five conditions
for success in creating sustainable systems change (Figure 1). 
Different approaches to meeting these conditions can produce 
results that vary dramatically in their sustainability. Of particular 
importance is the measurement knowledge infrastructure 
context in which the five conditions are deployed.  

Systems change initiatives in organizations ranging from 
schools to hospitals to private firms typically make use of 
information on processes, structures, and outcomes obtained 
from tests, assessments, or surveys of students, patients, 
employees, customers, suppliers, and other key stakeholders. 
This information can be aggregated and reported in markedly 
different ways, with associated variation in its meaningfulness, 
utility, and consequences for success in creating sustainable 
change. 

The primary points of contrast between opposing poles of 
information quality can be summarized in terms of two 
approaches to measurement. On one end of this quality 
continuum are models lacking distinctions between 
discontinuous levels of complexity, and at the other end are 
models addressing these levels in ways that facilitate their 
practical management. The polar opposites come to the fore in 
oft-repeated but rarely heeded contrasts between statistical 
analyses of ordinal data and scientific models of interval units.  

These contrasts emphasize differences between unexamined 
assumptions about causal relationships and the meaningfulness 
of ordinal scores, on the one hand, and, on the other, intentional 
requirements of meaningful interval unit definitions [2]-[9]. 
Where the former focuses on the concrete terms of objective 
facts, the latter instead focuses on the abstract and formal terms 
of objectively reproducible unit quantities. The statistical focus 
on ordinal scores manages what counts in relation to 
accountability reporting systems, assuming the whole is the sum
of the parts. The scientific focus on interval quantities manages 
what adds up in relation to the overall mission, requiring the 
whole to be more than the sum of the parts. 

For sustainable change, the statistical focus on ordinal scores ends in a kind of myopia, an inability to focus beyond the limits of local circumstances to global concerns [10]. A systematic literature review of almost 300 articles on lean thinking practices in health care, for instance, found that "tool-myopic thinking tends to be a prevalent practice and often governs implementations" [11]. The tendency, in short, is an inability to see the forest for the trees.

Measurement is commonly defined as the assignment of 
numbers to observations according to a rule. Decades of criticism of this definition's insufficiency [2]-[9] can be
traced back to Whitehead’s 1925 fallacy of misplaced 
concreteness [12]. Parallel demonstrations of superior definitions 
of measurement dating from the 1960s have had little impact on 
practice [13], [14].  

Measurement is usually, then, deemed achieved by means of 
ordinal, nonlinear score models irrevocably attached to the 
specific questions asked, with no experimental test of causal 
hypotheses and no uncertainty estimates. Scientific approaches 
instead fit data to models of interval and linear measurements 
whose meanings are demonstrably independent of the specific 
questions asked, are contextualized by tested causal hypotheses 
and uncertainties, and are deployed in networks of quality-
assured instruments distributed throughout multilevel networks 
of end users [15]-[17].  
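
The nonlinearity at issue can be made concrete with a small numeric sketch. The Python fragment below is a minimal illustration, assuming only the definition of the logit (log-odds) unit used in Rasch models [14]; it shows that equal differences in percent-correct scores correspond to markedly unequal differences on an interval logit scale.

```python
import math

def score_to_logit(p):
    """Convert a proportion-correct score to a logit (log-odds) value."""
    return math.log(p / (1 - p))

# Two pairs of percent-correct scores, each pair 10 points apart
pairs = [(0.50, 0.60), (0.85, 0.95)]
for low, high in pairs:
    d = score_to_logit(high) - score_to_logit(low)
    print(f"{low:.0%} -> {high:.0%}: {d:.2f} logits")

# Output:
# 50% -> 60%: 0.41 logits
# 85% -> 95%: 1.21 logits
# The same 10-point ordinal gain is nearly three times larger in
# interval terms near the top of the scale than in the middle.
```

Because the raw-score metric compresses differences toward its extremes, treating it as interval misstates how much change has actually occurred and where.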

Contrasts between these paradigmatically opposed 
approaches to quantification (see Table 1) illuminate new 
potentials for creating sustainable change. When the differences 

between statistical and scientific measurement modelling 
approaches are grasped, today's organizational cultures can be 
seen as having counterproductively adapted to accepting as inevitable the failure to meet the conditions for successfully implementing sustainable change. In other words, because
of the widespread but unexamined assumption that low quality 
measurement information is the best that can be expected, 
confusion, feeling caught on a repetitive treadmill, anxiety, 
frustration, and resistance are built into organizational cultures in 
often unnoticed but pervasive ways. 

Organizational amnesia [18], [19] of this kind can, however, 
be counteracted by scientific measurement modelling 
approaches that retain learning and incorporate it organically into 
scaffolding built into the external environment as a kind of 
cultural memory. Research into learning embodied and locally 
situated in the institutional environment [20]-[22] points toward 
new goals organizations can realistically set for achieving 
sustainable change using scientific measurement models instead 
of statistical data models.   

2. TYPICAL STATISTICAL MODELLING 

2.1. Vision 

Visualizing the future requires anticipation of highly abstract 
arrays of possible scenarios. The kinds of challenges that might 
be encountered must be conceived in conjunction with the kinds 
of responses needed to address them. All too often, however, 
vision statements focus so narrowly and myopically on local 
concrete circumstances [10], [11] that long-term planning, 
staffing, resourcing, and incentivizing are inadvertently 
sabotaged. 

 

Figure 1. Conditions for sustainable change [1] 



 


The vision that usually informs why measurements are made centres on gathering data for analysis and summarization in reports used to formulate policy directives. Even if this vision is
informed by Design Thinking [23], [24], and so incorporates the 
elements of empathy, definition, ideation, prototyping, and 
testing, the focus on scores (counts and/or percentages of 
correct answers, or of responses in a rating category) 
unnecessarily limits what can be imagined and accomplished to a 
small fraction of the available potential [25].  

That is, the restricted orientation to responses to specific 
questions necessarily prevents the envisioning and realization of 
goals that would otherwise be achievable. This is because a vision 
limited to statistical treatments of ordinal scores does not 
meaningfully model or map the substantive features of the 
situation of interest. The map is not the territory. Mapping 
proceeds from concrete observations of what is happening on 
the ground but must aspire to represent those concrete features 
by identifying coherent patterns at abstract and formal levels of 
higher order complexity.  

Because real world circumstances are in constant flux, 
meaningful and useful maps cannot remain tied to any single 
given set of conditions. A general continuum defining a learning 
progression, developmental sequence, or healing trajectory must 
characterize the quantitative range of variation [26]-[27]. An 
abstract perspective is the only way to adaptively and resiliently 
inform individuals and groups about where they are located relative to where they were, where they want to go, and what
comes next, no matter what concrete circumstances they find 
themselves in.  

The narrow vision associated with statistically modelled 
ordinal scores mistakes mere numbers for quantities and pays a 
high price for doing so. Comparability depends on all 
respondents answering the same questions, and standards are 
imagined as necessitating use of the same indicators. The 
resulting knowledge infrastructure is envisioned on the basis of 
information quality that cannot support generalized meaning, 
and so vision is obscured, and confusion results. 

2.2. Planning 

Applications of the information obtained from scores, 
ratings, and percentage statistics usually focus on generalizations 
that assume all numeric differences of a given magnitude mean 
the same thing, though this assumption is usually not, and likely 
cannot be, substantiated. The inferential problems following 
from this unwarranted assumption of uniform meaning are then 
further compounded by the ways the scores are interpreted and 
applied. With no information typically made available on the 
uncertainty ranges or confidence intervals associated with the 
scores, there is no way of telling if and when numeric differences 
are real and reproducible, or are simply random noise.  
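
Were uncertainty estimates available, checking whether a difference is real would be straightforward. The sketch below assumes Rasch-modelled measures in logits, where the standard error of a person measure is approximately the inverse square root of the summed item information, a standard result for such models [14]; the function names are illustrative, not drawn from any particular package.

```python
import math

def rasch_prob(theta, b):
    """Probability of success for ability theta on an item of difficulty b (logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def standard_error(theta, difficulties):
    """Model-based standard error of measurement: 1 / sqrt(sum of item information)."""
    info = sum(p * (1 - p) for p in (rasch_prob(theta, b) for b in difficulties))
    return 1.0 / math.sqrt(info)

def reliably_different(theta1, theta2, se1, se2, z=1.96):
    """True if a difference exceeds its combined uncertainty at roughly 95 % confidence."""
    return abs(theta1 - theta2) > z * math.sqrt(se1**2 + se2**2)

items = [-1.0, -0.5, 0.0, 0.5, 1.0]   # hypothetical item difficulties in logits
se_a = standard_error(0.2, items)
se_b = standard_error(0.8, items)
print(reliably_different(0.2, 0.8, se_a, se_b))
# False: with only five items the uncertainties are large,
# so a 0.6 logit difference is indistinguishable from noise.
```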

In addition, because questions are each treated separately, as 
domains unto themselves, no information on a learning 
progression, developmental sequence, healing trajectory, or 
other quality improvement continuum is made available. 
Improvement efforts then can do nothing but focus directly on 
areas in which failure (low ratings or incorrect answers) is 
experienced, instead of first ensuring that prerequisite 
foundations for sustainable change have been put in place. The 
result of acting on this kind of low-quality statistical information 
is then to continue repeating the same pattern of efforts in 
precisely the endless treadmill cycle one wanted to avoid. 

2.3. Skills 

The skills required in commonly adopted statistical 
approaches to measurement focus on knowledge of the relevant 
content and processes involved for assessment and survey 
development, social interactions for administering those tools, 
operational awareness for policy formation, and data input, 
aggregation, analysis, and reporting. Analyses may be as simple 
as counting correct answers or responses within rating categories, 
and computing percentages, or as complex as any advanced 
statistical method may be.  

The data-analytic focus presumes, without evidence, that results will retain their meaning across levels of complexity. But the statistical skills employed in the usual approaches to measurement treat concrete scores as if they were abstract quantities explained by formal theory, when they are not. That is,
everyone is well aware that it is impossible to tell from my count 
of ten rocks whether I have more or less rock than someone with 
two rocks. It is also common knowledge that correct responses 
to ten questions cannot be understood as indicating more ability 
or success than correct responses to two questions, since the two 
groups of questions asked may vary markedly in difficulty. 
Statistical modelling proceeds by focusing on these merely 
numeric data anyway, mistakenly assuming that nothing better 
can be done. Failure to bring the needed skills to bear can only 
then result in anxiety, since the information produced is readily 
seen to be disconnected from the circumstances in which it is 
supposed to be applied. 
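
The rock-counting point can be made computable. Assuming a set of Rasch-calibrated item difficulties, the ability implied by a raw score is the theta whose model-expected score equals that count; the bisection search below is an illustrative helper under that assumption, not a library routine.

```python
import math

def expected_score(theta, difficulties):
    """Expected raw score under the Rasch model for ability theta."""
    return sum(1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties)

def ability_from_score(raw, difficulties, lo=-6.0, hi=6.0):
    """Bisection search for the theta whose expected score equals the raw count."""
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if expected_score(mid, difficulties) < raw:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

easy = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.0, 0.5, 0.5, 1.0, 1.0]  # easy 10-item test
hard = [b + 2.0 for b in easy]                                  # same items, 2 logits harder

print(round(ability_from_score(7, easy), 2))   # ~0.84 logits
print(round(ability_from_score(7, hard), 2))   # ~2.84 logits: same count, far more ability
```

The two-logit shift in item difficulty translates directly into a two-logit shift in the ability implied by the same raw count of seven correct, a difference the count by itself cannot reveal.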

Table 1. Statistical vs Scientific Modelling Paradigm Contrasts vis-à-vis Sustainable Change Conditions.

Vision
  Statistical paradigm: Centralized gathering and analysis of ordinal, instrument-dependent data for policy formation.
  Scientific paradigm: Distributed network of instruments traceable to common units informs and aligns end user decisions and behaviours.

Skills
  Statistical paradigm: Item writing and administration, response scoring, statistical summarization, reporting.
  Scientific paradigm: Construct definition and modelling, instrument calibration, item banking, adaptive end user application.

Incentives
  Statistical paradigm: Rewards for perceived goal attainment.
  Scientific paradigm: Shared success in general improvements to organizational viability.

Resources
  Statistical paradigm: Investments limited, as not accountable for or expected to produce significant returns.
  Scientific paradigm: Investments proportional to magnitudes of returns from improved efficiencies and market share.

Plan
  Statistical paradigm: Interprets ordinal scores as interval and all numeric differences as meaningful; no context for improvement provided.
  Scientific paradigm: Scales interval measures with individual uncertainty and data quality estimates; quantitative continuum qualitatively annotated to guide change efforts.

Implications for managing what is measured
  Statistical paradigm: Management focuses on moving numbers that matter within a restricted domain of limited observations, sometimes at the expense of the mission.
  Scientific paradigm: Management focuses adaptively on relevant tasks representing the mission, skipping tasks irrelevant to the challenges of the moment.

Implications for communication
  Statistical paradigm: Ordinal scores interpreted as interval, tied to a limited number of particular items, result in obscure and difficult comparisons.
  Scientific paradigm: Interval measures interpreted relative to an entire bank of calibrated items open up clear and transparent opportunities for learning.



 


2.4. Resources 

Resources invested in the statistical modelling approach to 
creating sustainable change are typically focused on minimizing 
expenditures in producing a one-time snapshot of the state of 
things used for setting policy going forward. No specific forms 
of returns are expected, so the investments made are not usually 
accountable except as expenses, which are kept to the lowest 
possible levels. The information produced is typically used only 
as a conspicuously displayed expression of the fact that attention 
is being focused in some way on matters of concern to an 
interested party. But with vision, skill sets, and plans limited to low-quality ordinal scores whose meanings are tied to the particular questions asked, the structural limits imposed on potential returns mean that only limited investments can be justified, and the usual result is a frustrating inability to advance.

2.5. Incentives 

In the context of the usual approach to statistical data 
modelling, incentives are usually cast in relation to achieving 
results defined in terms of counts, scores, or percentages. 
Student proficiency scores or patient/customer/employee 
satisfaction or performance ratings are interpreted as evidence of 
achievements that are then rewarded by recognition, bonuses, 
promotions, etc. But because the data are tied to responses to 
specific questions, and because they are moreover ordinal, 
nonlinear, and not mapped to variation in meaningful amounts 
of a measured construct, incentive systems like this are easily 
gamed. Even without the advantages of a perspective informed 
by scientific measurement, this general management problem is 
recognized as leading to confusion, conflict, inefficiency, and a 
lack of focus [28]. 

In education, for instance, having students memorize tasks 
known to be included in the items on a test can inflate scores 
without, however, actually improving proficiency. In more 
extreme cases, teachers and principals have conspired to change 
student test scores. Similarly, customer satisfaction surveys are often accompanied by requests for ratings at a specific level or higher. The explicit goal is to create an appearance of success
that can be rewarded in a public way that conveys an atmosphere 
of positive progress and overcomes resistance, even when the 
substantive failure to change anything is readily apparent to 
everyone involved. Because the vision, skills, and incentives are all focused on specific, discrete, concrete issues that can never adequately represent the abstract and formal levels of complexity, unfair biases that serve some agendas and undermine others are likely to make resistance, in one form or another, the usual consequence.

3. INNOVATIVE SCIENTIFIC MODELLING 

3.1. Vision 

An alternative vision as to why measurements are made 
focuses on modelling a decision process, calibrating instruments 
informing that process, distributing those instruments to front 
line decision makers, and gathering data for periodic analysis and 
summarization in reports used for quality improvement and 
accountability. This vision makes clear provisions for creating 
knowledge systems offering practical value beyond periodically 
produced reports. When the demands of effective knowledge 
infrastructures [21], [25], [29], [30] are met, data are reported at 
each level of complexity relevant to the demands of end users.  

Front line managers like teachers, clinicians, and others 
engaged in individualized care processes need denotative facts 
contextualized within learning progressions, developmental 
sequences, disease natural histories, etc. Practice management 
requires metalinguistic statistical summaries of interval logits 
reported to facilitate communication and comparability over 
time and space, within and across individuals, classrooms, clinics, 
schools, hospitals, etc. Accountability requires 
metacommunicative theoretical explanatory power that justifies 
decision processes at the metalinguistic and denotative levels. To 
the extent this is accomplished, one might reasonably expect less 
confusion to be produced than is commonly associated with the 
statistical approach. 
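
What such multilevel reporting might look like can be sketched in code. The person measures, standard errors, and two-level summary below are hypothetical assumptions for illustration: individuals are reported person by person at the denotative level, and each classroom is summarized, with propagated uncertainty, at the metalinguistic level.

```python
import math
from statistics import mean

# Hypothetical person measures (logits) paired with individual standard errors,
# grouped by classroom.
classrooms = {
    "A": [(0.3, 0.45), (0.9, 0.40), (-0.2, 0.50)],
    "B": [(1.4, 0.42), (1.1, 0.38), (0.7, 0.44)],
}

for name, persons in classrooms.items():
    measures = [m for m, _ in persons]
    # Standard error of the group mean, propagating only the individual
    # measurement uncertainties: sqrt(sum of SE^2) / n.
    se_mean = math.sqrt(sum(se**2 for _, se in persons)) / len(persons)
    print(f"Classroom {name}: mean {mean(measures):+.2f} ± {se_mean:.2f} logits")
```

Because all measures share the same logit metric, the same numbers serve the teacher reading person by person and the practice manager comparing classrooms over time.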

3.2. Planning 

Applications of the information obtained from scientifically 
modelled measurements require experimental tests 
substantiating the requirement that numeric differences of a 
given magnitude mean the same thing, within the range of 
estimated uncertainty. Measurements are interpreted and applied 
in relation to uncertainty ranges or confidence intervals, which 
makes it possible to tell if and when numeric differences are real 
and reproducible, or are simply random noise. In addition, 
because questions are scaled together to delineate a learning 
progression, developmental sequence, or quality improvement 
trajectory, measurements are interpreted substantively in relation 
to the amount of the construct represented at each scale level.  

Improvement efforts then can focus attention on the easiest 
tasks not yet accomplished. Now a foundation for sustainable 
change has been put in place by successes experienced at lower 
levels of difficulty. The result is that end users' behaviours and 
decisions are coordinated and aligned by their shared responses 
to the same information. When end users can, in addition, easily 
learn from one another by sharing knowledge, probabilities of 
breaking free of treadmill cycles are increased. 
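
How a calibrated continuum could direct improvement toward the easiest tasks not yet accomplished can be sketched as follows; the task names and calibrations are invented for illustration.

```python
# Hypothetical calibrated item bank: (task, difficulty in logits)
bank = [
    ("recognize terms", -1.5),
    ("apply procedure", -0.5),
    ("interpret results", 0.4),
    ("critique methods", 1.3),
    ("design a study", 2.2),
]

def next_steps(measure, bank, count=2):
    """Return the easiest tasks lying above the current measure:
    mastered material sits below it, distant challenges are deferred."""
    ahead = [(task, b) for task, b in bank if b > measure]
    return sorted(ahead, key=lambda tb: tb[1])[:count]

# A person or group measured at 0.0 logits should work next on
# 'interpret results', then 'critique methods'.
print(next_steps(0.0, bank))
```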

3.3. Skills 

The skills required for implementing scientific models of 
decision processes are considerably more technically and socially 
sophisticated than the skills associated with the usual statistical 
data modelling approach. All of the latter's skill sets are needed, 
as well as mastery of advanced conceptual tools involving 
construct mapping, assessment/survey item development, 
response scoring, mathematical model formulation, instrument 
calibration; measure, uncertainty, and data quality interpretation; 
knowledge system development, administrative and 
interpretation guidelines, user training, etc. These skills focus on producing knowledge that retains its meaning and properties across levels of complexity, suggesting the possibility of less anxiety than has attended the statistical approach.

3.4. Resources 

With experience, resources invested in the scientific 
modelling approach to creating sustainable change can be gauged 
for maximizing returns from ongoing improvements in 
efficiency and outcomes. As expectations concerning returns 
take shape, lessons are learned as to how the investments can be 
made accountable. With a vision, skill sets, and plans aimed at 
maximizing the value of high-quality interval measurements 
whose meanings are independent of the particular questions 
asked, investments proportionate to the expected returns can be 
justified, and the business plan can be scaled up as the market 
expands. 



 


3.5. Incentives 

In the context of scientifically modelling an overall decision 
process, incentives are shaped by involving everyone as 
participants in the creation of enhanced processes and outcomes. 
The overarching viability of the organization is placed front and 
centre. Incentives reward generalizable innovations that improve 
quality. Given common languages of comparison, everyone has 
the information they need to take responsibility for the outcomes 
in their care. Inputs that do not positively impact qualitatively 
and/or quantitatively measurable affective, cognitive, 
behavioural, etc. outcomes can be evaluated for removal.  

In the traditional statistical modelling approach, the maxim 
"you manage what you measure" becomes a cynical motto 
conveying how management can be distracted into superficial 
issues only peripherally related to the main operational focus of 
the organization. In the scientific modelling context, though, 
managing what is measured is akin to turning a wrench fitted on 
the head of a bolt that needs to be tightened: the tool is fit for 
purpose. The distribution of instruments calibrated to a common 
metric informs decision processes and data sharing that everyone 
can learn from quickly and easily. Incentives overcome resistance, then, by illuminating clear paths forward that increase the pride everyone takes in their work.

4. DISCUSSION 

Scientific modelling is superior to statistical modelling in the 
context of promoting sustainable change because, first, it 
provides a vision that encompasses the entire populations both 
of potential challenges that may emerge and of potential 
participants (employees, students, teachers, clinicians, suppliers, 
managers, etc.) who may engage with those challenges. This 
capacity follows from the focus of scientific models on the 
abstract construct represented in measurements, as opposed to 
the concrete data and specific questions focused on by statistical 
models. The usual statistical approach accepts ratings and scores 
as meaningful, even though their significance depends on the 
particular questions that were asked. So when challenges not 
represented in the questions and associated data emerge, those 
challenges are likely to be ignored, discounted, or distracting in 
ways that lead to confusion. Scientific models, in contrast, inform 
clarity by demanding a theoretical account supported by data and 
expressed in comparable metrics with known uncertainties.  

Second, scientific modelling dispels anxiety by bringing 
advanced expertise to bear on problem definition, construct 
mapping, instrument calibration, report generation, measure 
interpretation, and quality improvement applications. Though 
statistical modelling skill sets may, of course, be highly 
developed, many change initiatives are approached with little 
more experience than familiarity with spreadsheets and word 
processors. Though these latter rudimentary methods are 
commonly used, the importance of communicating meaningful 
results in well-defined terms will likely continue to exert an 
inexorable demand for higher quality knowledge. 

Third, because scientific modelling supports new degrees of 
rigorous comparability over time, new expectations for 
accountability and accounting can be expected to alter the quality 
and quantity of resistance-countering incentives that can be 
offered. Proportionate returns on investment will follow from 
fair and equitable measurements that are demonstrably 
reproducible and relevant to the challenges to innovation being 
faced. These kinds of returns should become the goal of change 
efforts, instead of incentive systems that can be gamed, creating the appearance of innovation by focusing on easily counted
signal events, with the associated demoralizing atmosphere that 
goes with widely perceived unfair advantages. 

Fourth, in the same vein, because the magnitudes of impacts 
are commonly estimated in the confusing terms of statistical 
scores, the resources brought to bear in change efforts are 
commonly insufficient to effect significant results, leading to 
continued frustration. The capacity to generalize and scale across 
contexts by means of a combination of explanatory theory, 
experimental evidence, and distributed instrumentation, 
however, leads to the clear definition of opportunities for 
investment likely to pay handsome returns.  

Fifth, where the statistical focus on improvement planning is 
typically guided by nothing more than the areas of failure or low 
ratings, the scientific approach maps the improvement trajectory. 
This is done in a way that more closely informs day-to-day activities by indicating where a process stands relative to its goal and showing what comes next in a logical sequence.
simply taking on the most difficult challenges with no attention 
to preparatory factors, the scientific approach attends to 
establishing baseline structures, processes, and outcomes in an 
orderly approach.  

Differences in circumstance across situations can be 
accommodated via adaptive selection of relevant tasks and 
challenges, without compromising overall comparability. This 
results in a visible documentation of small gains as progress 
toward the goal is made, as opposed to the feeling of being on a 
treadmill that results from not being oriented on a clear path 
toward defined goals. 

5. CONCLUSION 

Successful sustainable change initiatives depend on abilities to 
flexibly and quickly store and retrieve knowledge. Centralized 
repositories of low-quality information accessed infrequently are 
likely to result in muddled vision, inconsequential skill sets, 
ineffective incentives, insufficient resources, and incomplete 
plans. Distributed networks of instruments embodying high 
quality information, in contrast, offer the potential for 
counteracting the confusion, anxiety, resistance, frustration, and 
treadmills too commonly taken for granted. 

REFERENCES 

[1] T. P. Knoster, R. A. Villa, J. S. Thousand, A framework for 
thinking about systems change, In R. A. Villa & J. S. Thousand 
(Eds.), Restructuring for Caring and Effective Education, 
Brookes, Baltimore, 2000, pp. 93-128. 

[2] D. Andrich, Distinctions between assumptions and requirements 
in measurement in the social sciences, In J. A. Keats, R. Taft, R. 
A. Heath & S. H. Lovibond (Eds.), Mathematical and Theoretical 
Systems, Elsevier Science Publishers, 1989. 

[3] J. Cohen, The earth is round (p < .05), American Psychologist, 49 (1994), pp. 997-1003. Online [Accessed 19 December 2022] https://psycnet.apa.org/record/1995-12080-001

[4] O. D. Duncan, M. Stenbeck, Panels and cohorts, In C. C. Clogg (Ed.), Sociological Methodology 1988, American Sociological Association, New York, 1988, pp. 1-35.

[5] W. P. Fisher, Jr., Statistics and measurement: Clarifying the 
differences, Rasch Measurement Transactions, 23 (2010), pp. 
1229-1230. Online [Accessed 19 December 2022]  
http://www.rasch.org/rmt/rmt234.pdf  

[6] P. E. Meehl, Theory-testing in psychology and physics: A 
methodological paradox, Philosophy of Science, 34 (1967), pp. 
103-115. 
DOI: 10.1086/288135  



 


[7] J. Michell, Measurement scales and statistics: A clash of paradigms, 
Psychological Bulletin, 100 (1986), pp. 398-407.   
DOI: 10.1037/0033-2909.100.3.398 

[8] D. Rogosa, Casual [sic] models do not support scientific 
conclusions: A comment in support of Freedman, Journal of 
Educational Statistics, 12 (1987), pp. 185-95.   
DOI: 10.3102/10769986012002185  

[9] M. Wilson, Seeking a balance between the statistical and scientific 
elements in psychometrics, Psychometrika, 78 (2013), pp. 211-
236.   
DOI: 10.1007/s11336-013-9327-3  

[10] T. Hopper, Stop accounting myopia: think globally: A polemic, 
Journal of Accounting & Organizational Change, 15 (2019), pp. 
87-99.   
DOI: 10.1108/JAOC-12-2017-0115 

[11] A. Akmal, R. Greatbanks, J. Foote, Lean thinking in healthcare, 
Health Policy, 124 (2020), pp. 615-627.   
DOI: 10.1016/j.healthpol.2020.04.008 

[12] A. N. Whitehead, Science and the modern world, Macmillan, New 
York, 1925. 

[13] R. D. Luce, J. W. Tukey, Simultaneous conjoint measurement, 
Journal of Mathematical Psychology, 1 (1964), pp. 1-27.   
DOI: 10.1016/0022-2496(64)90015-X 

[14] G. Rasch, Probabilistic models, Danmarks Paedogogiske Institut, 
Copenhagen, 1960. 

[15] W. P. Fisher, Jr., Invariance and traceability for measures of 
human, social, and natural capital. Measurement, 42 (2009), pp. 
1278-1287.   
DOI: 10.1016/j.measurement.2009.03.014 

[16] L. Pendrill, Quality assured measurement, Springer, Cham, 2019. ISBN 978-3-030-28695-8.

[17] L. Mari, M. Wilson, A. Maul, Measurement across the sciences, Springer, Cham, 2021. ISBN 978-3-030-65558-7.

[18] R. Othman, N. A. Hashim, Typologizing organizational amnesia, 
The Learning Organization, 11 (2004), pp. 273-284.   
DOI: 10.1108/09696470410533021  

[19] C. Pollitt, Institutional amnesia, Prometheus, 18 (2000), pp. 5-16.  
DOI: 10.1080/08109020050000627 

[20] E. Hutchins, The cultural ecosystem of human cognition, 
Philosophical Psychology, 27 (2014), pp. 34-49.   
DOI: 10.1080/09515089.2013.830548 

[21] S. L. Star, K. Ruhleder, Steps toward an ecology of infrastructure, 
Information Systems Research, 7 (1996), pp. 111-134.   
DOI: 10.1287/isre.7.1.111  

[22] J. Sutton, C. B. Harris, P. G. Keil, A. J. Barnier, The psychology 
of memory, extended cognition, and socially distributed 
remembering, Phenomenology and the Cognitive Sciences, 9 
(2010), pp. 521-560.   
DOI: 10.1007/s11097-010-9182-y  

[23] H. Plattner, C. Meinel, L. Leifer (Eds.), Design Thinking Research: 
Measuring Performance in Context, Springer Science & Business 
Media, Cham, 2012. 

[24] A. Royalty, B. Roth, Mapping and Measuring Applications of 
Design Thinking in Organizations, In Design Thinking Research 
(pp. 35-47), Springer International Publishing, Cham, 2016. ISBN 
978-3-030-76324-4 

[25] W. P. Fisher, Jr., E. P.-T. Oon, S. Benson, Rethinking the role of 
educational assessment in classroom communities, Educational 
Design Research, 5 (2021), pp. 1-33.   
DOI: 10.15460/eder.5.1.1537  

[26] W. P. Fisher, Jr., Imagining education tailored to assessment as, for, and of learning, Assessment and Learning, 2 (2013), pp. 6-22.
Online [Accessed 19 December 2022]  
https://www.researchgate.net/profile/William-Fisher-
Jr/publication/259286688_Imagining_education_tailored_to_ass
essment_as_for_and_of_learning_theory_standards_and_quality
_improvement/links/5df56a2592851c83647e7860/Imagining-
education-tailored-to-assessment-as-for-and-of-learning-theory-
standards-and-quality-improvement.pdf  

[27] P. Black, M. Wilson, S. Yao, Road maps for learning, 
Measurement: Interdisciplinary Research and Perspectives, 9 
(2011), pp. 1-52.   
DOI: 10.1080/15366367.2011.591654 

[28] M. C. Jensen, Value maximization, stakeholder theory, and the 
corporate objective function, Journal of Applied Corporate 
Finance, 22 (2010), pp. 32-42.   
DOI: 10.1111/j.1745-6622.2010.00259.x 

[29] W. P. Fisher, Jr., Contextualizing sustainable development metric 
standards, Sustainability, 12 (2020), pp. 1-22.   
DOI: 10.3390/su12229661 

[30] W. P. Fisher, Jr., Bateson and Wright on number and quantity, 
Symmetry, 13 (2021) 1415.   
DOI: 10.3390/sym13081415 

 

 
