WOULD ARTIFICIAL INTELLIGENCE MAKE STRATEGY ‘LESS HUMAN’?

Julia M. Hodgins, King’s College London 

Canada 

Abstract 

This article discusses the influence of artificial intelligence (AI)—specifically Narrow AI—in the formulation of strategy, arguing that there is no straightforward answer to the question posed in the title. The impact of Narrow AI on strategic decision-making will not fundamentally alter the nature of strategy, owing to the impossibility of programming human faculties such as rationality and intentionality. Nevertheless, the article concludes that ethical issues in the global environment will sustain the basis of strategy primarily as a human and political activity for the foreseeable future. Firstly, this piece reviews overarching definitions. Secondly, it discusses how Narrow AI affects strategy’s formulation through the predictive power already developed; three illustrative examples substantiate the discussion. Thirdly, it discusses how ethical factors limit Narrow AI’s influence at the core of strategy, so that it remains a human activity first and foremost. Discussions related to tactical applications of AI—for example, drones—are out of the scope of this analysis.

Introduction 

Hollywood portrays Artificial Intelligence (AI) as capable of threatening humankind’s survival or, at the very least, of redefining society in every regard. For instance, in the narrative of the film Ex Machina (Garland, 2014), Ava—a humanoid unit of Super AI developed by the company Blue Book—beguiles the programmer assessing her to the point of limerence, deceives him, murders her creator, and heads straight for Blue Book headquarters, presumably to take control of everything: ‘heartless,’ conniving, and overpowering. While this cinematic narrative is not entirely fanciful, its materialization remains confined to the imagination; fears of Artificial Intelligence dominating humankind will remain unfounded for a long while.

Artificial Intelligence develops along two broad avenues. The first, often referred to as Modular or Narrow AI (hereafter Narrow AI), comprises units that perform specific tasks, some of which already influence the basis of strategy. The second, General AI or Artificial General Intelligence, is a type of unit expected to surpass the performance of human intelligence, culminating in Super AI (Advani, 2021). This paper limits its scope to present capabilities, Narrow AI, since General and Super AI are not yet fielded; the former remains a theoretical concept and the latter, “almost… science-fiction” (Advani, 2021, p. 25).

Max Tegmark’s definition of Artificial Intelligence overarches this analysis: a “non-biological intelligence” (Tegmark, 2017, p. 55), framed within his definition of intelligence as the “ability to accomplish complex goals” (Tegmark, 2017, p. 71). This approach accommodates diverse tasks ranging from self-awareness to problem-solving, while dismantling the human monopoly on intelligence without closing the main gap AI theorists identify between human and artificial intelligence: that Narrow AI lacks a body and emotions (Payne, 2018), as discussed later. This marks Narrow AI as different from human intelligence; the former cannot emulate the latter. The distinction is relevant because, on one hand, emotions enable humans to display “emotional reasoning” (Ayoub & Payne, 2016, p. 798), facilitating the interplay of memories with present affect and molding risk tolerance (Ayoub & Payne, 2016). On the other hand, cognition is embodied in the human architecture (Payne, 2018), which is, still, the referent object and the agent of strategy inside and outside the battlefield; not least, the body provides valuable contextual information used to adjust offensive and defensive strategies before and during operations.

The interweaving of cognition and strategy heightens the relevance of that distinction and, despite the said limitations, facilitates Narrow AI’s impact. Narrow AI units learn specific tasks in two ways. One is Machine Learning (ML), in which a unit’s algorithms complete one—and only one—task by relentlessly exercising different pathways, adjusting actions, and perfecting performance without altering the architecture of the hosting unit, in either software or hardware. ML cannot, however, manage messy, unstructured evidence; it demands data in a consistent, pre-defined, and recognizable format. The other is Deep Learning (DL), in which the unit learns to classify data, infer, predict results, recognize patterns—even damaged or incomplete ones—and can make sense of conflicting, messy data, since it uses Artificial Neural Networks (Neural AI). Neural AI consists of individual units that each process one thing at a time but are interconnected in large networks performing identical tasks in parallel, thus adapting more flexibly. Unlike ML, Neural AI can learn new tasks associated with the initial one, which is an expansive advantage, but cannot move onto other domains (Ayoub & Payne, 2016, p. 795). Both ML and DL are the substance of Narrow AI’s domain-specific impact on the formulation of strategy.
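To make that contrast concrete, here is a minimal sketch of the ML idea, illustrative only and not drawn from the cited sources, with invented data: a unit that relentlessly iterates on one, and only one, rigidly formatted task.

```python
import numpy as np

# Hypothetical sketch of the ML idea above: a logistic classifier
# that iteratively refines its parameters on one rigidly formatted
# task, never altering its own architecture.
rng = np.random.default_rng(0)

# Invented, consistently formatted dataset: two numeric features
# per sample, binary labels. ML demands exactly this regularity.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.1

for _ in range(1000):                       # relentless iteration
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # current predictions
    w -= lr * (X.T @ (p - y)) / len(y)      # adjust the action...
    b -= lr * np.mean(p - y)                # ...perfecting performance

accuracy = np.mean(((X @ w + b) > 0) == y)
print(f"accuracy on the single learned task: {accuracy:.2f}")
# Anything outside the fixed two-feature format is unusable to this
# unit: the "narrow" in Narrow AI.
```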

Colin Gray’s definition of strategy dominates security, defense, and statecraft doctrines: “the bridge that purposefully should connect means with ends,” adding that strategy serves policy (Gray, 2010, p. 43) through a decision-making process at the service of the group’s benefit. The definition’s argument can be extended to strategy serving statecraft and, ideally, citizens’ interests, circumscribing it to instrumental decision-making. Kenneth Payne discusses strategy in terms of psychology and evolution, from prehistory to the future, as the “purposeful use of violence for political ends” (Payne, 2018, p. 28). Its political nature highlights a link between decision-making, cognition, and group survival. Purposefulness in serving the group—the collaborative pursuit of shared goals while managing conflict with other groups, a point common to both authors—legitimizes violence in the pursuit of survival and security (Payne, 2018). This, according to Christopher Coker, is the motivation of our strategic intelligence (Coker, 2019). Strategy uses all skills and resources to formulate a way of achieving the group’s ends in any context, making it fluid, flexible, and adversarial; exercising creativity and imagination; shaping and shaped by culture (Ayoub & Payne, 2016). This reveals cognition underlying strategy, the two cultural products evolving and informing each other. Thus, if Narrow AI amounts to a revolution in cognition, it impacts strategy too (Payne, 2018), albeit in a domain-specific fashion (Ayoub & Payne, 2016).

Narrow AI affects the formulation of strategy with its unparalleled predictive power. As a yardstick: if humans’ forecasting ability in complex situations were particularly keen, many campaigns—whether commercial or military—would have reached their expected ends. Equally telling, the industry of tools for analysis, forecasting, and planning would not have developed as comprehensively and diversely as it has, especially after WWII, when strategy and planning cross-pollinated from warfare to business; a quick academic search using ‘forecasting’ as a keyword can return more than three million documents. Narrow AI analyzes overwhelmingly copious quantities of data at a velocity unreachable by humans (Advani, 2021), with impeccable management of statistical calculations. Narrow AI infers across immense iterations, either following a predefined output and a labeled dataset with pre-specified categories, or, without those, by identifying patterns upon which it elaborates predictions (Ayoub & Payne, 2016). Forecasts become more meaningful as Narrow AI learns to optimize its results by iterating its algorithms and as its processing methods are refined to the data (Ayoub & Payne, 2016). Benjamin Jensen et al. call for “algorithmic warfare” (Jensen et al., 2020, p. 527), arguing that the state which best manages its data will acquire long-term military advantage, signaling the unprecedented gain for strategic decisions that Narrow AI’s predictive power represents. Breadth, depth, systematicity, thoroughness, and speed characterize the organized, sequential processing of Narrow AI (Ayoub & Payne, 2016); its outputs carry only one tier of bias, the one humans transfer unintentionally when designing the algorithms—and a potentially correctable one. This constitutes Narrow AI’s insurmountable impact on strategy formulation.
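As an illustrative aside, the second inference mode described above, eliciting patterns from data without pre-specified categories, can be caricatured in a few lines. This is a minimal hypothetical sketch with invented data, not an implementation from the cited sources.

```python
import numpy as np

# Sketch of pattern identification without labels: no predefined
# output, no pre-specified categories -- the unit discovers the
# structure (clusters) on its own, then predicts from it.
rng = np.random.default_rng(1)

# Hypothetical unlabeled observations drawn from two regimes.
data = np.vstack([rng.normal(-2, 1, (100, 2)),
                  rng.normal(+2, 1, (100, 2))])

k = 2
centers = data[rng.choice(len(data), k, replace=False)]

for _ in range(50):  # iterate until the pattern stabilizes
    # assign each observation to its nearest center
    labels = np.argmin(
        ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2),
        axis=1)
    # move each center to the mean of its assigned observations
    centers = np.array([data[labels == i].mean(axis=0)
                        for i in range(k)])

print("discovered pattern centers:\n", centers)
# Predictions then follow from the discovered structure, e.g.
# assigning a new observation to the nearest cluster.
```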

Human processing, by contrast, is susceptible to two tiers of bias: the same one from the dataset, and a second one from either the analysts’ perception (i.e., confirmation, tunnel vision, optimism, selectiveness, amongst others) or cognition, “mental errors predictable and consistent,” according to Richards Heuer (Heuer, 2020, p. 58). Narrow AI, moreover, is not prone to escalating conflicts to satisfy egos, to risk-aversion or risk-seeking, to dismissing opposing perspectives, or to physiological influences such as emotions, stress, fatigue, or nutrition (Ayoub & Payne, 2016). Integrating Narrow AI in data management and analysis at the service of strategy formulation provides a sustainable advantage, one which unfolds most successfully under human-machine cooperation, where machines are ‘team-mates’ whose capabilities are in constant expansion (Coker, 2019; Dear, 2019). The combination of two different intelligences, with one complementing the other’s shortcomings, warrants this approach; some military bodies have explored the edge it offers. For instance, Microworlds Analysis models war contexts with multiple simulated events under specific rules of engagement, which Narrow AI runs iteratively at incredible speed, suggesting strategies and tactics not evident before and enhancing the strategic abilities of participating officers.
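A toy illustration of that iterative-simulation idea follows. It is a hypothetical sketch, not the actual Microworlds Analysis system; every strategy, probability, and rule in it is invented.

```python
import random

random.seed(0)

# Hypothetical, heavily simplified engagement model: candidate
# strategies are simulated many times under fixed "rules of
# engagement," and their estimated success rates compared.
def simulate(strategy: str) -> bool:
    """One simulated engagement; returns True on mission success."""
    # Assumed toy parameters, not drawn from any real doctrine.
    base = {"frontal": 0.45, "flanking": 0.35}[strategy]
    weather = random.uniform(-0.10, 0.10)  # contextual noise
    surprise = (0.25 if strategy == "flanking" and random.random() < 0.5
                else 0.0)
    return random.random() < base + weather + surprise

def estimate(strategy: str, runs: int = 100_000) -> float:
    """Monte Carlo estimate of a strategy's success probability."""
    return sum(simulate(strategy) for _ in range(runs)) / runs

for s in ("frontal", "flanking"):
    print(f"{s:>9}: estimated success {estimate(s):.3f}")
# Iterated at machine speed, such runs can surface options whose
# value was not evident before: here, the occasional surprise
# bonus makes flanking outperform its lower base rate.
```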

Calculations of risk and success are essential in strategy formulation as well. Narrow AI further extends human statistical capabilities, offering insights that minimize casualties, based on sound probabilistic estimation conducted over multiple parameters, at high speed, and without dismissing moving pieces (Ayoub & Payne, 2016). Lastly, Bernardcodie, a Neural AI trained on the US National Security Strategy (NSS) archive, identifies recurring topics within the past rhetoric of NSS documents using DL pattern recognition, weighs the repeated wording, and deep-writes strategy using probabilistic text prediction. Bernardcodie’s largest contribution, hitherto, is combining Narrow AI’s skills with human ones to understand past strategy (Wicker, 2021).
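The probabilistic text prediction behind such deep-writing can be caricatured with a simple bigram model. This is a minimal hypothetical sketch (the system Wicker describes is a far larger neural model), and the corpus below is invented.

```python
import random
from collections import defaultdict

random.seed(0)

# Minimal bigram language model: learn which word tends to follow
# which, then "deep-write" new text by sampling those statistics.
# The corpus is invented for illustration, not a real NSS archive.
corpus = (
    "the united states will defend its interests . "
    "the united states will strengthen its alliances . "
    "we will defend our allies and deter aggression . "
).split()

# Count word-to-next-word transitions (the recurring wording).
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(seed: str, length: int = 12) -> str:
    """Sample a continuation: each next word is drawn in proportion
    to how often it followed the current word in the corpus."""
    words = [seed]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the"))
# e.g. "the united states will defend our allies and deter aggression ."
```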

Notwithstanding, Narrow AI cannot replicate the most human part of strategy, in at least two significant ways. First, the “theory of mind” (Payne, 2018, p. 6)—the human ability to reflect upon others’ beliefs and behaviours. Conflicting sides read each other’s actions aiming to interpret intentions accurately; such interpretations become a primordial input molding the warfare exchange. This interpretive process involves the conflicting parties and their shared context (Quintanilla, 2019), and is equivalent to the “orders of intentionality” posited by Coker (2019, p. 58): humans can develop and hold up to five layers of reflection—orders of intentionality—upon their own ideas and behaviors or those of others. This profoundly psychological aspect of strategy consists of figuring out how each side’s motivations and attitudes unfold through the conflict, and how to influence the adversary’s behavior to achieve one’s ends (Payne, 2018). Narrow AI lacks intentionality, is unable to process the subtleties of shifting attitudes, and cannot imagine an opponent’s rational internal dialogue (Ayoub & Payne, 2016). The theory of mind provides an avenue for deterrence inasmuch as it entails a calculated influence on the behavior of adversaries and allies, marked by the nuances of language and interpretation (Corbett & Binednagel-Sehovic, 2019), supporting the argument of Narrow AI’s limited influence on the basis of strategy—the purposeful connection between ends and means.

Second, Narrow AI lacks the human architecture and its correlated emotions, both substantial for cognition, since human biology and the instinctual motivation springing from it are a source of information in any context; all of which is then used to adjust strategy and tactics (Boden, 2016; Payne, 2018). Narrow AI cannot process the instability of obscure and fluctuating contexts (Ayoub & Payne, 2016), losing mastery of battlefield dynamics, which precludes it from playing a vivid and commanding role in strategy. Human forecasting may or may not win a war or a market, but it can process the uncertain with scarce, imperfect information, which Narrow AI cannot.

By extension, the ethical consequences of decisions affecting strategy prevent Narrow AI’s impact from becoming foundational. The laws of warfare, as made by humans, are profoundly anthropocentric (Asaro, 2012) and are applied through rational discourse rather than through the logic of informatics, precluding them from being programmable. Neural AI is far from developing ethical reflection and/or moral standing (Boden, 2016), and probability is not a way to process ethics and laws. Rules of engagement assume a human agent at the centre of the battlefield re-orienting course, shifting tactics, or re-purposing elements into either weapons or defences. The inception of autonomous weapons—weapon systems programmed to make decisions without human intervention (Ayoub & Payne, 2016)—brings ethical and legal dilemmas to the field; for instance, the decision to shoot, to which strategy cannot be oblivious. In deciding to use lethal force, the proportionality of attacks and counterattacks on the battlefield must be contemplated considering both the lives of combatants (military or not) engaged in the fight and the lives of non-combatants who risk being caught in the crossfire (Asaro, 2012; Payne, 2021). No matter how efficient and useful Narrow AI may be, some theorists remark that the decision to use lethal force, the shot itself, and its consequences remain within the scope of human agents (Coker, 2019). This type of reflection and decision-making, simultaneously ethical and strategic, escapes Narrow AI’s capabilities, not only because it relates to unpredictable battlefield contexts but because it responds to moral life and laws (Coker, 2019). Understanding legal frameworks that vary and overlap across state and interstate levels demands concomitantly making sense of, and judging upon, those frameworks; something already challenging for humans. Narrow AI’s mastery resides in its relentless obedience to commands and protocols (Walsh, 2017), supporting the argument of its limited impact on the basis of strategy.

This argument becomes more relevant when considering that armed conflicts increasingly occur in urban settings, and the combatants engaged are often non-state and non-military actors (resisting citizens, insurgents, terrorists, and mercenaries). Furthermore, Toby Walsh argues that Artificial Intelligence drone operations aggravate tensions and escalate irregular combat (Walsh, 2017). The cognitive ability to process ‘ethically troubling’ issues and produce strategic decisions represents a challenge beyond Narrow AI’s capabilities, as argued by Payne (2021), underscoring Narrow AI’s limited influence at the basis of strategy, which cannot remain oblivious to the ethical issues unfolding from strategic and tactical decisions. In addition, theorists have posited the irregular character of conflicts as characteristic of the twenty-first century (Krieg & Rickli, 2019). Increasingly, intra-state processes cross borderlines to become trans-state conflicts (Gray, 2006). Lacking theory of mind, body, and emotions precludes full strategic ability in the current global context, where warfare and state boundaries increasingly blur and imperfect information puzzles both humans and machines, supporting the argument of Narrow AI’s limited impact on the basis of strategy.

Finally, the legal system that prosecutes human rights violations and war crimes is as human an activity as we expect the decision to shoot on the battlefield to remain. Prosecuting war crimes relies on agents and participants intending to persuade courts and jurors with arguments that combine reason and emotion, aiming to close the gaps between intangible laws and war doctrines and specific contexts and facts. Algorithms cannot cross-examine, argue, and counterargue to convince jurors with empathy and rationality, or imagine how to impart justice in a court. Were they to do so, as Peter Asaro contends, the right to due process would be undermined, since the justice system is ultimately founded on human judgment (Asaro, 2012): it is informed by evidential warrants, empathy, and compassion, and deeply imbued with moral and ethical considerations referring to the specific applicable laws and regulations. Strategy must consider this system of legal prosecution framing warfare, conflict, peacebuilding, and international relations, as it shapes the environment in which strategy unfolds, demonstrating that Narrow AI’s impact cannot reach, much less change, the basis of strategy.

Conclusion 

There is no straightforward answer to the question posed in the title. Outperforming humans in managing, and extracting new knowledge from, copious datasets in a fraction of the time, with only one tier of fixable bias, provides insurmountable advantages for strategy formulation. Simultaneously, the core of strategy—purposeful cognition that uses all available resources to secure the group’s survival—is not fundamentally altered. Lacking emotions and the human architecture is at once Narrow AI’s benefit and its limitation.

While Artificial Intelligence is here to stay and is progressively deployed in many domains—particularly in cybersecurity, which constitutes a new battlefield—the very basis of strategy will remain profoundly human and political. Instead, within an integrative approach where humans and machines are team-mates, grounded in the complementary differences of their intelligences, Narrow AI’s predictive power provides unprecedented geopolitical advantage, enriching strategy-formulation abilities in and out of the battlefield.

In addition, the ethical and legal issues of the global strategic environment framing security, defence, and statecraft—warfare included—preclude the full automation of strategy through Narrow AI, upholding the human basis of strategy’s nature. In that light, strategy will remain human and political for the foreseeable future, in and out of the battlefield; a foundational impact of Artificial Intelligence on the basis of strategy is yet to be seen.

  




References 

Advani, V. (2021, February 11). What is Artificial Intelligence? How does AI 

work, Types and Future of it? Great Learning. 

https://www.mygreatlearning.com/blog/what-is-artificial-intelligence/ 

Asaro, P. (2012, June). On banning autonomous weapons systems. International Review of the Red Cross, 94(886), 687–709. https://www.cambridge.org/core/journals/international-review-of-the-red-cross/article/on-banning-autonomous-weapon-systems-human-rights-automation-and-the-dehumanization-of-lethal-decisionmaking/992565190BF2912AFC5AC0657AFECF07

Ayoub, K. & Payne, K. (2016). Strategy in the age of artificial intelligence. Journal of Strategic Studies, 39(5-6), 793–819. https://doi.org/10.1080/01402390.2015.1088838

Boden, M. A. (2016, June 9). AI: Its nature and future. Oxford University 

Press.  

Coker, C. (2019, April 30). Artificial intelligence and the future of warfare. 

Scandinavian Journal of Military Studies, 2(1), 55–60. 

https://doi.org/10.31374/sjms.26 

Corbett, A. & Binednagel-Sehovic, A. (2019, April). Acculturation of the core concepts of European security. In NATO Science and Technology Organization (pp. 1–14). NATO. https://doi.org/10.14339/STO-MP-SAS-141

Dear, K. (2019). Artificial intelligence and decision-making. The RUSI Journal, 

164(5-6), 18–25. https://doi.org/10.1080/03071847.2019.1693801 

Garland, A. (Director). (2014). Ex Machina [Film]. Film4 & DNA Films.  

Gray, C. (2006). Another bloody century. Phoenix.  

Gray, C. (2010, September). The strategy bridge: Theory for practice. Oxford 

University Press.  

Heuer, R. J. (2020, March 5). Psychology of intelligence analysis. Pickle Partners Publishing.




Jensen, B. M., Whyte, C. & Cuomo, S. (2020, September). Algorithms at war: 

The promise, peril, and limits of artificial intelligence. International 

Studies Review, 22(3), 526–550. https://doi.org/10.1093/isr/viz025 

Krieg, A. & Rickli, J-M. (2019, June 1). Surrogate warfare: The transformation 

of war in the twenty-first century. Georgetown University Press.  

Payne, K. (2021, December 1). I, Warbot: The dawn of artificially intelligent 

conflict. Oxford University Press.  

Payne, K. (2018, April 5). Strategy, evolution, and war: From apes to artificial 

intelligence. Georgetown University Press. 

Quintanilla, P. (2019). La comprensión del Otro. Lima: Fondo Editorial PUCP, 

32(1). http://dx.doi.org/10.18800/arete.202001.011 

Tegmark, M. (2017, August 29). Life 3.0: Being human in the age of artificial 

intelligence. Knopf.  

Walsh, T. (2017, September 7). Android dreams: The past, present and future 

of artificial intelligence. C Hurst & Co Publishers Ltd. 

Wicker, E. (2021, April 14). Strategy in the artificial age: Observations from teaching an AI to write a U.S. national security strategy. War on the Rocks. https://warontherocks.com/2021/04/strategy-in-the-artificial-age-observations-from-teaching-an-ai-to-write-a-u-s-national-security-strategy/

Author Biography 

Julia M. Hodgins is the Strategic Communications Lead for ITSS Verona 

Summer School, Analyst in the team “Culture, Society, and Security,” and online 

facilitator in the upcoming Summer School 2022. Her research interests are 

gender security, cyber security, strategy, social equality, and decolonization. 

Julia is the lead researcher and co-author of the chapter “El Perú a Través de 

Nuestros Ojos” in the forthcoming book Más allá del Bicentenario: Tareas 

Pendientes (Ed. Mariela Noles); and produced the Docu-podcast “Indigenous 

Languages in Music.” Julia holds a BA (Honours) in Sociology with a concentration in social research from the University of the Fraser Valley (UFV – Abbotsford). Currently, she is a candidate for the MA in International Affairs at King’s College London.

Author’s Note: The views contained in this article are the author’s alone. 




 This work is licensed under a Creative Commons Attribution-

NonCommercial-NoDerivatives 4.0 International License. 

© (JULIA HODGINS, 2022) 

Published by the Journal of Intelligence, Conflict, and Warfare and Simon Fraser 

University 

Available from: https://jicw.org/ 
