Original Research

A dual processing approach to complex problem solving

Wolfgang Schoppek
Institute of Psychology, University of Bayreuth, Bayreuth, Germany

This paper reflects on Dietrich Dörner's observation that participants working on complex dynamic control tasks exhibit a "tendency to economize", that is, they tend to minimize cognitive effort. This observation is interpreted in terms of a dual processing approach; it is explored whether the reluctance to adopt Type 2 processing could be rooted in biological energy saving. There is evidence that the energy available for the cortex at any point in time is quite limited. Therefore, effortful thinking comes at the cost of neglecting other cortical functions. The proposed dual processing approach to complex problem solving is investigated in an experiment where cognitive load was varied by means of a secondary task to make Type 1 or Type 2 processing more likely. Results show that cognitive load had no effect on target achievement and knowledge acquisition. Even in the single task condition, many participants seem to prefer Type 1 processing, supporting Dörner's observation.

Keywords: dynamic decision making; problem solving; dual processing; theory

Progress in an area of research is stimulated by discoveries and new theories. In the area of complex problem solving (CPS), where the handling of uncertain and dynamic situations is investigated1, both are scarce. As for discoveries, one can even doubt if there were any. One candidate for both is Dörner's (1996) observation that failure in the process of CPS follows a certain logic, with the features of complex problems and the limitations of human thinking as premises. For example, problem solvers often focus on a central variable, to which they attribute too much explanatory power (e.g., job satisfaction in an economic scenario). The resulting failure, based on the neglect of other important variables, can be deduced from the conjunction of a tendency to economize thinking (Dörner, 1996) on the side of the problem solver, and the features of complexity and connectedness on the side of the problem. A virtue of Dörner's conception is its comprehensiveness: he fruitfully brought together ideas from very different sources.

Because (complex) problem solving is a vast research topic, which intersects with many established areas of psychology, such as memory, decision making, motivation, or judgement, I am convinced that only a comprehensive, holistic approach can yield progress. In the present paper, I pick up Dörner's concept of the tendency to economize, connect it with the idea of dual processing, and explore what predictions can be derived from this. For this purpose, the dual processing approach is contrasted with the current "standard model of CPS" (Fischer, Greiff, & Funke, 2012; Schoppek & Fischer, 2017) by means of an exploratory experiment, which is in part replicated in a second experiment.

Kahneman (2011) called the human judge (or problem solver) a "cognitive miser" – a person who mostly relies on intuitive judgement and uses reasoning sparingly. Kahneman assigns intuitive judgment to "System 1" and thinking to "System 2" (see below). The resemblance of cognitive miserliness to the tendency to economize establishes a connection between Dörner's idea and dual processing theories: the tendency to economize consists of a strong preference for System 1 and reluctant use of System 2.

Corresponding author: Wolfgang Schoppek, Institute of Psychology, University of Bayreuth, Bayreuth, Germany. E-mail: Wolfgang.Schoppek@uni-bayreuth.de

1 A more detailed, current definition is given by Dörner & Funke (2017) and below.
Conceptual preliminaries

Some terms in problem solving research are used with varying meanings. Therefore, before the presentation of the research questions I shall define the core concepts. I use the term "complex problem solving" in the tradition of Dörner (1996) for human goal-directed activities in situations which are characterized by a relatively large number of relevant variables (complexity), which influence each other in various ways (connectedness), and some of which change their values autonomously (dynamics). The problem solver neither knows exactly what variables are relevant, nor knows all current values (intransparency). In such situations, more than one goal can be reasonably pursued, whereby the goals typically cannot be maximized at the same time (polytely). This definition has been criticized for lacking precision and operationalization (e.g., Quesada et al., 2005). However, in problem solving research it is widely accepted that whether a task can be classified as a problem or not depends on the knowledge of the problem solver (Öllinger, 2017). Similarly, CPS research situations can be more or less typical for a "complex problem". In my opinion, the focus should be on the processes on the side of the problem solver; and insights about these do not depend on the exact classification of the problem. For the problems that persons are asked to solve, I often use the term "complex dynamic control (CDC) tasks". This term originates in the literature on technical process control (e.g., Woods et al., 1990), and many authors use it (e.g., Osman, 2010; Davis et al., 2020).

In cognitive psychology, the term "strategy" is often used generally for any course of action or sequence of cognitive processes. In its original (military) context, a strategy is an abstract approach to a problem, which needs to be substantiated in real situations (Clausewitz, 1832/1991). If a course of action can be implemented directly, it is a tactic rather than a strategy. A tactic is a relatively concrete procedure. With this terminology, much that is referred to as strategy could more precisely be called a tactic. I shall discuss a last problematic term, "intuition", after a short introduction to the basic concepts of the dual processing approach.

The dual processing account of human thinking, decision making, and problem solving

The core proposition of the dual processing (DP) account is that there are two modes (or types) of information processing, which differ in their characteristics, which work in parallel, and which may come to different conclusions about the given information. For example, in view of a piece of cake, Type 1 processes may quickly raise the impulse of eating it, whereas Type 2 processes may involve the recollection of an intention to lose weight and mobilize resistance against the temptation. Initially, the two modes of processing were described as systems with characteristic features.
For example, System 1 typically works fast, parallel, automatic, and modality specific; in contrast, System 2 is described as being slow, serial, controlled, and flexible (Evans, 2008).

The problem with these characterizations is that they are neither sufficient nor necessary. It is simply not true that all information processing that is slow is also serial, controlled, and flexible. In addition, it is unlikely that there are exactly two systems for processing information. The processes assigned to System 1 in particular (e.g., pattern recognition, procedural knowledge) are too diverse to be subsumed under a unitary system. Therefore, the characterization as two systems was abandoned, and newer conceptions classify processes as belonging to two types of processing. According to Stanovich and Toplak (2012), the defining feature of Type 1 processes is their autonomy: "The execution of Type 1 processes is mandatory when their triggering stimuli are encountered, and they are not dependent on input from high-level control systems" (p. 7). Likewise, the central feature of Type 2 processes is the function of decoupling representations created by hypothetical reasoning from representations of the real world (ibid.). Evans (2012) assigns working memory a critical role for that function. Taken together, Type 2 processes largely overlap with the contemporary conception of executive functions (Diamond, 2013): working memory, inhibitory control, and cognitive flexibility.

Previous approaches for explaining CPS behavior

How can extant approaches for explaining CPS behavior be located in the framework of DP? Some of them describe problem solving behavior in terms of Type 1 processing. Broadbent, Fitzgerald, and Broadbent (1986) found that participants who successfully controlled simple dynamic systems (e.g., the sugar production task, viz. Sugar Factory) were not able to answer questions about the causal structure of the systems correctly. They were also not able to predict what effects given input values would have on the target variables. From this, Broadbent et al. concluded that participants had learned to control the systems by using a mental "lookup table". Dienes and Fahey (1995) followed up on these considerations and showed that a model based on Logan's (1988) instance theory could replicate most of the empirical findings unless the system's behavior was governed by a highly salient rule. In that case, a rule-based model made the best predictions. Buchner, Funke, and Berry (1995) offered a different explanation for the negative correlations between verbalizable knowledge and control performance: participants who encountered a greater variety of system states had a good chance of answering the knowledge questions correctly but were obviously not successful in reaching the targets (because success meant that the system states did not vary much around the target state). In an additional experiment, however, Dienes and Fahey (1998) found stochastic independence between repeating successful inputs in situations previously encountered and recognition of these situations as known. This corroborates Broadbent et al.'s (1986) assumption that the relevant knowledge for controlling these systems is learnt and known implicitly. Implicit learning can clearly be identified as a Type 1 process (Evans & Stanovich, 2013; Sun, Slusarz, & Terry, 2005). In contrast, the rule-based model relies primarily on Type 2 processing.
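To make the "lookup table" idea concrete, the following minimal Python sketch illustrates instance-based control in the spirit of Logan's instance theory: episodes are stored and reused without inducing any causal rule. It is only an illustration, not a reimplementation of Broadbent et al.'s task or of Dienes and Fahey's model; the one-dimensional state representation, the similarity measure, and all numbers are assumptions.

```python
# Illustrative sketch of instance-based ("lookup table") control: store experienced
# (state, input, outcome) episodes and reuse the input of the most relevant episode.
# No causal rule is induced at any point (a Type 1 style of learning).
instances = []   # list of (state, user_input, next_state) tuples

def record(state, user_input, next_state):
    """Learning by storing what was experienced."""
    instances.append((state, user_input, next_state))

def choose_input(current_state, target, default=5.0):
    """Pick the input from the stored episode that best matches situation and goal."""
    if not instances:
        return default   # nothing learned yet: explore
    def mismatch(episode):
        state, _, next_state = episode
        # prefer episodes whose situation resembles the current one and whose
        # outcome landed near the target (a crude stand-in for similarity)
        return abs(state - current_state) + abs(next_state - target)
    _, best_input, _ = min(instances, key=mismatch)
    return best_input

# Toy usage with made-up values for a one-output system such as the Sugar Factory:
record(state=6, user_input=7, next_state=8)
record(state=8, user_input=9, next_state=10)
print(choose_input(current_state=8, target=9))   # reuses the input of the closer episode
```

A rule-based (Type 2) account would instead induce an explicit relation between input and output and derive the required input from it.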
Taatgen and Wallach (2002), as well as Fum and Stocco (2003), presented ACT-R models that simulated the learning process in the Sugar Factory. The former model relies on declarative memory of known input-output sequences and assumes a partial matching mechanism; the latter model uses learning of procedural parameters. Although ACT-R differs from Logan's instance theory, and both models differ from each other, they simulate implicit learning rather than explicit rule learning.

Osman, Glass, and Hola (2015) presented a model of CPS that is based on reinforcement learning (the SLIDER model – Single Limited Input, Dynamic Exploratory Responses). This type of learning is also a Type 1 process. However, the system in that research deviates from those that are commonly used in CPS research (such as MicroDYN, Tailorshop, or Dynamis2): it has only one output variable that depends linearly on two input variables; a third input variable has no effect. Curiously, Osman and colleagues report next to nothing about the fitting procedure and the performance of their model. In any case, it is obvious that a system with one output variable lends itself more readily to reinforcement-based control than a system with more output variables. This is because in systems with only one output variable, no side effects are possible (additional effects of an input variable on output variables other than the targeted one). The presence of side effects often requires sophisticated input tactics, which involve considerations about how fixed vs. free the input variables are. For example, if one target variable can only be controlled by a single input variable, the latter is relatively fixed and cannot easily be used to control another target variable. (This extends the concept of controllability of Beckmann and Goode, 2017 – who focused on the number of dependencies of an output variable – by the number of effects of an input variable.) Proof that more complex systems can be controlled based on pure reinforcement learning is still outstanding.

In his own framework (Schoppek, 2002; Schoppek et al., 2017), the author has used the term "I-O knowledge" (input-output knowledge) for declarative knowledge about input values and their specific effects. This conception was inspired by the ACT-R cognitive architecture (Anderson & Lebiere, 1998), which does not go well together with a DP approach. Nevertheless, some aspects of ACT-R can be classified as Type 1 processes, for example, the learning rules that govern parameter changes on the subsymbolic level, or procedural learning.

So far, I have presented explanations for complex problem solving behavior that can largely be assigned to Type 1 processing. We now turn to explanations that are primarily based on Type 2 processing. The most prominent example is the model that has been developed in the context of the "multiple complex systems" approach (Greiff, Wüstenberg, & Funke, 2012). The model assumes that problem solvers first try to detect the causal structure of a system. The success of this phase of problem solving depends on the use of appropriate strategies such as VOTAT ("vary one thing at a time"; Tschirgi, 1980; Vollmeyer, Burns, & Holyoak, 1996). After that, problem solvers try to reach goal states using the knowledge they have acquired in the first phase.
Many studies involving multiple complex systems such as MicroDYN (Greiff et al., 2012) or MicroFIN (Neubert et al., 2015) adopted that model, referring to the first phase as knowledge acquisition, and to the second phase as knowledge application (Fischer, Greiff, & Funke, 2012; Greiff & Funke, 2009; Greiff et al., 2013; Kretzschmar & Süß, 2015; Wüstenberg et al., 2012). In most of those studies, CPS competency is measured as a construct comprising these two correlated, yet discriminable dimensions. Due to the prevalence of that model, we have introduced the name "standard model of CPS" for it (Schoppek & Fischer, 2017, p. 2). Note, however, that in some studies a one-dimensional measurement model fitted at least equally well (Kretzschmar et al., 2017).

The knowledge acquisition process is mainly characterized by induction: from observations of the system's responses to certain inputs, the problem solver induces causal relations among variables. Knowledge application involves deductive processes in addition: from the induced rules, the problem solver deduces a sequence of actions to be taken in order to reach the desired goal state. Admittedly, this is a strong simplification of the real processes going on during CPS. However, it demonstrates the similarity between the processes assumed in the standard model of CPS and the induction-deduction cycle that is characteristic of many problems used in intelligence tests (Hunt, 2010). It is therefore consistent that performance in controlling simple systems (as used in MicroDYN) correlates closely with measures of intelligence (Greiff et al., 2013; Stadler et al., 2015). The assignment of these processes to Type 2 is justified by their high demands on working memory.

This synopsis shows that dual processing ideas are hidden in theorizing about complex problem solving, but that the pertinent assumptions are not combined within one framework. A subtle hint in that direction can be found in the abstract of the Broadbent et al. (1986) paper: "The results challenge a common view of the discrepancy between performance and verbal accounts, and suggest rather that there are alternative modes of processing in human decision making, each mode having its own advantages" (p. 33). However, this idea has not been picked up in subsequent research. To my knowledge, there is no published attempt to combine both accounts for the topic of CPS.

The tendency to economize and related concepts

Dietrich Dörner observed in many studies that participants minimized cognitive effort (Dörner, 1980, 1996; Dörner & Schaub, 1994). For example, problem solvers tend to identify a central variable in a complex system and hypothesize that many other quantities almost exclusively depend on it. (In the minds of many people today such a variable might be "uncontrolled immigration". This way of thinking may also contribute to the development and adoption of conspiracy theories.) Dörner (1996) attributes these and some other shortcomings of human decision-making in complex situations to the slowness of human thinking (of Type 2) and has coined the term "tendency to economize" (Ökonomietendenz). In everyday language, one would say people are lazy-minded. Other researchers have also observed that humans deploy Type 2 processing sparsely or reluctantly.
Herbert Simon broached the issue of the narrowness of human cognition and observed that persons tend to "satisfice" instead of optimizing (Barnard & Simon, 1947). This is due, among other things, to the inherent uncertainty of induction, but also to the limited capacity of the reasoning apparatus (Simon, 1993). The concept of satisficing, i.e., making decisions based on simple criteria, takes these limitations into account and is thus related to the tendency to economize.

The heuristics and biases program (Kahneman, Slovic, & Tversky, 1982) was another important field of study with strong relations to the tendency to economize. Several authors demonstrated in many experiments that human judgement is guided by simple heuristics, which often lead to wrong conclusions. This program is so well known that it is unnecessary to go into details here. Kahneman (2011) now interprets these earlier findings in terms of a DP framework.

Gigerenzer and Brighton (2011) in their ABC program (adaptive behavior and cognition) gave the topic a different twist: this group investigated how people use heuristics to make good decisions (Gigerenzer, Hertwig, & Pachur, 2016). They postulate that simple rules of thumb rely on the results of basic skills that have been developed through evolution ("evolved capacities"). As an example, the authors often describe the gaze heuristic. When the goal is to intercept a moving object, such as a ball to be caught, one moves so that the angle of view to the object remains constant. The perception of the angle of view is provided by the perceptual system, and the rule of keeping it constant is simple. Although Gigerenzer is decidedly opposed to the DP approach (Kruglanski & Gigerenzer, 2011), his conception fits well into this framework: the evolved capacities can be classified as Type 1 processing and the rules of thumb as Type 2. Through their simplicity, the latter take the limited capacity of System 2 into account.

In this context, it is important to clarify the meaning of intuition and its role in CPS. Kahneman (2011) classified Type 1 processing as intuitive. Gobet and Chassy (2009) define intuition as "the rapid understanding shown by individuals, typically experts, when they face a problem" (p. 151). Other characteristics are the "essential role of perception, the fluid, automatized, and rapid behavior characteristic [...], and the long time required to become an expert" (p. 172). This characterization is compatible with Kahneman's, even though these authors are not advocating a DP approach. However, Gobet and Chassy's (2009) computational model of expert problem solving in chess, which incorporates intuitive and analytic components and their interplay, is a valuable example of how dual processing ideas can be stated more precisely in cognitive models.

The emphasis on experts points to the problem that intuition can refer to different processes, depending on the amount of experience and practice of the respective person. While I mostly agree with the conception of Gobet and Chassy (2009), I do not assign intuition to experts alone. Persons with little experience in a domain can also have intuitions about the nature of the problem or about a certain course of action, because of perceived similarities with familiar situations (Schoppek, 2019).
In such cases, the intuitions will more likely be misleading than in the case of experts. Beckmann (2019) warned not to use "intuition" as a pseudo-explanation for behaviors that cannot be classified as specific strategies. To be precise in that respect, I use the term "intuitive approach" for problem solving behavior that is characterized by rather unsystematic trial and error and the attempt to reach goals by gradually adapting an input tactic (see also Beckmann & Goode, 2017).

Why do humans deploy Type 2 processing so sparsely?

A potential explanation for cognitive miserliness is the energy demanded by Type 2 processes. Researchers in rich western industrial societies tend to forget that the abundant supply of calories they experience today was not available during the time when Homo sapiens appeared in evolution. Therefore, it seems plausible that the large frontal lobes that are characteristic of humans should be energized only when necessary (Baumeister & Tierney, 2011). The problem with this account is that the human brain consumes about 20% of the energy available in the blood almost independently of its specific activity (Fox & Raichle, 2007). The pattern of activity that can be observed during rest or daydreaming forms a "default mode network" (Raichle, 2015). Its activity ceases when the participant engages in specific cognitive tasks. At the same time, activity in other regions, the "task positive network" (TPN), increases (Basten, Stelzel, & Fiebach, 2013). This suggests that energy expenditure shifts rather than rises during thinking. Although the exact energy regime in the brain is still a matter of lively debate (Howarth, Gleeson, & Attwell, 2012; de Boeck & Kovacs, 2020), we can state that the view that Homo sapiens uses thinking sparingly to save energy is too simple.

Nevertheless, research on individual differences in cognitive functioning also considers energetic factors. Debatin (2019) reviewed a number of studies that addressed the relation between glucometabolic function and cognitive performance and concludes that "there is an increasing amount of research supporting the hypothesis that individuals with better glucose regulation perform better in cognitive performance tasks than individuals with worse glucose regulation" (Debatin, 2019, p. 4; see also Lamport et al., 2009). However, most research in this area has focused on the role of glucose as a substrate for oxidative phosphorylation, which is not the only way of providing energy in the body. An additional way, aerobic glycolysis, has received much less attention (Vaishnavi et al., 2010), so the view on these questions may change in the near future.

Taking up the idea of "shifting rather than rising energy expenditure" again and combining it with the fact of limited energy supply in the brain, one might speculate that Type 2 processing can only occur at the cost of other cortical processing. As these other processes might be essential for survival (e.g., scanning the surroundings visually and/or aurally), selection pressure may have acted on excessive thinking during evolution.
This speculation is compatible with calculations of the energy demand of neurons in the cortex on a molecular level, which gave rise to the assumption that the maximally available energy in the brain severely limits neuronal activity (Lennie, 2003). However, newer calculations showed that action potentials demand much less energy than previously assumed, and that a good part of the energy in the brain is used for functions that are independent of acute signaling, such as maintaining resting potentials or neurotransmitter recycling (Howarth et al., 2012). Although these modifications attenuate Lennie's (2003) original argument, they do not rule out the above speculation.

It is generally problematic to draw inferences between different levels of abstraction (Kästner, 2018; Newell, 1994), all the more when the evidence on the biological level is vast and controversial. However, psychological theories should be consistent with biological evidence, and the latter can help inspire the former through generating new hypotheses. In the case of the tendency to economize, a glimpse into the neurosciences showed that the discussions there about energy expenditure in the brain justify a possible connection with modes of thinking.

Predictions of the DP account

Dual processing accounts have been criticized for not being able to make predictions (Keren & Schul, 2009). However, with the recent specifications (see above), I venture some predictions in the area of complex problem solving. Obviously, all cognitive processing involves Type 1 and Type 2 portions to different degrees. Therefore, in the following statements, I use "Type x processing" as a shorthand term for "processing that is predominantly characterized as Type x" – just for the sake of readability.

For making predictions, we especially need to identify the broad range of Type 1 processes. Candidates are pattern recognition, incidental learning, and implicit learning resulting in implicit knowledge (including specialized procedural knowledge).

In complex dynamic control tasks, Type 1 processes perform the following functions. The list is not intended to be complete. When performed with little or no practice, some of the functions might also be classified as Type 2.

1. Recognition of system states
2. Recognition of system developments or temporal patterns
3. Input responses to recognized system states
4. Unsystematic exploration (trial and error – can be useful under certain circumstances, e.g., finite state automata)
5. Buildup of I-O knowledge
6. Execution of automatized action sequences

The following functions are governed mainly by Type 2 processes:

1. Systematic exploration of a dynamic system to acquire structural knowledge (e.g., using VOTAT)
2. Construction of a strategy for exploration
3. Calculation of an intervention based on structural knowledge
4. Construction of input tactics (what variables to manipulate in what order)
5. Keeping the focus on the problem when difficulties arise
6. Remembering to check background variables

When we combine the classifications above with the propositions of the DP account, we arrive at the following predictions about (complex) problem solving:

1. When confronted with a problem, most persons initially tackle it with a high proportion of Type 1 processes such as unsystematic exploration.
2. Learning to control a novel complex dynamic system requires Type 2 processing. If central executive capacity is bound by other requirements, problem solving performance declines.
3. Working with ample use of Type 2 processing is not very common. It usually needs a considerable incentive such as sustaining a threatened self-esteem, feelings of challenge, or large extrinsic incentives (Liddle et al., 2011).
4. Advanced problem solvers have exploration strategies in their repertoire (e.g., VOTAT) and can execute those largely in Type 1 mode (without overloading their working memory).
5. Extensive practice with a specific system leads to automatization, meaning the demand for Type 2 processing decreases.
6. After transition to Type 1 processing, it is difficult to detect changes in the system and respond to them appropriately (Luchins, 1942; Betsch et al., 2001).
7. The difficulty of a problem correlates predominantly with its requirement for Type 2 processing (Stanovich & West, 1999). However, individual differences in experience, which are reflected in implicit knowledge (Type 1), may override the correlation (Ackerman, 1990; Weise et al., 2020).

Predictions 1 and 3, and less obviously, prediction 6, are instances of the tendency to economize. Prediction 2 is based on considerations around the standard model of CPS (see the section on previous approaches). Other predictions rest on established theories of cognitive skill acquisition and automatization (Anderson et al., 1997; Norman & Shallice, 1986).

Dual processing in Dynamis2

These considerations shall now be applied to a new variant of a dynamic problem solving environment, called Dynamis2 (Schoppek & Fischer, 2017). Inspired by Allen Newell (1973), who urged his colleagues in cognitive science at the time: "Analyze a complex task" (p. 21) and "know the method your subject is using to perform the experimental task" (p. 12), I present a detailed description of a typical problem within this environment together with possible strategies2.

2 This unusual description of the material in the introduction is due to the theoretical nature of analyzing the strategies, which readers cannot understand unless they know the problem.

In the complex dynamic control task environment Dynamis2 (Schoppek & Fischer, 2017), systems are simulated using sets of linear equations. Output variables (aka endogenous variables) depend on the values of input variables (aka exogenous variables), which are controlled by the problem solver, and on each other, including themselves. This idea is based on Funke's (1993) Dynamis approach, which is also realized in MicroDYN (Greiff et al., 2012). One important feature of Dynamis2 is that it is real-time driven: the simulation is updated every half second, regardless of whether the participant manipulates the input variables or not. This makes the dynamics of the simulated systems more rigorous than in most other CPS environments and results in genuine time pressure for the participants. A typical run of the system consists of 250 simulation updates (called cycles), which represent a round. An experimental block in Dynamis2 comprises one exploration round, where participants can freely vary the input variables without a specific goal state, followed by several rounds where participants are required to reach goal states provided by the experimenter. Figure 1 shows a screenshot of the user interface.

The following equations constitute an exemplary system that simulates the effect of three drugs (MedA, MedB, MedC), administered continuously (as if from a drip), on the blood levels of three substances (Muron, Fontin, Sugon).
All variables and their relationships are fictitious in order to minimize the influence of prior knowledge. However, the course of the blood levels is plausible.

Muron_t = 0.1 · Muron_{t-1} + 2 · MedA_{t-1}
Fontin_t = Fontin_{t-1} + 0.5 · Muron_{t-1} − 0.2 · Sugon_{t-1} + MedB_{t-1}
Sugon_t = 0.9 · Sugon_{t-1} + MedC_{t-1}

The effects of the output variables on themselves result in an eigendynamic (or momentum) that is more pronounced the higher the coefficient is. For example, Muron's level, with an eigendynamic coefficient of 0.1, responds quickly to the administration of MedA, whereas Sugon reaches a stable level only slowly, given a constant input of MedC. Fontin, having a coefficient of one, tends to accumulate, which can only be prevented by a certain level of Sugon. (A short simulation sketch of these equations is given below.)

The characteristics of the system have implications for all possible control strategies, regardless of whether they are based on Type 1 or Type 2 processing: as Muron can only be controlled with MedA, and also has a positive effect on Fontin, the latter must be prevented from increasing steadily. This can only be achieved using MedC. MedC raises the level of Sugon, which in turn decreases Fontin. However, as the effect of MedC on Sugon unfolds slowly, it is almost impossible to control the level of Fontin by varying MedC. A straightforward strategy for reaching and maintaining the goal state is to keep MedC constant at a certain level (e.g., 25), wait until Sugon levels off, and eventually use MedB to raise Fontin to the desired level. Additionally, MedA needs to be set to 45 at some time during this process to reach the goal of Muron = 100. Of course, other strategies are possible, but it is important to note that using MedC for fine-tuning Fontin is counterproductive.

A participant who conforms to the standard model would start exploring the system by varying the three input variables one at a time (VOTAT). To detect the eigendynamics of the output variables, she should apply a PULSE tactic (Schoppek & Fischer, 2017) that consists of setting an input variable to a positive value, then back to zero, and observing the course of the output variables. From her observations, the participant can induce all causal relations that constitute the system. When it comes to targeted control, the participant can use her structural knowledge to develop a control strategy and deduce specific input values. From this description, it is obvious that such an approach involves inferential reasoning, which puts a heavy load on working memory and can be characterized as Type 2 processing.

On the other hand, what can a participant who takes an intuitive approach learn? He notices early that Muron can only be controlled with MedA. He will also notice that Fontin tends to increase. Because Fontin shall be kept at 1000, he will search for a means to prevent Fontin from growing. Eventually, he will find out that only MedC does that. This participant has gained rudimentary structural knowledge, which he uses to control the system: he will set MedA to the value that brings (and keeps) Muron at 100 (this can be accomplished by visuomotor closed-loop control). Then he tries to control Fontin by adjusting MedC. As Fontin responds to changes in MedC only gradually, this strategy rarely succeeds. I will refer to this as "Strategy Gamma". This procedure mainly consists of visuomotor closed-loop control: doing something – watching – adjusting, which is a Type 1 process. Beckmann and Goode (2017) called this "ad-hoc optimization" (see also Beckmann & Guthke, 1995).
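To make these dynamics tangible, here is a minimal Python simulation sketch of the equations above. It assumes that the equations are applied once per cycle, that all variables start at zero, and that inputs persist between changes (as with the slider board described in the Method section); the input schedule shown is illustrative and merely mimics the "straightforward strategy" described above, not data from participants.

```python
# Minimal simulation sketch of the Medicine 1 system defined by the equations above.
# Assumptions: one update per cycle, all variables start at 0, inputs persist until changed.

def step(muron, fontin, sugon, med_a, med_b, med_c):
    """One update cycle of the linear system."""
    new_muron = 0.1 * muron + 2.0 * med_a
    new_fontin = fontin + 0.5 * muron - 0.2 * sugon + med_b
    new_sugon = 0.9 * sugon + med_c
    return new_muron, new_fontin, new_sugon

def run(schedule, cycles=250):
    """Run one round; `schedule` maps a cycle index to an (MedA, MedB, MedC) setting."""
    muron = fontin = sugon = 0.0
    inputs = (0.0, 0.0, 0.0)
    for t in range(cycles):
        inputs = schedule.get(t, inputs)      # inputs stay put, like physical sliders
        muron, fontin, sugon = step(muron, fontin, sugon, *inputs)
    return muron, fontin, sugon

# Illustrative schedule: MedA = 45 and MedC = 25 from the start, then MedB is used
# to raise Fontin once Sugon has leveled off (values chosen for illustration only).
schedule = {0: (45.0, 0.0, 25.0), 150: (45.0, 5.0, 25.0)}
print(run(schedule))   # Muron approaches 100, Sugon approaches 250, Fontin climbs with MedB
```

Running such a sketch also shows why fine-tuning Fontin via MedC is futile: Sugon, and hence Fontin, responds to changes in MedC only with a considerable delay.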
Occasionally, the participant must draw some inferences from what is observed: noticing and considering that only MedA affects Muron, and that only MedC limits Fontin. These are Type 2 processes. A little more reasoning could lead our participant to the conclusion that MedC should be kept constant to prevent Fontin from fluctuating in a delayed manner. He will notice that Fontin responds much more quickly to MedB and will use this medicine to fine-tune Fontin. This feasible strategy gets by with rudimentary structural knowledge and hence deviates considerably from the standard model. I shall label this strategy, which is characterized by low variation of MedA and MedC, and higher variation of MedB, "Strategy Beta". Compared to Strategy Gamma, the development of Strategy Beta involves a higher share of Type 2 processing. To make the strategy classification complete, I introduce a third strategy, "Alpha", where all input variables are varied. Strategy Alpha is characteristic of early exploration phases.

Figure 1. Screenshot of the user interface of a Dynamis2 scenario. The lines represent the course of the output variables. Fontin is displayed in a separate panel because its range is larger than the ranges of the other variables. The current values of the input variables are listed in the top left corner (MedA ... MedC), those of the output variables in the bottom left corner.

Experiment 1

The purpose of this experiment was to challenge the standard model of CPS. This means most hypotheses were formulated under the assumption that the standard model was valid. By varying the presence of a secondary task, intended to increase the burden on working memory, the propensity to adopt a standard or intuitive approach should be manipulated. The secondary task was sentence verification, which has proved its utility in the context of measuring working memory capacity (Daneman & Carpenter, 1980; Unsworth et al., 2009). Adding working memory load should make the use of working-memory-intensive strategies less likely. In terms of the DP framework, this should disturb Type 2 processing, leading to a greater proportion of Type 1 processing.

It is not realistic to expect that all participants conform to a certain model (standard model, intuitive model). Also, the proportion of using either type of processing cannot be measured directly. Therefore, I started with the working hypothesis that most participants conformed to the standard model of CPS. From this, one can derive testable hypotheses: under the assumption that the standard model was true, I expected that participants in the dual task condition – as compared to the single task condition – perform worse and gain less structural knowledge. Additional and more specific hypotheses are listed after the description of the details of the experiment.

Materials and measures

As complex dynamic control tasks, three Dynamis2 systems were used. The rationale of Dynamis2 and the first system was described in the introduction.
The equations of the other two problems are listed in the appendix. A distinctive feature of the present study was that I used a board with three physical sliders as the input device. This should enable participants to control the system in an intuitive, sensory-motor style. Because the sliders rest in their positions unless the user moves them, the same applies to the input values. At the beginning of the experiment, the sliders were in their minimum position (zero). Negative input values were not possible. One round consisted of 250 cycles of 0.5 s each, resulting in a total time of 2 min and 5 s.

Control performance was measured by the number of rounds each participant took to reach the criterion (variable "trials to criterion"). The criterion was reaching the targets and keeping them for ten cycles in two consecutive rounds. As the number of rounds was limited to 15, some participants did not reach the criterion. Their performance was coded as 16.

Participants in the dual task condition were asked to do a sentence verification task concurrently with system control. A female voice spoke sentences that were either meaningful or not. In the first case, participants were to respond by saying "yes", in the second case by saying "no". For example, a meaningful sentence was "Oranges grow on trees"; an absurd sentence was "Litter goes into the litter nose". There were 25 sentences during one round – one sentence every five seconds on average.

After each exploration round, participants were asked to enter the effects they had inferred as arrows into a diagram that showed the variables of the system. From this, a structure score was calculated as the difference between the numbers of correctly and wrongly marked effects. Additionally, I carried out exploratory analyses of the strategies. The associated definitions are described in a separate section "Strategies" under Results.

Design and Procedure

Two factors were varied between subjects and one factor was varied within subjects. In the dual task condition (DT), participants had to do the sentence verification task while controlling the Dynamis2 system Medicine 1. There was also a single task condition (ST) without the verification task. The single vs. dual task factor was varied in the first block only, to avoid overburdening the participants with a continued dual task requirement. All blocks began with a free exploration round without given goal states.

The second factor consisted of a variation of the sequence in which the Dynamis2 systems had to be controlled. Both conditions started with a specific goal state for the system Medicine 1 (Muron = 100, Fontin = 1000). In the blocked condition, participants continued with a task consisting of a changed goal state for the same system (Muron = 80, Fontin = 1500; near transfer), followed by a new system (Medicine 2, Bulmin = 1000, Grilon = 80; far transfer). In the spaced condition, the order of transfer problems was reversed (far transfer first, then near transfer). Hence, participants in the blocked condition had more experience with the task environment before turning to Medicine 2 than participants in the spaced condition. In both conditions the session ended with a third system (growing vegetables), which is not reported here.
The different tasks can be viewed as a third factor that was varied within subjects. Figure 2 shows the sequence of tasks in the different conditions.

Figure 2. Diagram of the experimental design. G1, G2: different goal states; ST: single task, DT: dual task; KnowlT1: structural knowledge test for Medicine 1.

Participants

Seventy-three persons participated in the experiment: 42 women and 31 men. Participants were studying different majors (32 economics, business administration or law, 16 humanities or social sciences, and 20 sciences; five did not provide the information) at a German university. Participants provided informed consent and all procedures followed the principles of the Declaration of Helsinki.

Hypotheses

Hypothesis 1.1: Participants in the ST condition take fewer rounds to reach the goal criterion in the source problem and in the near transfer problem than those in the DT condition.

Hypothesis 1.2: Participants in the ST condition acquire better structural knowledge about the source problem than those in the DT condition.

Hypothesis 1.3: Structural knowledge and problem solving success are correlated positively, particularly in the ST condition.

Hypothesis 1.4: The use of the PULSE tactic in the first two rounds is predictive of (a) structural knowledge and of (b) success, particularly in the ST condition.

Hypothesis 1.5: Participants solve the far transfer problem faster in the blocked condition than in the spaced condition.

As described above, the hypotheses are based on the working hypothesis that the standard model of CPS, with its emphasis on acquisition and application of structural knowledge, is an adequate description of CPS. Hypothesis 1.4 was formulated as a replication of results found in an earlier experiment (Schoppek & Fischer, 2017). The PULSE tactic involves systematically setting input values back to zero in order to observe the eigendynamics of output variables and has been shown to predict success in several complex dynamic control tasks (Beckmann, 1994; Lotz et al., 2017).

Hypothesis 1.5 is based on the fact that participants in the blocked condition have a second opportunity to work with the same system. The original consideration was that this enabled participants to further analyze the causal structure of the system they already know. During this opportunity, they can acquire strategic knowledge about the exploration of a system, which they can transfer to the new system (far transfer). This effect should be most prominent in the DT condition, because the secondary task is omitted in the near transfer problem, which makes the second opportunity more profitable in the DT condition. At first glance, this prediction seems to contradict the standard model with its focus on structural knowledge (which cannot be transferred to the far transfer problem). However, the standard model does not state that structural knowledge is the only relevant type of knowledge and therefore does not preclude the acquisition of strategic knowledge. Additionally, the hypothesis refers to a less specific effect of cognitive load, which is reduced in the second block of Medicine 1 due to increased familiarity (van Merriënboer, 1997).
This effect pertains to a reduction of extraneous load (handling the task environment) and intrinsic load (reaching the goals), resulting in more resources for learning any kind of knowledge or skills (germane load).

Apart from testing the hypotheses, I will report results about differences between participants studying certain subjects, and detailed analyses of the use of strategies and their relation to control performance.

In an a priori power analysis, the expectation was that effects of d = 0.65 (medium) should be detected by a one-tailed t-test with a power of 1 − β = .85 and a significance level of α = .05. This resulted in a sample size of n = 35 per condition. As it turned out that some of the variables markedly differed from the normal distribution, nonparametric tests were applied, the power of which is a little lower than the t-test's. Post hoc, the power of the one-tailed t-tests with the present sample is 1 − β = .87; for the U-tests it is 1 − β = .85. All power analyses were conducted using the software G*Power (Faul et al., 2009).

Results

The results are presented in two sections. First, I report the analyses for testing the hypotheses. In a second part, I report some exploratory analyses that can support the interpretation of results or can be used to generate new hypotheses.

Testing the hypotheses

Table 1 shows descriptive statistics for the main variables. We see that the scenario Medicine 1 was a difficult problem. Many participants did not reach the goal criterion in 15 rounds (coded as 16). The scenario Medicine 2 was much easier. The range from 3 to 13 trials to criterion indicates that all participants reached the goals. It is very unlikely that this marked difference between the scenarios is only due to practice, because one half of the sample (the spaced condition) worked on Medicine 2 before they repeated Medicine 1 with changed target values. The means of the structure score show that in both conditions, participants identified little more than one causal relation on average. Given the five possible relations, this is a low value. The average number of PULSE events in the first two rounds is also rather low3.

3 I have also calculated an alternative measure of CPS performance, based on goal deviations, which I have not reported. The measure has a similarly peculiar distribution as the reported measure and does not reflect goal attainment as well as the reported measure. The results were qualitatively the same as for trials to criterion.

For Block 1, the distributions of trials to criterion clearly deviated from a normal distribution in both conditions (Figure 3). Local modes can be identified at 7 to 8 trials and at 12 to 13 trials. The most frequent value in both conditions was 16, meaning that the criterion was not reached. Due to the peculiar distributions, I calculated nonparametric statistical tests. For all scenarios, the U-tests indicated no significant differences between the dual vs. single task conditions (Medicine 1.1: U = 689.5, p = .636; Medicine 1.2: U = 703, p = .532). Comparing the medians (see Table 1) shows that the median in the dual task condition was even lower than in the single task condition. So, Hypothesis 1.1 was not supported by the data.

For the analyses pertaining to Hypotheses 1.2 to 1.4, six participants with missing structure scores were removed from the sample (three in each condition), resulting in n = 34 and n = 33 in the DT and ST conditions, respectively. With respect to Hypothesis 1.2, a t-test revealed no significant difference in the structure score between the ST and DT conditions (M_ST = 1.15, M_DT = 1.24, t = −0.18, p = .573, Cohen's d = −0.048). Hence, Hypothesis 1.2 was not supported by the data.
Table 1. Descriptive statistics of dependent variables from Experiment 1.

                       Single task                      Dual task
                       M      Mdn    SD     Range       M      Mdn    SD     Range
TTC Med 1.1            –      12     –      4 to 16     –      11.5   –      4 to 16
TTC Med 1.2            –      6      –      2 to 16     –      4      –      2 to 16
TTC Med 2              –      4      –      3 to 16     –      4      –      3 to 13
StrucScore Med 1.1     1.15   –      2.18   −5 to 5     1.24   –      1.48   −2 to 4
PULSE Med 1.1          2.89   –      2.01   0 to 8      2.57   –      1.94   0 to 7

Note. TTC: trials to criterion; StrucScore: structure score – a measure of structural knowledge; PULSE: number of impulse events in the first two rounds. Medians are reported for TTC; means and standard deviations for StrucScore and PULSE.

To test Hypothesis 1.3, I calculated Spearman's rho between the structure score and trials to criterion in Medicine 1. For the whole sample, this results in rho = −.247 (p = .044). In the ST and DT conditions, I obtained rho = −.513 (p = .002) and rho = .020 (p = .913), respectively (z_diff = 2.08, p = .019). Hence, Hypothesis 1.3 was supported. As expected, the correlation is significantly larger in the ST condition.

Spearman's rank correlations between the number of PULSE inputs in the first two rounds and the structure score were rho = .371 (p = .033) in the ST condition and rho = −.070 (p = .696) in the DT condition (z_diff = 1.72, p = .043), supporting Hypothesis 1.4a. Hypothesis 1.4b was not supported by the data: the correlations between PULSE and trials to criterion were rho = −.328 (p = .062) in the ST condition and rho = −.153 (p = .388) in the DT condition (z_diff = 0.68, p = .247).

Figure 3. Distributions of the number of trials to achieve the goal criterion in the single task and dual task conditions. Sixteen means that the target was not achieved in 15 rounds.

Hypothesis 1.5 stated better performance of the blocked condition in the far transfer problem. Although the medians of trials to criterion in Medicine 2 do not differ much between the conditions, the expected difference was significant: participants in the blocked condition solved that problem earlier than those in the spaced condition (U = 477.5, p = .047), so the hypothesis is supported. However, the supposed reason for that – a better acquisition of strategic knowledge in the blocked condition – was not supported in additional analyses: in Medicine 2, participants in the blocked condition used the PULSE tactic only slightly more than those in the spaced condition (M = 3.73 vs. M = 3.25, t = 1.00, one-sided p = .160, Cohen's d = 0.162).

Exploratory analyses

To analyze differences between the participants of the study, they were assigned to three categories: "Sciences" (Chemistry, Physics, Biology, Mathematics, Engineering Sciences), "Economics" (Economics, Law), and "Arts & Humanities" (History, Cultural Studies, Languages, Social Sciences). Kruskal-Wallis tests revealed significant effects of the participants' subject of study on trials to criterion in all three problem solving blocks. Figure 4 shows boxplots of the results in Medicine 1.1 (Panel A) and Medicine 2 (Panel B).
We see that science students solved the problem considerably faster than students of other fields of study (Medicine 1.1: χ2 = 9.52, df = 2, p = .009; Medicine 1.2: χ2 = 6.65, df = 2, p = .036; Medicine 2: χ2 = 8.48, df = 2, p = .014). This confirms similar results from earlier studies (Schoppek, 2004; Schoppek & Fischer, 2017).

Figure 4. Boxplots of the number of trials needed to achieve the target in three areas of study (16 means that the target was not achieved in 15 rounds). Left panel: Medicine 1.1; right panel: Medicine 2.

Strategies

To classify the strategies used in the source problem, I calculated the standard deviations for each input variable across all 250 cycles of each round. This allows judging how much a variable was varied by the problem solver. Based on these indicators, three main strategies and two marginal strategies4 were identified. Strategy Alpha is defined by varying the input variables MedB and MedC (both SDs ≥ 0.7). (The SD for MedA may be zero because many participants keep this input constant at a value of 45.) Strategy Beta is defined by keeping MedC relatively constant (SD < 0.7) and using MedB to control Fontin (SD ≥ 0.7). Strategy Gamma is defined the other way round: keeping MedB constant and using MedC for controlling Fontin. The marginal strategies were "Minimal", defined by varying all input variables only slightly (all SDs < 0.7), and "OnlyA", defined by varying almost entirely MedA (SD for MedA ≥ 0.7, all other SDs < 0.7). The marginal strategies were used in 2.1% of all rounds.

4 The name is due to the rare occurrence of those strategies.

I removed the first round of all participants from the dataset, because this round was declared as an exploration round, and the participants were supposed to vary all input variables by instruction. Of the remaining 728 rounds, 77.5% were classified as Alpha, 7.7% as Beta, and 12.8% as Gamma. The success rates of the strategies were markedly different (see Table 2). As expected, Strategy Beta was the most successful (68% success rate).

To analyze the relations between strategy and success (having reached the goals), I calculated a generalized linear mixed model with the three main strategies and the participant as predictors and success in each round as the dependent variable. The analysis, which estimates the parameters of a multilevel logistic regression model, was calculated with the function glmer from the R package lme4 (Bates et al., 2015). Note that in this analysis the entity at Level 1 is a round of Dynamis2; participants are located on Level 2. Therefore, participants figure as a predictor and the analysis is based on n = 728 data points. Strategy Alpha was used as the baseline, coded with zero in the dummy variables. The marginal strategies were omitted due to their rare occurrence. The variance between participants was factored in by estimating a random intercept for each participant. The estimated parameters are to be interpreted as odds (intercept) or odds ratios (predictors) on a log scale. The estimated value for the intercept was −2.375 (z = −11.26, p < .001), meaning that with Strategy Alpha (the baseline), being unsuccessful in a round was significantly more likely than being successful. The log odds ratios for Strategies Beta and Gamma were 3.452 (z = 8.57, p < .001) and 1.824 (z = 5.49, p < .001), respectively.
This means that the odds of being successful with Strategy Beta are e^3.452 ≈ 31.6 times higher than the odds with the baseline Strategy Alpha. For Strategy Gamma this ratio is e^1.824 ≈ 6.2.

Table 2. Strategy use and associated success in two data sets. Proportions refer to individual rounds (each round of each participant was counted).

                      Alpha    Beta    Gamma    Minimal    OnlyA    All
Experiment 1
  Proportion          0.78     0.08    0.13     0.01       0.01     1
  Rate of success     0.09     0.68    0.34     1          0.13     0.18
Experiment 2
  Proportion          0.84     0.04    0.11     0.01       0.01     1
  Rate of success     0.10     0.54    0.27     1          0.14     0.14

Discussion

Overall, Experiment 1 has not completely supported the predictions of the standard model of CPS. The dynamic system Medicine 1 turned out to be equally difficult in the single task (ST) and the dual task (DT) conditions (Hyp. 1.1). Moreover, the participants in the ST condition did not gain better structural knowledge about the system (Hyp. 1.2). In a comparable experiment, Hundertmark, Holt, Fischer, Said, and Fischer (2015) also found a much smaller effect (η2 = .01) of a cognitive load manipulation on system control than they had expected. In retrospect, these hypotheses might not have been well justified, because they implicitly assumed that approaching the control task with a greater amount of Type 2 processing would be superior. This assumption can be doubted generally, because the relation between approach and success is moderated by factors such as expertise or cognitive ability (Evans, 2012; Gigerenzer & Brighton, 2011; Gobet & Chassy, 2009). In particular, Dynamis2 problems, with their time pressure, their lack of a log of input variations, and the analog user interface (sliders and graphs), probably suggest a Type 1 approach much more than conventional Dynamis applications, which feature input logs, far fewer time steps, and little time pressure.

Another reason why the expected differences were not found in Experiment 1 could be that the concurrent tasks called for different subsystems of working memory (Baddeley, 2007), the sentence verification task being clearly verbal, the Dynamis2 problem being more visual-motor. In devising the hypotheses, I had assumed an important role of the central executive for system control, for moving the attention between the two tasks, and for the decisions about the meaning of the sentences. It might be that the sentence verification task called for central executive processes far less than expected. This surmise should be tested with varied secondary tasks.

The predictions of the standard model about the role of structural knowledge (Hyp. 1.3) and the PULSE tactic (Hyp. 1.4) for controlling the system have been confirmed, albeit the effect is due to the ST condition only. Whereas the level of structural knowledge was quite low in both conditions, the range was much larger in the ST condition. The reason for this pattern of results could be that only part of the participants proceeded in accordance with the standard model, that this group was larger in the ST condition, and that this proceeding does not guarantee success (as indicated by the markedly negative minimum of the structure score in the ST condition). This interpretation raises the question of how the other participants proceeded. I will get back to that question below.
Lastly, the expectation that the longer experience with the source problem in the blocked condition is beneficial for the far transfer problem was confirmed (Hyp. 1.5). As the participants cannot transfer structural knowledge to the new system (far transfer), this effect must be due to other types of knowledge. However, the data did not support the supposed mediation of the effect through more use of the PULSE tactic.

Experiment 2

In this experiment, Dynamis2 was used for inducing ego depletion (Baumeister, Vohs, & Tice, 2007) in the context of a study about training self-control through regular physical activity (Schoppek, in prep.). For the present research, I report only the results related to Dynamis2.

Participants, design, and hypotheses

Seventy-seven subjects from the same population as in Experiment 1 participated in the experiment (students of different majors at the University of Bayreuth, 48 female, 29 male). Participants worked on the same problem as the source problem from Experiment 1 for a maximum of 15 rounds. The sentence verification task was also administered concurrently.

With the results of Experiment 2, the exploratory results from Experiment 1 can be cross-validated. Therefore, the hypotheses for Experiment 2 were as follows:

Hypothesis 2.1: Science students solve the problem in fewer rounds than students of other majors (particularly faster than students of arts and humanities).

Hypothesis 2.2: Strategy Beta is the most successful strategy, followed by Strategy Gamma, with Strategy Alpha as the least successful strategy.

The first hypothesis is based not only on the results of Experiment 1, but also on earlier findings with Dynamis2 (Schoppek & Fischer, 2017) and with a predecessor system (Schoppek, 2004).

Results

Participants reached the goal criterion within 4 to 16 rounds (16 meaning they never reached it). The median was 13. These values are close to those from Experiment 1 (see Table 1).

Experiment 2 confirmed the differences among the students of the three categories of majors (Kruskal-Wallis test, χ² = 9.34, df = 2, p = .009). However, an examination of the medians shows that the differences are due to the poor performance of the Arts & Humanities students (Mdn = 16). The Economics students (Mdn = 11) performed similarly to the Science students (Mdn = 11.5). The U-test comparing the combined Science and Economics group with the Arts & Humanities group was significant (U = 729, p = .002).

For cross-validation of the strategy results, the same analysis as in Experiment 1 was applied: a generalized linear mixed model with the three main strategies and the participant as predictors and success in each round as the dependent variable. (Please recall that in this analysis the entity at Level 1 is a single round; participants are located on Level 2.) The present analysis is based on n = 806 data points.

The results were qualitatively the same as in Experiment 1: The log odds for the intercept (corresponding to Strategy Alpha) were −2.316 (z = −11.64, p < .001). Strategy Beta was the most successful of the main strategies (log odds ratio = 2.630, z = 5.85, p < .001), followed by Strategy Gamma (log odds ratio = 1.568, z = 4.28, p < .001). Descriptive statistics for this analysis are displayed in Table 2.
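As an aid to readers who want to rerun this kind of analysis, here is a minimal sketch of how the multilevel logistic regression used in both experiments could be specified with glmer from lme4, including the conversion of the estimates from the log scale to odds and odds ratios. The data frame strategies and its columns (participant, strategy, success) are assumed names; only the model structure follows the description in the text.

```r
# Minimal sketch of the reported multilevel logistic regression, assuming a
# data frame 'strategies' with one row per round: participant, strategy, success (0/1).
library(lme4)

dat <- subset(strategies, strategy %in% c("Alpha", "Beta", "Gamma"))  # drop the marginal strategies
dat$strategy <- relevel(factor(dat$strategy), ref = "Alpha")          # Strategy Alpha as baseline

fit <- glmer(success ~ strategy + (1 | participant),                  # random intercept per participant
             data = dat, family = binomial)

summary(fit)     # fixed effects on the log scale: log odds (intercept), log odds ratios (Beta, Gamma)
exp(fixef(fit))  # back-transformed: odds for Alpha, odds ratios for Beta and Gamma (e.g., exp(3.452) = 31.6)
```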
Discussion

The effect of the subject of study on problem solving performance has been replicated with respect to the difference between the Science students and the Arts & Humanities students (see Schoppek, 2004). However, the Economics students were as fast in solving the problem as the Science students. The effect of subject of study points again to the important role of knowledge other than structural knowledge. Science students (and probably Economics students, too) have more experience with diagrams of quantitative gradients and with the notion of dynamic systems than students of Arts & Humanities. Additionally, I have gained the impression that the relevance of controlling dynamic systems for the participants' selves matters. In case of failure, arts students may well take comfort in thinking "this kind of stuff has never been my cup of tea" – and disengage from the task. This is much harder for science students, who probably feel their subject-based self-esteem challenged in the face of failure. This hypothesis should be investigated in future studies. The assumed relatedness of systems of the Dynamis type with certain topics in the sciences is consistent with findings that problem solving in MicroDYN correlates most closely with school grades in math and science (Greiff et al., 2013; Greiff et al., 2013). It is also in line with findings from PISA 2003 that math and science competence contribute significantly to problem solving across 41 countries (Scherer & Beckmann, 2014).

Experiment 2 has also replicated the role of the different strategies. In both experiments, the less effective strategies Alpha and Gamma prevailed (whereby Strategy Alpha, characterized by consistently varying two input variables, might not even be worthy of the name "strategy"). This prevented many participants from reaching the goals. Only in 4% (Exp. 1: 8%) of all rounds did participants use the more sophisticated Strategy Beta, which had a much higher rate of success. This preference for self-evident but inadequately simple strategies is a further instance of the tendency to economize (Dörner, 1996).

General Discussion

We have enough evidence by now that structural knowledge is beneficial for controlling complex dynamic systems (Funke, 1993; Greiff et al., 2013; Schoppek & Fischer, 2017), but also that by no means all participants conform to the standard model (Fischer et al., 2012). This also became apparent in a study using MicroDYN (Stadler, Hofer, & Greiff, 2020), where individual differences in problem solving behavior were found among participants who obtained the same CPS scores. In the present experiments, many participants preferred an intuitive approach, which is on average less successful. So one of the most important research questions for the future is to investigate the conditions under which problem solvers switch from the "default mode", which is dominated by Type 1 processes, to effortful thinking, which involves much Type 2 processing. This question is relevant not only to problem solving, but also to judgment and decision-making. One answer to that question can be found in existing research: rewards.
Although Kahneman, Slovic, and Tversky (1982) obtained their findings about heuristic judgement even though they rewarded their participants for correct answers (e.g., Kahneman & Tversky, 1972), there is evidence from diverse areas that attractive rewards motivate individuals to engage in effortful control or thought. They instigate persons to overcome ego depletion (Muraven & Slessareva, 2003), they can markedly reduce ADHD symptoms in an experimental setting (Liddle et al., 2011), and they counteract fatigue (Inzlicht & Berkman, 2015). The interpretation about threatened self-esteem in science students can also be subsumed under this account, albeit the incentive is negative in that case. This is in line with the statement by Inzlicht and Berkman (2015) that "affirming some core value ... similarly prevents the reductions in self-control" (p. 516).

The potency of such mechanisms in problem solving can be investigated well with CDC tasks like Dynamis2. Their complexity, dynamics, time scale, and interactivity make such tasks more similar to real-life requirements than the more artificial, highly standardized, and short system control items in the multiple complex systems approach (Greiff et al., 2012; Neubert et al., 2015).

Future research needs to clarify the relations among effortful thinking, its behavioral indicators, and success. For instance, Kahneman (2011) described the pupil reaction as an indicator of Type 2 processing. In the present study, I took Strategy Beta as an indicator of Type 2 processing and Strategy Gamma as an indicator of Type 1 processing. This assignment, as well as other indicators, should be validated further. Similarly, the relation between reasoning and success is not trivial. Kahneman, Slovic, and Tversky (1982) have been criticized for almost equating reasoning with normative solutions (Gigerenzer & Brighton, 2011). With respect to this problem, Evans (2012) stated that "normative correctness cannot be a defining feature of Type 2 processing because it is an externally imposed evaluation and not intrinsic to definitions based upon explicit processing through working memory" (p. 123). Therefore, the relation is open to empirical investigation. As in other areas (Stanovich & West, 1999), one would expect an advantage of the "analytic approach" to CPS that is moderated by individual differences in intelligence (Greiff et al., 2013). However, even an approach that is dominated by Type 2 processing can be automated with extensive practice and hence become less dependent on cognitive ability (a phenomenon closely associated with the Elshout-Raaheim hypothesis, which has recently been confirmed in a CPS study; Weise et al., 2020).

To make progress in understanding and predicting CPS, we need a more general theory of problem solving, or, as Beckmann (2019) stated, "some ex ante ideas are needed about both the real-life problem and the laboratory task" (p. 3). Models that are tailored to a narrow class of tasks, like the standard model of CPS, are only helpful when they are embedded in a more general theoretical framework, like the DP approach. To that end, I envision a theory of mental states, which characterizes classes of states and specifies the rules that govern the transitions among those states.
This description applies to a number of important and more or less successful theories: the Rubicon model of action phases (Gollwitzer, 1990), flow theory (Csikszentmihalyi et al., 2014), or the resource model of self-control (Baumeister et al., 2007) with its recent modifications by Inzlicht and Schmeichel (2012). For example, ego depletion is characterized as a state in which persons are not willing or not able to exert effortful control. Persons enter this state after having exerted effortful control for a while, and exit it when consuming sugar or experiencing humor, among other things. Given these assumptions, trying to reach the goals in a Dynamis2 scenario using an analytical approach, which involves much Type 2 processing, can lead to ego depletion. On the other hand, it is conceivable that participants get so involved in the control task that they experience a state of flow. Csikszentmihalyi, Abuhamdeh, and Nakamura (2014) characterize flow as "intense experiential involvement in moment-to-moment activity. Attention is fully invested in the task at hand, and the person functions at his or her fullest capacity" (p. 214). This apparently involves Type 2 processing. To my knowledge, it has not been investigated whether flow is usually followed by ego depletion. As the activities during a state of flow are not accompanied by feelings of labor, I suppose it is not. From a DP perspective, flow can be described as resulting from a seamless interplay between a bird's eye view on the situation, which is maintained and handled by Type 2 processing, and a broad array of potent Type 1 processes that are orchestrated through decisions at the top level (Type 2). These are just a few examples of existing connection points that might enable a unification of those theories in the future. I regard such a unified theory of mental states as a convenient framework for specific theories about problem solving in dynamic and uncertain situations – also known as CPS. As mentioned earlier, effortful thinking does not always generate better results than an intuitive approach; but in general, overcoming the tendency to economize is desirable, not just in the laboratory, but also in real life.

Acknowledgements: I want to thank three reviewers and the action editor for their valuable hints, which were helpful for improving earlier versions of the manuscript.

Declaration of conflicting interests: The author declares no conflicts of interest.

Peer Review: In a blind peer review process, Jens F. Beckmann, André Kretzschmar, and Matthias Stadler have reviewed this article before publication. All reviewers have approved the disclosure of their names after the end of the review process.

Handling editor: Varun Dutt

Copyright: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Citation: Schoppek, W. (2023). A dual processing approach to complex problem solving. Journal of Dynamic Decision Making, 9, 1–17. doi:10.11588/jddm.2023.1.76662

Received: 09.11.2020 Accepted: 15.03.2023 Published: 20.06.2023

References

Ackerman, P. L. (1990). A correlational analysis of skill specificity: Learning, abilities, and individual differences. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16(5), 883–901. https://doi.org/10.1037/0278-7393.16.5.883
Anderson, J. R., Fincham, J. M., & Douglass, S. (1997). The role of examples and rules in the acquisition of a cognitive skill. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 932–945.

Anderson, J. R., & Lebiere, C. (1998). The atomic components of thought. Mahwah, NJ: Erlbaum.

Baddeley, A. D. (2007). Working memory, thought and action. Oxford: Oxford University Press.

Barnard, C., & Simon, H. A. (1947). Administrative behavior: A study of decision-making processes in administrative organization. New York: Free Press.

Basten, U., Stelzel, C., & Fiebach, C. J. (2013). Intelligence is differentially related to neural effort in the task-positive and the task-negative brain network. Intelligence, 41(5), 517–528. https://doi.org/10.1016/j.intell.2013.07.006

Bates, D., Maechler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01

Baumeister, R. F., & Tierney, J. (2011). Die Macht der Disziplin: Wie wir unseren Willen trainieren können. Frankfurt/New York: Campus-Verlag.

Baumeister, R. F., Vohs, K. D., & Tice, D. M. (2007). The strength model of self-control. Current Directions in Psychological Science, 16(6), 351–355. http://dx.doi.org/10.1111/j.1467-8721.2007.00534.x

Beckmann, J. (1994). Lernen und Komplexes Problemlösen [Learning and complex problem solving]. Bonn: Holos.

Beckmann, J. F. (2019). Heigh-Ho: CPS and the seven questions – some thoughts on contemporary complex problem solving research. Journal of Dynamic Decision Making, 5(12). https://doi.org/10.11588/jddm.2019.1.69301

Beckmann, J. F., & Goode, N. (2017). Missing the wood for the wrong trees: On the difficulty of defining the complexity of complex problem solving scenarios. Journal of Intelligence, 5(15), 1–18. https://doi.org/10.3390/jintelligence5020015

Beckmann, J. F., & Guthke, J. (1995). Complex problem solving, intelligence, and learning ability. In P. A. Frensch & J. Funke (Eds.), Complex problem solving: The European perspective (pp. 177–200). Psychology Press.

Betsch, T., Haberstroh, S., Glockner, A., Haar, T., & Fiedler, K. (2001). The effects of routine strength on adaptation and information search in recurrent decision making. Organizational Behavior and Human Decision Processes, 84(1), 23–53. https://doi.org/10.1006/obhd.2000.2916

Boeck, P. de, & Kovacs, K. (2020). The many faces of intelligence: A discussion of Geary's mitochondrial functioning theory on general intelligence. Journal of Intelligence, 8(1). https://doi.org/10.3390/jintelligence8010008

Broadbent, D. E., FitzGerald, P., & Broadbent, M. H. P. (1986). Implicit and explicit knowledge in the control of complex systems. British Journal of Psychology, 77(1), 33–50. https://doi.org/10.1111/j.2044-8295.1986.tb01979.x

Buchner, A., Funke, J., & Berry, D. C. (1995). Negative correlations between control performance and verbalizable knowledge: Indicators for implicit learning in process control tasks? The Quarterly Journal of Experimental Psychology. A, Human Experimental Psychology, 48(1), 166–187. https://doi.org/10.1080/14640749508401383
Clausewitz, C. von (1832/1991). Vom Kriege (ed. Werner Hahlweg). Bonn: Dümmler.

Csikszentmihalyi, M., Abuhamdeh, S., & Nakamura, J. (2014). Flow. In M. Csikszentmihalyi (Ed.), Flow and the foundations of positive psychology: The collected works of Mihaly Csikszentmihalyi (pp. 227–238). Heidelberg, New York: Springer.

Daneman, M., & Carpenter, P. A. (1980). Individual differences in working memory and reading. Journal of Verbal Learning & Verbal Behavior, 19, 450–466. doi:10.1016/S0022-5371(80)90312-6

Davis, Z. J., Bramley, N. R., & Rehder, B. (2020). Causal structure learning in continuous systems. Frontiers in Psychology, 11, 244. https://doi.org/10.3389/fpsyg.2020.00244

Debatin, T. (2019). A revised mental energy hypothesis of the g factor in light of recent neuroscience. Review of General Psychology, 23(2), 201–210.

Diamond, A. (2013). Executive functions. Annual Review of Psychology, 64(1), 135–168. https://doi.org/10.1146/annurev-psych-113011-143750

Dienes, Z., & Fahey, R. (1995). Role of specific instances in controlling a dynamic system. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21(4), 848–862.

Dienes, Z., & Fahey, R. (1998). The role of implicit memory in controlling a dynamic system. The Quarterly Journal of Experimental Psychology. A, Human Experimental Psychology, 51(3), 593–614. https://doi.org/10.1080/713755772

Dörner, D. (1980). On the difficulties people have in dealing with complexity. Simulation & Games, 11(1), 87–106.

Dörner, D. (1996). The logic of failure: Recognizing and avoiding error in complex situations. New York, NY: Basic Books.

Dörner, D., & Funke, J. (2017). Complex problem solving: What it is and what it is not. Frontiers in Psychology, 8(1153), 1–11. https://doi.org/10.3389/fpsyg.2017.01153

Dörner, D., & Schaub, H. (1994). Errors in planning and decision-making and the nature of human information processing. Applied Psychology, 43(4), 433–453.

Evans, J. St. B. T. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology, 255–278. http://dx.doi.org/10.1146/annurev.psych.59.103006.093629

Evans, J. St. B. T. (2012). Spot the difference: Distinguishing between two kinds of processing. Mind & Society, 11(1), 121–131. http://dx.doi.org/10.1007/s11299-012-0104-2

Evans, J. St. B. T., & Stanovich, K. E. (2013). Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8(3), 223–241.

Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41, 1149–1160.

Fischer, A., Greiff, S., & Funke, J. (2012). The process of solving complex problems. The Journal of Problem Solving, 4, 19–42. doi:10.7771/1932-6246.1118

Fox, M. D., & Raichle, M. E. (2007). Spontaneous fluctuations in brain activity observed with functional magnetic resonance imaging. Nature Reviews Neuroscience, 8(9), 700–711. https://doi.org/10.1038/nrn2201

Funke, J. (1993). Microworlds based on linear equation systems: A new approach to complex problem solving and experimental results. In G. Strube & K. F. Wender (Eds.), Knowledge and performance in complex problem solving (pp. 313–330). Elsevier.

Fum, D., & Stocco, A. (2003). Instance vs. rule-based learning in controlling a dynamic system. In F. Detje, D. Dörner, & H. Schaub (Eds.), Proceedings of the international conference on cognitive modelling (pp. 105–110). Universitätsverlag Bamberg.
Gigerenzer, G., & Brighton, H. (2011). Homo heuristicus: Why biased minds make better inferences. In G. Gigerenzer, R. Hertwig, & T. Pachur (Eds.), Heuristics: The foundations of adaptive behavior (pp. 2–27). Oxford, New York: Oxford University Press.

Gigerenzer, G., Hertwig, R., & Pachur, T. (Eds.). (2016). Heuristics: The foundations of adaptive behavior (First issued as an Oxford University Press paperback). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199744282.001.0001

Gobet, F., & Chassy, P. (2009). Expertise and intuition: A tale of three theories. Minds and Machines, 19(2), 151–180. http://dx.doi.org/10.1007/s11023-008-9131-5

Greiff, S., Fischer, A., Wüstenberg, S., Sonnleitner, P., Brunner, M., & Martin, R. (2013). A multitrait-multimethod study of assessment instruments for complex problem solving. Intelligence, 41(5), 579–596. http://dx.doi.org/10.1016/j.intell.2013.07.012

Greiff, S., & Funke, J. (2009). Measuring complex problem solving: The MicroDYN approach. In F. Scheuermann & J. Björnsson (Eds.), The transition to computer-based assessment – Lessons learned from large-scale surveys and implications for testing (pp. 157–163). Luxembourg: Office for Official Publications of the European Communities.

Greiff, S., Wüstenberg, S., & Funke, J. (2012). Dynamic problem solving: A new assessment perspective. Applied Psychological Measurement, 36(3), 189–213. https://doi.org/10.1177/0146621612439620

Greiff, S., Wüstenberg, S., Molnár, G., Fischer, A., Funke, J., & Csapó, B. (2013). Complex problem solving in educational contexts—Something beyond g: Concept, assessment, measurement invariance, and construct validity. Journal of Educational Psychology, 105(2), 364–379. https://doi.org/10.1037/a0031856

Gollwitzer, P. (1990). Action phases and mind-sets. In E. T. Higgins & R. M. Sorrentino (Eds.), The handbook of motivation and cognition: Foundations of social behavior (pp. 53–92). New York, NY: Guilford Press.

Howarth, C., Gleeson, P., & Attwell, D. (2012). Updated energy budgets for neural computation in the neocortex and cerebellum. Journal of Cerebral Blood Flow and Metabolism, 32(7), 1222–1232. https://doi.org/10.1038/jcbfm.2012.35
Hundertmark, J., Holt, D. V., Fischer, A., Said, N., & Fischer, H. (2015). System structure and cognitive ability as predictors of performance in dynamic system control tasks. Journal of Dynamic Decision Making, 1, 1–10. doi:10.11588/jddm.2015.1.26416

Hunt, E. (2010). Human intelligence. Cambridge University Press.

Inzlicht, M., & Berkman, E. (2015). Six questions for the resource model of control (and some answers). Social and Personality Psychology Compass, 9(10), 511–524. https://doi.org/10.1111/spc3.12200

Inzlicht, M., & Schmeichel, B. J. (2012). What is ego depletion? Toward a mechanistic revision of the resource model of self-control. Perspectives on Psychological Science, 7(5), 450–463. http://dx.doi.org/10.1177/1745691612454134

Kästner, L. (2018). Integrating mechanistic explanations through epistemic perspectives. Studies in History and Philosophy of Science, 68, 68–79. https://doi.org/10.1016/j.shpsa.2018.01.011

Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment under uncertainty: Heuristics and biases. Cambridge University Press.

Kahneman, D., & Tversky, A. (1972). Subjective probability: A judgment of representativeness. Cognitive Psychology, 430–454. http://dx.doi.org/10.1016/0010-0285%2872%2990016-3

Kahneman, D. (2011). Thinking, fast and slow. Macmillan.

Keren, G., & Schul, Y. (2009). Two is not always better than one: A critical evaluation of two-system theories. Perspectives on Psychological Science, 4(6), 533–550.

Kretzschmar, A., Hacatrjana, L., & Rascevska, M. (2017). Re-evaluating the psychometric properties of MicroFIN: A multidimensional measurement of complex problem solving or a unidimensional reasoning test? Psychological Test and Assessment Modeling, 59(2), 157–182.

Kretzschmar, A., & Süß, H. M. (2015). A study on the training of complex problem solving competence. Journal of Dynamic Decision Making, 1(1), 1–14. https://doi.org/10.11588/jddm.2015.1.15455

Kruglanski, A. W., & Gigerenzer, G. (2011). Intuitive and deliberative judgements are based on common principles. Psychological Review, 118, 97–109.

Lamport, D. J., Lawton, C. L., Mansfield, M. W., & Dye, L. (2009). Impairments in glucose tolerance can have a negative impact on cognitive function: A systematic research review. Neuroscience & Biobehavioral Reviews, 33(3), 394–413.
Lennie, P. (2003). The cost of cortical computation. Current Biology, 13(6), 493–497. https://doi.org/10.1016/S0960-9822(03)00135-0

Liddle, E. B., Hollis, C., Batty, M. J., Groom, M. J., Totman, J. J., Liotti, M., & Liddle, P. F. (2011). Task-related default mode network modulation and inhibitory control in ADHD: Effects of motivation and methylphenidate. Journal of Child Psychology and Psychiatry, 52(7), 761–771. https://doi.org/10.1111/j.1469-7610.2010.02333.x

Logan, G. D. (1988). Toward an instance theory of automatization. Psychological Review, 95(4), 492–527.

Lotz, C., Scherer, R., Greiff, S., & Sparfeldt, J. R. (2017). Intelligence in action – Effective strategic behaviors while solving complex problems. Intelligence, 64, 98–112. doi:10.1016/j.intell.2017.08.002

Luchins, A. S. (1942). Mechanization in problem solving: The effect of Einstellung. Psychological Monographs, 54, 95–111.

Müller, R., & Urbas, L. (2020). Adapt or exchange: Making changes within or between contexts in a modular plant scenario. Journal of Dynamic Decision Making, 1, 1–10.

Muraven, M., & Slessareva, E. (2003). Mechanisms of self-control failure: Motivation and limited resources. Personality & Social Psychology Bulletin, 29(7), 894–906.

Neubert, J. C., Kretzschmar, A., Wüstenberg, S., & Greiff, S. (2015). Extending the assessment of complex problem solving to finite state automata. European Journal of Psychological Assessment, 31(3), 181–194. https://doi.org/10.1027/1015-5759/a000224

Newell, A. (1973). You can't play 20 questions with nature and win: Projective comments on the papers of this symposium. In W. G. Chase (Ed.), Visual information processing. New York, NY: Academic Press.

Newell, A. (1994). Unified theories of cognition. Harvard University Press.

Norman, D. A., & Shallice, T. (1986). Attention to action: Willed and automatic control of behavior. In R. J. Davidson (Ed.), Consciousness and self-regulation (pp. 1–18). New York, NY: Springer.

Öllinger, M. (2017). Problemlösen. In J. Müsseler & M. Rieger (Eds.), Allgemeine Psychologie (pp. 587–618). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-53898-8_16

Osman, M. (2010). Controlling uncertainty: A review of human behavior in complex dynamic environments. Psychological Bulletin, 136(1), 65–86.

Osman, M., Glass, B., & Hola, Z. (2015). Approaches to learning to control dynamic uncertainty. Systems, 3, 211–236. https://doi.org/10.3390/systems3040211

Quesada, J., Kintsch, W., & Gomez, E. (2005). Complex problem-solving: A field in search of a definition? Theoretical Issues in Ergonomics Science, 6(1), 5–33. http://dx.doi.org/10.1080/14639220512331311553

Raichle, M. E. (2015). The brain's default mode network. Annual Review of Neuroscience, 38(1), 433–447. https://doi.org/10.1146/annurev-neuro-071013-014030

Scherer, R., & Beckmann, J. F. (2014). The acquisition of problem solving competence: Evidence from 41 countries that math and science education matters. Large Scale Assessment in Education, 2(10). https://doi.org/10.1186/s40536-014-0010-7

Schoppek, W. (in prep.). Increasing self-regulatory strength through regular physical exercise?

Schoppek, W. (2002). Examples, rules, and strategies in the control of dynamic systems. Cognitive Science Quarterly, 2(1), 63–92.

Schoppek, W. (2004). Direction of causality makes a difference. In K. Forbus, D. Gentner, & T. Regier (Eds.), Proceedings of the twenty-sixth annual conference of the Cognitive Science Society (pp. 1219–1224). Mahwah, NJ: Erlbaum.
Schoppek, W. (2019). A flashlight on attainments and prospects of research into complex problem solving. Journal of Dynamic Decision Making, 5, 8.

Schoppek, W., & Fischer, A. (2017). Common process demands of two complex dynamic control tasks: Transfer is mediated by comprehensive strategies. Frontiers in Psychology, 8, 2145.

Stadler, M., Becker, N., Gödker, M., Leutner, D., & Greiff, S. (2015). Complex problem solving and intelligence: A meta-analysis. Intelligence, 53, 92–101. https://doi.org/10.1016/j.intell.2015.09.005

Stadler, M., Hofer, S., & Greiff, S. (2020). First among equals: Log data indicates ability differences despite equal scores. Computers in Human Behavior, 111, 106442. https://doi.org/10.1016/j.chb.2020.106442

Stanovich, K. E., & Toplak, M. E. (2012). Defining features versus incidental correlates of Type 1 and Type 2 processing. Mind & Society, 11(1), 3–13. https://doi.org/10.1007/s11299-011-0093-6

Stanovich, K. E., & West, R. F. (1999). Individual differences in reasoning and the heuristics and biases debate. In P. L. Ackerman, P. C. Kyllonen, & R. D. Roberts (Eds.), Learning and individual differences (pp. 389–415). Washington, DC: APA.

Sun, R., Slusarz, P., & Terry, C. (2005). The interaction of the explicit and the implicit in skill learning: A dual-process approach. Psychological Review, 112(1), 159.

Taatgen, N. A., & Wallach, D. (2002). Whether skill acquisition is rule or instance based is determined by the structure of the task. Cognitive Science Quarterly, 2(2), 163–204.

Tschirgi, J. E. (1980). Sensible reasoning: A hypothesis about hypotheses. Child Development, 51(1), 1. https://doi.org/10.2307/1129583

Unsworth, N., Spillers, G. J., & Brewer, G. A. (2009). Examining the relations among working memory capacity, attention control, and fluid intelligence from a dual-component framework. Psychology Science, 51(4), 388–402.

Vaishnavi, S. N., Vlassenko, A. G., Rundle, M. M., Snyder, A. Z., Mintun, M. A., & Raichle, M. E. (2010). Regional aerobic glycolysis in the human brain. Proceedings of the National Academy of Sciences of the United States of America, 107(41), 17757–17762. https://doi.org/10.1073/pnas.1010459107
van Merriënboer, J. J. G. (1997). Training complex cognitive skills. Englewood Cliffs, NJ: Educational Technology Publications.

Vollmeyer, R., Burns, B. D., & Holyoak, K. J. (1996). The impact of goal specificity on strategy use and the acquisition of problem structure. Cognitive Science, 20(1), 75–100. http://dx.doi.org/10.1016/S0364-0213%2899%2980003-2

Weise, J. J., Greiff, S., & Sparfeldt, J. R. (2020). The moderating effect of prior knowledge on the relationship between intelligence and complex problem solving – Testing the Elshout-Raaheim hypothesis. Intelligence, 83, 101502. https://doi.org/10.1016/j.intell.2020.101502

Woods, D. D., Roth, E. M., Stubler, W. F., & Mumaw, R. J. (1990). Navigating through large display networks in dynamic control applications. In Proceedings of the Human Factors Society Annual Meeting (Vol. 4, pp. 396–399). Sage.

Wüstenberg, S., Greiff, S., & Funke, J. (2012). Complex problem solving—More than reasoning? Intelligence, 40, 1–14.