Opinion

A flashlight on attainments and prospects of research into complex problem solving

Wolfgang Schoppek
University of Bayreuth, Germany

Research on complex problem solving (CPS) has reached a stage where certain standards have been achieved, whereas its future development remains quite open. In this situation, the editors of the Journal of Dynamic Decision Making asked a number of representative authors to share their points of view on seven questions: about the relevance of (complex) problem solving as a research area, about the contribution of laboratory-based CPS research to solving real-life problems, about the roles of knowledge, strategies, and intuition in CPS, and about the existence of expertise in CPS.

Why should there continue to be problem solving research (in addition to research on memory, decision-making, motivation etc.)?

The virtue of problem solving as an object of research lies in its integrative potential. Problem solving is an activity that is characteristic of humans. As a form of action, it involves the entire person: Goals have their roots in personality; knowledge acquisition and use, as well as thinking, are important topics in cognitive science, just as motivation, self-regulation, and emotion are in other areas of psychology. I am convinced that insight into the human mind can only be gained when the interactions of important subsystems are considered. Moreover, problem solving research maintains an arsenal of methods that reminds psychologists that not everything is best investigated by way of large-scale studies, which tempt us to mistake average effects or correlations for explanations, or even for theories.

What are the connections between current CPS research practice and real problems? Where do you see potential for development towards stronger relations?

Currently, the mainstream of research on CPS, revolving around the multiple (or minimal) complex systems test MicroDYN (e.g. Greiff, Fischer, et al., 2013), has little to do with real problems. Consequently, there have been calls to turn to more realistic microworlds (Funke, 2014). While I am sympathetic to these claims and believe that CPS research in the narrower sense should be more aware of research on naturalistic decision making (Klein, 2008), I do not think that only high-fidelity simulations should be used (see Question 5). However, I expect authors to state more thoroughly with what research interest they conduct studies with specific microworlds. Just stating that our world is increasingly complex and dynamic, and that we therefore must study how persons deal with such systems, is not sufficient. In particular, I doubt that the requirement to explore a completely new and unknown system is very common in reality.

Given the artificiality of the laboratory situation, do participants really adopt the presented problems? What insights can be gained despite this artificiality and which cannot?

I think that many of our participants do not adopt the presented problems as their own. And those who do often do not adopt them to the degree they would if these were real and personal problems. For example, I have never seen a participant confronted with the “Tailorshop” who conducted an exact cost analysis to set a rational shirt price before starting the game. However, treating things lightly and trying to solve problems that are not existential on the fly appears typically human to me. It is precisely this transition from a halfhearted approach to immersion in a problem, and the conditions that support it, that can well be investigated via simulated complex problems. Another research question that can be studied in laboratory situations is how persons reduce complexity. In “The logic of failure”, Dörner (1996) has described a number of ways in which this happens. For example, persons tend to search for a central variable to which they attribute excessive explanatory power. However, as the book title suggests, these observations focus on detrimental attempts to reduce complexity, which might be shifted to a more positive orientation in future research (see also Osman, 2010, discussed under Question 5).

What evidence exists for the influence of other kinds of knowledge besides structural knowledge on the results of CPS? Which of these kinds of knowledge should be examined in future research?

Besides structural knowledge, the best-studied type is strategy knowledge, which I address under Question 5. In the future, we should investigate the significance of knowledge about concepts from systems theory: exponential growth, saturation, and properties of non-linear dynamics such as self-organization, phase transitions, or attractors (Schiepek & Strunk, 2010). I believe that persons who are familiar with such concepts should be better at controlling complex dynamic systems, because these concepts help in understanding the system at hand and are associated with hints about potential actions or caveats. Simple examples are considering side effects or delayed effects.
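To make the first two of these concepts concrete, the following minimal Python sketch contrasts unbounded exponential growth with saturating (logistic) growth. The functions, parameter values, and the interpretation in the comments are illustrative assumptions of mine, not material from the cited studies.

```python
# Illustrative sketch (not from the cited literature): exponential vs.
# saturating (logistic) growth in discrete time. All parameter values
# are arbitrary and chosen only for demonstration.

def exponential_step(x, rate=0.1):
    """One time step of unbounded exponential growth."""
    return x * (1 + rate)

def logistic_step(x, rate=0.1, capacity=100.0):
    """One time step of logistic growth: the same growth rate,
    but damped as x approaches the carrying capacity (saturation)."""
    return x + rate * x * (1 - x / capacity)

x_exp = x_log = 1.0
for _ in range(80):
    x_exp = exponential_step(x_exp)
    x_log = logistic_step(x_log)

print(f"after 80 steps: exponential = {x_exp:.1f}, logistic = {x_log:.1f}")
# The exponential process has grown beyond 2000, whereas the logistic
# process has leveled off close to its capacity of 100, a simple
# fixed-point attractor. Recognizing such patterns early tells a problem
# solver whether a quantity will run away or level off by itself.
```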
What evidence is available for the impact of strategies (except VOTAT) on the results of CPS? Which of these strategies should be examined more closely?

As stated above, I think that the best-studied type besides structural knowledge is strategy knowledge. An important class of exploration tactic whose role in CPS has been rediscovered recently is observing the dynamics of a system without interventions, if necessary after a short impulse (see the sketch at the end of this answer). I have named this tactic PULSE (Schoppek & Fischer, 2017), but there are a number of other descriptions and demonstrations that this tactic is beneficial (Beckmann, 1984; Schoppek, 2002; NOTAT: Lotz, Scherer, et al., 2017). However, as for VOTAT, the area of application for PULSE is narrow: exploring unknown systems. Osman (2010) has discussed the necessity to reduce complexity when dealing with the characteristic uncertainty of complex dynamic control tasks, which clearly has a strategic aspect. In my opinion, the proposed monitoring and control framework is too abstract to derive specific strategies from it. Although I doubt that we can find much generalizable evidence about the features of a promising strategy for reducing complexity, I consider it worthwhile to study the different ways of reducing complexity in specific domains.
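To illustrate the difference between the two exploration tactics, here is a minimal, hypothetical sketch. The two-input linear system, its parameter values, and the estimation shortcuts are invented for demonstration and do not reproduce any published MicroDYN item or the procedures of the cited studies.

```python
# Hypothetical MicroDYN-style linear system, used only for illustration:
# one output y with an unknown eigendynamic a and two inputs x1, x2:
#   y(t) = a * y(t-1) + b1 * x1(t) + b2 * x2(t)
A, B1, B2 = 1.2, 3.0, 0.0   # hidden parameters the explorer tries to infer

def step(y, x1, x2):
    """Advance the system by one round."""
    return A * y + B1 * x1 + B2 * x2

# PULSE: give one short impulse, then set all inputs to zero and simply
# watch the output. With zero inputs, the ratio of successive outputs
# reveals the eigendynamic a.
y = step(0.0, 1.0, 0.0)          # impulse round
y_next = step(y, 0.0, 0.0)       # observation round without intervention
print("estimated eigendynamic a:", y_next / y)         # approx. 1.2

# VOTAT: vary one thing at a time from a zero baseline, so that each
# round isolates the effect of a single input.
print("estimated effect of x1:", step(0.0, 1.0, 0.0))   # -> 3.0
print("estimated effect of x2:", step(0.0, 0.0, 1.0))   # -> 0.0
```

As the sketch suggests, both tactics presuppose an unknown but freely manipulable system, which is exactly the narrow area of application noted above.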
Is there intuitive CPS?

At first glance, “intuitive CPS” sounds like a contradiction. If we consider a problem as being defined by a barrier that precludes direct goal achievement, and intuition as a solution that comes to mind without thinking, there is no intuitive CPS. In other words, a task that can be accomplished without thinking is not a problem. However, a problem solving process can comprise varying portions of intuitive components. Such components could be, for example, the execution of an exploration tactic, the recognition of a critical system status, the recognition of an opportunity for a certain intervention, or pondering the constraints of different input possibilities. Gobet and Chassy (2009) have presented a model of expert problem solving in chess that incorporates intuitive and analytic components and their interplay. These authors define intuition as “the rapid understanding shown by individuals, typically experts, when they face a problem” (Gobet & Chassy, 2009, p. 151), and model it by the formation of a network of increasingly complex chunks. This approach might be fruitfully applied to CPS. In addition, I consider a dual-processing framework (Evans, 2012) useful for teasing apart intuitive and analytic components of problem solving (Schoppek, 2020).

What distinguishes experts in CPS from laypersons?

This question implies that there are experts in CPS, which is not self-evident. Greiff and his colleagues view CPS as a general competence (Greiff & Martin, 2014), whereas Tricot and Sweller (2014) provided compelling evidence for the primacy of domain-specific knowledge (e.g. air-traffic controllers can hold enormous amounts of flight-related information in working memory, but are not better at standard working memory tests). However, I think that there is knowledge about complex dynamic systems that can potentially be applied to a wide range of problem situations, regardless of their specific domain. This includes the items addressed under Question 4, but also the ability to recognize classes of systems. For example, a person who has understood the oscillating dynamics of the legendary “sugar factory” (Berry & Broadbent, 1984) and has explored predator-prey systems is probably better prepared for dealing with oscillations in a new domain than a person who has had no such experiences. Additionally, experts have a large repertoire of strategies at their disposal (see Question 5) and can execute many tactics almost automatically. Apart from these knowledge-related characteristics, I would expect that CPS experts find a complex problem more appealing. They are also more likely than laypersons to perceive a failed problem solving attempt as challenging their self-worth. This hypothesis may contribute to explaining the replicated finding that science students are better at controlling dynamic systems that are new to them than students of other majors (Schoppek, 2004, 2020). It is also an example of the relevance of motivational, self-regulatory, and emotional processes for understanding (complex) problem solving.

Corresponding author: Wolfgang Schoppek, University of Bayreuth, 95440 Bayreuth, Germany. E-mail: wolfgang.schoppek@uni-bayreuth.de

Declaration of conflicting interests: The author declares that he has no conflict of interest.

Author contributions: The author is completely responsible for the content of this manuscript. The abstract was added by the editors.

Handling editors: Andreas Fischer and Wolfgang Schoppek

Copyright: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Citation: Schoppek, W. (2019). A flashlight on attainments and prospects of research into complex problem solving. Journal of Dynamic Decision Making, 5, 8.
doi: 10.11588/jddm.2019.1.69297
Published: 31 Dec 2019

References

Berry, D. C., & Broadbent, D. E. (1984). On the relationship between task performance and associated verbalizable knowledge. The Quarterly Journal of Experimental Psychology Section A, 36(2), 209–231. https://doi.org/10.1080/14640748408402156

Dörner, D. (1996). The logic of failure: Recognizing and avoiding error in complex situations. New York, NY: Basic Books.

Evans, J. St. B. T. (2012). Spot the difference: Distinguishing between two kinds of processing. Mind & Society, 11(1), 121–131. https://doi.org/10.1007/s11299-012-0104-2

Funke, J. (2014). Analysis of minimal complex systems and complex problem solving require different forms of causal cognition. Frontiers in Psychology, 5(739), 1–3. https://doi.org/10.3389/fpsyg.2014.00739

Gobet, F., & Chassy, P. (2009). Expertise and intuition: A tale of three theories. Minds and Machines, 19(2), 151–180. https://doi.org/10.1007/s11023-008-9131-5

Greiff, S., Fischer, A., Wüstenberg, S., Sonnleitner, P., Brunner, M., & Martin, R. (2013). A multitrait-multimethod study of assessment instruments for complex problem solving. Intelligence, 41(5), 579–596. https://doi.org/10.1016/j.intell.2013.07.012

Greiff, S., & Martin, R. (2014). What you see is what you (don’t) get: A comment on Funke’s (2014) opinion paper. Frontiers in Psychology, 5(1120). https://doi.org/10.3389/fpsyg.2014.01120

Klein, G. (2008). Naturalistic decision making. Human Factors, 50(3), 456–460. https://doi.org/10.1518/001872008x288385

Lotz, C., Scherer, R., Greiff, S., & Sparfeldt, J. R. (2017). Intelligence in action – Effective strategic behaviors while solving complex problems. Intelligence, 64, 98–112. https://doi.org/10.1016/j.intell.2017.08.002

Osman, M. (2010). Controlling uncertainty: A review of human behavior in complex dynamic environments. Psychological Bulletin, 136(1), 65–86. https://doi.org/10.1037/a0017815

Schiepek, G., & Strunk, G. (2010). The identification of critical fluctuations and phase transitions in short term and coarse-grained time series: A method for the real-time monitoring of human change processes. Biological Cybernetics, 102, 197–207. https://doi.org/10.1007/s00422-009-0362-1

Schoppek, W. (2002). Examples, rules, and strategies in the control of dynamic systems. Cognitive Science Quarterly, 2(1), 63–92.

Schoppek, W. (2004). Teaching structural knowledge in the control of dynamic systems: Direction of causality makes a difference. In K. D. Forbus, D. Gentner, & T. Regier (Eds.), Proceedings of the 26th Annual Conference of the Cognitive Science Society (pp. 1219–1224). Mahwah, NJ: Lawrence Erlbaum Associates.

Schoppek, W. (2020). Tut denken weh? Überlegungen zur Ökonomietendenz beim komplexen Problemlösen [Does thinking hurt? Reflections on the tendency toward economy in complex problem solving]. In K. Viol & H. Schöller (Eds.), Selbstorganisation – Ein Paradigma für die Humanwissenschaften? (pp. 373–388). Berlin: Springer.

Schoppek, W., & Fischer, A. (2017). Common process demands of two complex dynamic control tasks: Transfer is mediated by comprehensive strategies. Frontiers in Psychology, 8, 2145. https://doi.org/10.3389/fpsyg.2017.02145

Tricot, A., & Sweller, J. (2014). Domain-specific knowledge and why teaching generic skills does not work. Educational Psychology Review, 26(2), 265–283. https://doi.org/10.1007/s10648-013-9243-1