Original Research The role of difficulty in dynamic risk mitigation decisions Lisa Vangsness and Michael E. Young Kansas State University, Department of Psychological Sciences Previous research suggests that individuals faced with risky choices seek ways to actively reduce their risks. The risk defusing operators (RDOs) that are identified through these searches can be used to prevent or compensate for (here, pre- and post-event RDOs, respectively) negative outcomes. Although several factors that affect RDO se- lection have been identified, they are limited to static decisions conducted during descriptive tasks. The fac- tors that influence RDO selection in dynamically unfold- ing environments are unknown, and the relationship be- tween task characteristics and RDO selection has yet to be mapped. We used a videogame environment to con- duct two experiments to address these issues and found that experienced losses impacted risk mitigation strat- egy: when the task was difficult, participants experienced greater losses and were more likely to select preventive RDOs (Experiment 1). Additionally, risk mitigation be- havior stabilized as participants gained experience with the task (Experiments 1 and 2) and could be shifted by making an RDO easier to use (Experiment 2). Exploratory analyses suggested that these risk mitigation choices were not driven by judgments of difficulty (JODs), even though participants’ JODs were accurate and aligned with task difficulty. This research suggests that while people seek preventive RDOs when tasks are difficult and risky, risk mitigation strategy is shaped by experienced losses; de- cision makers do not use JODs to anticipate future risks and inform risk mitigation decisions. Keywords: difficulty, risk mitigation, risk defusing operators, judg- ments of difficulty, dynamic environments Most people brush their teeth before work in the morn-ing. When repeated twice a day, this small preven- tive measure can significantly reduce the risk of cavities and improve overall oral hygiene. Despite the positive benefits of tooth brushing, more than half of Americans report for- getting to brush their teeth at least once in the past year (Delta Dental, 2014). Failing to brush your teeth invites risk but generally does not result in a negative outcome. Unless you consistently fail to brush or are particularly susceptible to cavities, you will not require a filling. This everyday decision is a simplified example of the choices that are made in high-risk medical, defense, and educa- tional situations (to name a few) around the world: Is it better to expend time and energy on preventive measures or should we wait and minimize the costs of prevention by taking action only if a negative outcome occurs? Our re- search studies sought to identify factors that contribute to when and how risk mitigation strategies are chosen, specif- ically within dynamic environments that rapidly change and respond to peoples’ actions. Previous research involving the use of risk mitigation strategies has focused on the conditions under which peo- ple will search for risk defusing operators (RDOs), which are actions or tools that can be used to reduce the risks as- sociated with a decision (Huber, Beutter, Montoya, & Hu- ber, 2001). In these experiments, participants seek RDOs by asking questions about a vignette. 
These questions may emphasize preventive or compensatory strategies that could be employed before (pre-event RDOs) or after (post-event RDOs) a decision is made to reduce the likelihood or severity of a negative outcome (for a review see Huber, 2012). More than a decade of research with these vignette-based descriptive tasks suggests that participants' willingness to seek RDOs depends on information availability and environmental pressures. That is, risk mitigation depends not only on the environment, but also on a person's ability to detect and interpret environmental cues. While vignette-based tasks fail to capture the dynamic nature of some real-world decisions, this research illustrates an important concept: people will actively engage with the environment to reduce their risks when they perceive the opportunity to do so (Huber, Beutter, Montoya, & Huber, 2001). For this reason, we will review research that uses vignette-based tasks before exploring the implications these findings have for choices made in dynamic decision-making tasks and for the current studies.

Risk Mitigation in Vignette-based Decision-making Tasks

Within the context of vignette-based tasks, individuals initiate a search for RDOs when they recognize that their desired choice is associated with an unacceptable level of risk and will discontinue this search when an acceptable RDO is found (Bär & Huber, 2008). This search hinges on their experience with a task as well as their knowledge of risks and RDO availability. When a scenario is unfamiliar and includes explicit cues about the detection of a negative event (e.g., a test that detects the negative side effects of a medication) or about RDO availability (e.g., access to an expert who may be aware of successful risk mitigation strategies), individuals are more likely to ask questions about these factors and use this information to make decisions. This search is less likely to occur when information cues are absent (Huber & Huber, 2008; Huber & Huber, 2003) and when individuals have background knowledge that precludes the need for additional information (Huber & Macho, 2001). Environmental pressures also appear to play a role in the search for risk mitigation strategies. Under time pressure, questions about RDOs become more strategic and focused on RDO availability (Huber & Kunz, 2007), while requiring people to justify their choices discourages them from taking risks even when RDOs are available (Huber, Bär, & Huber, 2009). To summarize, individuals use environmental cues to determine when and how to search for risk mitigation strategies in the context of vignette-based tasks.

Corresponding author: Lisa Vangsness, Kansas State University, Department of Psychological Sciences, 492 Bluemont Hall, Manhattan, KS 66506, USA, e-mail: lvangsness@ksu.edu

While the factors influencing RDO search are well-studied in the context of vignette-based tasks, less is known about how RDOs are used during situations that are continuously unfolding.
Although participants indicate an in- terest in using preventive strategies when negative out- comes are difficult to detect and severe losses are expected (e.g., a symptomless virus; Huber & Huber, 2003), these preferences may change when decisions are encountered re- peatedly within a single context (e.g., Camilleri & Newell, 2013; Hau, Pleskac, Kiefer, & Hertwig, 2008; Hau, Pleskac, & Hertwig, 2010). Repeated decisions allow participants to receive feedback in the form of experienced or avoided losses, which can be used to inform future risk mitiga- tion decisions. Thus, an RDO’s role in a dynamic task can change depending on the event to which the decision- maker anchors their choice. For example, insurance is of- ten considered a compensatory, pre-event RDO because it is purchased in advance but only pays out if a negative outcome occurs (e.g., Huber, 2012); yet some people wait to purchase insurance until after they have sustained a loss (Zaleskiewwicz, Piskorz, & Borkowska, 2002), making in- surance a post-event compensatory RDO. More research is needed to determine how individuals’ willingness to seek and employ RDOs is affected by the repeated choices made within dynamic environments. Learning from Experience in Repeated-choice Tasks Initially, repeated-choice tasks involve uncertainty (Knight, 1921) in that their risks are not yet known: new homeowners have not yet encountered a flood nor has a novice physician seen the effects of a deadly virus. As individuals gain experience in a novel environment, they sample context-specific information which can be used to estimate the probability of risk (Hertwig & Erev, 2009) and calibrate subsequent judgements (cf., Brunswick, 1952). That is, people track experienced losses and become aware of task-related cues that signal increased risks. These cues can then be used to inform subsequent risk mitigation decisions. One class of cues that signal risk encompasses those re- lated to task difficulty. On a broad level, difficulty corre- lates with risk. The more challenging the task, the more likely a person will be to make errors and experience losses. The easier the task, the more likely a person is to succeed. This relationship can be used to inform decision-making: individuals may use experienced losses to help identify and calibrate their use of cues to difficulty (e.g., visual com- plexity present on a radar screen), which can then be used to infer their level of risk. Taken together, experienced losses and cues to difficulty should affect the probability with which an individual will elect to use either type of RDO. If a task is perceived to be risky, an individual may become more likely to use an RDO, especially when the magnitude of the loss associated with that risk is high (cf., Huber & Huber, 2003). Ide- ally, preventive strategies will be employed when risks and losses are large, such as during a challenging task (Huber, 2012); however, upfront costs (e.g., time, money, effort) may dissuade people from adopting preventive strategies (Sigurdsson, Taylor, & Wirth, 2013). This is particularly true of demanding tasks that require many resources to complete. For instance, the CHEX decision-aid, a tool in- tended to help air traffic controllers and tactical coordina- tors improve their situation awareness, does not improve performance partly because it distracts people from their primary task of monitoring the airspace. 
In this case, par- ticipants rarely used the preventive RDO because it occu- pied resources that could be used to complete the task itself (Vallières, Hodgetts, Vachon, & Tremblay, 2016). Thus, compensatory strategies may be preferred during challeng- ing tasks because they do not require any effort unless they must be used. Converging evidence from cognitive, comparative, and motivational literature supports the notion that people weigh the trade-offs between effort and reward when mak- ing decisions (for reviews see Mitchell, 2017; Walton, Ken- nerley, Bannerman, Phillips, & Rushworth, 2006; and Locke & Latham, 2002, respectively). If effort is not com- mensurately rewarded, people will minimize resource al- location by abandoning tasks in favor of easier or more rewarding endeavors. In this way, they strive to maxi- mize the utility of their limited resources (Kurzban, Duck- worth, Kable, & Myers, 2013) and may use the effort- reward trade-off to inform risk mitigation decisions. The role of expertise. An individual’s ability to judge task difficulty and estimate risk may be mediated by task- specific knowledge that is acquired through practice. A single task can be made difficult in many ways (e.g., the enemies in a videogame can move more quickly or more slowly; alternatively, these same enemies could take more or fewer shots to destroy). Through practice, people be- come sensitive to the risk-reward relationships present in their environments (Pleskac & Hertwig, 2014) and will ac- tively exploit them by selecting strategies that maximize their successes and rewards (Lovett & Anderson, 1996). This sensitivity is particularly pronounced when cues to difficulty are easily discriminable and frequently encoun- tered (Gaeth & Shanteau, 1984; Shanteau, 1992). When difficulty is determined by multiple task dimensions or when decision makers receive limited feedback, expertise may negatively impact decision making. Under these con- ditions, experts are more likely to attend to irrelevant cues and may be poorly calibrated in their estimates of difficulty and risk (for a review, see Koehler, Brenner, & Griffin, 2002). Studying risk mitigation decisions within a con- trolled dynamic task will allow us to determine whether people use cues to difficulty to evaluate risks and whether these relationships can be learned over time. Judgments of Difficulty as a Measure of Resource Demands Because difficulty, risk, and loss are closely intertwined, judgments of difficulty (JODs) should reflect peoples’ awareness of changing cues to difficulty and inform the strategies they pursue as they engage in a challenging task (Kahneman 1973; Kanfer & Ackerman, 1989; for a review see Kurzban 2016), JODs should also predict the risk mit- igation strategies that will allow people to achieve their 10.11588/jddm.2017.1.41543 JDDM | 2017 | Volume 3 | Article 5 | 2 https://doi.org/10.11588/jddm.2017.1.41543 Vangsness & Young: Difficulty and risk mitigation goals (Kurzban et al., 2013). To be successful, people must evaluate task difficulty frequently enough to detect changes in the environment that may affect their ability to effectively allocate resources toward their goals (Brunswik, 1956). Once a task is underway, JODs can be used to re- allocate resources in response to changing task demands (Flavell, 1979). 
Historically, researchers interested in metacognitive eval- uations of difficulty have used knowledge assessments (e.g., multiple-choice questionnaires; comprehension) to deter- mine the degree to which people accurately estimate the disparity between their abilities and those that the situ- ation requires (e.g., Ozuru, Kurby, & McNamara, 2012). However, JODs made in static environments differ signifi- cantly from those that must be made repeatedly as situa- tions rapidly unfold over time. Recent research involving dynamic tasks suggests that people integrate multiple cues to difficulty when making JODs, and that the weighting of these cues can change over time (Desender, Van Opstal, & Van den Bussche, 2017; Koriat, 1997). Peoples’ ability to identify, integrate, and update cues is an integral part of selecting appropriate problem-solving strategies (Lovett & Schunn, 1999); thus, JODs may be related to RDO se- lection in dynamic tasks. An illustration of this purported relationship can be found in Figure 1. Figure 1. While it is likely that cues to difficulty, level of risk, and the magnitude of losses directly influence risk mitigation decisions, it is also possible that these factors are captured by individuals’ Judgments of Difficulty (JODs). Evidence from two experiments revealed that JODs are unaffected by the magnitude of losses in- curred by an individual, and that JODs do not impact risk mitiga- tion decisions. Studying RDOs in a Dynamic Environment We wished to understand the influence that task difficulty and JODs have on risk mitigation strategies during a dy- namic task. Our dynamic task was a third-person shooter videogame designed using the Unity game engine (Unity, 2016). Interested parties can find a video and brief descrip- tion of this task at http://youtu.be/q6AHSWfAyyY. Previous literature suggested competing hypotheses re- garding the relationship between task difficulty and risk mitigation. If experienced losses and anticipated risks underlie RDO selection, preventive measures (pre-event RDOs) should be selected more frequently when perfor- mance is expected to worsen. In the context of this task, pre-event RDOs might be selected more often when a player’s in-game losses are greater and more frequent. However, it is also possible that current resource availabil- ity dominates risk mitigation decisions. If this holds true, compensatory strategies (post-event RDOs) should be fa- vored during difficult tasks that negatively impact perfor- mance. That is, players should allocate greater attention and working memory to improve their performance during a difficult level of the videogame rather than invest these resources in a pre-event RDO. These perspectives can be summarized in the following way: H1a (risk estimation): participants will be more likely to select pre-event RDOs as their task performance becomes impaired. H1b (resource minimization): participants will be less likely to select pre-event RDOs as their task performance becomes impaired. If risk estimation underlies RDO selection, then peo- ple should be cognizant of the relationship between dif- ficulty and risk and can track this relationship through repeated decisions. Support for this hypothesis would sug- gest that risk mitigation in dynamic tasks mirrors that of vignette-based tasks and can be conceptualized as a form of problem-solving. If resource minimization underlies RDO selection, then individuals should be less capable of mini- mizing the risks they encounter due to task difficulty. 
This finding would provide a simple explanation for peoples’ ten- dency to violate workplace safety precautions when tasks are difficult (e.g., Sigurdsson, Taylor, & Wirth, 2013), even though such behavior is suboptimal. However, such be- havior could also be explained by H1a if risk estimation is driven by experienced losses and cannot be anticipated; an exploratory model comparison will address this issue if H1a is supported. Because the cues to difficulty within our environment were relatively straightforward and varied along a single dimension, we anticipated that previous videogame expe- rience would change the way in which these strategies were adopted. Namely, if expertise enhances participants’ abil- ity to identify and use cues to difficulty and informs the selection of RDOs, participants who report significant ex- perience with videogame tasks should display early risk mitigation preferences and be less likely to sample alterna- tive strategies at the onset of the task (H2). In other words, participants with previous videogame experience should have adopted the RDO strategies outlined by H1a and H1b more quickly. We also believed that risk mitigation be- havior would stabilize as time-on-task increased, regardless of participants’ previous experience with videogame tasks (H3). If these hypotheses are supported, it would suggest that expertise improves the calibration of risk mitigation activities. Experiment 1 Participants Seventy-nine participants (43 female) from the General Psychology pool at Kansas State University completed the experimental task and received 1 hr of research credit compensation to fulfill a course requirement. One participant experienced a computer malfunction and needed to restart the videogame. The remaining data from this participant is included in the analyses. 10.11588/jddm.2017.1.41543 JDDM | 2017 | Volume 3 | Article 5 | 3 http://youtu.be/q6AHSWfAyyY https://doi.org/10.11588/jddm.2017.1.41543 Vangsness & Young: Difficulty and risk mitigation Design and Procedure Participants completed a 40-min session of a third- person shooter videogame in which they controlled the avatar of a young boy who had shrunk to miniature size and was pursued by stuffed-animal zombies inside his bedroom. During each level, stuffed-animal zom- bies appeared at semi-random locations and pursued the boy throughout the room. Participants guided their avatar across the bedroom floor using the ar- row/ASWD keys and eliminated enemies with a laser cap gun that was controlled by moving and clicking with the computer mouse. The goal of the videogame was to successfully pass through as many levels as pos- sible before the session was finished. Experimenters encouraged participants to pursue this goal by stating that “most participants clear eight levels before the session ends.” Successful completion of this goal required partici- pants to prioritize their performance in the game be- cause death was a time-costly event that occurred once the avatar’s health was fully depleted by enemy at- tacks, which occurred each time a stuffed-animal zom- bie touched the avatar. Each enemy attack depleted 20 hit points of the avatar’s 100 hit points of health. When the avatar’s hit points dropped below 0, the avatar died and the game was paused for 30 s while a loading screen appeared. The purpose of this waiting period was to serve as an aversive consequence that discouraged players from using death as a gameplay strategy to avoid enemy characters. 
Following this de- lay, the avatar was restored to full health and placed at a random location within the game space. Participants advanced to a new level by eliminating enemies. Each enemy elimination earned the player 1 point. Once participants eliminated 30 enemies from the game space (raised their score from 0 to 30 points), their score reset and the game advanced to a new level with an identical layout that could be easier or harder than the last (how this was accomplished is detailed later): unlike a traditional video game, the degree of difficulty was randomly assigned at the beginning of each level. Participants’ ability to track changes in task difficulty was assessed using a pop-up window that appeared at the beginning of each level and ev- ery 2 minutes during the game. This pop-up window contained two buttons that allowed participants to in- dicate whether the videogame was “easier” or “harder” than it was before. This format allowed participants to make comparative assessments without interpreting scale anchors and without making assumptions about the scaling of JODs (for additional information, see Böckenholt, 2004). Once participants selected an op- tion with the computer mouse, the pop-up window dis- appeared from the screen. Gameplay remained paused for 3 s before and after the pop-up window appeared to reduce the performance costs associated with task interruption (Altmann & Trafton, 2007). After 40 mins of gameplay, the videogame ended and participants completed a demographic questionnaire that included questions about sex and videogame ex- perience. Participants also completed a modified ver- sion of the Game Engagement Questionnaire (Brock- myer et al., 2009). RDO selection. In an effort to ensure that all par- ticipants anchored their risk mitigation actions to the same event, RDOs were made available at the begin- ning of each level and every subsequent 5 mins. At these times, a pop-up window invited participants to “select a tool” that they could use to improve their per- formance during the game. Participants could select one of two tools, a shield (a pre-event RDO) or a health pack (a post-event RDO). Either tool could be used to mitigate 20 hit points of damage from an enemy char- acter by preventing an enemy attack (shield) or restor- ing the avatar’s health (health pack). Additionally, these tools differed in how difficult they were to use. While post-event RDOs could be used at any point fol- lowing an enemy attack, pre-event RDOs needed to be timed to the enemy attack because they only shielded the avatar for up to 5 s and needed to be redeployed once an enemy character touched the shield. Selecting a tool placed five of these items into the avatar’s inventory, which was indicated by a set of icons in the lower left corner of the screen. Although participants received an opportunity to restore their inventory every 5 mins, they could neither stockpile items nor could they hold items of more than one type. Thus, participants needed to use their experiences in the game to develop a risk mitigation strategy that considered the strengths and weaknesses of both the tools and themselves. Participants could use these tools at their discretion by pressing the F key on the keyboard. Each time participants used a tool, they received notification by visual and auditory cues: a 250-ms sound and a 3-D bubble accompanied each RDO use. Both the sound and the bubble were specific to the tool and could be used to differentiate tool choice. 
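To summarize the health and tool mechanics described above, the sketch below traces how a single enemy attack was resolved. It is an illustrative reconstruction in Python rather than the game's actual Unity code; the function and variable names are ours, and only the numeric values stated in the text (100 hit points of health, 20 hit points per attack or per tool use, a 30-s delay after death, and five tool charges per selection) come from the task description.

```python
# Illustrative reconstruction of the health/tool loop described in the text
# (not the game's Unity/C# source). Names are hypothetical; the constants
# mirror the values reported above.
MAX_HEALTH = 100          # avatar hit points at full health
ATTACK_DAMAGE = 20        # hit points lost per enemy attack
TOOL_VALUE = 20           # hit points prevented (shield) or restored (health pack)
RESPAWN_DELAY_S = 30      # loading-screen pause that follows a death

def resolve_attack(health, shield_active, shield_charges):
    """Resolve one enemy attack and return (health, shield_charges, died)."""
    if shield_active and shield_charges > 0:
        # Pre-event RDO: the shield absorbs the attack and one icon disappears.
        return health, shield_charges - 1, False
    health -= ATTACK_DAMAGE
    if health <= 0:
        # Health fully depleted: the game pauses for RESPAWN_DELAY_S seconds,
        # then the avatar is restored to full health at a random location.
        return MAX_HEALTH, shield_charges, True
    return health, shield_charges, False

def use_health_pack(health, pack_charges):
    """Post-event RDO: restore 20 hit points at any time after an attack.

    Capping at full health is an assumption; the text does not specify it.
    """
    if pack_charges == 0:
        return health, pack_charges
    return min(MAX_HEALTH, health + TOOL_VALUE), pack_charges - 1
```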
After a tool restored hit points or deflected an enemy attack, one of the five icons disappeared from the bottom of the screen. For a screenshot of the videogame task, see Figure 2.

Figure 2. A screenshot from the videogame task depicts the player's avatar surrounded by three enemy characters. The player's health and remaining shields are depicted in the lower left corner.

Task difficulty and risk. Task difficulty was manipulated as a between-subjects variable (difficulty type) by adjusting one characteristic of the enemy characters' behavior at the start of each level. This characteristic was automatically adjusted within-subjects by a programmed algorithm that randomly selected a value from a uniform distribution that represented a wide range of difficulty, as determined through participants' performance during pilot testing (Vangsness, 2017). This randomly selected value was held throughout the level, while all other characteristics of the enemy characters' behavior remained constant during the session. For example, participants assigned to the "speed" condition saw the enemy characters' rate of movement change between levels but did not experience changes in the enemy characters' hit points or population rate. Similarly, participants assigned to the "population" condition experienced changes in how quickly enemy characters appeared in the level, but did not see changes in the enemy characters' speed or hit points. A brief description of the characteristics and their sampling values can be found in Table 1; a schematic sketch of this sampling procedure appears at the end of this section.

Table 1. Both experiments included a between-subjects manipulation in which participants experienced different difficulty types.

condition | description | randomly selected values | constant value
Population rate (n = 26) | The rate at which enemies appeared in the game space | 1–25 s | 10 s
Speed (n = 23) | The speed at which enemy characters could travel | 0.2–15.0 Unity units | 5.0 Unity units
Strength (n = 30) | The number of hit points enemies had when they first appeared in a level | 20–400 hit points | 115 hit points

Note. Unity units are an arbitrary measure that can be used to scale game objects with respect to one another.

Previous analyses of gameplay data showed that difficulty was inversely related to gameplay performance (Vangsness, 2017). That is, participants were attacked more frequently and experienced greater losses when the manipulated difficulty parameter took values near the upper limit of the range. Conversely, participants experienced fewer losses when this parameter took on smaller values. These analyses suggest that risk is higher during more difficult levels and lower during easier levels. While it is theoretically possible to estimate the moment-by-moment risks incurred by each participant, this estimation would require knowledge of many factors (e.g., the skill of the individual player; the location, velocity, and expected arrival time of enemies) that fluctuate considerably during the task. As we were interested in broad, robust patterns of behavior that transcend a single, specific context, we defined risk as it varied with task difficulty.

Tutorial level. The videogame included a tutorial level to familiarize participants with the layout and controls of the game. The tutorial level was identical to the videogame task in all respects but contained only three enemy characters, which participants were required to eliminate before progressing to the first level of the game. Because the tutorial level differed significantly from the remainder of the videogame, data from this portion are excluded from subsequent analyses.
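The per-level difficulty assignment can be summarized with the following sketch. It is a schematic illustration, not the game's source code: the parameter ranges come from Table 1, while the function names and the uniform sampling call are assumptions consistent with the text's description of a programmed algorithm that randomly selected a value from a uniform distribution at the start of each level.

```python
# Schematic sketch of the difficulty manipulation (not the game's Unity code).
# Each participant is assigned one difficulty type; at the start of every level,
# a new value for that parameter is drawn uniformly from the ranges in Table 1
# and held constant until the level ends (30 enemies eliminated).
import random

DIFFICULTY_RANGES = {           # randomly sampled parameter, per condition
    "population": (1.0, 25.0),  # seconds between enemy appearances
    "speed": (0.2, 15.0),       # enemy movement speed, in Unity units
    "strength": (20, 400),      # enemy hit points at spawn
}
CONSTANT_VALUES = {             # values used when a parameter is NOT manipulated
    "population": 10.0,
    "speed": 5.0,
    "strength": 115,
}
KILLS_PER_LEVEL = 30            # eliminations needed to advance to the next level

def start_level(difficulty_type):
    """Return the enemy parameters in force for one level of the game."""
    params = dict(CONSTANT_VALUES)
    low, high = DIFFICULTY_RANGES[difficulty_type]
    params[difficulty_type] = random.uniform(low, high)
    return params

# Example: a participant in the "speed" condition plays a level in which only
# enemy speed departs from its constant value.
level_params = start_level("speed")
```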
Results

Risk Mitigation Strategy. We explored the factors underlying participants' risk mitigation strategies with a multilevel logistic regression model that predicted the probability that a participant would select either tool (Health pack, Shield) using participants' game performance, time-on-task, previous videogame experience, and difficulty type (Population, Speed, Strength) in the fixed effect structure. Game performance was defined as the rate of damage from enemy characters since the most recent RDO selection (total damage since last RDO choice ÷ time since last RDO choice), videogame experience as the summed responses to relevant items from the demographic questionnaire, and time-on-task as the amount of time that had elapsed since the beginning of the first level of the videogame task. The random effect structure was selected using AIC comparisons (Akaike, 1973), which supported a structure that included the intercept, game performance slope, and time-on-task slope. This specification allowed the model to account for participant differences in overall ability, perceptions of difficulty, and rate of learning. A full disclosure of random effect comparisons can be found in the appendix.

The findings from this analysis are illustrated by Figure 3. The slope in each time-slice panel illustrates that risk mitigation strategy was significantly affected by participants' damage rate since the last RDO selection, and that this relationship changed over time. Early in the game, participants had little preference for either RDO, but as time-on-task increased they learned to use preventive RDO strategies to compensate for heavy losses. When participants performed well during the later stages of the game, they became increasingly likely to select post-event RDOs. This pattern of behavior aligns with our hypothesis that risk estimation underlies RDO selection (H1a). Specifically, participants selected risk mitigation strategies that would prevent losses when they were likely to occur rather than choosing to conserve resources for task completion by selecting the less effortful post-event RDO. This relationship became more pronounced over time, suggesting that risk mitigation strategies stabilize as individuals become more familiar with available RDOs (H3). The other main effects included in the model were non-significant (p's > .05), suggesting that there is not a strong relationship between previous videogame experience and RDO selection (H2). All estimates and significance values are disclosed in Table 2.
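For readers who want to see the analysis pipeline concretely, the sketch below assembles the predictors and the AIC-based model comparison described above in Python (pandas and statsmodels). It is a simplified illustration rather than the reported analysis: the column names (damage_since_rdo, chose_shield, jod_harder, and so on) are hypothetical, and an ordinary logistic regression stands in for the multilevel model, so the by-participant random intercepts and slopes selected through AIC comparisons are omitted.

```python
# Simplified sketch of the predictor construction and model comparison described
# above (hypothetical column names; fixed-effects-only stand-in for the reported
# multilevel logistic regression).
import pandas as pd
import statsmodels.formula.api as smf

def prepare_predictors(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Game performance: damage rate since the most recent RDO selection.
    out["perf"] = out["damage_since_rdo"] / out["secs_since_rdo"]
    # Centering/scaling mirrors the note to Table 2: performance centered on 1.17,
    # experience mean-centered, time-on-task mean-centered and divided by 1,000.
    out["perf_c"] = out["perf"] - 1.17
    out["exp_c"] = out["vg_experience"] - out["vg_experience"].mean()
    out["time_c"] = (out["secs_on_task"] - out["secs_on_task"].mean()) / 1000.0
    return out

def compare_jod_contribution(df: pd.DataFrame):
    """Fit the base model and a model that adds the JOD response; return both fits."""
    base = smf.logit(
        "chose_shield ~ perf_c * time_c + exp_c + C(difficulty_type)", data=df
    ).fit(disp=0)
    with_jod = smf.logit(
        "chose_shield ~ perf_c * time_c + exp_c + C(difficulty_type) + jod_harder",
        data=df,
    ).fit(disp=0)
    # Lower AIC indicates the better-fitting model.
    return base, with_jod, base.aic - with_jod.aic
```

In the reported analyses the same comparison logic applies, but each candidate model additionally carries the random effect structure (intercept, game performance slope, and time-on-task slope) selected through AIC.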
Figure 3. During Experiment 1, participants' risk mitigation strategies were not initially sensitive to changes in task difficulty. As the experimental session continued, participants began to compensate for changes in task difficulty by selecting preventive tools (i.e., the shield) when they experienced greater losses and compensatory tools (i.e., the health pack) when they experienced fewer losses. Error ribbons represent one standard error above and below the model estimates.

Table 2. Model estimates from Experiment 1 reveal that game performance and time-on-task significantly predict participants' risk mitigation strategy during gameplay.

predictor | B | SE | z | p
intercept | 0.77 | 0.28 | 2.74 | .01
game performance | -0.34 | 0.16 | -2.11 | .04
time-on-task | 0.82 | 0.29 | 2.82 | .005
previous videogame experience | 0.03 | 0.02 | 1.32 | .19
Population | 0.22 | 0.22 | 1.01 | .31
Speed | -0.09 | 0.23 | -0.39 | .69
performance x time-on-task | -0.40 | 0.19 | -2.13 | .03

Note. Performance (M = 1.32, SD = 1.53) was centered around 1.17, a value halfway between the means of Experiments 1 and 2. Previous videogame experience (M = 8.93, SD = 7.40) was centered around its mean, and time-on-task (M = 1088.25, SD = 723.64) was centered around its mean and scaled by dividing by 1,000 prior to analysis. Experimental condition was effect coded, with Strength serving as the -1, -1 baseline.

To evaluate participants' ability to use JODs as a measure of resource demands, we used AIC values to compare the existing model with one that included participants' perceptions of task difficulty as a main effect. Adding this predictor did not significantly improve the predictions of our earlier model (∆AIC = -2.87). We interpreted this finding to have one of two meanings: either participants' JODs were highly correlated with damage rate, suggesting that participants used the magnitude of their losses as a cue to game difficulty, or participants did not incorporate their JODs in risk mitigation decisions.

Exploratory analysis. To address the multiple interpretations of our model comparison, we conducted an exploratory analysis to determine whether damage rate was responsible for participants' JODs, or if perceptions of difficulty were based on additional unmeasured factors. This was accomplished by comparing two multilevel logistic regression models that included either a measure of participants' game performance (damage rate since last JOD question) or of objective game difficulty (task difficulty parameter standardized across experimental condition). Both models included time-on-task, previous videogame experience, and experimental condition (Population, Speed, Strength) in the fixed effect structure. AIC comparisons supported a random effect structure that included the intercept, standardized difficulty slope, and time-on-task slope to account for participant differences in ability, experiences of difficulty, and rate of learning. A full disclosure of random effect comparisons can be found in the appendix.

Table 3. Model estimates from an exploratory analysis reveal that an objective measure of difficulty and time-on-task predict participants' JODs in Experiment 1.

predictor | B | SE | z | p
intercept | 0.43 | 0.15 | 2.76 | .01
standardized difficulty | 3.50 | 0.37 | 9.41 | <.001
time-on-task | -0.47 | 0.14 | -3.33 | <.001
previous videogame experience | 0.01 | 0.02 | 0.48 | .63
Population | 0.001 | 0.19 | 0.01 | .99
Speed | -0.12 | 0.21 | -0.55 | .58

Note. Standardized difficulty (M = 0.51, SD = 0.30) and previous videogame experience (M = 9.16, SD = 7.22) were centered around their means. Time-on-task (M = 1214.01, SD = 713.03) was centered around its mean and scaled by dividing by 1,000 prior to analysis. Experimental condition was effect coded, with Strength serving as the -1, -1 baseline.

Model comparisons using AIC strongly supported a model that included objective game difficulty as a fixed effect (∆AIC = 82.95). The findings from this model (see Figure 4) suggest that cues to difficulty unrelated to the magnitude of losses (e.g., the number of enemies visible on the screen; how quickly enemy characters move) underlie participants' JODs.
Despite this, the positive slope in each time-slice reveals that participants' JODs were well-calibrated to the difficulty level of the game. Participants were more likely to indicate the game was "harder than before" when they were playing levels that were objectively harder, and were more likely to indicate the game was "easier than before" when playing levels that were objectively easier. We also found that participants' JODs were influenced by time-on-task such that they became less likely to say that the game was "harder than before" later in the game; however, the size of this effect was small. The other predictors included in the model were not significant (p's > .05). All estimates and significance values are disclosed in Table 3.

Discussion

Participants' risk mitigation strategies were affected by the interaction between their experienced losses and time-on-task. Initially, participants' risk mitigation strategies were unaffected by experienced losses, but over time, pre-event RDOs (i.e., the shield) were preferred following heavy losses. These results suggest that people respond to environmental changes by adopting risk mitigation strategies that reflect experienced losses (here, damage rate since last RDO question) and that these strategies change as people gain experience with a task. This behavior lends support to the hypothesis (H1a) that risk estimation drives the selection of risk mitigation strategies because participants actively compensated for their losses with a more costly pre-event RDO rather than allocating all their resources toward task completion. Participants' behavior was unaffected by their level of videogame experience (H2), but did stabilize over time, lending support to hypothesis H3. Our results also demonstrated that people actively and accurately monitor the environment for cues that reflect changes in task difficulty, but that these cues are not determined by the magnitude of participants' losses and may instead focus on cues to difficulty within the videogame itself (e.g., the number of on-screen enemies). Because participants' risk mitigation strategies were predicted by experienced losses while JODs were predicted by cues to difficulty, we believe that the shifts in risk mitigation strategy are caused by individuals' awareness of experienced losses, and that the cues used to select a risk mitigation strategy differ from those used to make JODs. This would seem to suggest that individuals' risk mitigation strategies do not anticipate risks but respond to them after they have occurred.

Although our results support the risk estimation hypothesis (H1a), they do not completely discount the resource optimization account of human behavior (e.g., Kurzban et al., 2013; Vallières, Hodgetts, Vachon, & Tremblay, 2016). While losses led participants to select resource-intensive pre-event RDOs, they did shift toward selecting post-event RDOs when losses were infrequent.
Perhaps participants recognized that preventing losses, while strategic, came with inherent costs and therefore effectively navigated the trade-off between effort and reward. We reasoned that if participants engaged in trading off effort and reward, they would shift toward preventive risk mitigation strategies when this tool was made easier to use (H4a). However, if resource optimization did not underlie participants' behavior, tool selection would not be influenced by the pre-event RDO's ease-of-use (H4b). We tested these competing hypotheses in Experiment 2 by manipulating the coordination required to effectively use the shield tool and measuring the impact this had on tool selection throughout the videogame task.

Figure 4. Participants' judgments of difficulty (JODs) were well-calibrated to the difficulty level of the videogame (parameter values standardized across difficulty types). JODs were also consistent across both experiments. Error ribbons represent one standard error above and below the model's estimates.

Experiment 2

Participants

Eighty-eight participants (41 female) from the General Psychology pool at Kansas State University completed the experimental task and received 1 hr of research credit compensation to fulfill a course requirement.

Design and Procedure

Participants completed a 40-min session of the videogame task in which we manipulated the difficulty of the shield's use as a between-subjects variable (RDO type), but held the reward for using this tool (avoiding an enemy attack) constant. In the Steady condition, pre-event RDOs were less costly: participants who selected the shield needed only to deploy it a single time. Once active, the shield protected the participant's avatar from five enemy attacks. In the Sporadic condition, the shield was more costly because it behaved as it did in Experiment 1. That is, it remained active for up to five seconds and participants needed to deploy it multiple times to remain protected from enemy attacks. Furthermore, the timed activation window required participants to coordinate the shield's deployment with an anticipated attack (a schematic sketch contrasting the two shield variants follows this section).

Because the between-subjects difficulty manipulation (Population, Speed, Strength) was not a significant predictor in the Experiment 1 analyses, we included only two levels of the difficulty manipulation (difficulty type: Strength, Speed) in Experiment 2. We counterbalanced the four possible combinations of difficulty type and RDO type across experimental sessions. In all other respects, the videogame task was identical to that used in Experiment 1.
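To make the ease-of-use manipulation concrete, the sketch below contrasts the two shield implementations. It is an illustrative reconstruction rather than the game's Unity code; the class and method names are ours, and only the details stated in the text (five charges per selection, 20 hit points per blocked attack, and the 5-s activation window in the Sporadic condition) come from the task description.

```python
# Illustrative reconstruction of the two shield (pre-event RDO) variants
# (not the game's Unity/C# source; names are hypothetical).
class Shield:
    """Five charges per selection; each blocked attack consumes one charge."""

    def __init__(self, rdo_type: str):
        self.rdo_type = rdo_type     # "steady" or "sporadic"
        self.charges = 5             # icons shown in the lower-left corner
        self.active = False
        self.active_until = 0.0      # expiry of the 5-s window (sporadic only)

    def deploy(self, now: float) -> None:
        """Pressing F raises the shield; no charge is spent until an attack is blocked."""
        if self.charges == 0:
            return
        self.active = True
        if self.rdo_type == "sporadic":
            self.active_until = now + 5.0   # protection lasts at most 5 s per press

    def on_enemy_attack(self, now: float) -> int:
        """Return the damage taken from one attack: 0 if blocked, otherwise 20."""
        in_window = self.rdo_type == "steady" or now <= self.active_until
        if self.active and self.charges > 0 and in_window:
            self.charges -= 1
            if self.rdo_type == "sporadic":
                self.active = False         # must be redeployed after enemy contact
            return 0
        return 20
```

Under this framing, the Steady shield requires a single keypress per selection, whereas the Sporadic shield demands repeated, well-timed keypresses; this coordination cost is the quantity manipulated in Experiment 2.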
Results

We again used multilevel logistic regression to predict the probability that a participant would select either tool (Health pack, Shield) using participants' game performance, time-on-task, previous videogame experience, difficulty type (Strength, Speed), and RDO type (Sporadic, Steady). Game performance, videogame experience, and time-on-task were included in the fixed effect structure and operationalized using the measures outlined in Experiment 1. AIC comparisons supported a random effect structure that included the intercept, game performance, and time-on-task, which allowed the model to account for participant differences in overall ability, perceptions of difficulty, and rate of learning. Because we were interested in replicating the effects found in Experiment 1, we included the three-way interaction between game performance, time-on-task, and RDO type.

The results of our analysis are depicted in Figure 5. The stark difference in risk mitigation patterns between the Sporadic and Steady RDO types is clear; only RDO type and its two-way interaction with time affected participants' risk mitigation strategies during the game (see Table 4). This effect intensified as time-on-task increased and became most apparent in the final time-slice panel. Including participants' perceptions of task difficulty as a main effect again did not significantly improve our model's predictions (∆AIC = -2.97), complementing our results from Experiment 1. The results of the two- and three-way interactions involving game performance, time, and RDO type also align with our previous analysis. Although these effects did not reach significance, the model estimates for the Sporadic RDO type fall within the 95% confidence intervals established in Experiment 1. As this subset of the data represents only half of that included in our previous experiment, we expect that the increasing sensitivity to damage rate observed in Experiment 1 would have replicated had we included more participants.

Figure 5. Participants' behavior in Experiment 2 differed as a function of RDO type. Although participants in the sporadic condition behaved similarly to those in Experiment 1 (to which it is identical), participants in the steady condition developed a strong preference for the shield, which was easier to use in this condition. Error ribbons depict one standard error above and below model estimates.

Table 4. Model estimates from Experiment 2 demonstrate that the ease-of-use manipulation overshadowed all other factors in predicting participants' risk mitigation strategy.

predictor | B | SE | z | p
game performance | -0.03 | 0.18 | -0.19 | .85
time-on-task | -0.43 | 0.36 | -1.19 | .23
previous videogame experience | 0.02 | 0.03 | 0.71 | .48
Sporadic | 1.49 | 0.43 | 3.46 | <.001
Speed | -0.28 | 0.19 | -1.53 | .13
performance x time-on-task | -0.13 | 0.18 | -0.71 | .48
performance x Sporadic | 0.001 | 0.18 | 0.01 | .99
time x Sporadic | 1.15 | 0.38 | 3.07 | .002
performance x time-on-task x Sporadic | -0.07 | 0.19 | -0.39 | .69

Note. Performance (M = 17.53, SD = 34.01) was centered around 1.17, a value halfway between the means of Experiments 1 and 2. Previous videogame experience (M = 6.30, SD = 9.76) was centered around its mean, and time-on-task (M = 1285.87, SD = 846.32) was centered around its mean and scaled by dividing by 1,000 prior to analysis. RDO type and difficulty type were effect coded, with Steady and Strength coded as -1.

Exploratory analyses. We again conducted an exploratory analysis to determine whether participants' JODs reflected changes in damage rate, or if a different factor was responsible for their perceptions of difficulty. We used AIC values to compare two multilevel logistic regressions that included either game performance (damage rate since last JOD question) or objective game difficulty (task difficulty parameter standardized across experimental condition). Both models included time-on-task, previous videogame experience, difficulty type (Speed, Strength), and RDO type (Steady, Sporadic) in the fixed effect structure. AIC comparisons supported a random effect structure that included the intercept, standardized difficulty, and time-on-task slope to account for participant differences in ability, experiences of difficulty, and rate of learning. A full disclosure of random effect comparisons can be found in the appendix.

Model comparisons again supported the second model (∆AIC = 171.99), replicating our finding that participants did not use the magnitude of losses to make JODs.
As before, positive slopes across each time-slice (see Figure 4) reveal that participants' JODs were well-calibrated to the objective difficulty of the game. Time-on-task again affected participants' JODs: participants became less likely to say the game was "harder than before" as time progressed (see Table 5).

Table 5. Model estimates from an exploratory analysis reveal that an objective measure of difficulty and time-on-task predict participants' JODs in Experiment 2.

predictor | B | SE | z | p
intercept | 0.09 | 0.14 | 0.61 | .54
standardized difficulty | 4.07 | 0.40 | 10.24 | <.001
time-on-task | -0.29 | 0.11 | -2.69 | .01
previous videogame experience | -0.03 | 0.02 | -1.30 | .19
Sporadic | 0.10 | 0.13 | 0.73 | .46
Speed | 0.13 | 0.15 | 0.89 | .37

Note. Standardized difficulty (M = 0.71, SD = 0.31) and previous videogame experience (M = 6.39, SD = 5.69) were centered around their means. Time-on-task (M = 1242.38, SD = 775.39) was centered around its mean and scaled by dividing by 1,000 prior to analysis. RDO type and difficulty type were effect coded, with Steady and Strength coded as -1.

Discussion

The results of Experiment 2 strongly confirm the hypothesis that people attempt to balance effort and reward during challenging tasks (H4a). Indeed, when we manipulated the effort-reward trade-off and included the pre-event RDO's ease-of-use as a predictor in our model, it attenuated the effects of many other predictors, including game performance. This suggests that people prioritize the immediate conservation of resources only when it does not negatively impact their performance goals: unlike the participants in Experiment 1, participants in Experiment 2 were willing to use pre-event RDOs exclusively because they were easier to use and no longer presented a resource cost. The findings from our exploratory analysis, which revealed that JODs were affected by the difficulty manipulation but not by the ease-of-use manipulation, further illustrate that the factors used to select RDOs are different from those used to make overall judgments of task difficulty.

General Discussion

Our study provides conclusive evidence that decision-makers balance effort and reward to select appropriate risk mitigation strategies. In Experiment 1, participants developed risk mitigation preferences as the task progressed. Later in the session, participants selected more resource-intensive pre-event RDOs when losses were likely and preferred easier-to-use post-event RDOs when losses occurred less frequently. This preference shifted in Experiment 2 among participants for whom pre-event RDOs were made easier to use. In both experiments, behavior stabilized over time as participants gained familiarity with each tool. Together, this evidence suggests that while experienced losses influence the risk mitigation strategy an individual pursues, preferences can also be affected by how difficult an RDO is to use.
Although people recognize and respond to elevated risks and severe consequences by adopting pre-event RDOs (cf. Huber, 2012; Huber & Huber, 2003), they are sensitive to the effort-reward trade-off presented by the RDO's ease-of-use (cf. Sigurdsson, Taylor, & Wirth, 2013). While JODs do not contribute to people's risk mitigation strategies, people are affected by how easy RDOs are to use. Harder-to-use pre-event RDOs, which require an upfront investment of effort to employ, were favored only when they were necessary to reduce experienced losses. When pre-event RDOs were made easier to use, people relied upon them more often regardless of their experienced losses. This finding supports the theoretical position of Kurzban et al. (2013), in that participants will avoid unnecessary risk mitigation strategies if they are difficult to use. This finding is particularly relevant to situations that involve infrequent but costly risks during which preventive actions may be undervalued with respect to the efforts they require, such as natural disaster preparedness (Douglas, Leigh, & David, 2005) and responding to variations in air traffic control workload (Desmond & Hoyes, 1996).

The specificity of cues to difficulty and JODs was further revealed in our analysis of participants' JODs. Although objective measures of task difficulty predicted JODs, damage rate (a measure of participants' experienced losses) did not produce a good model fit. This suggests that participants used other cues to produce JODs (see the right side of Figure 1), an assertion that is supported by the difference across RDO manipulations in Experiment 2. Thus, it is likely that the magnitude of losses was responsible for or mediated the relationship between level of risk and RDO selection but did not provide a cue to task difficulty overall; however, this relationship should be explored more directly before strong claims are made.

Unlike previous research, which showed that participants discontinued their search for RDOs when they had previous experience in an area (Huber & Macho, 2001), we found that participants' behavior was unaffected by domain-specific background knowledge (videogame experience). However, participants systematically adopted risk mitigation strategies over time, supporting previous research showing that successful strategies are pursued once they are learned (cf. Lovett & Anderson, 1996). This result also supports Huber and Huber's (2008) assertion that people use their expectations to determine the availability and efficacy of RDOs, as evidenced by the shifts in behavior that occurred over time and resulted in stabilization of risk mitigation strategy.

Although general aspects of risk mitigation behavior appear to be consistent, behavior in experiential tasks does differ from that of descriptive tasks in important ways. Recent research suggests that people can be trained to attend to certain task-related cues more strongly than others when making JODs (Desender et al., 2017). It may be possible to encourage individuals to use task-related cues to select risk mitigation strategies and to down-weight the influence of an RDO's ease-of-use. Similar means might be achieved by architecting an environment that emphasizes certain task cues above others.
Together, these lines of research will clarify the factors that influence risk mitigation decisions and help people mitigate risks strategically.

Acknowledgements: We would like to thank Abigail Basham, Sierra Davila, Landon Fossum, Naomi Mwebaza, and Jacob Sanderson for their assistance in running this study. Portions of the work were presented at the November 2017 meeting of the Psychonomic Society.

Declaration of conflicting interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Handling editor: Andreas Fischer

Author contributions: The authors contributed equally to this work.

Copyright: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Citation: Vangsness, L., & Young, M. E. (2017). The role of difficulty in dynamic risk mitigation decisions. Journal of Dynamic Decision Making, 3, 5. doi:10.11588/jddm.2017.1.41543

Received: 29 September 2017
Accepted: 7 December 2017
Published: 15 December 2017

References

Akaike, H. (1973). Maximum likelihood identification of Gaussian autoregressive moving average models. Biometrika, 60(2), 255–265. doi:10.2307/2334537

Altmann, E. M., & Trafton, J. G. (2007). Timecourse of recovery from task interruption: Data and a model. Psychonomic Bulletin & Review, 14(6), 1079–1084. doi:10.3758/bf03193094

Bär, A. S., & Huber, O. (2008). Successful or unsuccessful search for risk defusing operators: Effects on decision behaviour. European Journal of Cognitive Psychology, 20(4), 807–827. doi:10.1080/09541440701686227

Böckenholt, U. (2004). Comparative judgments as an alternative to ratings: Identifying the scale origin. Psychological Methods, 9(4), 453–465. doi:10.1037/1082-989X.9.4.453

Brockmyer, J. H., Fox, C. M., Curtiss, K. A., McBroom, E., Burkhart, K. M., & Pidruzny, J. N. (2009). The development of the game engagement questionnaire: A measure of engagement in video game-playing. Journal of Experimental Social Psychology, 45(4), 624–634. doi:10.1016/j.jesp.2009.02.016

Brunswik, E. (1956). Perception and the representative design of psychological experiments. Berkeley, CA: University of California Press.

Camilleri, A. R., & Newell, B. R. (2013). The long and short of it: Closing the description-experience "gap" by taking the long-run view. Cognition, 126(1), 54–71. doi:10.1016/j.cognition.2012.09.001

Delta Dental (2014). 2014 Oral Health and Well-Being Survey. Retrieved from https://www.deltadental.com/DDPAOralHealthWellBeingSurveyBrochure2014.pdf

Desender, K., Van Opstal, F., & Van den Bussche, E. (2017). Subjective experience of difficulty depends on multiple cues. Scientific Reports, 7, 1–14. doi:10.1038/srep44222

Desmond, P. A., & Hoyes, T. W. (1996). Workload variation, intrinsic risk and utility in a simulated air traffic control task: Evidence for compensatory effects. Safety Science, 22(1–3), 87–101. doi:10.1016/0925-7535(96)00008-2

Douglas, P., Leight, S., & David, J. (2005). When good intentions turn bad: Promoting natural hazard preparedness. Australian Journal of Emergency Management, 20(1), 25–30.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive–developmental inquiry. American Psychologist, 34(10), 906–911. doi:10.1037/0003-066x.34.10.906

Gaeth, G. J., & Shanteau, J. (1984). Reducing the influence of irrelevant information on experienced decision makers.
Or- ganizational Behavior & Human Performance, 33(2), 263–282. doi:10.1016/0030-5073(84)90024-2 Hau, R., Pleskac, T. J., & Hertwig, R. (2010). Decisions from experience and statistical probabilities: Why they trigger different choices than a priori probabilities. Journal of Behavioral Decision Making, 23(1), 48–68. doi:10.1002/bdm.665 Hau, R., Pleskac, T. J., Kiefer, J., & Hertwig, R. (2008). The description-experience gap in risky choice: The role of sample size and experienced probabilities. Journal of Behavioral Decision Making, 21(5), 493–518. doi:10.1002/bdm.598 Hertwig, R., & Erev, I., (2009). The description-experience gap in risky choice. Trends in Cognitive Science, 13(12), 517–523. doi:10.1016/j.tics.2009.09.004 Huber, O. (2012). Risky decisions: Active risk management. Current Directions in Psychological Science, 21(1), 26–30. doi:10.1177/0963721411422055 Huber, O., & Huber, O. W. (2003). Detectability of the negative event: Effect on the acceptance of pre- or post- event risk-defusing actions. Acta Psychologica, 113(1), 1–21. doi:10.1016/s0001-6918(02)00148-8 Huber, O., & Huber, O. W. (2008). Gambles vs. Quasi- realistic scenarios: Expectations to find probability and risk- defusing information. Acta Psychologica, 127(2), 222–236. doi:10.1016/j.actpsy.2007.05.002 Huber, O., & Kunz, U. (2007). Time pressure in risky decision- making: Effect on risk defusing. Psychology Science, 49(4), 415–426. Huber, O., & Macho, S. (2001). Probabilistic set-up and the search for probability information in quasi-naturalistic decision tasks. Risk Decision and Policy, 6(1), 1–16. doi:10.1017/s1357530901000230 Huber, O., Bär, A. S., & Huber, O. W. (2009). Justi- fication pressure in risky decision making: Search for risk defusing operators. Acta Psychologica, 130(1), 17–24. doi:10.1016/j.actpsy.2008.09.009 Huber, O., Beutter, C., Montoya, J., & Huber, O. W. (2001). Risk- defusing behaviour: Towards an understanding of risky decision making. European Journal of Cognitive Psychology, 13(3), 409– 426. doi:10.1080/09541440125915 10.11588/jddm.2017.1.41543 JDDM | 2017 | Volume 3 | Article 5 | 11 http://dx.doi.org/10.11588/jddm.2017.1.41543 https://doi.org/10.2307/2334537 https://doi.org/10.3758/bf03193094 https://doi.org/10.1080/09541440701686227 https://doi.org/10.1037/1082-989X.9.4.453 https://doi.org/10.1016/j.jesp.2009.02.016 https://doi.org/10.1016/j.cognition.2012.09.001 https://www.deltadental.com/DDPAOralHealthWellBeingSurveyBrochure2014.pdf https://www.deltadental.com/DDPAOralHealthWellBeingSurveyBrochure2014.pdf https://doi.org/10.1038/srep44222 https://doi.org/10.1016/0925-7535(96)00008-2 https://doi.org/10.1037/0003-066x.34.10.906 https://doi.org/10.1016/0030-5073(84)90024-2 https://doi.org/10.1002/bdm.665 10.1002/bdm.598 https://doi.org/10.1016/j.tics.2009.09.004 https://doi.org/10.1177/0963721411422055 https://doi.org/10.1016/s0001-6918(02)00148-8 https://doi.org/10.1016/j.actpsy.2007.05.002 https://doi.org/10.1017/s1357530901000230 https://doi.org/10.1016/j.actpsy.2008.09.009 https://doi.org/10.1080/09541440125915 https://doi.org/10.11588/jddm.2017.1.41543 Vangsness & Young: Difficulty and risk mitigation Kahneman, D. (1973). Attention and effort. Englewood Cliffs, NJ: Prentice-Hall Kanfer, R., & Ackerman, P. L. (1989). Motivation and cognitive abilities: An integrative/aptitude-treatment interaction approach to skill acquisition. Journal of Applied Psychology, 74(4), 657– 690. doi: 10.1037//0021-9010.74.4.657 Knight, F. H. (1921). Risk, Uncertainty, and Profit. Boston: Houghton Mifflin. 
Koehler, D. J., Brenner, L., & Griffin, D. (2002). The calibration of expert judgment: Heuristics and biases beyond the laboratory. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 686–715). Cambridge, UK: Cambridge University Press.

Koriat, A. (1997). Monitoring one's own knowledge during study: A cue-utilization approach to judgments of learning. Journal of Experimental Psychology: General, 126(4), 349–370. doi:10.1037/0096-3445.126.4.349

Kurzban, R. (2016). The sense of effort. Current Opinion in Psychology, 7, 67–70. doi:10.1016/j.copsyc.2015.08.003

Kurzban, R., Duckworth, A., Kable, J. W., & Myers, J. (2013). An opportunity cost model of subjective effort and task performance. Behavioral and Brain Sciences, 36(6), 661–679. doi:10.1017/S0140525X12003196

Locke, E. A., & Latham, G. P. (2002). Building a practically useful theory of goal setting and task motivation. American Psychologist, 57(9), 705–717. doi:10.1037/0003-066x.57.9.705

Lorist, M. M., Boksem, M. A., & Ridderinkhof, K. R. (2005). Impaired cognitive control and reduced cingulate activity during mental fatigue. Cognitive Brain Research, 24(2), 199–205. doi:10.1016/j.cogbrainres.2005.01.018

Lovett, M. C., & Anderson, J. R. (1996). History of success and current context in problem solving: Combined influences on operator selection. Cognitive Psychology, 31(2), 168–217. doi:10.1006/cogp.1996.0016

Lovett, M. C., & Schunn, C. D. (1999). Task representations, strategy variability, and base-rate neglect. Journal of Experimental Psychology: General, 128(2), 107–130. doi:10.1037/0096-3445.128.2.107

Mitchell, S. (2017). Devaluation of outcomes due to their cost: Extending discounting models beyond delay. In J. R. Stevens (Ed.), Nebraska Symposium on Motivation: Impulsivity (Vol. 64, pp. 145–161). Basel, Switzerland: Springer International Publishing.

Ozuru, Y., Kurby, C. A., & McNamara, D. S. (2012). The effect of metacomprehension judgment task on comprehension monitoring and metacognitive accuracy. Metacognition and Learning, 7(2), 113–131. doi:10.1007/s11409-012-9087-y

Pleskac, T. J., & Hertwig, R. (2014). Ecologically rational choice and the structure of the environment. Journal of Experimental Psychology: General, 143(5), 2000–2019. doi:10.1037/xge0000013

Shanteau, J. (1992). Competence in experts: The role of task characteristics. Organizational Behavior and Human Decision Processes, 53(2), 252–266. doi:10.1016/0749-5978(92)90064-e

Sigurdsson, S. O., Taylor, M. A., & Wirth, O. (2013). Discounting the value of safety: Effects of perceived risk and effort. Journal of Safety Research, 46, 127–134. doi:10.1016/j.jsr.2013.04.006

Unity Game Engine (2016). [Computer software]. (Version 5.4). San Francisco, CA: Unity.

Vallières, B. R., Hodgetts, H. M., Vachon, F., & Tremblay, S. (2016). Supporting dynamic change detection: Using the right tool for the task. Cognitive Research: Principles and Implications, 1(1), 32–52. doi:10.1186/s41235-016-0033-4

Vangsness, L. (2017). Perceptions of effort and risk assessment (Unpublished master's thesis). Kansas State University, Manhattan, KS.

Walton, M. E., Kennerley, S. W., Bannerman, D. M., Phillips, P. E., & Rushworth, M. F. (2006). Weighing up the benefits of work: Behavioral and neural analyses of effort-related decision making. Neural Networks, 19(8), 1302–1314. doi:10.1016/j.neunet.2006.03.005

Zaleskiewicz, T., Piskorz, Z., & Borkowska, A. (2002). Fear or money? Decisions on insuring oneself against flood. Risk, Decision, and Policy, 7(3), 221–233. doi:10.1017/s1357530902000662
Appendix

AIC comparisons suggested that the random effect structures for the models used to analyze the Experiment 1 data could include either intercept and time-on-task or intercept, performance, and time-on-task. The full comparisons for both experiments are listed in the table below; an illustrative sketch of how such a comparison could be run follows the table.

Random effect structure                        AIC

Experiment 1 – RDO selection
  intercept only                               706.33
  intercept and performance                    706.09
  intercept and time-on-task                   668.57
  intercept, performance, and time-on-task     674.18

Experiment 1 – JODs
  intercept only                               1199.64
  intercept and performance                    1181.73
  intercept and time-on-task                   1199.13
  intercept, performance, and time-on-task     1180.63

Experiment 2 – RDO selection
  intercept only                               877.10
  intercept and performance                    861.21
  intercept and time-on-task                   769.85
  intercept, performance, and time-on-task     767.62

Experiment 2 – JODs
  intercept only                               1767.43
  intercept and performance                    1755.55
  intercept and time-on-task                   1764.33
  intercept, performance, and time-on-task     1753.60
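The sketch below shows one way such an AIC comparison across candidate random-effect structures could be set up in Python with statsmodels. It is illustrative only, not the authors' analysis code: the data file, the column names (jod, difficulty, time_on_task, performance, participant), and the fixed-effect formula are hypothetical stand-ins for the variables described in the text, the binary RDO-selection models would require a logistic mixed model rather than the Gaussian one shown here, and maximum likelihood (rather than REML) estimation is used so that AIC values remain comparable across random-effect structures.

```python
# Illustrative sketch (assumed data and variable names): comparing the AIC of
# linear mixed models that differ only in their random-effect structure.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("experiment1.csv")  # hypothetical data file

# Candidate random-effect structures, mirroring the rows of the table above.
re_formulas = {
    "intercept only": "1",
    "intercept and performance": "1 + performance",
    "intercept and time-on-task": "1 + time_on_task",
    "intercept, performance, and time-on-task": "1 + performance + time_on_task",
}

for label, re_formula in re_formulas.items():
    model = smf.mixedlm(
        "jod ~ difficulty * time_on_task",   # placeholder fixed-effect structure
        data,
        groups=data["participant"],          # random effects vary by participant
        re_formula=re_formula,
    )
    result = model.fit(reml=False)           # ML fit so AICs are comparable
    print(f"{label}: AIC = {result.aic:.2f}")
```

Under this kind of comparison, the structure with the lowest AIC would be retained, which is the logic reflected in the table above.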