Hip Hop as Computational Neuroscience: How the Hood Hacked our Global Rhythmic Nervous System

Ron Eglash, University of Michigan, USA

The International Journal of Information, Diversity, & Inclusion, 6(1/2), 2022
ISSN 2574-3430, https://jps.library.utoronto.ca/index.php/ijidi
DOI: 10.33137/ijidi.v6i1.37127

Abstract

Long before the internet provided us with a networked digital system, music exchanges had created a global networked analog system, built of recordings, radio broadcasts, and live performance. The features that allowed some audio formations to go viral, while others failed, fall at the intersection of three domains: access, culture, and cognition. We know how the explosive growth of the hip hop recording industry addressed the access problem, and how hip hop lyrics addressed cultural needs. But why does hip hop make your ass shake? This essay proposes that hip hop artists were creating an innovation in brain-to-brain connectivity. That is to say, there are deep parts of the limbic system that had not previously been connected to linguistic centers in the combination of neural and social pathways that hip hop facilitated. This research is not an argument for using computational neuroscience to analyze hip hop. Rather, it is asking what hip hop artists accomplished as the street version of computational neuroscientists, and how they strategically deployed Black music traditions to rewire the world’s global rhythmic nervous system for new cognitive, cultural, and political alignments and sensibilities.

Keywords: ethnocomputing; hip hop; information science; music; neuroscience

Publication Type: research article

Introduction

Music has often been an object of analysis for fields such as cybernetics, neuroscience, and computing. In this essay I want to reverse that relationship and ask how computational sciences can help us understand hip hop innovators as agentic subjects: creators of their own forms of bio-social information technology. We already know how to write the description of “disruptive technology” for someone like Elon Musk (Jobaid & Naher, 2020); but what kinds of technological narratives describe the disruptive innovation of hip hop? What exactly did hip hop do that managed to shake up our musical, cultural, and political sensibilities in such profound ways? The question has already been approached in terms of sociotechnical history by scholars such as Tricia Rose (1994), Rayvon Fouché (2011), and Nettrice Gaskins (2021). Here I want to extend their analysis both inward to the brain—in particular, the relationship between the limbic system and linguistic system—and outward to what is sometimes called “distributed cognition” (Hutchins, 1995). If Rose, Fouché, and Gaskins can argue for “techno-vernacular creativity”—for hip hop artists as innovators in audio engineering—then it makes sense to ask about the implications of these same innovations in the mediation between technology and the brain, for hip hop as computational neuroscience.

In the first part of this essay, I review the relationship between computational neuroscience and music as it currently exists. I look at some of the research connecting brain functions and communication, and how musical and linguistic representations differ. The second section provides some empirical data examining the same distinctions for rap music versus other genres.
I show that rap (and by extension most of hip hop’s musical foundations) has made unique contributions to the diversity of human communication forms, and to what I call the “cosmo-cognitive” sphere of musical understanding. The protest that ‘rap isn’t really music,’ so common at its inception, now seems puzzling; there has been a global shift in our shared musical perceptions.

Music and Computational Neuroscience

Why do humans have music? Darwin proposed that it evolved as an attractive mating display, the acoustic equivalent of a peacock’s plumage. More recent explanations include parent-child bonding (Dissanayake, 2008), territorial signaling (Hagen & Hammerstein, 2009), repetitive motion synchronization (Larsson, 2013), and a means to strengthen social cohesion within a group (Cross, 2009). One problem with these adaptationist understandings is that they do not account for the role of creativity in music. The singing apes known as gibbons, for example, have some of the most complex vocalizations of any non-human, but their songs are genetically transmitted, not learned or invented (Geissmann, 2000). When evolutionists explain the biological role of music in terms of synchronizing individuals with repetitive sequences of hoots or howls, they ignore the fact that repetition is only one part of music. A great song not only repeats, it also innovates; we admire well-placed hooks, unexpected musical phrasing, or twists on older harmonic relationships. Rhythm (repetition) sets the pace, but the melody tells us where to go, and finding new directions, dimensions, modes of transport, and scenic routes is fundamental to music’s reason for existence. Neurobiologists have noted this inadequacy in adaptation-centered evolutionary explanations and provide an alternative understanding of music as emotional communication. Snowden et al.
(2015) review this literature, noting that music resembles the non-linguistic or “prosody” parts of vocal communication across many domains, such as emotional intonation. For example, subdued music tends to make more use of minor keys, and upbeat music tends to make more use of major keys. The same minor/major frequency spectra show up in subdued/upbeat contrasts in human emotional intonation. There are deep evolutionary roots to this relationship. Across many species, a low-pitched “satisfaction” sound analogous to a cat’s purring can be heard when an animal is being groomed, and high-pitched cries are used to convey alarm. There are many such acoustic/emotional relationships, and while not completely universal, they at least cluster across many cases. Understanding how prosody, intonation, or other non-linguistic elements are used in flexible repertoires across species helps us see why music cannot be reduced to a single adaptive explanation, any more than one can argue that hands evolved specifically for throwing rather than pushing, pulling, twisting, tearing, caressing, or a dozen other things. Snowden et al. (2015) also report on experiments showing that human music, when transposed to the audio ranges and tempos appropriate for other species, can have calming effects when the music is calm and “arousal” (in the sense of alarm) effects when it is energetic. Similarly, there can be cross-cultural understanding through music among humans: they report that listeners unfamiliar with a musical form from another culture can, nonetheless, understand some of its emotional intentions.
If music has its origins in a long-term evolutionary trend by which organisms convey information regarding emotions and social relations, we should expect to see that reflected in the brain structures activated by it, and that is indeed the case. Deep within the brain, below the cortex and above the brainstem, lies the limbic system, which is tied to fight or flight, reproduction, caring for young, mood, and other basic emotional responses. Salimpoor et al. (2013) used functional magnetic resonance imaging (fMRI) to examine music listeners. As one would expect, they found that songs deemed pleasurable correlated with connections between reward centers in the limbic system and the auditory cortex, particularly cortical areas related to the prediction of temporal events. Cheung et al. (2019) extended the fMRI experiment by using a machine learning model to quantify two aspects of music: uncertainty versus surprise. As the song progresses, a listener might have a high degree of uncertainty, despite a sense of familiarity (“We thought we knew electric guitars until we heard Hendrix”). Or it might sound predictable but include a surprise (“it sounded like 100 other songs, but then suddenly had this great hook”). Those deemed most pleasurable, by both self-report and fMRI brain activity, were at complementary extremes (either low uncertainty and high surprise, or high uncertainty and low surprise). It is here that we begin to see a model for what constitutes emotionally significant creativity in music.
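A toy sketch can make the uncertainty/surprise distinction concrete. This is my own minimal illustration, not Cheung et al.’s machine learning model: here uncertainty is the entropy of a listener’s predicted next-note distribution, while surprise is the improbability of the note that actually arrives.

```python
import math

def uncertainty(next_note_probs):
    """Shannon entropy (bits) of the predicted next-note distribution.
    High entropy = the listener has no strong expectation of what comes next."""
    return -sum(p * math.log2(p) for p in next_note_probs.values() if p > 0)

def surprise(next_note_probs, actual_note):
    """Surprisal (-log2 p, in bits) of the note that actually occurred.
    A probable note yields low surprise even when overall uncertainty is high."""
    p = next_note_probs.get(actual_note, 1e-9)
    return -math.log2(p)

# A familiar idiom: the listener strongly expects "C" after this phrase.
confident = {"C": 0.9, "E": 0.05, "G": 0.05}
print(uncertainty(confident))        # low uncertainty
print(surprise(confident, "C"))      # the expected note: low surprise
print(surprise(confident, "F#"))     # a sudden hook: high surprise

# An unfamiliar idiom: the listener has no idea what comes next.
uncertain = {note: 0.25 for note in ["C", "D", "E", "F"]}
print(uncertainty(uncertain))        # high uncertainty
print(surprise(uncertain, "D"))      # ...but each arrival is only mildly surprising
```

The two pleasurable extremes correspond to low entropy with an occasional high-surprisal note (the familiar song with a great hook) and high entropy with consistently moderate surprisal (the fresh melody anchored in tradition).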
A fresh new melody that is firmly anchored in a familiar tradition is always welcome (uncertainty without surprise), but so is a familiar melody that someone transformed with a fresh new take (surprise without uncertainty): once I hear the start of N.W.A.’s cover of “Express Yourself,” I know to expect the notes from Charles Wright & the Watts 103rd Street Rhythm Band’s 1970 song, but it still feels like a new gift every time (Wright, 1970). This model also explains why bizarre avant-garde music, like John Cage’s dice-roll scores or Yoko Ono’s screaming, is unlikely to gain any popularity. It might win academic admiration from musicologists, but its combination of high unpredictability and high surprise (or, if you listen to random notes and persistent screaming for long enough, predictability and low surprise) does not fit the neurological profile that would inspire ordinary lay people to enjoy it.

Music as Analog Representation

To dig deeper into an information science approach to music, we have to think carefully about the distinction between analog and digital. Digital encoding works by assigning meaning to some arbitrary symbol. Examples include the ways that words represent concepts, how Morse code represents letters, and how a DNA nucleotide triplet encodes for an amino acid. Digital encoding also works for some neurons. For example, crayfish have a defense reflex in which they fling their claws open. A simple pattern of neural impulses, referred to as a doublet (off-on-on-off), is nature’s digital code for this muscle reflex (Sugano, 1983). Prosody and music should be understood as analog representations. But the research in this area often treats them as if they are composed of digital symbols, which creates analytic barriers. As digital symbols, one can only approach them as if acoustic feature A stands for emotional meaning 1, acoustic feature B stands for emotional meaning 2, and so on.
If that were truly the case, then we could have a dictionary for prosody just like we do for words, and by extension, a dictionary for music. But what is the meaning of the note C sharp? Or the chord F major? How is it that the C major chord progression (I—V—vi—IV) conveys both the resilient sadness of Marley’s “No Woman No Cry” and the happy optimism of Flo Rida’s “Good Feeling”?

Analog representation is a much better model for music. Although we are often told the distinction between analog and digital is continuous versus discrete (Shannon & Weaver, 1949), in the deeper sense of the term, analog representation is the change of some meaning parameter in proportion to the change of a physical parameter. Take, for example, the way that tactile sensory cells in our skin send a message about pressure. Rather than a digital encoding, like the crayfish opener doublet, it is proportional: the harder you press on those cells, the faster their impulses fire (Figure 1). If your partner squeezes your hand when the movie gets intense, and the intensity modulates approximately every 10 minutes, you will see patterns like that of Figure 1 repeated as well, mapping the change in neuron firing rate to the changes in the film’s emotional intensity.

Figure 1. As the hand squeeze increases, Merkel cell firing rate increases. Based on simulation in Salimi-Nezhad et al., 2018.

Of course, the emotional fluctuations and responses in real life are vastly more complex, with many simultaneous dimensions. Even with the simple model of hand squeezes in the theater, we can see that there will be rise and fall patterns superimposed over many time scales. There might be an overarching dramatic arc, extending across the entire film.
There might be a small glance between characters that only lasts a few seconds, and similar rises and falls of emotion at every scale in between. Mathematicians have a name for patterns that are similar at every scale: a fractal. We are used to thinking about fractals in space: the psychedelic swirls of the Mandelbrot set, the delicate web of the Sierpinski gasket, and so on. But emotional fluctuations are fractals in time: they rise and fall within a similarly structured rise and fall at many time scales. Any analog representation of emotion—muscle tension, breathing rate, blood pressure, facial expression, and so on—will thus also show a fractal in time. There will be self-similar waves of physical fluctuation, because the physical signal is reflecting the fractal semantic fluctuations.

Movies are a convenient way to document this relationship. Physical fluctuations such as shot duration, scene duration, motion, and sound amplitude all follow a fractal pattern (Cutting et al., 2018). Human reactions are synchronized with the content of the movie; thus, we see similar fractals in movie audience eye movements (Hasson et al., 2008) and brain activation patterns (Hasson et al., 2009). This is not automatic. Filmmaking is a relatively new art, and filmmakers have gradually converged on fractal patterns as they honed their craft over the last 70 years (Cutting et al., 2010). They are gradually improving the techniques for making a visual composition fit the way our brain expects human emotion to work: “the functions of rhythm are to create cycles of tension and release and to synchronize the spectator’s physical, emotional, and cognitive fluctuations with the rhythms of the film” (Pearlman, 2009, p. 61).
Music is not new; it likely arose in our evolutionary past as the right brain’s analog complement to the left brain’s digital linguistics (Eglash, 1993). In movies, one must work in fractal timing around the main content of the film’s narrative. In music, the fractal timing of audio waves is the main content. Thus, music’s fractal pattern is far more austere, abstracted, and formalized, but the fundamentals are the same: emotional fluctuations represented in analog relations to acoustic fluctuations. One advantage of music’s formalism is that we can more easily visualize the fractal at work: similar repetitions of pitch changes at many scales. In Figure 2 we can see that a simple song like “Mary Had a Little Lamb” has only three scales of similarity, but more complex music like Bach’s has many. In both cases, most of the acoustic energy is in the long wavelengths that span the entire song; smaller waves have proportionately less. Since the power at each frequency is inversely proportional to that frequency, this fractal structure is called a 1/F power spectrum.

Figure 2. Fractal repetition in “Mary Had a Little Lamb” (top) and Bach’s Goldberg Variations (bottom; score removed for legibility). Courtesy of Martin Wattenberg (http://turbulence.org/project/the-shape-of-song/).

If we think about cognitive waveforms in “semantic space”—a purely cognitive domain in which waveforms map out meaning fluctuations—then the physical space of acoustics will reflect that structure for real-world analog representation. But that is not true for digital encoding of the same information, because it has physically arbitrary symbols.
For example, in English “cat” is higher pitched than “dog,” but in Spanish “perro” is higher pitched than “gato.” The pitch of a word is only arbitrarily associated with its meaning, just as the number of letters in a word has little relation to its meaning. Digital symbols are based on an arbitrary assignment of physical form to meaning. Thus, the pitch sequence of words, within the normal frequency range of voice, is simply a random succession. Even though there is typically a cohesive envelope for the waveform in semantic space, there are only random pitches in physical space. We can see this contrast in Figure 3. At the top is the extended vowel singing at the start of the chorus in the Maytals’ reggae hit “Bam Bam” (Hibbert, 1966). Note that it shows the cohesive envelope of gradual change, typical for analog communication, in three ways. First in pitch: the vertical position of each green stripe declines in gradual steps. Second in duration: the width of each green stripe is progressively longer for the first two seconds, and progressively shorter for the last two. Lastly in loudness: the spread on either side of the yellow center gets wider for the first two seconds and then thinner for the last two. The second example in Figure 3 represents the author’s frequencies while reading the first four seconds of Hamlet’s soliloquy from Act 3, Scene 1 (it begins with “To be, or not to be, that is the question”). There is no cohesive pattern for pitch, just a succession of the arbitrary sounds assigned to each meaning. Hamlet is surely fractal in semantic space—the drama rises and falls within the whole, each act, scene, vignette, and dialogue. But when that is digitally encoded in words, you cannot see a physical pattern in the text symbols or pitch sequence.

Figure 3. The contrast between cohesive pitch changes in music and random pitch in speech.
Quantitative Evidence of Hip Hop’s Break with Tradition: The Fractal Dimension of Rap Music

While the broader cultural arts movement of hip hop includes several elements, rap music has predominated, and the two terms are sometimes used interchangeably (Kenon, 2000). In the prior section we looked at how music normally communicates information as analog representation; here we will examine evidence that rap music is doing something quite different. Rap has gone through significant evolution and diversification since its inception, including blending with many other genres (Harris, 2019; Polfuß, 2021). For that reason, I will focus on samples from the early years of hip hop, since that is when its break with musical tradition was clearest.

Voss and Clarke (1978) were the first to report that the pitch time series for ordinary speech lacks a fractal structure, and that music’s pitch sequence is always fractal (no matter what the genre). It was their study that inspired my thoughts on the above framework. In Eglash (1993), I extended their experiment and used the same measure of “how fractal” a waveform is to detect the difference between analog and digital communications. Digital communication will have an arbitrary (statistically random) sequence, so it will tend towards a white noise or flat power spectrum within its main frequency range. Analog communication will tend towards a 1/F power spectrum, which is a fractal distribution.
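This measurement strategy can be sketched as follows. What appears below is a minimal reconstruction of the general Voss-and-Clarke-style method, not the original analysis code: generate a statistically random pitch series and an approximately 1/F series, then estimate how fractal each is by fitting the slope of log power against log frequency. A slope near 0 marks white noise (the speech-like, digital case); a slope near -1 marks the 1/F fractal distribution (the music-like, analog case).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096  # length of each synthetic pitch time series

def spectral_slope(series):
    """Fit the slope of log power vs. log frequency for a time series.
    ~0 indicates a flat (white noise) spectrum; ~-1 indicates 1/F."""
    power = np.abs(np.fft.rfft(series)) ** 2
    freqs = np.fft.rfftfreq(len(series))
    f, p = freqs[1:], power[1:]               # drop the zero-frequency bin
    slope, _ = np.polyfit(np.log(f), np.log(p), 1)
    return slope

# White noise: a statistically random pitch sequence (the "speech" case).
white = rng.standard_normal(N)

# Approximate 1/F noise by shaping white noise in the frequency domain:
# scaling each amplitude by 1/sqrt(f) makes power fall off as 1/f
# (the "music" case).
spectrum = np.fft.rfft(rng.standard_normal(N))
f = np.fft.rfftfreq(N)
f[0] = f[1]                                   # avoid dividing by zero at DC
pink = np.fft.irfft(spectrum / np.sqrt(f), n=N)

print(round(spectral_slope(white), 2))   # near 0: flat spectrum
print(round(spectral_slope(pink), 2))    # near -1: 1/F fractal spectrum
```

A pitch series whose fitted slope lands between these two values, as reported for rap in Figure 4, is neither purely random nor fully fractal.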
One of the experiments in Eglash (1993) showed that whale songs are literally songs in the sense that they too have a fractal distribution; this tested a prediction based on prior work from my master’s thesis on dolphin and whale communication (Eglash, 1986). The most important set of experiments was on hip hop. Figure 4 shows the results of the experiments in Eglash (1993) measuring the fractal dimension of pitch time series for rap music. When Voss and Clarke (1978) reported that all music had a fractal pitch time series, they had examined blues, jazz, rock, and classical. The graph in Figure 4 shows that reggae also has the typical fractal distribution, but rap does not: the fractal dimension for rap lands somewhere between the value for speech and the value for music. It is true that one could obtain similar results by just creating notes randomly, as John Cage did, but no one actually listens to such compositions for their daily enjoyment. Rap artists accomplished something never before done: they created something that is both authentically enjoyed as music and yet violates the 1/F power spectrum that is characteristic of all popular musical forms. They have, in effect, merged left-brain, digital communication and right-brain, analog communication in ways not previously performed.

Figure 4. The low fractal dimension of rap music.

Qualitative Evidence of Rap Artists’ Awareness of New Cognitive-Acoustic Relationships

The above section used measures of fractal dimension as indicators that, in musical genres prior to hip hop, lyrics do not disrupt analog waveforms in the way rap does.
The innovation was perfectly obvious to anyone listening to rap for the first time: lyrics are normally sung, not spoken. This violation caused dismissive rejections by music critics, who complained “that’s not singing, that’s talking.” Later theorists made the connection to the significance of hip hop’s emergence in the late 1970s, when technology was making the shift from analog to digital (Eglash, 1993; Rose, 1993; Eglash, 1998; Fouché, 2011). However, it is crucial to understand that the shift in fractal dimension shown in Figure 4 was first achieved with analog technology—the turntable—used in a digital manner (Goldberg, 2004): reassembling sound into breaks, samples, scratches, and remixes. Hip hop did not emerge in reaction to the shift to digital tech; it led the way, anticipating the new cultural and sonic identity that would be needed. This appropriation of analog technology—using equipment meant for continuous analog waveforms to splice samples as if they were sonic building blocks—is one piece of evidence for the conscious intent of hip hop artists. Conversely, as digital technology became available, artists began using it in more fluid, analog ways, as noted by hip hop historian Tricia Rose:

Rap technicians employ digital technology as instruments, revising black musical styles and priorities through the manipulation of technology. In this process of techno-black cultural syncretism, technological instruments and black cultural priorities are revised and expanded. In a simultaneous exchange rap music has made its mark on advanced technology and technology has profoundly changed the sound of black music. (Rose, 1994, p. 96)

The artists themselves sometimes describe this rapid uptake using the rhetoric of scientific or experimental identity. Fouché (2011) underscores the ways in which analog turntables did not simply vanish; rather, they heightened the work of hip hop musicians as engineers and theorists of turntablism.
As Grandmaster Flash put it when asked about the transition from analog turntables to digital devices: “I’m a scientist, I like it all. I just think, quite frankly, if you’re gonna learn how to drive then you should know how to drive stick first, just like with records. Then the modern version of it would be easier” (as cited in Lavin, 2019). In the above quote Grandmaster Flash uses the identity of scientist to describe his practical approach to transitioning between analog and digital technologies. Hip hop science narratives were more often fantastical, highlighting the concept of moving information across the dualisms of analog/digital, right brain/left brain, human/machine, and similar dichotomies in an early version of what would now be called an AfroFuturist imaginary. Group names such as “Digital Underground” and “Dr. Octagon”; artist names such as “El Cerebro” and “Rapper Left Brain”; songs such as “Cyborg Dance” and “Automan”; album covers replete with everything from computer/human hybrids (Figure 5) to alien brain surgery (Figure 6).

Figure 5. Newcleus (2018) album cover symbolizing their fusion of analog and digital: keyboard on the musicians’ right, computer on the left, and cyborg musician centered as the corpus callosum. Photo credit: Ron Eglash

Figure 6. Dr. Octagon performs brain surgery, merging analog and digital technology with a vinyl record on a digitally controlled turntable.
Photo credit: Ron Eglash

In addition to neuroscience-related imagery and rhetoric, there is also evidence of awareness of cognitive-acoustic innovation in the ways rap artists and their fans have developed their own version of music theory and used it in both practice and reflection on their conceptions of cognitive acoustics. Just as ethnomathematics demonstrates an independent body of mathematical ideas and practices outside of the Western canon (Eglash, 1997), one can argue for an “ethnoneuroscience” in hip hop’s independent creation of cognitive-acoustic practices and terminologies. Krims (2000) showed that Western music theory offers a poor analytic framework for understanding rap music, and introduced an alternative ethnomusicology approach, studying the ways the musicians themselves described their particular fusion of linguistic and musical elements and its manipulation to achieve particular cognitive effects. Other scholars (e.g., Adams, 2009; Schloss, 2004; Ohriner, 2019) have built on his approach. Above all else, the artists’ own terminology, practices, and conceptions of “flow” became the key component in these ethnomusicology analyses. Flow is a concept essentially absent in Western music theory; Adams (2009) defines it as “all of the ways in which a rapper uses rhythm and articulation in his/her lyrical delivery” (para. 1). He offers four examples of metrical techniques and three examples of articulative techniques (Table 1).

Table 1. Techniques of flow (Adams, 2009)

Metrical techniques of flow:
1. The placement of rhyming syllables.
2. The placement of accented syllables.
3. The degree of correspondence between syntactic units and measures.
4. The number of syllables per beat.

Articulative techniques of flow:
1. The amount of legato or staccato used.
2. The degree of articulation of consonants.
3. The extent to which the onset of any syllable is earlier or later than the beat.

Each parameter has a continuous range limited only by phase relationships and perceptual resolution, and some can be further expanded as multidimensional (e.g., manipulating syntactic and semantic ambiguity in metrical technique number three). Adams (2009) shows how manipulating some or all of the seven parameters across their full range creates a control of flow that conveys the lyrical narrative, aesthetic feel, and rapper’s distinctive style. It also parallels the self-conscious reflections of rappers regarding the analog/digital synthesis we have seen featured in album imagery, word play, and fantastical self-descriptions. Several of the hip hop ethnomusicology analyses highlight the self-awareness of flow control by the artists. For example, Ohriner (2019) shows how “Flip Flop Rock” by Outkast uses the lyric “I switch the flow” to mark the place in which the flow is modulated (from two-syllable rhyme groupings separated by whole notes to dense delivery on every 16th note); thus, both the effect and the song title resonate with the flip-flop circuit in computing (the basis for binary coding in all digital technology). Adams (2009) shows how “100 Miles and Runnin’” by N.W.A. uses flow changes to distinguish between “doing” lyrics describing the fictional character on the run and “being” lyrics of self-narration (as performed by Dr. Dre). During the first set of lyrics, the synchronization between beats and accented syllables mirrors the repeated strides of steady running (doing), while asynchronous beat/syllable relations in the second set represent a chaotic cognitive state (being). Simultaneously, the chaos itself is produced by recursive contradictions (for example in the verse “And while they treat my group like dirt/Their whole fuckin' family, is wearin' our T-shirts”).
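One way to see why these techniques make flow quantifiable is to treat each verse as a point in a seven-parameter space. This is a hypothetical sketch: the field names, 0-to-1 scales, and distance measure are my own illustrative assumptions, not Adams’s.

```python
from dataclasses import dataclass, astuple
import math

@dataclass
class Flow:
    """One verse's flow as a point in a seven-parameter space
    (normalized 0-1 here purely for illustration)."""
    rhyme_placement: float         # metrical: where rhyming syllables land
    accent_placement: float        # metrical: where accented syllables land
    syntax_measure_corr: float     # metrical: syntactic units vs. measures
    syllables_per_beat: float      # metrical: lyric density
    legato_staccato: float         # articulative: smooth vs. clipped delivery
    consonant_articulation: float  # articulative: crispness of consonants
    beat_offset: float             # articulative: onset ahead of/behind the beat

def flow_distance(a: Flow, b: Flow) -> float:
    """Euclidean distance between two flows: a crude style comparison."""
    return math.dist(astuple(a), astuple(b))

steady = Flow(0.5, 0.5, 0.8, 0.3, 0.7, 0.5, 0.0)   # "doing": locked to the beat
chaotic = Flow(0.2, 0.9, 0.3, 0.9, 0.2, 0.8, 0.4)  # "being": asynchronous delivery
print(round(flow_distance(steady, chaotic), 3))
print(flow_distance(steady, steady))  # -> 0.0
```

Switching the flow mid-song, as in the N.W.A. example, amounts to jumping between distant points in this space.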
Pickering’s (2010) history of British cybernetics of the 1960s, The Cybernetic Brain, details a similar quest to link information representation in movement to audio and visual dynamics with cognitive and neurological models. This British cybernetics group largely failed to show any technological advancement by normal standards: Pickering characterizes them as an admirable attempt to create an alternative science, outside the mainstream. Given their lack of technical success, Pickering offers an alternative metric for their impact as scientists by highlighting their involvement with the invention of light shows in 1960s psychedelic performances. There is, however, a long history by which white cultural capital is easily able to elevate its achievements as the equivalent of academic knowledge and science, whereas comparable forms of Black cultural capital struggle to do so (Kajikawa, 2019; Eglash et al., 2021). So it is no surprise that these British “alternative thinkers” can make psychedelic light shows and have it count as cybernetics, whereas a musical intervention by low-income Black youth would not be seen in a similar light, despite having far greater impact.

In summary, far from having unconsciously stumbled into a cognitively unique musical formation, hip hop artists, the evidence suggests, deliberately highlight its innovative musicological, technological, and communicative characteristics as a kind of computational neuroscience in two ways. The first is understood through the “appropriation of technology” framework (Eglash et al., 2004), in which a technology made for hegemonic use is creatively adapted and manipulated to “make it our own” by a disenfranchised group.
Hip hop musicians have appropriated the words, imagery, and ideas of computational neuroscience, creatively reworking concepts such as analog/digital divisions in explicit statements, vivid imagery, and word play, in sources ranging from interviews to album covers to lyrics. The second body of evidence for self-awareness of cognitive-acoustic innovation is investigated through the ethnoknowledge framework. In this framework, knowledge is created independently of the western mainstream, but analysis shows that it holds analogous and perhaps innovative insights. Examples include ethnomedicine, where an Indigenous culture may develop medical applications of plants that are unknown to the west, or ethnoastronomy, where events such as the appearance of a comet were recorded centuries before western astronomy knew of them. In this case, the ways rap artists have developed their own version of music theory offer a similar instance of intellectual achievement outside the academic mainstream, and perhaps its insights exceed those of the typical academy, at least in this specific area. In particular, artists’ concept of flow describes rap’s unique fusion of instrumental and lyrical (i.e., analog and digital) communication components in relation to the cognitive effects they seek to achieve. Flipping the Flow: From Hip Hop to Academic Neuroscience While hip hop benefited by invoking neuroscience and related fields in its futuristic imagery, and developed its own internal practices and vocabularies for manipulating cognitive-acoustic relationships, academic neuroscience research has directly benefited from this innovation as well. In a ground-breaking study titled “Neural Correlates of Lyrical Improvisation” (Liu et al., 2012), 12 rappers were asked to freestyle—to rap while inventing lyrics on the spot—while undergoing fMRI brain scans (Figure 6). These scans were compared to fMRI of the same musicians rapping memorized lines.
Prior studies had indicated an increase in activity in the dorsolateral prefrontal regions, which play a supervisory role in guiding cognitive activity, during creative tasks; but these scans indicated the exact opposite: a decrease in those regions. The authors concluded that freestyle rapping offered a better model for true creativity in the wild. Prior experiments had used exercises that were too artificial to show what authentic, “live” creativity is like. It is when we are released from the brain’s self-supervision and “let the spirit move us” that the more natural cognitive state is achieved. As rapper Mike Eagle, both a subject in the experiment and a co-author of the study, put it: “That’s kind of the nature of that type of improvisation. Even as people who do it, we’re not 100% sure of where we’re getting improvisation from” (Liu et al., 2012). The paper is still cited in a wide range of neuroscience publications, from brain network dynamics to limbic system emotional processing. Figure 8. Rapper Mike Eagle with neuroscientist Ho Ming Chow. Liu et al. (2012) is just one part of a broader field of academic hip hop/cognitive science co-investigations. Of particular importance has been hip hop’s role in music therapy. Pierce (2004) ran controlled studies on mental health patients, showing that psychoeducation was more effective when accompanied by music therapy, in this case patients’ discussions of hip hop lyrics. In 2014, Dr. Akeem Sule (a consultant psychiatrist at the South Essex Partnership Trust) and Dr.
Becky Inkster (a neuroscientist at Cambridge University) co-launched HIP HOP PSYCH, which aimed to “bridge the gap between the hip-hop community and the medical community” via the use of hip hop in new psychotherapies, diversity recruitment for medical health careers, educational innovation, and public anti-stigma campaigns (Sule & Inkster, 2014, para. 5). Computational neuroscience has also become directly involved with social media. Its application areas include neuromarketing, which has been criticized as an unethical technique for duping consumers and invading their privacy (Ulman et al., 2014). However, cognitive scientists in social media research also recognize socially beneficial potentials. In a recent contribution to this area, Niederkrotenthaler et al. (2021) used a seasonal autoregressive integrated moving average time series model to study the potential impact of Logic’s hip hop song “1-800-273-8255” on Lifeline calls and suicides in the United States. They note that the typical social media trigger for Lifeline calls is a celebrity suicide, in which an increase in Lifeline calls is accompanied by an increase in the number of suicides. But here the reverse was found: Lifeline calls increased, but suicide rates decreased. Thus, they note a unique positive benefit in this case of hip hop influence and use it to further cognitive modeling of social media in relation to social benefits. In 2019 the People’s Choice Award at MIT’s virtual reality hackathon went to BrainRap (Figure 7), created by Micah Brown, a neuroscience entrepreneur from South London (Lazauskas, 2019). Figure 9.
Micah Brown using BrainRap at MIT’s VR hackathon (Image credit: FastCompany.com). The system combines Neurable, a neurosentiment technology using electrodes on the scalp, with a visualization technology (in this case a Vive VR headset). Like generations of hip hop artists before him, Brown has used the resources these new fusions made available for a variety of both entrepreneurial and public-serving applications: a live hip hop performance art tool; a free application that enables low-income independent artists to find their ideal audience by matching the content of their lyrics to neurosentiment data; a venture capital business that funds neurocomputing startups; and the start of blue-sky aspirations towards mental health applications. Conclusion The well-known narrative of hip hop as music innovation is that of technology “appropriation” (Eglash et al., 2004; Gaskins, 2021). In this scenario the turntable, created merely to play back recordings, becomes a new instrument of grassroots creation; it is transformed from mere corporate-sanctioned reproduction to revolutionary neo-production. This essay has examined the significance of hip hop at the start of digital technology, and challenged the very meaning of the analog/digital contrast. It has used this to explore the role of hip hop artists as agents of change in broadening humanity’s range of acoustic encoding possibilities, and the possibilities for humanitarian benefit from this grassroots neurocybernetics. Acknowledgements The author would like to acknowledge National Science Foundation grants DRL-1640014 and IIS-2128756 in support of this work.
The author thanks the following sources for allowing use of images under fair use restrictions: Figure 1: Adapted from Salimi-Nezhad, N., Amiri, M., Falotico, E., & Laschi, C. (2018). A digital hardware realization for spiking model of cutaneous mechanoreceptor. Frontiers in Neuroscience, 12, 322. Figure 2: Created by Martin Wattenberg. Used with permission. Figure 3: Created by Ron Eglash. Figure 4: Created by Ron Eglash. Figure 5: Photo by Ron Eglash. Figure 6: Photo by Ron Eglash. Figure 7: Table by Ron Eglash. Figure 8: The image used by permission of the National Institute on Deafness and Other Communication Disorders, National Institutes of Health, U.S. Department of Health and Human Services. Figure 9: Still shot of gif cover image. Lazauskas, J. (2019, February 15). BrainRap could change how we see hip-hop–and neuroscience. FastCompany.com. Endnotes 1 As are neurons. Even for simple tactile receptors, some are more tuned to transient response and habituate (stop firing) with constant pressure; others are tuned for the opposite. “Tuning” is not just a metaphor in the analog world; the neurobiologists who focus on the brain as an analog system refer to relations of resonance, entrainment, phase transition, and other aspects of nonlinear dynamics (see A. J. Mandell and K. A. Selz, 2003, Brain stem neuronal noise and neocortical “resonance” in Journal of Statistical Physics, 70(1-2), 355-373; W. J. Freeman, and G. Vitiello, 2006, Nonlinear brain dynamics and many-body field dynamics in Electromagnetic Biology and Medicine, 24(3), 233–241, and S. Grossberg, 2017, Towards solving the hard problem of consciousness: The varieties of brain resonances and the conscious experiences that they support in Neural Networks, 87, 38–95). 
2 See Manfu Duan’s 2012 article, “On the arbitrary nature of linguistic sign,” in Theory and Practice in Language Studies, 2(1), 54-59, where he clarifies Saussure’s statements on the arbitrariness of the linguistic signifier. 3 Voss and Clarke (1978) thought that tonal languages such as Chinese would be an exception. But tonal languages are just as arbitrary; they simply make more use of pitch distinctions. Empirical measures of pitch time series for Chinese speech showed the same lack of fractal structure as that of English (Eglash, 1993). 4 Called “aggiustamento” in opera. See T. J. Millhouse and D. T. Kenny’s conference paper, Vowel placement during operatic singing: ‘Come si Parla’ or ‘Aggiustamento’? INTERSPEECH - 9th Annual Conference of the International Speech Communication Association, Brisbane, QLD, Australia, September 22-26, 2008. 5 A pitch time series was created using a spectrum analyzer, and a Fourier transform was then applied to obtain the power spectrum of the pitch series. The slope of the power spectrum, measured with a least squares estimate, is proportional to the fractal dimension. 6 The prediction in Eglash (1986) is based in part on neurobiology: humans have a larger left hemisphere, due to our linguistically centered communication. Dolphins and whales have a larger right hemisphere, which is where human music and paralinguistic features (prosody or intonation) are produced. The pitch contours of cetacean communication look like Figure 3 top, not bottom. 7 The “unlistenable” quality of Cage’s random notes was not by accident.
As Ross (2010) noted, “he fulfilled Schoenberg’s tenet that music should exercise a critical function, disturbing rather than comforting the listener.” References Adams, K. (2009). On the metrical techniques of flow in rap music. Music Theory Online, 15(5). https://mtosmt.org/issues/mto.09.15.5/mto.09.15.5.adams.php
Cheung, V. K. M., Harrison, P. M. C., Meyer, L., Pearce, M. T., Haynes, J. D., & Koelsch, S. (2019). Uncertainty and surprise jointly predict musical pleasure and amygdala, hippocampus, and auditory cortex activity. Current Biology, 29(23), 4084–4092.e4. https://doi.org/10.1016/j.cub.2019.09.067
Cross, I. (2009). The nature of music and its evolution. In S. Hallam, I. Cross, & M. Thaut (Eds.), Oxford handbook of music psychology (1st ed., pp. 3-13). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199298457.013.0001
Cutting, J. E., DeLong, J. E., & Brunick, K. L. (2018). Temporal fractals in movies and mind. Cognitive Research: Principles and Implications, 3(8). https://doi.org/10.1186/s41235-018-0091-x
Cutting, J. E., DeLong, J. E., & Nothelfer, C. E. (2010). Attention and the evolution of Hollywood films. Psychological Science, 21(3), 440–447. https://doi.org/10.1177/0956797610361679
Dissanayake, E. (2008). If music is the food of love, what about survival and reproductive success? Musicae Scientiae, 12(1), 169–195. https://doi.org/10.1177/1029864908012001081
Eglash, R. (1984). The cybernetics of Cetacea.
Investigations on Cetacea, 16, 151-197.
Eglash, R. (1993). Inferring representation type from the fractal dimension of biological communication waveforms. Journal of Social and Evolutionary Systems, 16(4), 375-399. https://doi.org/10.1016/1061-7361(93)90015-J
Eglash, R. (1997). When math worlds collide: Intention and invention in ethnomathematics. Science, Technology, & Human Values, 22(1), 79-97.
Eglash, R. (1998). Cybernetics and American youth subculture. Cultural Studies, 12(3), 382-409.
Eglash, R., Croissant, J., Di Chiro, G., & Fouché, R. (Eds.). (2004). Appropriating technology: Vernacular science and social power. University of Minnesota Press.
Eglash, R., Bennett, A., Cooke, L., Babbitt, W., & Lachney, M. (2021). Counter-hegemonic computing: Toward computer science education for value generation and emancipation. ACM Transactions on Computing Education, 21(4), 1-30.
Fouché, R. (2011). Analog turns digital: Hip-hop, technology, and the maintenance of racial authenticity. In T. Pinch & K. Bijsterveld (Eds.), The Oxford handbook of sound studies (pp. 505-525). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780195388947.013.0108
Gaskins, N. (2021). Techno-vernacular creativity and innovation: Culturally relevant making inside and outside of the classroom. MIT Press.
Geissmann, T. (2000). Gibbon songs and human music from an evolutionary perspective. In N. Wallin, B. Merker, & S. Brown (Eds.), The origins of music (pp. 103-123). MIT Press.
Goldberg, D. A. M. (2004). The scratch is hip-hop: Appropriating the phonographic medium. In R. Eglash (Ed.), Appropriating technology: Vernacular science and social power (pp. 107-144). University of Minnesota Press.
Hagen, E. H., & Hammerstein, P. (2009). Did Neanderthals and other early humans sing? Seeking the biological roots of music in the loud calls of primates, lions, hyenas, and wolves. Musicae Scientiae, 13(2_suppl), 291–320.
Harris, T. T. (2019). Can it be bigger than hip hop?
From global hip hop studies to hip hop. Journal of Hip Hop Studies, 6(2), 151-197. https://scholarscompass.vcu.edu/jhhs/vol6/iss2/7/
Hasson, U., Landesman, O., Knappmeyer, B., Vallines, I., Rubin, N., & Heeger, D. J. (2008). Neurocinematics: The neuroscience of film. Projections: The Journal for Movies and Mind, 2(1), 1-26. https://doi.org/10.3167/proj.2008.020102
Hasson, U., Malach, R., & Heeger, D. J. (2009). Reliability of cortical activity during natural stimulation. Trends in Cognitive Sciences, 14(1), 40–48. https://doi.org/10.1016/j.tics.2009.10.011
Hibbert, T. (1966). Bam Bam [Recorded by The Maytals]. Doctor Bird – DB-1038.
Hutchins, E. (1995). Cognition in the wild. MIT Press.
Jobaid, M. I., & Naher, K. (2020). Tesla's relationship building disruptive technology in automobile industry in India. International Journal of Latest Trends in Engineering and Technology, 16(1), 142-153. https://www.ijltet.org/journal/159702051621.3247.pdf
Kajikawa, L. (2019). The possessive investment in classical music. In K. W. Crenshaw (Ed.), Seeing race again (pp. 155-174). University of California Press.
Kenon, M. (2000, June 3). Hip-hop: It’s here to stay, OK? Billboard: The International Newsweekly of Music, Video, and Home Entertainment, 112(23), 42.
Krims, A. (2000). Rap music and the poetics of identity. Cambridge University Press.
Larsson, M. (2013). Self-generated sounds of locomotion and ventilation and the evolution of human rhythmic abilities.
Animal Cognition, 17(1), 1–14.
Lavin, W. (2019, May 31). Grandmaster Flash: “Hip hop has always had a misunderstood beginning.” NME. https://www.nme.com/features/grandmaster-flash-hip-hop-interview-2019-2485540
Lazauskas, J. (2019, February 15). BrainRap could change how we see hip-hop–and neuroscience. FastCompany. https://www.fastcompany.com/90306790/brainrap-could-change-how-we-see-hip-hop-and-neuroscience
Liu, S., Chow, H. M., Xu, Y., Erkkinen, M. G., Swett, K. E., Eagle, M. W., Rizik-Baer, D. A., & Braun, A. R. (2012). Neural correlates of lyrical improvisation: An fMRI study of freestyle rap. Scientific Reports, 2, Article 834, 1-8. https://doi.org/10.1038/srep00834
Niederkrotenthaler, T., Tran, U. S., Gould, M., Sinyor, M., Sumner, S., Strauss, M. J., Voracek, M., Till, B., Murphy, S., Gonzalez, F., Spittal, M. J., & Draper, J. (2021). Association of Logic’s hip hop song “1-800-273-8255” with Lifeline calls and suicides in the United States: Interrupted time series analysis. The British Medical Journal (Online), 375, e067726. https://doi.org/10.1136/bmj-2021-067726
Ohriner, M. (2019). Flow: The rhythmic voice in rap music. Oxford University Press.
Pearlman, K. (2009). Cutting rhythms: Shaping the film edit. Routledge. https://doi.org/10.4324/9780080927763
Pickering, A. (2010). The cybernetic brain: Sketches of another future. University of Chicago Press.
Pierce, J. W. (2004). The effect of music therapy and psychoeducation versus psychoeducation for mainstreaming mental health patients into society [Master’s thesis, School of Music, Florida State University]. DigiNole: FSU's Digital Repository. http://purl.flvc.org/fsu/fd/FSU_migr_etd-0811
Polfuß, J. (2021). Hip-hop: A marketplace icon. Consumption Markets & Culture, 1-15. https://doi.org/10.1080/10253866.2021.1990050
Rose, T. (1994). Black noise: Rap music and Black cultural resistance in contemporary American popular culture. Wesleyan University Press.
Ross, A. (2010). Searching for silence. The New Yorker, 86(30), 52-61.
Salimi-Nezhad, N., Amiri, M., Falotico, E., & Laschi, C. (2018). A digital hardware realization for spiking model of cutaneous mechanoreceptor. Frontiers in Neuroscience, 12, Article 322, 1-13. https://doi.org/10.3389/fnins.2018.00322
Salimpoor, V. N., van den Bosch, I., Kovacevic, N., McIntosh, A. R., Dagher, A., & Zatorre, R. J. (2013). Interactions between the nucleus accumbens and auditory cortices predict music reward value. Science, 340(6129), 216-219. https://doi.org/10.1126/science.1231059
Schloss, J. G.
(2004). Making beats: The art of sample-based hip-hop. Wesleyan University Press.
Shannon, C. E., & Weaver, W. (1949). A mathematical theory of communication. University of Illinois Press.
Snowdon, C. T., Zimmermann, E., & Altenmüller, E. (2015). Music evolution and neuroscience. Progress in Brain Research, 217, 17-34. https://doi.org/10.1016/bs.pbr.2014.11.019
Sugano, N. (1983). Effect of doublet impulse sequences in the crayfish claw opener muscles and the computer-simulated neuromuscular synapse. Biological Cybernetics, 49(1), 55-61.
Sule, A., & Inkster, B. (2014). A hip-hop state of mind. The Lancet Psychiatry, 1(7), 494-495. https://doi.org/10.1016/S2215-0366(14)00063-7
Ulman, Y. I., Cakar, T., & Yildiz, G. (2014, August 24). Ethical issues in neuromarketing: “I consume, therefore I am!” Science and Engineering Ethics, 21(5), 1271–1284. https://doi.org/10.1007/s11948-014-9581-5
Voss, R. F., & Clarke, J. (1978). “1/f noise” in music: Music from 1/f noise. The Journal of the Acoustical Society of America, 63(1), 258-263.
Wright, C. (1970). Express yourself [Recorded by Charles Wright & the Watts 103rd Street Rhythm Band]. On Express yourself [Album]. Warner Bros. Records.
Ron Eglash (eglash@umich.edu) received his BS in cybernetics, his MS in systems engineering, and his PhD in History of Consciousness.
He is a professor in the School of Information at the University of Michigan (USA), with a secondary appointment in the Stamps School of Art and Design.