Saturday, July 12, 2014

Emergence and Persistence of Communities in Coevolutionary Networks

Here is some serious geekery for your Saturday entertainment. This is a fairly complex computational model of how social communities emerge and survive over time through a process called "adaptive rewiring."

Here is a brief overview of how communities in these coevolutionary networks might show up in different realms:
In a social network, communities might indicate factions, interest groups, or social divisions [1]; in biological networks, they encompass entities having the same biological function [5–7]; in the World Wide Web they may correspond to groups of pages dealing with the same or related topics [8]; in food webs they may identify compartments [9]; and a community in a metabolic or genetic network might be related to a specific functional task [10].
And for those of us who are not familiar with this field or its terminology, here is a glossary of network terminology, taken from Gross and Blasius (2008).
A brief network glossary.

Degree. The degree of a node is the number of nearest neighbours to which a node is connected. The mean degree of the network is the mean of the individual degrees of all nodes in the network.
Dynamics. Depending on the context, the term dynamics is used in the literature to refer to a temporal change of either the state or the topology of a network. In this paper, we use the term dynamics exclusively to describe a change in the state, while the term evolution is used to describe a change in the topology.
Evolution. Depending on the context the term evolution is used in the literature to refer to a temporal change of either the state or the topology of a network. In this paper, we use the term evolution exclusively to describe a change in the topology, while the term dynamics is used to describe a change in the state.
Frozen nodes. A node is said to be frozen if its state does not change in the long-term behaviour of the network. In certain systems discussed here, the state of frozen nodes can change nevertheless on an even longer (topological) time scale.
Link. A link is a connection between two nodes in the networks. Links are also sometimes called edges or simply network connections.
Neighbours. Two nodes are said to be neighbours if they are connected by a link.
Node. The node is the principal unit of the network. A network consists of a number of nodes connected by links. Nodes are sometimes also called vertices.
Scale-free network. In scale-free networks, the distribution of node degrees follows a power law.
State of the network. Depending on the context, the state of a network is used either to describe the state of the network nodes or the state of the whole network including the nodes and the topology. In this review, we use the term state to refer exclusively to the collective state of the nodes. Thus, the state is a priori independent of the network topology.
Topology of the network. The topology of a network defines a specific pattern of connections between the network nodes.
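To make these terms concrete, here is a tiny illustration (my own toy example, using the Python networkx library, which is not mentioned in the paper) of nodes, links, neighbours, degree, and mean degree:

import networkx as nx

# Build a small undirected network: four nodes, four links.
g = nx.Graph()
g.add_edges_from([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")])

print("neighbours of C:", list(g.neighbors("C")))       # nodes joined to C by a link
print("degree of C:", g.degree("C"))                     # number of nearest neighbours of C
print("mean degree:", sum(d for _, d in g.degree()) / g.number_of_nodes())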
And from the same authors (citation at the bottom of the post), here is a visual representation of how adaptive rewiring might look:


This is interesting stuff, and it seems very relevant to the coevolutionary networks emerging in P2P communities and other cooperative networks in the post-capitalist world.

Full Citation:
González-Avella, JC, Cosenza, MG, Herrera, JL, & Tucci, K. (2014, Jul 1). Emergence and persistence of communities in coevolutionary networks. arXiv:1407.0388v1

Emergence and persistence of communities in coevolutionary networks

J. C. González-Avella, M. G. Cosenza, J. L. Herrera, K. Tucci
ABSTRACT
We investigate the emergence and persistence of communities through a recently proposed mechanism of adaptive rewiring in coevolutionary networks. We characterize the topological structures arising in a coevolutionary network subject to an adaptive rewiring process and a node dynamics given by a simple voterlike rule. We find that, for some values of the parameters describing the adaptive rewiring process, a community structure emerges on a connected network. We show that the emergence of communities is associated to a decrease in the number of active links in the system, i.e. links that connect two nodes in different states. The lifetime of the community structure state scales exponentially with the size of the system. Additionally, we find that a small noise in the node dynamics can sustain a diversity of states and a community structure in time in a finite size system. Thus, large system size and/or local noise can explain the persistence of communities and diversity in many real systems.

I. INTRODUCTION

Many social, biological, and technological systems possess a characteristic network structure consisting of communities or modules, which are groups of nodes distinguished by having a high density of links between nodes of the same group and a comparatively low density of links between nodes of different groups [1–4]. Such a network structure is expected to play an important functional role in many systems. In a social network, communities might indicate factions, interest groups, or social divisions [1]; in biological networks, they encompass entities having the same biological function [5–7]; in the World Wide Web they may correspond to groups of pages dealing with the same or related topics [8]; in food webs they may identify compartments [9]; and a community in a metabolic or genetic network might be related to a specific functional task [10].

Since community structure constitutes a fundamental feature of many networks, the development of methods and techniques for the detection of communities represents one of the most active research areas in network science [2, 11–17]. In comparison, much less work has been done to address a fundamental question: how do communities arise in networks? [18].

Clearly, the emergence of characteristic topological structures, including communities, from a random or featureless network requires some dynamical process that modifies the properties of the links representing the interactions between nodes. We refer to such link dynamics as a rewiring process. Links can vary their strength, or they can appear and disappear as a consequence of a rewiring process. In our view, two classes of rewiring processes leading to the formation of structures in networks can be distinguished: (i) rewirings based on local connectivity properties regardless of the values of the state variables of the nodes, which we denote as topological rewirings; and (ii) rewirings that depend on the state variables of the nodes, where the link dynamics is coupled to the node state dynamics and which we call adaptive rewirings.

Topological rewiring processes have been employed to explain the origin of small-world and scale-free networks [19, 20]. These rewirings can lead to the appearance of community structures in networks with weighted links [21] or by preferential attachment driven by local clustering [22]. On the other hand, there is currently much interest in the study of networks that exhibit a coupling between topology and states, since many systems observed in nature can be described as dynamical networks of interacting nodes where the connections and the states of the nodes affect each other and evolve simultaneously [23–29]. These systems have been denoted as coevolutionary dynamical systems or adaptive networks and, according to our classification above, they are subject to adaptive rewiring processes. The collective behavior of coevolutionary systems is determined by the competition of the time scales of the node dynamics and the rewiring process. Most works that employ coevolutionary dynamics have focused on the characterization of the phenomenon of network fragmentation arising from this competition. Although community structures have been found in some coevolutionary systems [30–33], investigating the mechanisms for the formation of perdurable communities remains an open problem.
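As an aside for the hands-on reader, here is a minimal sketch of the kind of coevolutionary (adaptive rewiring) dynamics described above: a two-state, voter-like rule coupled to state-dependent rewiring. This is my own toy code in Python with networkx, not the authors' actual model or parameters, so treat it purely as an illustration of how node dynamics and link dynamics can coevolve.

import random
import networkx as nx

def coevolving_voter_model(n=100, k=4, p_rewire=0.3, steps=20000, seed=1):
    """Toy coevolutionary dynamics: at each step pick a node with an 'active link'
    (a link to a neighbour in a different state); with probability p_rewire the node
    cuts that link and reconnects to a random node in its own state (topology evolves),
    otherwise it adopts the neighbour's state (state dynamics)."""
    rng = random.Random(seed)
    g = nx.random_regular_graph(k, n, seed=seed)
    state = {v: rng.choice([0, 1]) for v in g}          # two-state, voter-like node variable
    for _ in range(steps):
        v = rng.choice(list(g))
        disagreeing = [u for u in g[v] if state[u] != state[v]]
        if not disagreeing:
            continue                                    # no active link at this node
        u = rng.choice(disagreeing)
        if rng.random() < p_rewire:
            candidates = [w for w in g if state[w] == state[v] and w != v and not g.has_edge(v, w)]
            if candidates:
                g.remove_edge(v, u)
                g.add_edge(v, rng.choice(candidates))   # rewire toward a like-minded node
        else:
            state[v] = state[u]                         # copy the disagreeing neighbour
    active_links = sum(1 for a, b in g.edges if state[a] != state[b])
    return g, state, active_links

g, state, active_links = coevolving_voter_model()
print("active links remaining:", active_links)          # groups of like-state nodes form as this drops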

In this paper we investigate the emergence and the persistence of communities in networks induced by a process of adaptive rewiring. Our work is based on a recently proposed general framework for coevolutionary dynamics in networks [29]. We characterize the topological structures forming in a coevolutionary network having a simple node dynamics. We unveil a region of parameters where the formation of a supertransient modular structure on the network occurs. We study the stability of the community configuration under small perturbations of the node dynamics, as well as for different initial conditions of the system.

Reference:
Gross, T, and Blasius, B. (2008, Mar). Adaptive coevolutionary networks: A review. Journal of the Royal Society Interface; 5(20): 259-271. doi: 10.1098/rsif.2007.1229

Friday, July 11, 2014

Kelly Clancy - Your Brain Is On the Brink of Chaos


From Nautilus, Kelly Clancy takes a look at the increasing evidence for chaos in the brain and nervous system. The nervous system is overwhelmed by incoming sensory data, so much so that much of it never makes it into consciousness.

On the other hand, the brain stem and its adjacent structures, a collection of filters in one sense (which Antonio Damasio calls the protoself), function as gatekeepers to decide what gets passed up into the limbic system and cerebral cortex (i.e., what becomes conscious).

The proposal that there is some chaos in this system is perfectly reasonable - a lot of biological systems contain chaos, which is not the same as disorder. That is an important point that is made in this article:
While disordered systems cannot be predicted, chaos is actually deterministic: The present state of the system determines its future. Yet even so, its behavior is only predictable on short time scales: Tiny differences in inputs result in vastly different outcomes. Chaotic systems can also exhibit stable patterns called “attractors” that emerge to the patient observer. Over time, chaotic trajectories will gravitate toward them. Because chaos can be controlled, it strikes a fine balance between reliability and exploration. Yet because it’s unpredictable, it’s a strong candidate for the dynamical substrate of free will. 
Even with these qualifications, chaos in the nervous system makes a lot of neuroscientists, especially the computationalists, very nervous, because it would completely derail their models.


Your Brain Is On the Brink of Chaos

Neurological evidence for chaos in the nervous system is growing.

By Kelly Clancy Illustration by Josh Cochran July 10, 2014

IN ONE IMPORTANT WAY, the recipient of a heart transplant ignores its new organ: Its nervous system usually doesn’t rewire to communicate with it. The 40,000 neurons controlling a heart operate so perfectly, and are so self-contained, that a heart can be cut out of one body, placed into another, and continue to function perfectly, even in the absence of external control, for a decade or more. This seems necessary: The parts of our nervous system managing our most essential functions behave like a Swiss watch, precisely timed and impervious to perturbations. Chaotic behavior has been throttled out.

Or has it? Two simple pendulums that swing with perfect regularity can, when yoked together, move in a chaotic trajectory. Given that the billions of neurons in our brain are each like a pendulum, oscillating back and forth between resting and firing, and connected to 10,000 other neurons, isn’t chaos in our nervous system unavoidable?

The prospect is terrifying to imagine. Chaos is extremely sensitive to initial conditions—just think of the butterfly effect. What if the wrong perturbation plunged us into irrevocable madness? Among many scientists, too, there is a great deal of resistance to the idea that chaos is at work in biological systems. Many intentionally preclude it from their models. It subverts computationalism, which is the idea that the brain is nothing more than a complicated, but fundamentally rule-based, computer. Chaos seems unqualified as a mechanism of biological information processing, as it allows noise to propagate without bounds, corrupting information transmission and storage.

At the same time, chaos has its advantages. On a behavioral level, the arms race between predator and prey has wired erratic strategies into our nervous system.[1] A moth sensing an echo-locating bat, for example, immediately directs itself away from the ultrasound source. The neurons controlling its flight fire in an increasingly erratic manner as the bat draws closer, until the moth, darting in fits, appears to be nothing but a tumble of wings and legs. More generally, chaos could grant our brains a great deal of computational power, by exploring many possibilities at great speed.

Motivated by these and other potential advantages, and with an accumulation of evidence in hand, neuroscientists are gradually accepting the potential importance of chaos in the brain.

CHAOS IS NOT the same as disorder. While disordered systems cannot be predicted, chaos is actually deterministic: The present state of the system determines its future. Yet even so, its behavior is only predictable on short time scales: Tiny differences in inputs result in vastly different outcomes. Chaotic systems can also exhibit stable patterns called “attractors” that emerge to the patient observer. Over time, chaotic trajectories will gravitate toward them. Because chaos can be controlled, it strikes a fine balance between reliability and exploration. Yet because it’s unpredictable, it’s a strong candidate for the dynamical substrate of free will.
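A quick illustration of that point, deterministic yet unpredictable, is easy to produce. The snippet below (my own example, not from the article) iterates the logistic map in its chaotic regime from two starting points that differ by one part in a million:

def logistic_trajectory(x0, r=3.9, steps=50):
    """Iterate the logistic map x -> r * x * (1 - x), a standard chaotic system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)      # initial condition differs by one part in a million
for t in (0, 10, 25, 50):
    print(f"step {t:2d}: |difference| = {abs(a[t] - b[t]):.6f}")
# Each step is fully determined by the previous one, yet within a few dozen
# iterations the two trajectories bear no resemblance to each other.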

The similarity to random disorder (or stochasticity) has been a thorn in the side of formal studies of chaos. It can be mathematically tricky to distinguish between the two—especially in biological systems. There are no definite tests for chaos when dealing with multi-dimensional, fluctuating biological data. Walter Freeman and his colleagues spearheaded some of the earliest studies attempting to prove the existence of chaos in the brain, but came to extreme conclusions on limited data. He’s argued, for example, that neuropil, the extracellular mix of axons and dendrites, is the organ of consciousness—a strong assertion in any light. Philosophers soon latched onto these ideas, taking even the earliest studies at face value. Articles by philosophers and scientists alike can be as apt to quote Jiddu Krishnamurti as Henri Poincaré, and chaos is often handled with a semi-mystical reverence.[2, 3]

As a result, researchers must tread carefully to be taken seriously. But the search for chaos is not purely poetic. The strongest current evidence comes from single cells. The squid giant axon, for example, operates in a resting mode or a repetitive firing mode, depending on the external sodium concentration. Between these extremes, it exhibits unpredictable bursting that resembles the wandering behavior of a chaotic trajectory before it settles into an attractor. When a periodic input is applied, the squid giant axon responds with a mixture of both oscillating and chaotic activity.[4] There is chaos in networks of cells, too. The neurons in a patch of rat skin can distinguish between chaotic and disordered patterns of skin stretching.[5]


More evidence for chaos in the nervous system can be found at the level of global brain activity. Bizarrely, an apt metaphor for this behavior is an iron slab.[6] The electrons it contains can each point in different directions (more precisely, their spins can point). Like tiny magnets, neighboring spins influence each other. When the slab is cold, there is not enough energy to overcome the influence of neighboring spins, and all spins align in the same direction, forming one solid magnet. When the slab is hot, each spin has so much energy that it can shrug off the influence of its neighbor, and the slab’s spins are disordered. When the slab is halfway between hot and cold, it is in the so-called “critical regime.” This is characterized by fluctuating domains of same-spin regions which exhibit the highest possible dynamic correlations—that is, the best balance between a spin’s ability to influence its neighbors, and its ability to be changed.
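For readers who want to see the iron-slab metaphor in action, here is a rough sketch of the standard two-dimensional Ising model with textbook Metropolis updates (my own illustration, not anything from the article): cold runs stay ordered, hot runs disorder, and the interesting fluctuations live near the critical temperature.

import math
import random

def ising_magnetisation(L=24, T=2.27, sweeps=400, seed=0):
    """2D Ising model with Metropolis updates (units where J = k_B = 1; critical T is about 2.27).
    Returns |magnetisation| per spin: near 1 when ordered, near 0 when disordered."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]                 # start fully ordered; heat can melt it
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = spins[(i + 1) % L][j] + spins[(i - 1) % L][j] + spins[i][(j + 1) % L] + spins[i][(j - 1) % L]
        dE = 2 * spins[i][j] * nb                       # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j] *= -1
    return abs(sum(map(sum, spins))) / (L * L)

for T in (1.5, 2.27, 3.5):                              # cold, critical, hot
    print(f"T = {T}: |magnetisation| per spin = {ising_magnetisation(T=T):.2f}")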

The critical state can be quite useful for the brain, allowing it to exploit both order and disorder in its computations—employing a redundant network with rich, rapid chaotic dynamics, and an orderly readout function to stably map the network state to outputs. The critical state would be maintained not by temperature, but the balance of neural excitation and inhibition. If the balance is tipped in favor of more inhibition, the brain is “frozen” and nothing happens. If there is too much excitation, it will descend into chaos. The critical point is analogous to an attractor.

But how can we tell whether the brain operates at the critical point? One clue is the structure of the signals generated by the activity of its billions of neurons. We can measure the power of the brain’s electrical activity at different oscillation frequencies. It turns out that the power of activity falls off as the inverse of the frequency of that activity. Once referred to as 1/f “noise,” this relationship is actually a hallmark of systems balanced at their critical point.[7] The spatial extent of regions of coordinated neuronal activity also depends inversely on frequency, another hallmark of criticality. When the brain is pushed away from its usual operating regime using pharmacological agents, it usually loses both these hallmarks,[8, 9] and the efficiency of its information encoding and transmission is reduced.[10]
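The 1/f signature mentioned above is straightforward to check in code. Here is a hedged sketch (my own example with numpy; real EEG or LFP recordings would replace the synthetic pink noise) that estimates a signal's power spectrum and fits the log-log slope, which should come out near -1 for a 1/f process:

import numpy as np

def spectral_slope(signal, fs):
    """Fit log10(power) against log10(frequency); a slope near -1 indicates 1/f scaling."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    keep = freqs > 0                                    # drop the DC component
    slope, _ = np.polyfit(np.log10(freqs[keep]), np.log10(power[keep]), 1)
    return slope

# Build a crude 1/f ("pink") test signal by shaping white noise in frequency space.
rng = np.random.default_rng(0)
white = rng.standard_normal(2 ** 14)
f = np.fft.rfftfreq(len(white), d=1.0 / 1000.0)
spec = np.fft.rfft(white)
spec[1:] = spec[1:] / np.sqrt(f[1:])                    # amplitude ~ 1/sqrt(f) gives power ~ 1/f
pink = np.fft.irfft(spec, n=len(white))

print("estimated spectral slope:", round(spectral_slope(pink, fs=1000.0), 2))   # expect roughly -1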

THE PHILOSOPHER Gilles Deleuze and psychiatrist Felix Guattari contended that the brain’s main function is to protect us, like an umbrella, from chaos. It seems to have done so by exploiting chaos itself. At the same time, neural networks are also capable of near-perfect reliability, as with the beating heart. Order and disorder enjoy a symbiotic relationship, and a neuron’s firing may wander chaotically until a memory or perception propels it into an attractor. Sensory input would then serve to “stabilize” chaos. Indeed, the presentation of a stimulus reduces variability in neuronal firing across a surprising number of different species and systems,[11] as if a high-dimensional chaotic trajectory fell into an attractor. By “taming” chaos, attractors may represent a strategy for maintaining reliability in a sensitive system.[12] Recent theoretical and experimental studies of large networks of independent oscillators have also shown that order and chaos can co-exist in surprising harmony, in so-called chimera states.[13]

The current research paradigm in neuroscience, which considers neurons in a snapshot of time as stationary computational units, and not as members of a shifting dynamical entity, might be missing the mark entirely. If chaos plays an important role in the brain, then neural computations do not operate as a static read-out, a lockstep march from the transduction of photons to the experience of light, but a high-dimensional dynamic trajectory as spikes dance across the brain in self-choreographed cadence.

While hundreds of millions of dollars are being funneled into building the connectome—a neuron-by-neuron map of the brain—scientists like Eve Marder have argued that, due to the complexity of these circuits, a structural map alone will not get us very far. Functional connections can flicker in and out of existence in milliseconds. Individual neurons appear to change their tuning properties over time [14, 15] and thus may not be “byte-addressable”—that is, stably represent some piece of information—but instead operate within a dynamic dictionary that constantly shifts to make room for new meaning. Chaos encourages us to think of certain disorders as dynamical diseases, epileptic seizures being the most dramatic example of the potential failure of chaos.[16] Chaos might also serve as a signature of brain health: For example, researchers reported less chaotic dynamics in the dopamine-producing cells of rodents with brain lesions, as opposed to healthy rodents, which could have implications in diagnosing and treating Parkinson’s and other dopamine-related disorders.[17]

Economist Murray Rothbard described chaos theory as “destroying math from within.” It usurps the human impulse to simplify, replacing the clear linear relationships we seek in nature with the messy and unpredictable. Similarly, chaos in the brain undermines glib caricatures of human behavior. Economists often model humans as “rational agents”: hedonistic calculators who act for their future good. But we can’t really act out of self-interest—though that would be a reasonable thing to do—because we are terrible at predicting what that is. After all, how could we? It’s precisely this failure that makes us what we are.

Kelly Clancy studied physics at MIT, then worked as an itinerant astronomer for several years before serving with the Peace Corps in Turkmenistan. As a National Science Foundation fellow, she recently finished her PhD in biophysics at the University of California, Berkeley. She will begin her postdoctoral research at Biozentrum in Switzerland this fall.

References

1. Humphries, D.A. & Driver, P.M. Protean defence by prey animals. Oecologia 5, 285–302 (1970).
2. Abraham, F.D. Chaos, bifurcations, and self-organization: dynamical extensions of neurological positivism. Psychoscience 1, 85-118 (1992).
3. O’Nuallain, S. Zero power and selflessness: what meditation and conscious perception have in common. Cognitive Science 4, 49-64 (2008).
4. Korn, H. & Faure, P. Is there chaos in the brain? II. Experimental evidence and related models. Comptes Rendus Biologies 326, 787–840 (2003).
5. Richardson, K.A., Imhoff, T.T., Grigg, P. & Collins, J.J. Encoding chaos in neural spike trains. Physical Review Letters 80, 2485–2488 (1998).
6. Beggs, J.M. & Timme, N. Being critical of criticality in the brain. Frontiers in Physiology 3, 1–14 (2012).
7. Bak, P., Tang, C. & Wiesenfeld, K. Self-organized criticality: an explanation of 1/f noise. Physical Review Letters 59, 381–384 (1987).
8. Mazzoni, A. et al. On the dynamics of the spontaneous activity in neuronal networks. PLoS ONE 2 e439 (2007).
9. Beggs, J.M. & Plenz, D. Neuronal avalanches in neocortical circuits. Journal of Neuroscience 23, 11167–11177 (2003).
10. Shew, W.L., Yang, H., Yu, S., Roy, R. & Plenz, D. Information capacity and transmission are maximized in balanced cortical networks with neuronal avalanches. Journal of Neuroscience 31, 55–63 (2011).
11. Churchland, M.M. et al. Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nature Neuroscience 13, 369–378 (2010).
12. Laje, R. & Buonomano, D.V. Robust timing and motor patterns by taming chaos in recurrent neural networks. Nature Neuroscience 16, 925–933 (2013).
13. Kuramoto, Y. & Battogtokh, D. Coexistence of coherence and incoherence in nonlocally coupled phase oscillators: a soluble case. Nonlinearity 26, 2469-2498 (2002).
14. Margolis, D.J. et al. Reorganization of cortical population activity imaged throughout long-term sensory deprivation. Nature Neuroscience 15, 1539–1546 (2012).
15. Ziv, Y. et al. Long-term dynamics of CA1 hippocampal place codes. Nature Neuroscience 16, 264–266 (2013).
16. Schiff, S.J. et al. Controlling chaos in the brain. Nature 370, 615–620 (1994).
17. di Mascio, M., di Giovanni, G., di Matteo, V. & Esposito, E. Decreased chaos of midbrain dopaminergic neurons after serotonin denervation. Neuroscience 92, 237–243 (1999).

Headhunters: The Search for a Science of the Mind (Book Review)


From the Times Literary Supplement (London Times), this is a review of a new book not yet published in the U.S. called Headhunters: The Search for a Science of the Mind, by Ben Shephard. This book appears to be an interesting look at the history of modern anthropology, neuroscience, psychology, and psychotherapy in Britain.

Forgotten pioneers of the science of the mind

ADAM KUPER
Published: 9 July 2014

HEADHUNTERS: The search for a science of the mind
Ben Shephard
323pp. Bodley Head. £25.
 
Charles Seligman; from the book under review. Photograph: Bodley Head
 

In the early twentieth century, a handful of Cambridge men, young medical doctors mostly, established modern anthropology, neuroscience, psychology and psychotherapy in Britain. Ben Shephard sums up their quest as “a search for a science of the mind”, which was certainly a large part of it, but they were interested in a great many other things as well. They were close associates who influenced one another, but it would be a mistake to exaggerate the coherence of their projects or the extent to which they shared a common sense of what they were after. Because they were so eclectic and ranged so widely, they were not installed as ancestor figures in the disciplines into which the human sciences were beginning to fragment, even if they were influential in the committees that helped to shape the new professional institutions. Their names are therefore mostly unfamiliar today. Shephard rescues them from the oubliette of disciplinary histories and presents them as members of a cohort: a network of eccentric, wilful, brilliant men who were prepared to go anywhere, try anything, to advance the scientific understanding of human nature. 

Central members of this cohort were brought together in the 1898 “Cambridge Anthropological Expedition to Torres Straits”, that narrow stretch of sea, with numerous islands, which separates Australia and New Guinea. The expedition was organized by a zoologist, Alfred Haddon. Encouraged by Thomas Huxley, Haddon had visited the Torres Strait a year earlier to study tropical fauna and coral reefs. He became interested in the islanders and was tempted to take up anthropology, although Huxley did warn him that nobody could make a living at it. (Mrs Haddon pluckily remarked that “you may as well starve as an anthropologist as a zoologist”.) Haddon then put together a scientific expedition to the Strait. The islanders seemed particularly interesting since they might constitute a link between the peoples of New Guinea and Indonesia and the Australian Aborigines, who represented for the Victorians the archetypal savage hunter-gatherers. And their origins were mysterious. The Cambridge team would study the anatomy, physiology, sense perception and sociology of the natives. 

Haddon lectured on comparative anatomy in the Natural Sciences department in Cambridge alongside a young doctor, W. H. R. Rivers, who lectured on the sense organs. Rivers had studied experimental psychology at the leading school in the world, in Jena, in Germany, and developed an unfashionable interest in mental illnesses. Haddon invited him to lead the psychological side of the expedition, but Rivers hesitated until his two favourite students, William McDougall and C. S. Myers, signed on. He then told Haddon that after the recent death of his mother he felt run down and in need of a holiday. He would come along and pay his own way. Charles Seligman, a pathologist who was a friend and contemporary of Myers, also volunteered and was directed to join Rivers, Myers and McDougall in doing experiments on the sight and hearing of the islanders. 

“I put the direction of the psychological department entirely into the hands of Rivers”, Haddon reported, “and for the first time psychological observations were made on a backward people in their own country by trained psychologists with adequate equipment.” Despite Haddon’s optimism, their equipment turned out to be far from adequate in tropical conditions and they ended up doing quite simple experiments, but the results appeared to show that the sense perception of the islanders was not very different from that of Europeans. They might be better able to discriminate birds at a distance, but only because they had been trained to do so. Their colour vocabulary was undeveloped, but they could pick out the same shades of difference as an average Englishman. Personalities, too, were familiar. “The character of the natives appears to be as diverse as it would be in any English town”, Myers remarked. Their cook was “the sturdy, plodding thick-set workman”. Another man was “the best example of the high-strung nervous type: his excitement when he is telling a story is remarkable”. And then there was Ulai, “the personification of cunning”, who grasped what the expedition was after and sold Rivers a collection of rods that recorded his sexual conquests, explaining that at his age he would not be adding to their number. 

Haddon directed the ethnological research. One coup was to persuade the Murray islanders to revive a male initiation ceremony that the missionaries had banned. Rivers found himself drawn to sociological issues, especially family and kinship. “While going over the various names which one man would apply to others, I was occasionally told that such and such a man would stop a fight, another would bury a dead man, and so on”, he noted. “When the clues given by these occasional remarks were followed up it was found that there were certain very definite duties and privileges attached to certain bonds of kinship.” Rivers went on to become the leading kinship theorist of his generation. 

Studying hearing, Myers and Seligman collected recordings of local music. Then Seligman went off to do some ethnological work of his own in New Guinea, and Myers and McDougall made a difficult journey to Sarawak, where the administrator, Charles Hose, was orchestrating a peace conference between two groups of headhunters. Myers found it all rather trying, and objected particularly to having to do his daily rounds surrounded by pigs. “How much this distracts from the pleasures of defecation and subsequent pig eating”, he complained. Nor could he get on with the local people. His one achievement was the collection of music recordings. McDougall fitted in rather better with the swashbuckling Hose and enjoyed the company of the headhunters. He was tempted to make a career of anthropology. However, he decided that it was too easy and instead went to Germany to master the latest experimental methods in psychology. 

Another Cambridge friend of Rivers was a young Australian doctor, Grafton Elliot Smith, who had been studying the brains of marsupials. When the Haddon team set out for the Torres Strait, Elliot Smith went to Cairo as professor of anatomy at the medical school. Rivers turned up in Egypt in 1900, to investigate the colour vision of the Egyptians in order to establish a comparison with the Torres Strait islanders. His experimental subjects were labourers working for another former member of the Torres Strait team, Anthony Wilken, who was doing archaeological research. Rivers brought Elliot Smith in to examine the human remains that Wilken had uncovered, and especially the well-preserved brains of pre-dynastic people. Elliot Smith soon became a world authority on the human brain. He modelled the brain’s structure as though it was an archaeological site, the different levels supposedly reflecting evolutionary advances. The neocortex, shared by all mammals, controlled basic functions while the prefrontal area was the seat of more advanced abilities. 

Elliot Smith also began to develop theories about Egyptology and anthropology. His most famous idea was that the wonders of ancient Egypt were created not by Africans but by northern invaders, and that civilization had then spread from Egypt to all the corners of the world. Rivers was impressed, and he now decided that Melanesia had been transformed by an invasion of more advanced peoples. Seligman, who had gone on to do ethnological fieldwork in the Sudan, also adopted Elliot Smith’s thesis and argued that the most advanced African civilizations were the work of light-skinned “Hamitic” invaders who had passed through Egypt. The great African pro-consul, Lord Lugard, was a convert to this doctrine, which had an obvious appeal to colonialists.

Rivers, more and more absorbed by ethnology and engaged in writing his great work, The History of Melanesian Society, found the time to collaborate in a famous experiment with his friend Henry Head, another young doctor who had studied experimental psychology in Germany and Czechoslovakia. Investigating the perception of pain, Head had two cutaneous nerves on his left forearm severed. Every Friday for the next four years, he visited Rivers in his college rooms to chart the process of regeneration and the areas of acute sensitivity. Echoing Elliot Smith’s ideas about the evolutionary levels of the brain, Rivers and Head decided that the nervous system contained two layers: one older and more primitive; the other more subtle and localized. They speculated that the two systems “owed their origin to the developmental history of the nervous system. They reveal the means by which an imperfect organism has struggled towards improved functions and physical unity”. And this “could be seen as a metaphor for the triumph of civilization over savagery in human history”. Frederic Bartlett, a student of Rivers who went on to become a leading psychologist in the next generation, noted that this metaphor informed all Rivers’s later theories in physiology, psychology and anthropology. The structure of every human organ, every social institution, revealed cumulative layers of progressive development.

Myers and McDougall stuck with psychology but developed very different approaches. Myers became an anti-racist and a humanist. McDougall was a biological determinist, although he did become president of the English Society for Psychical Research. 

Like Rivers, Myers initially took an evolutionary view: “The primitive mind first or the child mind, if you like; then the industrial mind; and the abnormal last of all: that seems to me the natural order, since each in a sense implies the last”. He moved from St Bartholomew’s Hospital in London to Cambridge to work with Rivers, and set up a small laboratory in a cottage in Mill Lane, funded by a gift from his mother. Here he worked on the psychological basis of rhythm in music. 

Psychology was looked down on by the Cambridge establishment, but Ludwig Wittgenstein was intrigued and regularly came to Mill Lane to work with Myers. “I had a discussion with Myers about the relations between Logic and Philosophy”, he wrote to Bertrand Russell. “I was very candid and I am sure he thinks that I am the most arrogant devil who ever lived . . . . I think he was a bit less confused after the discussion than before.” When the laboratory was opened to the public in 1913, Wittgenstein exhibited an apparatus for investigating the perception of rhythm. Perhaps influenced by Wittgenstein, Myers was moving away from biological determinism. The physiologists, he complained, “in their attempts to penetrate the reality of the known, were deliberately ignoring the knower”. Experimental psychology made unrealistic assumptions, he came to believe. “The factor of feeling was expressly eliminated from our experiments on memory.” Together with Rivers, Myers lobbied, unsuccessfully, for Cambridge to establish a mental health centre. He also criticized racial theories. The “mental characters” of a European peasant were “essentially the same as those of primitive communities”, he insisted. “In temperament we meet the same variations in primitive as in civilized communities.” 

McDougall, in contrast, remained committed to biological explanations of human behaviour. He moved to Oxford, championed eugenics, and began to theorize about race and instinct. A quirky and difficult man, who had quarrelled with most of his associates, McDougall made a satisfactory marriage with the daughter of a chimney sweep in 1900 and began to repair his strained relationships with Rivers and Myers. In 1901, the three men were instrumental in the establishment of the British Psychological Society. And they shared a commitment to develop the treatment of mental illnesses. Psychoanalytic theories came to their attention with the publication of lectures Freud had delivered in 1910 at Clark University in Massachusetts. In 1913, Jung attended a medical congress in London and impressed McDougall. He arranged to visit Jung for analysis, but was frustrated by the outbreak of the First World War. 
 
The war brought them all together again. The cohort of medical doctors, anthropologists and psychologists was now deployed in a new and contentious field: the treatment of soldiers traumatized by battle. (Shephard has published a study of military psychiatry, and this section of his book is particularly strong.) Myers had gone to France immediately the war broke out, and French psychiatrists had introduced him to soldiers who were suffering from strange symptoms – struck dumb, paralysed without any evident physical cause, or suffering from a total loss of memory. He concluded that these symptoms might sometimes be caused by the stress of battle, but that they might also have a physiological basis, resulting from close proximity to heavy explosions. In 1915, he published a description of “shell shock” in the British Medical Journal. In the following year he was appointed “Specialist in Nerve Shock” to the British army with the temporary rank of major. 

The diagnosis was, however, vague, and it was also problematic in operational terms. One in six shell-shock patients were officers (who constituted only one in thirty of the fighting men). And when men diagnosed with shell shock were being evacuated from the front line, even returned to Britain, there were suspicions of faking. Manpower losses from shell shock became a serious drain. In one fortnight at the height of the Battle of the Somme, 2 Division had 2,400 wounded men and 501 cases of “shell-shock wounded”. Myers was sidelined, clinics were set up near the front line to rehabilitate traumatized men, and the diagnosis of shell shock was shelved in favour of the pre-war categories of neurasthenia and hysteria. 

But large numbers of soldiers had returned to Britain, wounded or psychologically incapacitated, and suffering acutely from depression: “nervous wrecks” was the common description. McDougall found himself “the head of a hospital section full of ‘shell shock’ cases, a most strange, wonderful and pitiful collection of nervously disordered soldiers”. Elliot Smith was posted to Maghull hospital near Liverpool, and recruited Rivers. They experimented with various methods, including the new talking therapy favoured by Freud and Jung, and began to pay special attention to dreams.

Rivers moved at the end of 1916 to Craiglockhart, in Scotland, a facility for officers who had been diagnosed with shell shock. He cherry-picked the most interesting cases, one of whom, famously, was Siegfried Sassoon, who had been sent there after making an anti-war protest. Sassoon recalled that at their first session he asked Rivers if he was suffering from shell shock. 

“’Certainly not’, he replied.
‘What have I got then?’
‘Well, you appear to be suffering from an anti-war complex.’ We both of us laughed at that.” 


Rivers engaged critically with Freudian ideas. He reworked Freud’s dream theory in a neglected classic, Conflict and Dreams, which was published posthumously, edited by Elliot Smith. But as Sassoon apparently recognized, Rivers could well have featured as a classical Freudian case study himself, tormented by homosexual desires. In 1914, Rivers had abandoned a young student, John Layard, on a remote Melanesian island, evidently unable to cope with a physical attraction that Layard himself was willing to express. “Rivers had obviously not recognized the whole homosexual content of our relationship, probably on both sides”, Layard remarked. He also recalled that Rivers had “immense quantities of clever young men around him. He told me once that it was only the affection of young men that kept him alive”. (Layard returned to England suffering from depression. Rivers attended him but many years later he attempted suicide, was treated by Jung, and eventually published a Jungian study of dream analysis.) 

Rivers also suffered greatly from the strain of treating young officers only so that they could be sent back to the front, and he had to retire owing to nervous exhaustion. In 1922, while fighting the general election in the London University seat on behalf of the Labour Party, he had a fatal heart attack. In the same year the founding texts of the new functionalist anthropology were published, and his speculative history of Melanesian society was consigned to the scrapheap. 

More broadly, in the 1920s structural-functional accounts replaced speculative historical approaches in the natural and social sciences. The brain was now seen as an intricate working organism, its parts functionally specialized. Societies were machines for living in, not deposits of archaeological strata. Academic psychologists set out to win respect by making psychology over into an experimental natural science, abandoning therapy. 

The members of the cohort did not have the best of luck in the post-war world. Elliot Smith became a famous anatomist, but his reputation was badly damaged by his endorsement of the Piltdown forgery. (A hoaxer had put together a modern cranium and an ape’s jaw and teeth, which led Elliot Smith to conclude that early hominids had highly developed brains.) McDougall was appointed to a chair at Harvard in 1919, but his advocacy of instinct theory, eugenics and racial determinism did not go down well. Nor did his enthusiasm for psychic research. American psychology was now “behaviorist” and experimental, and McDougall was regarded as a Victorian relic, “still thinking of himself as an Englishman in the British colonies”, colleagues complained. He lost a small fortune in the stock market crash of 1929 and was then swindled over an oil well. (“The professor is noted for his experiments on animal behavior through tests made mostly with rats”, the New York Times noted with some glee. “Seems this time he got caught by two oily ones.”) Seligman became professor of ethnology at the London School of Economics, but in the 1920s he was sidelined by the charismatic Bronisław Malinowski. Myers was held responsible for the shell shock fiasco. (“There’s no such thing as shell shock”, General George S. Patton was to insist. “It’s an invention of the Jews.”) 

“Although many of the answers which Rivers, McDougall and Elliot Smith gave have proved to be wrong, because in their time the data to answer them did not exist, the questions they posed are still relevant”, Ben Shephard suggests. In fact their theories were discredited and their research programmes declared obsolete. And so they have largely been forgotten. Only Rivers has had an unexpected afterlife, resurrected in the imaginations of Siegfried Sassoon and Pat Barker. Yet this cohort of open-minded, cosmopolitan, adventurous young doctors was surely more interesting, more creative and more admirable than most of the narrow specialists who succeeded them. 

Adam Kuper is Centennial Professor of Anthropology at the London School of Economics.

Thursday, July 10, 2014

What Happens to the Cool Kids When They Grow Up?


I could easily have been one of the subjects of this study. As an adolescent and teen I was desperate to be "cool," to be seen as mature, and to be "popular." It never really happened, and in some ways I was heading down the path these kids traveled - more relationship difficulties, higher rates of drug and alcohol abuse.

Fortunately for me, I bottomed out as an 18-19 year old, dropped out of the world I had been living in (including leaving behind my friends from high school), and then went back to school (after flunking out of my first college).

The kids in the study didn't make the same changes:
Allen's team said their results show that "early adolescent attempts to gain status via pseudomature behaviour are not simply passing annoyances of this developmental stage, but rather may signal movement down a problematic pathway and away from progress toward real psychosocial competence."
Hitting my bottom and becoming introspective (thank you Plato, Aristotle, Shakespeare, Walt Whitman, St Theresa, Mirabai, Rumi, and so many others) saved my life. AND it allowed me to do some growing up that I failed to do as a teenager (no one grows up psychologically when they are high or drunk much of the time).

What happens to the cool kids when they grow up?

Wednesday, July 2, 2014


"Cool kids", according to a new study, are those early teens (aged 13 to 15) who want to be popular, and try to impress their peers by acting older than their years. They have precocious romantic relationships, commit relatively minor acts of bad behaviour (such as sneaking into the cinema without paying), and surround themselves with good-looking friends. These teenagers attract respect from their peers at first, but what's the story by the time they reach early adulthood?

Joseph Allen and his colleagues made contact with 184 thirteen-year-olds (98 girls) from a diverse range of backgrounds, living in the Southeastern United States. They interviewed them at that age, and then again when they were aged 14 and 15. The researchers also contacted some of their close friends and peers. Finally, the sample and their friends were followed up again a decade later, when they were aged 21 to 23.

There were short-term advantages to being a cool kid - these teens tended to be popular when they were in early adolescence. However, this popularity began to fade through teenhood. And ten years later, the cool kids were at greater risk for alcohol and drug problems, more serious criminal behaviour, and, according to their friends, they struggled with their platonic and romantic relationships. As adults, cool kids also tended to blame their recent relationship break ups on their partner not thinking they were popular enough - as if they were still viewing life through the immature lens of cool.

Allen's team said their results show that "early adolescent attempts to gain status via pseudomature behaviour are not simply passing annoyances of this developmental stage, but rather may signal movement down a problematic pathway and away from progress toward real psychosocial competence." They think cool kids' preoccupation with being precocious and rebellious gets in the way of them developing important socialisation skills. It's also likely that as they get older, cool kids feel the need to engage in ever greater acts of rebellion to command respect from their peers.

Is it possible that the researchers were simply measuring a propensity to deviance and criminality in early adolescence, making their longitudinal findings unsurprising? They don't think so. They point out that serious criminality, and alcohol and cannabis use, in early adulthood were more strongly correlated with being a cool kid in early adolescence (i.e. as measured by desire for popularity; precocious romantic relationships; minor deviance; and surrounding oneself with good-looking friends) than with alcohol and drug use, and criminality at that time.

The study is not without limitations - for example, cool kids were found to lose their popularity through adolescence, but this was based on a measure of their peers' desire to be with them, not on their status. It's also possible they retained or earned popularity with teens older than them. Nonetheless, Allen and his team said their findings are novel and show that the "seemingly minor behaviours" associated with being a cool kid "predict far greater future risk than has heretofore been recognised."

_________________________________

Allen JP, Schad MM, Oudekerk B, & Chango J (2014, Jun 11). What Ever Happened to the "Cool" Kids? Long-Term Sequelae of Early Adolescent Pseudomature Behavior. Child Development; Epub ahead of print. doi: 10.1111/cdev.12250 | PMID: 24919537
* * * * *

What Ever Happened to the "Cool" Kids? Long-Term Sequelae of Early Adolescent Pseudomature Behavior.

Allen JP, Schad MM, Oudekerk B, Chango J.

Abstract

Pseudomature behavior-ranging from minor delinquency to precocious romantic involvement-is widely viewed as a nearly normative feature of adolescence. When such behavior occurs early in adolescence, however, it was hypothesized to reflect a misguided overemphasis upon impressing peers and was considered likely to predict long-term adjustment problems. In a multimethod, multireporter study following a community sample of 184 adolescents from ages 13 to 23, early adolescent pseudomature behavior was linked cross-sectionally to a heightened desire for peer popularity and to short-term success with peers. Longitudinal results, however, supported the study's central hypothesis: Early adolescent pseudomature behavior predicted long-term difficulties in close relationships, as well as significant problems with alcohol and substance use, and elevated levels of criminal behavior.

Research Offers New Insight into How the Brain Processes Emotions

Parametric modulation analysis (univariate) for independent ratings of positive and negative valence.

This new study sheds some light on how the brain processes emotions, although it certainly does not explain everything. According to Cornell University neuroscientist Adam Anderson,
“It appears that the human brain generates a special code for the entire valence spectrum of pleasant-to-unpleasant, good-to-bad feelings, which can be read like a ‘neural valence meter’ in which the leaning of a population of neurons in one direction equals positive feeling and the leaning in the other direction equals negative feeling.”
Interesting stuff - too bad the full article is hidden from readers behind a paywall.

Study cracks how brain processes emotions

Date: July 9, 2014
Source: Cornell University

Summary:
Although feelings are personal and subjective, the human brain turns them into a standard code that objectively represents emotions across different senses, situations and even people, reports a new study. “Despite how personal our feelings feel, the evidence suggests our brains use a standard code to speak the same emotional language,” one researcher concludes.

Although feelings are personal and subjective, the human brain turns them into a standard code that objectively represents emotions across different senses, situations and even people, reports a new study by Cornell University neuroscientist Adam Anderson.

“We discovered that fine-grained patterns of neural activity within the orbitofrontal cortex, an area of the brain associated with emotional processing, act as a neural code which captures an individual’s subjective feeling,” says Anderson, associate professor of human development in Cornell’s College of Human Ecology and senior author of the study, “Population coding of affect across stimuli, modalities and individuals,” published online in Nature Neuroscience.

Their findings provide insight into how the brain represents our innermost feelings – what Anderson calls the last frontier of neuroscience – and upend the long-held view that emotion is represented in the brain simply by activation in specialized regions for positive or negative feelings, he says.

“If you and I derive similar pleasure from sipping a fine wine or watching the sun set, our results suggest it is because we share similar fine-grained patterns of activity in the orbitofrontal cortex,” Anderson says.

“It appears that the human brain generates a special code for the entire valence spectrum of pleasant-to-unpleasant, good-to-bad feelings, which can be read like a ‘neural valence meter’ in which the leaning of a population of neurons in one direction equals positive feeling and the leaning in the other direction equals negative feeling,” Anderson explains.

For the study, the researchers presented participants with a series of pictures and tastes during functional neuroimaging, then analyzed participants’ ratings of their subjective experiences along with their brain activation patterns.

Anderson’s team found that valence was represented as sensory-specific patterns or codes in areas of the brain associated with vision and taste, as well as sensory-independent codes in the orbitofrontal cortices (OFC), suggesting, the authors say, that representation of our internal subjective experience is not confined to specialized emotional centers, but may be central to perception of sensory experience.

They also discovered that similar subjective feelings – whether evoked from the eye or tongue – resulted in a similar pattern of activity in the OFC, suggesting the brain contains an emotion code common across distinct experiences of pleasure (or displeasure), they say. Furthermore, these OFC activity patterns of positive and negative experiences were partly shared across people.
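To give a feel for what "partly shared across people" means in practice, here is a schematic sketch of leave-one-subject-out decoding with a generic linear classifier (scikit-learn) on made-up data. The voxel patterns, participant counts, and the decoder are all my own assumptions for illustration; this is not the authors' analysis pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_subjects, n_trials, n_voxels = 8, 60, 100
shared_axis = rng.standard_normal(n_voxels)             # a pretend valence code shared across people

X, y, subj = [], [], []
for s in range(n_subjects):
    valence = rng.choice([0, 1], size=n_trials)         # 0 = unpleasant trial, 1 = pleasant trial
    patterns = np.outer(valence - 0.5, shared_axis) + rng.standard_normal((n_trials, n_voxels))
    X.append(patterns); y.append(valence); subj.append(np.full(n_trials, s))
X, y, subj = np.vstack(X), np.concatenate(y), np.concatenate(subj)

# Leave-one-subject-out: train on everyone else, test on the held-out participant.
accuracies = []
for s in range(n_subjects):
    train, test = subj != s, subj == s
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    accuracies.append(clf.score(X[test], y[test]))
print("mean cross-participant accuracy:", round(float(np.mean(accuracies)), 2))  # above 0.5 suggests a shared code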

“Despite how personal our feelings feel, the evidence suggests our brains use a standard code to speak the same emotional language,” Anderson concludes.


Story Source:
The above story is based on materials provided by Cornell University. The original article was written by Melissa Osgood. Note: Materials may be edited for content and length.

Journal Reference:
Junichi Chikazoe, Daniel H Lee, Nikolaus Kriegeskorte, Adam K Anderson. (2014, Jun 22). Population coding of affect across stimuli, modalities and individuals. Nature Neuroscience, ePub ahead of print. DOI: 10.1038/nn.3749
* * * * *

Here is the abstract from the full article, which is sequestered safely behind a paywall, although the article can be yours for the low, low rate of $32.

Population coding of affect across stimuli, modalities and individuals

Junichi Chikazoe, Daniel H Lee, Nikolaus Kriegeskorte & Adam K Anderson

Nature Neuroscience (2014). doi:10.1038/nn.3749 
Received 19 January 2014, Accepted 23 May 2014, Published online 22 June 2014
Abstract

It remains unclear how the brain represents external objective sensory events alongside our internal subjective impressions of them—affect. Representational mapping of population activity evoked by complex scenes and basic tastes in humans revealed a neural code supporting a continuous axis of pleasant-to-unpleasant valence. This valence code was distinct from low-level physical and high-level object properties. Although ventral temporal and anterior insular cortices supported valence codes specific to vision and taste, both the medial and lateral orbitofrontal cortices (OFC) maintained a valence code independent of sensory origin. Furthermore, only the OFC code could classify experienced affect across participants. The entire valence spectrum was represented as a collective pattern in regional neural activity as sensory-specific and abstract codes, whereby the subjective quality of affect can be objectively quantified across stimuli, modalities and people.

Wednesday, July 09, 2014

Are Implanted False Memories Permanent?


Back in the 1980s, there was an explosion of "repressed memories" from children who had supposedly suffered satanic abuse in schools, day cares, and family homes. The only problem was that most of these memories were iatrogenic, which means they were "implanted" by their therapists. Before the truth came out (that the majority of these memories were implanted by a small percentage of therapists), a lot of innocent people had their lives destroyed.

So what happened to those children and the "false memories" that were implanted in them? This brief article from io9 takes a look at that topic.

Are Implanted False Memories Permanent?


Esther Inglis-Arkell
July 10, 2014


The 1980s saw psychologists discovering a lot of "repressed" memories in patients. As it turned out, they weren't so much memories as inventions. Are all those patients stuck with false memories of Satanic abuse and alternate personalities forever?

In 1973, the book Sybil took the world by storm. A pioneering psychiatrist took a very troubled young woman under her wing. After a lot of therapy and a lot of drugs, she discovered that the eponymous Sybil had many alternate personalities. What was the source of these alternate personalities? Extended therapy revealed that Sybil's mind created them to deal with the horrific abuse she experienced at her mother's hands. As therapy continued, the doctor learned more about the abuse by uncovering memories repressed by Sybil's conscious mind for decades.

The problem is, neither the personalities nor the abuse ever existed. Later records show that Sybil was dependent on the doctor for money and drugs, and tried several times to tell her that she was making everything up. That didn't make it into the book, and it didn't make it into the public discourse. What stayed with people was the idea that they could be unhappy because of deeply repressed memories. A kind of medical entertainment industry flourished as people "remembered" abuse by family, friends, and most famously, Satanic cults. These memories became criminal trials, books, and movies. Eventually, the claims became too fantastic, the defendants got the right lawyers and fact checkers, and many of the most famous "repressed memory" stories went down in a hail of justified lawsuits.

Sybil knew she did not have multiple personality disorder, and she knew most of her "memories" were false. She had come to her psychiatrist as an adult, and had known her motivation for making up false memories. Many of the kids who had remembered "repressed memories" had no such background or context. People began wondering whether medical professionals had forced "memories" of abuse into children's minds, and whether those children would ever be able to remember their real life again.

There is no ethical way of studying the memory of children who have been encouraged to form false memories of extreme abuse, but there have been studies done on children who formed more innocuous false memories. The most famous false memory test was done under the direction of Elizabeth Loftus. Her study showed that people, including children, could have detailed and vivid memories of an event that had been made up and implanted in their heads. Most of the false events were innocuous, like seeing Bugs Bunny at Disneyland (impossible, since Bugs is a Warner Brothers character and would never appear at Disneyland). One memory was only slightly darker: participants would remember being lost in a mall until an elderly stranger helped them find their parent. Children between three and six, studies found, were especially susceptible to imagining that a story told to them was their own story.

One study rounded up a group of 22 children who had participated in a Loftus study, two years after the study was over. The researchers found that the children remembered their true memories about 78 percent of the time, but only about 13 percent of the false memories. This isn't as dramatic a drop as it sounds: the first time around, the children remembered only about 22 percent of the false memories.

An overall review of studies done on children with false memories is less hopeful. Sometimes children clung to made-up events. How the memory came about was the key factor in whether or not it stuck around. Children who had spontaneously come up with false memories tended to forget them rather easily. Children in whom false memories had been implanted, who were prompted and guided into specific memories, tended to remember them even more persistently than they remembered real events. The significance, and repetition, of these implanted memories overshadowed real events. So even when kids were told that they had been coached into remembering a false event, it did nothing to dull the memory.

[Via Long-term Survival of Children's False Memories, Are False Memories Permanent, Misinformation Effects.]

Further Reading

What disease did Sybil, the world's most famous multiple-personality patient, actually have?

No matter how good your recall is, you still have false memories

Neuroscientifically Challenged - "Know Your Brain" Series


The blog Neuroscientifically Challenged has an on-going series called "Know Your Brain." So far, there have been entries on the cerebellum, amygdala, nucleus accumbens, the HPA axis, hippocampus, prefrontal cortex, and hypothalamus.

There's nothing fancy, just a basic introduction to major areas of the brain.

Here is the entry on the hypothalamus to give you a taste of how these posts are written.

Know your brain: Hypothalamus

May 11, 2014

Where is it?



The hypothalamus is the small red dot in these images.

The hypothalamus is a small (about the size of an almond) region located directly above the brainstem. It is buried deep within the brain and not visible without dissecting the brain.

What is it and what does it do?

The hypothalamus is a collection of nuclei with a variety of functions. Many of the important roles of the hypothalamus involve what are known as the two H's: Homeostasis and Hormones.

Homeostasis is the maintenance of equilibrium in a system like the human body. Optimal biological function is facilitated by keeping things like body temperature, blood pressure, and caloric intake/expenditure at a fairly constant level. The hypothalamus receives a steady stream of information about these types of factors. When it recognizes an unanticipated imbalance, it enacts a mechanism to rectify that disparity.

The hypothalamus generally restores homeostasis through two mechanisms. First, it has connections to the autonomic nervous system, through which it can send signals to influence things like heart rate, digestion, and perspiration. For example, if the hypothalamus senses that body temperature is too high, it may send a message to sweat glands to cause perspiration, which acts to cool the body down.
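
To make that feedback logic concrete, here is a toy Python sketch -- an analogy of my own, not anything from the original post and certainly not a physiological model. The set point, tolerance, and step sizes are arbitrary illustrative values.

    # Toy negative-feedback loop (an analogy, not a physiological model): compare a
    # sensed value to a set point and issue a corrective signal when it drifts too far.
    SET_POINT = 37.0    # target body temperature, degrees C (illustrative value)
    TOLERANCE = 0.5     # allowed drift before a corrective signal is sent

    def hypothalamus_step(sensed_temperature):
        """Return a corrective 'signal' based on the sensed temperature."""
        if sensed_temperature > SET_POINT + TOLERANCE:
            return "sweat"       # dissipate heat
        if sensed_temperature < SET_POINT - TOLERANCE:
            return "shiver"      # generate heat
        return "no action"

    temperature = 38.2
    for _ in range(5):
        signal = hypothalamus_step(temperature)
        print(f"temp={temperature:.1f} -> {signal}")
        if signal == "sweat":
            temperature -= 0.4
        elif signal == "shiver":
            temperature += 0.4

Run for a few steps, the "sweat" signal pulls the temperature back toward the set point and the loop then goes quiet, which is the essence of homeostatic regulation.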

The second way the hypothalamus can restore homeostasis, and another way the hypothalamus can influence behavior in general, is through the control of hormone release from the pituitary gland. The pituitary gland is a hormone-secreting gland that sits just below the hypothalamus. It consists of two lobes called the anterior and the posterior pituitary. The hypothalamus secretes substances into the bloodstream that are known as releasing hormones. They are so named because they travel to the anterior pituitary and cause it to release hormones that have been synthesized in the pituitary gland. Hormones released by the anterior pituitary due to signals from the hypothalamus (and their general role in parentheses) include growth hormone (growth), follicle-stimulating hormone (sexual development and reproduction), luteinizing hormone (testosterone production and reproduction), adrenocorticotropic hormone (stress/fear response), thyroid-stimulating hormone (metabolism), and prolactin (milk production).

The hypothalamus also synthesizes a couple hormones of its own: oxytocin and vasopressin. These are then sent to the posterior pituitary for release into the bloodstream. Oxytocin can act as a hormone and a neurotransmitter. It has important roles in facilitating childbirth (hence the use of Pitocin to induce labor) and lactation, but also has been the subject of a lot of recent research due to its hypothesized role in compassion and social bonding. Vasopressin's main functions are to control urine output and regulate blood pressure (although it also seems to play a part in social and sexual behavior).

The hypothalamus thus has widespread effects on the body and behavior, which stem from its role in maintaining homeostasis and its stimulation of hormone release. It is often said that the hypothalamus is responsible for the four Fs: fighting, fleeing, feeding, and fornication. Clearly, due to the frequency and significance of these behaviors, the hypothalamus is extremely important in everyday life.

Tuesday, July 08, 2014

A Progressive Agenda for the Supreme Court - Why the Next President Must Be a Democrat


If there is any hope of undoing the recent decisions by the Roberts Court, we must elect a Democratic president. As much as it pains me to say this, we have to do so even if the Democratic candidate is Hillary Clinton (shudder).

There are potentially three Justices who could retire during the next president's term (unless Ginsburg steps down while Obama is still president). The two oldest conservatives are Kennedy and Scalia - the first to retire will likely be Kennedy (Scalia is just too mean to retire - he'll probably live to be 100 or something, just out of spite for the rest of us).

Replacing Kennedy, now considered the swing vote on many cases, with a liberal or progressive justice could swing the balance of power to the point that the "progressive agenda" outlined below might actually be possible.

Fantasy or Forecast? A Progressive Agenda for the Supreme Court

Peter Dreier - E.P. Clapp Distinguished Professor of Politics, Occidental College
Posted: 07/04/2014


"It's always darkest before the dawn" sang Pete Seeger. "And that's what keeps me moving on."

The recent spate of reactionary decisions by the Roberts Supreme Court -- including this week's outrageous Hobby Lobby ruling -- triggers thoughts of a better day, when the right wingers on the court will have retired or died, replaced by thoughtful liberals who will restore some semblance of fairness and democracy to this great country. On this July 4th, let's consider what it would be like if our nation's highest court was actually committed to the notion of "liberty and justice for all."

Doing so requires making a few leaps of faith, but none of them are far-fetched. It all depends on the outcome of the next few election cycles.

If the Democrats retain a majority in the Senate after this November, the feisty, brilliant Ruth Bader Ginsburg, now 81 years old, should retire so Obama can appoint another (younger) liberal member who will have a long tenure on the court. That won't shift the current 5-4 conservative majority, but it will guarantee that Ginsburg won't be replaced by a conservative.

It would be great if one or both of the older conservatives -- Antonin Scalia (now 78) or Anthony Kennedy (78 later this month) -- would retire, too, so Obama could appoint their successors. But they'll probably try to hang on until a Republican president enters the White House. Let's pray (and organize) so that doesn't happen.

Best-case scenario: A Democrat becomes president in 2016, the Democrats keep control of the Senate, and Scalia and/or Kennedy are so enfeebled by then that they have to quit. At that point, a Democratic president can replace one or both with a liberal justice.

It has become a no-no in American politics for candidates for president or Senate to discuss the characteristics they'd like to see in new Supreme Court justices, except in the vaguest, general terms. They are not supposed to have a "litmus test" for justices. Everyone knows this is bogus. Presidents generally appoint justices who agree with their political views -- compromising only enough to get their nominations confirmed by the Senate or to avoid a huge controversy.

Occasionally presidents miscalculate -- or, more accurately, their nominees change their views -- and upset the ideological applecart. The most famous example is President Dwight Eisenhower's appointment of California Gov. Earl Warren as Chief Justice in 1953. Eisenhower thought he was appointing a conservative Republican. Warren turned out to be (or became as a result of changing social and political conditions) a liberal and turned the Warren Court into one of the most liberal in history. Another turncoat was David Souter, who turned out to be more liberal -- or at least centrist -- than George H.W. Bush had anticipated. Among other things, Souter dissented in Bush v Gore, but his side was outvoted 5-4, handing the presidency to GHWB's son. "Poppy" Bush wouldn't make that mistake again. His next, and last, Supreme Court appointment was Clarence Thomas.

So don't expect Hillary Clinton (or any other Democratic candidate for president) to discuss her thoughts about what kind of person she'd appoint to the Supreme Court. She won't want to get boxed in by any dreaded "litmus test," which the Republicans would use against her. But if she (or another Democrat) wins the White House in 2016, and has a Democratic majority in the Senate, liberals and progressives should push her (and the Senate Dems, especially those on the Judiciary Committee) to make appointments that will dramatically change the court's direction. Under that scenario, liberals could have a 5-4, perhaps even a 6-3, majority on the court for the next 20, 30, or even 40 years.

What would that mean in terms of public policy? A liberal majority on the Supreme Court could, and should, address the following issues:
  • Campaign Finance: Overturn Citizens United and McCutcheon rulings in order to allow real campaign finance reform that eliminates our current system of corporate-dominated legalized bribery. As David Gans recently wrote in the New Republic: "The Roberts Court is leading a free speech revolution of its own, but this time for the benefit of corporations and the wealthy." Citizens United (2010) equated "free speech" with money, giving corporations a stranglehold on elections. The McCutcheon (April 2014) ruling eliminated dollar limits for super-rich donors like the Koch brothers. Both have been boondoggles for the super-rich, big business, and the right, undermining democracy and tilting the political playing field in the wrong direction.
  • Workers' Rights: Reverse the Roberts Court's anti-union rulings, including last week's Harris v Quinn decision. This was yet another decision in which, by a 5-4 majority, the court sided with wealthy special interests to weaken worker protections and undermine workers' right to organize. It should come as no surprise that the right-wing National Right to Work Legal Defense Foundation -- funded by the Koch and Walton families and other corporate groups -- was responsible for filing the Harris v. Quinn suit against SEIU. The Court ruled that workers who benefit from a union contract (with higher pay, health benefits, paid vacations, etc.) don't have to pay union dues. They can be "free riders." Here again, the Court equates money with free speech. In this case, workers can exercise their "free speech" to avoid supporting the union, even if their lives are significantly improved by a collective bargaining contract negotiated by the union in their workplace. The Court decided that the "free speech" interests of those who object to paying for representation outweigh the right of the democratically elected majority that formed the union. Unions are the strongest bulwark for strengthening the middle class, challenging widening inequalities, and lifting hardworking Americans out of poverty. The U.S. has the weakest workers' rights laws of any democratic country, which accounts in part for the decline of union membership and big business' ability to violate existing labor laws (such as firing workers who support union organizing efforts in their workplace) without suffering serious consequences. The Roberts Court has piled on, siding at every turn with employers over workers.
  • Same-Sex Marriage: Make same-sex marriage a federal right and not leave it up to the states. As I've written elsewhere, the Roberts Court's June 2013 rulings on same-sex marriage favored states rights over equal rights. In its two decisions (on the Defense of Marriage Act and California's Proposition 8), the Court stopped short of proclaiming same-sex marriage a basic right. It left it to the states to determine whether gay Americans have the same right to marry as their straight counterparts. As a result, same-sex marriage advocates have to mobilize and litigate to overturn bans on same-sex marriage in those states that have them. That could take five, 10, 20, or more years, and some states may resist legalizing same-sex marriage forever. In 1967, in Loving v. Virginia, the Supreme Court knocked down all state anti-miscegenation laws that banned inter-racial marriage. It did not leave it up to the states to decide for themselves. That was a bold move, way ahead of public opinion. The Roberts Court was far more cautious. A liberal Supreme Court should apply the same logic to same-sex marriage as the Warren Court applied in Loving to inter-racial marriage. It is a basic right for all Americans, regardless of where they live.
  • Women's Rights: Overturn Hobby Lobby. This decision, rendered last week, is yet another ruling that treats corporations like "citizens" with rights -- in this case, endowing a corporation with the "right" of "religious freedom." Under this outrageous ruling, corporate owners who object to birth control don't have to provide contraceptives and other forms of birth control to employees if it violates the owners' religious beliefs. This is little different from saying that a segregationist restaurant owner can avoid serving black customers if it violates his belief in white supremacy. The Hobby Lobby ruling favors corporations' so-called "religious" freedoms over women's right to control their bodies. Did anyone notice that, by the accident of history, the five conservatives on the current Supreme Court who voted in favor of Hobby Lobby, each appointed by Republican presidents, all happen to be Catholic men? They are Samuel Alito, Roberts, Scalia, Thomas and Kennedy. The three women justices -- Ginsburg (Jewish), Elena Kagan (Jewish) and Sonia Sotomayor (Latina Catholic) -- plus Stephen Breyer (Jewish) dissented in the Hobby Lobby case. This isn't meant to stereotype all Catholic men. One of the greatest liberals and civil libertarians in the Supreme Court's history -- William Brennan -- was male and Catholic. He was a staunch supporter of abortion rights and joined the pro-choice majority in Roe v Wade. But it is clear that at least one or more of the five justices who supported Hobby Lobby (certainly Scalia) were guided by religious beliefs over constitutional logic. When and if a Democrat gets to appoint the next one, two or three justices, the choices should be based on the nominees' judicial views, not their religion. But if their previous judicial decisions or writings reveal that their religious views (strict Catholicism, Orthodox Judaism, fundamentalist Protestantism, traditional Islam) lead them to reject basic rights for women, gays, people of color, or other groups, they shouldn't be appointed to any federal court, much less the Supreme Court.
  • Voting Rights: Strengthen enforcement of the Voting Rights Act (VRA), thus reversing the Roberts Court's Shelby v Holder (June 2013) ruling that allows voter suppression under the guise of states' rights. The 1965 act, which outlawed literacy tests and other obstacles to voting, was an important tool for civil rights activists to challenge other barriers to black political participation, such as gerrymandering of city council, state legislature, and congressional districts in order to dilute black voting strength. It had huge consequences. In 1970 there were only 1,469 black elected officials in the entire country. By 2000, that number had reached 9,040. Today, the figure is close to 11,000. In 1965, only 6.7 percent of Mississippi's black citizens were registered to vote. But four years later the number had jumped to 66.5 percent. By 2000, Mississippi had 897 black elected officials in local and state offices, plus Congress -- the largest number of any state in the country. Roberts had been trying to weaken the 1965 Voting Rights Act ever since he was a young lawyer in Ronald Reagan's Justice Department. He finally got his way last year when his court, by a 5-4 margin, ruled that Section 5 of the Voting Rights Act is unconstitutional. That's the provision that requires that states with the worst history of voting discrimination get Justice Department approval before they can revise their voting laws. Roberts said that blatant racial discrimination in voting no longer exists, so Section 5 isn't needed. As Cong. John Lewis, a veteran civil rights activist, said about the Supreme Court ruling: "There are more black elected officials in Mississippi today not because attempts to discriminate against voters ceased but because the Voting Rights Act kept those attempts from becoming law." In recent years, Republican politicians and operatives, including Karl Rove, have sought to restrict voting rights to keep people of color, young people, and poor people from voting. The Roberts Court's Shelby ruling gave them permission to declare war on voting rights. As soon as the Court made its ruling, a host of states (mostly but not entirely Southern states) began adopting laws to suppress voting rights -- such as requiring IDs in order to vote and setting the stage to gerrymander political districts to weaken black and Latino voting strength.
  • Education Funding: Mandate sufficient funding for all public K-12 schools as a basic right of all students regardless of the tax base of the surrounding community or the political/spending priorities of the states. The famous unanimous 1954 Brown vs. Board of Education ruling stated that "separate but equal" schools were inherently unequal. The justices were writing about racial segregation and later mandated that states and localities desegregate their schools "with all deliberate speed." Today, America's public schools are segregated by race and income, as Jonathan Kozol reported in his book Savage Inequalities, as UCLA professor Gary Orfield and his colleagues have documented in recent reports, and as many other studies have revealed by examining per-student spending in different school districts. As many scholars and journalists have observed, our public schools are beset with outrageously unequal funding. Students from well-off families generally go to public schools with much higher per-student spending levels than students from less affluent families. The solution is not busing or charter schools but adequate funding for all students and all schools, regardless of the size of a community's tax base. Since the 1970s, an increasing number of state courts have sought to address these disparities by requiring state legislatures to spend more money on education and/or to distribute those funds more equally. Although these rulings have made some difference, huge disparities persist. This is true within metro areas and states, but also true between states. In 2012, New Jersey spent $18,485 per student while Oklahoma spent only $8,285 per student. Differences in the cost of living do not account for these differences; it is not more than twice as expensive to live in New Jersey as in Oklahoma. And within each state, there are huge disparities. In Illinois a few years ago, New Trier Township High School District (in an affluent Chicago suburb) spent $19,927 per student while the Farmington Central Community Unit School District (a rural area in central Illinois) spent only $6,548 per student. Across the country, the accident of geography determines the quality of education that students get. We need a Supreme Court that will rule that a decent K-12 education is a basic right and that the federal government needs to enforce this right by taking over responsibility for funding public education, or by requiring states not only to provide "equal" funding (per student) for every school district but also to provide "equal opportunity" for all students, which would mean spending more money in schools and school districts with a higher percentage of disadvantaged students.
Progressives can surely add to this list of issues that a Supreme Court with a liberal majority should address. Unfortunately, presidential candidates won't directly address these issues or the views of the candidates they would appoint to the Supreme Court when vacancies arise. But as we watch the Roberts Court eviscerate our democracy, and protest its outrageous (usually 5-4) rulings, we should also recognize that part of why we want liberal Democrats in the White House and Congress is to make sure that the third branch of government reflects what's best about our country's values -- fairness, equality, civil liberties, and civil rights.

Peter Dreier teaches Politics and chairs the Urban & Environmental Policy Department at Occidental College. His most recent book is The 100 Greatest Americans of the 20th Century: A Social Justice Hall of Fame (Nation Books, 2012).
Follow Peter Dreier on Twitter: www.twitter.com/peterdreier