Saturday, June 06, 2009

Review - B. Alan Wallace: Mind in the Balance

When Columbia University offered me a review copy of B. Alan Wallace's newest book, Mind in the Balance: Meditation in Science, Buddhism, and Christianity, I was thrilled. I have been a fan of Wallace's more academic works for quite some time.

In the past couple of years I have read Genuine Happiness: Meditation as the Path to Fulfillment (2005), Contemplative Science: Where Buddhism and Neuroscience Converge (2007), and Embracing Mind: The Common Ground of Science and Spirituality (2008). These are very good books, but this new one feels to me more general in its appeal than the earlier ones. It would be nice to see his work reach a non-Buddhist audience, but judging by the lack of promotion on the part of the press, I don't see that happening. In my opinion, it's a missed opportunity on their part - and a loss to the culture at large.

Mind in the Balance was written for his daughter, who has been a Christian for most of her life. She wanted her father to write a book that could help her improve the quality of her inner life and mind, a book that would be useful to everyone interested in a better quality of life, whether they are Christian, Buddhist, or something in-between.

The book begins with four good chapters (brief, considering he has written whole books on the topic) on the science of meditation, its origins, and its benefits. He cites many of the most recent and promising articles on the benefits and uses of meditation in medicine and psychology. For the general reader, this is a great introduction that may lead some into his more academic books on the subject, including the two mentioned above.

The remainder of the book is devoted to twin chapters on meditation techniques and the philosophical/psychological theory behind the practices, bridging a variety of religions, not just Buddhism and Christianity. There are ten of these practice/theory pairs, ranging from simple mindfulness of breath to contemplation on the emptiness of matter and finishing with a chapter on being mindful in our daily lives - the idea of "meditation in action."

I'm sure that Buddhists from different schools might quibble over some of the subjective states Wallace associates with various techniques, but that is a matter for experts. The lay reader should simply keep in mind that Wallace writes from a Tibetan Buddhist tradition, which is only one of several variations on the teachings of the Buddha.

Likewise, I'm sure that some Christians (and those of other faiths) may not recognize their own religion in some of these practices, which is sad. So much of Christianity (especially Protestant, but also Catholic) has been divorced from the contemplative practices of the monks and nuns who spent lifetimes cultivating a direct relationship with their conception of the divine. This book offers a way back to those traditions in a non-denominational practice - and it may contribute to a post-modern Christianity (as advocated by the Trappist monk Father Thomas Keating in the form of Centering Prayer).

Overall, this is a highly recommended book for meditation novices and experts alike. The ten practices and theories presented, along with the scientific research on meditation, form a compelling argument for living a life informed by meditation. And as a side note, I suspect that this book can contribute to a better understanding of how Buddhism and Christianity will co-mingle in the West in the coming decades.


John Perry - Thinking and Talking About the Self

John Perry talks about the differences between self-awareness in the proximate self ("I," the subjective self) and the distal self ("me," the objective self).
John Perry investigates two quite different ways of thinking of ourselves: one, expressed with the first person, is a special way of considering ourselves; the other, for which we use our name, allows us to think of ourselves more or less as others do. He explores these two different ways of thinking, and talking, about ourselves, and draws some conclusions about the structure of thought and language. Series: UC Berkeley Graduate Council Lectures [6/2009]





P2P Wiki - Importance of neotraditional approaches in the reconstructive transmodern era

Excellent article on the necessity of the emergence of Buddhist Economics in a post-Capitalist world. This is from Michel Bauwens - we desperately need this kind of forward thinking to avoid illusory change in the economic structures that will simply lead to more of the same.

Importance of neotraditional approaches in the reconstructive transmodern era

From P2P Foundation

Article: The importance of neotraditional approaches in the reconstructive transmodern era. Michel Bauwens.

Written for a delayed Buddhist Economics conference planned at Ubon Rajathanee University, Warin Chamrab, Ubon Rathchathani, Thailand, for December 5-7, 2008 and rescheduled for April 9-11, 2009.

Text

I see the emergence of Buddhist Economics as part of a broader canvas of initiatives, thought streams and social practices that could be broadly termed ‘neotraditional’. My aim in this essay is to offer a hypothesis of why their emergence is important, and what role they could play in movements aimed at reforming and transforming the current political economy.

The Main Argument: the common immateriality of traditional and post-industrial eras

It is not difficult to argue that modern industrial societies are dominated by a materialist paradigm. What exists for modern consciousness is material physical reality, what matters in the economy is the production of material products, and the pursuit of happiness is in very strong ways related to the accumulation of goods for consumption. For the elite, its powers derive essentially from the accumulation of capital assets, whether these are industrial or financial. Infinite material growth is really the core mantra of capitalism, even if it happens through the medium of money.

But this was not the case in traditional, agriculture-based societies. In such societies, people of course do have to eat and to produce, and the possession of land and military force is crucial to obtain tribute from the agricultural workers, but it cannot be said that the aim is accumulation of assets. Feudal-type societies are based on personal relations consisting of mutual obligations. These are of course very unequal in character, but are nevertheless far removed from the impersonal and obligation-less property forms that came with capitalism, where there is little impediment to goods and capital moving freely to whoever buys them.

In the more traditional societies that we have in mind, both the elite and the mass body of producers are united by a common immaterial quest for salvation, and it is the institution in charge of organizing that quest, like the Church in the western Middle Ages or the Sangha in South-East Asia, that is the determining organization for the social reproduction of the system. Tribute flows up from the farming population to the owning class, but the owning class is engaged in a two-fold pursuit: showing its status through festivities, in which part of the surplus is burned up; and gifting to the religious institutions. It is only in this way that salvation/enlightenment, i.e. spiritual value or merit in all its forms, can be obtained. The more you give, the higher your spiritual status. Social status without spiritual status is frowned upon in those types of societies. This is why religious institutions like the Church or the Sangha end up with so much land and property themselves, as the gifting competition is relentless. At the same time, these institutions serve as the welfare and social security mechanisms of their day, ensuring that a part of that flow goes back to the poor and can be used in times of social emergencies.

It is still a little harder to argue in Asia than in the West, but the current era, despite the rapid industrialization and ‘materialisation’ of East Asia, is undergoing a fundamental shift to immateriality.

Material goods still need to be made, and Asia is furiously industrializing, but nevertheless, for the world system, important shifts have already happened, which are most readily visible in the West.

Here are just a few of the facts and arguments that illustrate my point about a shift towards, once again, an immaterial focus in our societies.

The cosmopolitan elite of capital long ago transformed itself into an elite of financial capital. In this form of activity, financial assets are constantly moved to wherever returns are highest, and this makes industrial activity a secondary concern. If we then look at the financial value of corporations, only a fraction of it is determined by the material assets of such corporations. The rest of the value, usually called goodwill, is in fact determined by the various immaterial assets of the corporation: its expertise and collective intelligence, its brand capital, the trust in the present and the future that it can generate.

The most prized material goods, such as, say, Nike shoes, show a similar quality: only 5% of their sales value is said to be determined by physical production costs; all the rest is the value imparted by the brand (both the cost of creating it, and the surplus value created by the consumers themselves).

The shift towards an immaterial focus can also be shown sociologically, for example through the work of Paul Ray on cultural creatives, and of Ronald Inglehart on the profound shift to postmaterial values and aspirations.

For populations who have lived for more than one generation in broad material security, the value system shifts again to the pursuit of knowledge, and of cultural, intellectual and spiritual experience. Not all of them, not all the time, but more and more, and especially so for the cultural elite of ‘cultural creatives’, or what Richard Florida has called the Creative Class, which is also responsible for key value creation in cognitive capitalism.

One more economic argument can be mentioned in the context of cognitive capitalism. In this model of our economy, the currently dominant model as far as value creation is concerned, the key surplus value is realized through the protection of intellectual property. While Asia is still (mostly) engaged in producing cheap industrial goods (though this is changing fast), the dominant Western companies can sell goods at anywhere from 100 to 1,000 times their production value, through state- and WTO-enforced intellectual rents. It is clearly the immaterial value of such assets that generates the economic streams, even though it requires creating fictitious scarcities through the legal apparatus.

However, it must be said, and we will develop this issue later, that this model is being undermined by the emergence of distributed infrastructures for the production, distribution and consumption of immaterial and cultural goods, which makes such fictitious scarcity untenable in the long run. Immaterial value creation is indeed already leaking out of the market system.

Read the whole article.

Discover - The Biocentric Universe Theory: Life Creates Time, Space, and the Cosmos Itself

Not sure I can buy into a biocentric universe (in fact, I'm pretty sure it's an intensely egocentric perspective), but this is an interesting article. Such a theory may explain our unique experience of the Kosmos, but I doubt that it will unlock any secrets of the Kosmos itself.

The Biocentric Universe Theory: Life Creates Time, Space, and the Cosmos Itself

Stem-cell guru Robert Lanza presents a radical new view of the universe and everything in it.

by Robert Lanza and Bob Berman

From the May 2009 issue, published online May 1, 2009

Image: NASA Hubble Space Telescope Collection (NASA/ESA/A. Schaller for STScI)

Adapted from Biocentrism: How Life and Consciousness Are the Keys to Understanding the True Nature of the Universe, by Robert Lanza with Bob Berman, published by BenBella Books in May 2009.

The farther we peer into space, the more we realize that the nature of the universe cannot be understood fully by inspecting spiral galaxies or watching distant supernovas. It lies deeper. It involves our very selves.

This insight snapped into focus one day while one of us (Lanza) was walking through the woods. Looking up, he saw a huge golden orb web spider tethered to the overhead boughs. There the creature sat on a single thread, reaching out across its web to detect the vibrations of a trapped insect struggling to escape. The spider surveyed its universe, but everything beyond that gossamer pinwheel was incomprehensible. The human observer seemed as far-off to the spider as telescopic objects seem to us. Yet there was something kindred: We humans, too, lie at the heart of a great web of space and time whose threads are connected according to laws that dwell in our minds.

Is the web possible without the spider? Are space and time physical objects that would continue to exist even if living creatures were removed from the scene?

Figuring out the nature of the real world has obsessed scientists and philosophers for millennia. Three hundred years ago, the Irish empiricist George Berkeley contributed a particularly prescient observation: The only thing we can perceive are our perceptions. In other words, consciousness is the matrix upon which the cosmos is apprehended. Color, sound, temperature, and the like exist only as perceptions in our head, not as absolute essences. In the broadest sense, we cannot be sure of an outside universe at all.

For centuries, scientists regarded Berkeley’s argument as a philosophical sideshow and continued to build physical models based on the assumption of a separate universe “out there” into which we have each individually arrived. These models presume the existence of one essential reality that prevails with us or without us. Yet since the 1920s, quantum physics experiments have routinely shown the opposite: Results do depend on whether anyone is observing. This is perhaps most vividly illustrated by the famous two-slit experiment. When someone watches a subatomic particle or a bit of light pass through the slits, the particle behaves like a bullet, passing through one hole or the other. But if no one observes the particle, it exhibits the behavior of a wave that can inhabit all possibilities—including somehow passing through both holes at the same time.

Some of the greatest physicists have described these results as so confounding they are impossible to comprehend fully, beyond the reach of metaphor, visualization, and language itself. But there is another interpretation that makes them sensible. Instead of assuming a reality that predates life and even creates it, we propose a biocentric picture of reality. From this point of view, life—particularly consciousness—creates the universe, and the universe could not exist without us.

MESSING WITH THE LIGHT
Quantum mechanics is the physicist’s most accurate model for describing the world of the atom. But it also makes some of the most persuasive arguments that conscious perception is integral to the workings of the universe. Quantum theory tells us that an unobserved small object (for instance, an electron or a photon—a particle of light) exists only in a blurry, unpredictable state, with no well-defined location or motion until the moment it is observed. This is Werner Heisenberg’s famous uncertainty principle. Physicists describe the phantom, not-yet-manifest condition as a wave function, a mathematical expression used to find the probability that a particle will appear in any given place. When a property of an electron suddenly switches from possibility to reality, some physicists say its wave function has collapsed.
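(A gloss the article itself doesn't include: the uncertainty principle is usually written as the inequality below, where Delta x is the spread in a particle's position, Delta p the spread in its momentum, and hbar the reduced Planck constant. Sharpening one necessarily blurs the other.)

    \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}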

What accomplishes this collapse? Messing with it. Hitting it with a bit of light in order to take its picture. Just looking at it does the job. Experiments suggest that mere knowledge in the experimenter’s mind is sufficient to collapse a wave function and convert possibility to reality. When particles are created as a pair—for instance, two electrons in a single atom that move or spin together—physicists call them entangled. Due to their intimate connection, entangled particles share a wave function. When we measure one particle and thus collapse its wave function, the other particle’s wave function instantaneously collapses too. If one photon is observed to have a vertical polarization (its waves all moving in one plane), the act of observation causes the other to instantly go from being an indefinite probability wave to an actual photon with the opposite, horizontal polarity—even if the two photons have since moved far from each other.
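(Another gloss of my own, not from the article: one standard way to write such a polarization-entangled photon pair is the state below. Measuring photon 1 as horizontal (H) instantly fixes photon 2 as vertical (V), and vice versa, however far apart the two have traveled.)

    |\Psi\rangle \;=\; \frac{1}{\sqrt{2}} \bigl( \, |H\rangle_1 |V\rangle_2 \;-\; |V\rangle_1 |H\rangle_2 \, \bigr)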

In 1997 University of Geneva physicist Nicolas Gisin sent two entangled photons zooming along optical fibers until they were seven miles apart. One photon then hit a two-way mirror where it had a choice: either bounce off or go through. Detectors recorded what it randomly did. But whatever action it took, its entangled twin always performed the complementary action. The communication between the two happened at least 10,000 times faster than the speed of light. It seems that quantum news travels instantaneously, limited by no external constraints—not even the speed of light. Since then, other researchers have duplicated and refined Gisin’s work. Today no one questions the immediate nature of this connectedness between bits of light or matter, or even entire clusters of atoms.

Before these experiments most physicists believed in an objective, independent universe. They still clung to the assumption that physical states exist in some absolute sense before they are measured.

All of this is now gone for keeps.

WRESTLING WITH GOLDILOCKS
The strangeness of quantum reality is far from the only argument against the old model of reality. There is also the matter of the fine-tuning of the cosmos. Many fundamental traits, forces, and physical constants—like the charge of the electron or the strength of gravity—make it appear as if everything about the physical state of the universe were tailor-made for life. Some researchers call this revelation the Goldilocks principle, because the cosmos is not “too this” or “too that” but rather “just right” for life.

At the moment there are only four explanations for this mystery. The first two give us little to work with from a scientific perspective. One is simply to argue for incredible coincidence. Another is to say, “God did it,” which explains nothing even if it is true.

The third explanation invokes a concept called the anthropic principle, first articulated by Cambridge astrophysicist Brandon Carter in 1973. This principle holds that we must find the right conditions for life in our universe, because if such life did not exist, we would not be here to find those conditions. Some cosmologists have tried to wed the anthropic principle with the recent theories that suggest our universe is just one of a vast multitude of universes, each with its own physical laws. Through sheer numbers, then, it would not be surprising that one of these universes would have the right qualities for life. But so far there is no direct evidence whatsoever for other universes.

The final option is biocentrism, which holds that the universe is created by life and not the other way around. This is an explanation for and extension of the participatory anthropic principle described by the physicist John Wheeler, a disciple of Einstein’s who coined the terms wormhole and black hole.

SEEKING SPACE AND TIME
Even the most fundamental elements of physical reality, space and time, strongly support a biocentric basis for the cosmos.

According to biocentrism, time does not exist independently of the life that notices it. The reality of time has long been questioned by an odd alliance of philosophers and physicists. The former argue that the past exists only as ideas in the mind, which themselves are neuroelectrical events occurring strictly in the present moment. Physicists, for their part, note that all of their working models, from Isaac Newton’s laws through quantum mechanics, do not actually describe the nature of time. The real point is that no actual entity of time is needed, nor does it play a role in any of their equations. When they speak of time, they inevitably describe it in terms of change. But change is not the same thing as time.

To measure anything’s position precisely, at any given instant, is to lock in on one static frame of its motion, as in the frame of a film. Conversely, as soon as you observe a movement, you cannot isolate a frame, because motion is the summation of many frames. Sharpness in one parameter induces blurriness in the other. Imagine that you are watching a film of an archery tournament. An archer shoots and the arrow flies. The camera follows the arrow’s trajectory from the archer’s bow toward the target. Suddenly the projector stops on a single frame of a stilled arrow. You stare at the image of an arrow in midflight. The pause in the film enables you to know the position of the arrow with great accuracy, but you have lost all information about its momentum. In that frame it is going nowhere; its path and velocity are no longer known. Such fuzziness brings us back to Heisenberg’s uncertainty principle, which describes how measuring the location of a subatomic particle inherently blurs its momentum and vice versa.

All of this makes perfect sense from a biocentric perspective. Everything we perceive is actively and repeatedly being reconstructed inside our heads in an organized whirl of information. Time in this sense can be defined as the summation of spatial states occurring inside the mind. So what is real? If the next mental image is different from the last, then it is different, period. We can award that change with the word time, but that does not mean there is an actual invisible matrix in which changes occur. That is just our own way of making sense of things. We watch our loved ones age and die and assume that an external entity called time is responsible for the crime.

There is a peculiar intangibility to space, as well. We cannot pick it up and bring it to the laboratory. Like time, space is neither physical nor fundamentally real in our view. Rather, it is a mode of interpretation and understanding. It is part of an animal’s mental software that molds sensations into multidimensional objects.

Most of us still think like Newton, regarding space as sort of a vast container that has no walls. But our notion of space is false. Shall we count the ways? 1. Distances between objects mutate depending on conditions like gravity and velocity, as described by Einstein’s relativity, so that there is no absolute distance between anything and anything else. 2. Empty space, as described by quantum mechanics, is in fact not empty but full of potential particles and fields. 3. Quantum theory even casts doubt on the notion that distant objects are truly separated, since entangled particles can act in unison even if separated by the width of a galaxy.

UNLOCKING THE CAGE
In daily life, space and time are harmless illusions. A problem arises only because, by treating these as fundamental and independent things, science picks a completely wrong starting point for investigations into the nature of reality. Most researchers still believe they can build from one side of nature, the physical, without the other side, the living. By inclination and training these scientists are obsessed with mathematical descriptions of the world. If only, after leaving work, they would look out with equal seriousness over a pond and watch the schools of minnows rise to the surface. The fish, the ducks, and the cormorants, paddling out beyond the pads and the cattails, are all part of the greater answer.

Recent quantum studies help illustrate what a new biocentric science would look like. Just months ago, Nicolas Gisin announced a new twist on his entanglement experiment; in this case, he thinks the results could be visible to the naked eye. At the University of Vienna, Anton Zeilinger’s work with huge molecules called buckyballs pushes quantum reality closer to the macroscopic world. In an exciting extension of this work—proposed by Roger Penrose, the renowned Oxford physicist—not just light but a small mirror that reflects it becomes part of an entangled quantum system, one that is billions of times larger than a buckyball. If the proposed experiment ends up confirming Penrose’s idea, it would also confirm that quantum effects apply to human-scale objects.

Biocentrism should unlock the cages in which Western science has unwittingly confined itself. Allowing the observer into the equation should open new approaches to understanding cognition, from unraveling the nature of consciousness to developing thinking machines that experience the world the same way we do. Biocentrism should also provide stronger bases for solving problems associated with quantum physics and the Big Bang. Accepting space and time as forms of animal sense perception (that is, as biological), rather than as external physical objects, offers a new way of understanding everything from the microworld (for instance, the reason for strange results in the two-slit experiment) to the forces, constants, and laws that shape the universe. At a minimum, it should help halt such dead-end efforts as string theory.

Above all, biocentrism offers a more promising way to bring together all of physics, as scientists have been trying to do since Einstein’s unsuccessful unified field theories of eight decades ago. Until we recognize the essential role of biology, our attempts to truly unify the universe will remain a train to nowhere.

~ Adapted from Biocentrism: How Life and Consciousness Are the Keys to Understanding the True Nature of the Universe, by Robert Lanza with Bob Berman, published by BenBella Books in May 2009.

Brain Science Podcast #58: Interview with author Alva Noë

Awesome. It's always great to hear a neuroscientist admit that mind is greater than just the brain. It's painfully obvious to anyone who meditates, but neuroscientists (with a few notable exceptions) appear not to be meditators.

Brain Science Podcast #58: Interview with author Alva Noë


Episode 58 of the Brain Science Podcast is an interview with philosopher Alva Noë, whose book Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness argues persuasively that our Minds are MORE than just our brains. He says that “the brain is necessary but not sufficient” to create the mind. Listen to Episode 58 [Play]

Show Notes and Links:

Important scientists mentioned in the interview:

  • Paul Bach-y-Rita: pioneering studies in sensory substitution using tactile stimuli to substitute for vision
  • Held and Hein: experiments with cats showing that development of normal vision requires motor-sensory feedback

References:

  • Brain Mechanisms in Sensory Substitution by Paul Bach-y-Rita, 1972.
  • Bach-y-Rita, P “Tactile-Vision Substitution: past and future”, International Journal of Neuroscience 19, nos. 1-4, 29-36, 1983.
  • Held, R. and Hein, A. “Movement-produced stimulation in the development of visually guided behavior.” Journal of Comparative and Physiological Psychology. 56(5), 872-876, 1963.
  • Held, R. “Plasticity in sensory-motor systems.” Scientific American. 213(5) 84-91, 1965.

Listen to Episode 58 [Play]

Episode Transcript (Download PDF)


Friday, June 05, 2009

Seed - The New Interface of Governance

Interesting article - I'd like to think that politics can actually make use of what psychology has to offer in the realm of decision making and choices - but I doubt it.

The New Interface of Governance

Frontier / by Nancy Scola / June 2, 2009

If we can just tweak the way we make choices, we can make smarter ones. A look at Obama’s plans to put the science of human nature to work.


Illustration: Mike Pick, adapted from photograph by Sir Mervs

For those of us familiar with the strange land that is Washington, DC, it’s tempting to snicker a bit at the sudden star turn of the field of behavioral economics in our nation’s capital. Books like Cass Sunstein and Richard Thaler’s Nudge, Dan Ariely’s Predictably Irrational, and George Akerlof and Robert Shiller’s Animal Spirits are being passed around like samizdat. Human beings, the thinking goes, bear little more than a passing resemblance to the “economic man” of classic econ textbooks. We’re messy creatures, not altogether skilled at maximizing value, or efficiency, or all those other things our self-interest is supposed to drive us to attain.

“People make bad choices,” says Swarthmore psychologist Barry Schwartz, author of The Paradox of Choice. “Or they make no choices at all unless you hold a gun to their heads.” Look no further than the mortgage mess. For all the malevolence on the part of mortgage lenders, many of us simply took on loans we couldn’t possibly carry long term. Moreover, we regularly do things like leave money on the table, letting our confusion over retirement plans scare us away from employers’ matching funds. The dream of Nudge-ers, as the shorthand goes, is that if the Obama Administration can just tweak the way that we make choices, even just slightly, we might make smarter ones.

Sure, we scoff. That stuff might win you Nobel Prizes (Note Princeton’s Daniel Kahneman, 2002). But, we think, Washington and the convoluted political process therein is where grand intellectual visions go to die.

There’s a chance, though, that the web-savvy Obama Administration might have an opening through which to put its behaviorist vision into practice. Behavioral economists use the term “choice architecture” to frame how decision makers can gently scoot people towards better choices. Internet experts talk about “information architecture” or “interaction design.” But they share much in common—particularly, the understanding that today’s information-rich world is confusing, and that attention dedicated to crafting the environment in which people make choices gives us, as Schwartz puts it, “a fighting chance of knowing what we’re doing.”

“When you’re doing interaction design,” explains Christian Crumlish, curator of Yahoo’s Pattern Design Library, “you often start by studying people’s behavior in the wild, because you want to map and facilitate what they already do.” Take Amazon.com’s “Customers Who Bought This Item Also Bought…” feature. Search for Herbert Marcuse and Amazon suggests that perhaps you might also be interested in something by Hannah Arendt. The site’s architecture is mimicking the experience you might get by walking through your neighborhood bookshop’s philosophy aisle.

Though other times, says Crumlish, the intent behind interaction design is to guide new behaviors. If you’re launching a new social network, for example, your goal is to shape an environment that encourages users to share as much as possible. “There are,” says Crumlish, “all sorts of tricks to do that.”

Facebook, for example, sets defaults that encourage their users to reveal as much as possible about their lives—details that other humans find irresistible. (Some moves, like Facebook’s News Feed, at first violated users’ sense of privacy. But tellingly, it was users’ comfort level that adjusted, not company policy.) What’s striking is how well those web design tricks find their match in the behavioral economics literature. Sunstein and Thaler, for instance, prescribe defaults to solve the problem of uneconomic humans who simply fail to pick a 401(k) plan: make enrollment in a plan the default option. Most people will stick with the status quo. Others will work up enough energy to change it. All will be better off than if they were enrolled in no plan at all.
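To make the default-option logic concrete, here is a tiny Python sketch with made-up numbers (mine, not Sunstein and Thaler's): if, say, 70 percent of employees simply keep whatever option they are handed, flipping the default flips most of the outcome.

    # Toy sketch of choice architecture via defaults.
    # The parameters (70% keep the default; half of active choosers want to enroll)
    # are illustrative assumptions, not figures from the article or from Nudge.
    def enrollment_rate(default_enrolled, stickiness=0.70, prefer_enrolling=0.50):
        """Fraction of employees who end up enrolled in the retirement plan."""
        passive = stickiness * (1.0 if default_enrolled else 0.0)  # keep whatever the default is
        active = (1 - stickiness) * prefer_enrolling               # actively choose to enroll
        return passive + active

    print("opt-in default :", enrollment_rate(default_enrolled=False))  # 0.15
    print("opt-out default:", enrollment_rate(default_enrolled=True))   # 0.85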

If you’re US Treasury Secretary Tim Geithner, and you really want Americans to renegotiate the terms of their mortgage—a major feature of the Obama Administration’s plan to ease pain for homeowners—the web can be an ally. The Treasury Department boiled the complex economic plan down into the bare bones site MakingHomeAffordable.gov. The interface is built like a cattle chute, shunting mortgage holders into the program most appropriate for them. It’s perhaps not the most nuanced of tools, but it’s better than letting Americans flounder under stifling mortgages. There’s a corollary from Obama himself: As a candidate, Obama distributed a drop-dead simple web widget that calculated an individual’s tax burdens under his economic plan. “Too crude!” his critics cried. The campaign said nothing, while millions of Americans got the idea that—contrary to what McCain kept arguing—what they owed the IRS might just be a little lighter under President Obama.

Stanford’s Persuasive Technology Lab makes academic study of these computer tricks and of the technological engineering that shapes human behavior. One example they cite is the popular sale site, Woot.com. Every day a new item is featured on the homepage, like a digital watch that sets itself according to atomic time at a price of $19.99—good for one day only. Mere mortals are powerless to resist the call of a limited-time deal. Studies show, write Sunstein and Thaler, “losing something makes you twice as miserable as gaining the same thing makes you happy.” And so, we buy. Of course, bricks-and-mortar retail stores take advantage of the one-day sale, but the web is different. Running to Best Buy for the latest deal raises questions in our minds like, “What if the parking lot is so crowded that I can’t get a space?” With Woot.com, it’s one- or two-click satisfaction. Even more powerful, though, is the web’s built-in transparency. Commenters on that Global Atomic Watch one-day sale, for example, helpfully point out that the cheapest online price for this timepiece is $39.95. Sold.
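That "twice as miserable" line has a standard formalization in Kahneman and Tversky's prospect theory, which the article doesn't spell out: the value function people apply to outcomes is steeper for losses than for gains, with a loss-aversion coefficient lambda of roughly 2 (about 2.25 in Tversky and Kahneman's 1992 estimates).

    v(x) =
      \begin{cases}
        x^{\alpha}               & x \ge 0 \text{ (gains)} \\
        -\lambda \, (-x)^{\beta} & x < 0 \text{ (losses)}
      \end{cases}
    \qquad \lambda \approx 2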

The transparent nature of the digital world has potentially powerful public policy implications. Obama, for example, is pushing in Congress for greater credit card transparency, and Sunstein and Thaler see a day when our own personal MasterCard and Visa records get uploaded to a site that spits out a determination of whether our APRs, payment terms, and frequent-flier miles are a good match for our individual economic needs. The hope is that that direct, individualized feedback can prompt people to use their resources more wisely; that’s also the thinking behind Google’s PowerMeter project, which aims to display your home energy usage rate right on your Google home page. Another example: The state of California runs an online greenhouse gas registry that makes public just how much CO2 local businesses emit. The Environmental Protection Agency is planning on launching a nationwide version of the program. The agency’s proposal, while still in its rough-draft stages, doesn’t yet spell out the registry’s online component.

To understand how the online component of the EPA’s greenhouse gas registry will evolve, it’s necessary to take a brief tour through the Federal regulatory process and just how President Obama plans to overhaul it. Already in this young administration, using the web to add a dose of “public” to public policy has become standard operating procedure. Witness Recovery.gov, HealthReform.gov, FinancialStability.gov, to name just a few of the executive branch sites that have blossomed in Obama’s Washington. There are signs that the White House is planning to take this approach further, imprinting the way it does business on the rest of the sprawling executive branch.

There exists in the Federal Government a little-known office called the Office of Information and Regulatory Affairs. OIRA’s job is to make sure that, as Federal agencies regulate, their actions carry, as Congress has phrased it, “the President’s voice.” In a barely-noticed directive issued just after taking office, President Obama called for a rethinking of OIRA and the regulatory process, in part to “clarify the role of behavioral science in formulating regulatory policies.” For the position of OIRA’s director, the president appointed Nudge’s Cass Sunstein himself—a strong sign that the Obama White House is eager to examine how “choice architecture” and gentle nudging could help Federal agencies and departments tackle their regulatory challenges. When the EPA’s greenhouse gas registry eventually rolls out, there’s a good chance that it will go beyond a simple website to be a carefully crafted framework that uses what we know about human nature to rein in greenhouse gases.

There’s no doubt a tendency to recoil a bit at the idea of the Federal Government shaping behavior through a pre-checked checkbox and a tempting user interface. It’s not pure paranoia. Says Yahoo’s Crumlish, “People coming to use an interface have an interest, but the ‘house’ has an interest as well.” Is the fact that “house” also happens to be the executive branch of the US Federal Government enough to provoke fears of Big Brother, and thus resistance to whatever the White House’s latest web project might be?
Not necessarily. “Get over the idea that you’re not pushing people one way or another,” Schwartz says he regularly tells decision makers.

If the general public understood that choice shaping is ubiquitous, Schwartz believes, they might get over their initial wariness at being nudged. Behaviorists like to use the example of the school cafeteria. Nudge-ers might recommend that fruit choices be placed before cookies and cakes on the dessert table. It’s not, they argue, like there’s a pre-ordained natural order according to which food choices should be placed before children. Throwing all the food up in the air to see where it lands isn’t a very sensible approach, so decisions have to be made. If everyone who designs anything—whether it’s a lunchroom or a new government home page—is architecting choice, the argument goes, that power might as well be used to advance rational, efficient, economic public policy.

Still, the behaviorists in President Obama’s inner circle likely anticipate a good amount of skepticism from the public. One recently launched executive branch project might be read as an attempt to allay some of those fears. Data.gov exposes some of the dry details of Federal Government operations and the information it regularly collects. By making it easier for the public to find, download, and make use of government data sets, Data.gov aims to “make government more transparent,” and create “an unprecedented level of openness.”

Of course, it’s a fair question whether one-click access to US Geological Survey spreadsheets showing the “locations and characteristics of world copper smelters” really amounts to pulling back the curtain on Big Brother. There’s a risk of going down a rabbit hole, trying to make sense of whether government transparency is a counterbalance to being nudged—or if it is itself “choice architecture.” In the end, whether the new politics of choice succeeds in bettering our lives may depend on letting go of the idea that we always have to be fully in-control of our choices. Perhaps the coming age of smarter, more efficient public policy has to start with personal admissions that as flesh-and-blood human beings, we’re not always smart, and we’re very often inefficient.

We live in a world awash with knowledge. “The challenge we have now is to shape how we navigate that information in meaningful ways,” says Swarthmore’s Schwartz. “The people who truly figure that out,” he predicts, “are going to be the ones to run the world.” Perhaps they already are.


Discover - Susskind Lectures on General Relativity

Very cool - I love physics, even when it makes me feel dumb. This comes from Discover.

Susskind Lectures on General Relativity

by Sean in Academia, Science
June 2nd, 2009 9:23 AM

Via Dmitry Podolsky, a series of YouTube videos from Stanford encompassing an entire course by Lenny Susskind on general relativity. I didn’t look closely enough to figure out exactly what level the lectures are pitched at, but it looks like a fairly standard advanced-undergrad or beginning-grad introduction to the subject. (For which I could recommend an excellent textbook, if you’re interested.) This is the first lecture; there are more.

It’s fantastic that Stanford is giving this away. I don’t worry that it will replace the conventional university. The right distinction is not “people who would physically go to the lectures” vs. “people who will just watch the videos”; it’s between “people who can watch the videos” and “people who have no access to lectures like this.” And Susskind is a great lecturer.


Mel Schwartz - Coming into Balance

Nice article.

Coming into Balance


I recently broke my foot, a fracture that occurred as I missed a step on my front porch. The break occurred on the outside part of my foot - the fifth metatarsal. My doctor provided some good news in that I wouldn't need a cast, and I proceeded to adjust to my broken foot. Or so I thought. In deference to the pain on the outer perimeter of my foot I shifted my weight toward my other side, compensating for the damage.

By the following week I had developed a new and more painful problem. I had stressed the unbroken part of my foot by placing an inordinate amount of pressure on it. I actually experienced more acute pain in that area than in the break itself. A month later the broken bone had essentially healed--but the damage I caused to the inner part of my foot still lingers. This is an issue of compensation. And nowhere does this tendency provoke more havoc than in our emotional and psychological states.

At different times in life—and most particularly in childhood—we develop coping mechanisms to adjust to the challenges and travails that we encounter. Coping mechanisms are the adjustments that we make to our personalities, typically in our childhood. We're not usually aware that we're developing them as they assimilate into our being in very subtle ways. We craft them so that we might deal with the challenges, wounds, rejections or other stressors that life brings us. Coping mechanisms are our way of defending against challenges. Ordinarily, these alterations to our natural state of being are adaptations to the assaults on our emotional and psychological being.

An abusive or unloving parent may cause us to become indifferent to the hurt so that we can survive the pain. So we fashion a personality to protect us from being vulnerable. And in so doing we preclude having more open and intimate relationships. A chaotic or turbulent home environment may induce us to fashion the mask of being a people pleaser, as we try to placate everyone so that peace may reign. We might also seek the security of predictability to compensate for the uncertainty of childhood. Over time, becoming rooted to the need for that predictability, we dull the growth and creativity that only comes from embracing uncertainty.

We might be simply compensating for not feeling good enough, popular enough or loved enough. In most cases the temporary defensive formation can be a helpful mechanism. It assists us in getting through a difficult transition. Over time, however, the coping mechanism becomes a fixed and habitual feature of our persona, which limits our growth.

These adaptive techniques are reasonably purposeful when we first adopt them. The problem is that most of us struggle to shed these previously adaptive parts of our personality, and over time they become hardened. In other words, they burden us and they block our greater emergence. What was once a coping mechanism becomes a suit of armor—and we clank through life wearing it.
Read the rest of the article to see what Schwartz is really talking about here.


Elisha Goldstein - Can Mindful Eating Change Your Life?

Nice post - and good advice.

Mindfulness and Psychotherapy

Can Mindful Eating Change Your Life?

By Elisha Goldstein, Ph.D.
June 3, 2009

Sour, sweet, bitter, pungent - all must be tasted.
Chinese proverb

Whether you are a food lover or someone who wishes they could just take a food pill and get on with their day, food is an inevitable part of our lives, and we can learn to relate to it in a way that supports our mental and physical health. More and more people are beginning to learn of a new way to relate to food. Surprise, surprise, I'm talking about Mindful Eating. Here's how to engage in it. While there is a lot of fervor over the benefits of mindful eating, my biggest suggestion is always to trust your own experience.

Here’s how to do it (This is an excerpt from the upcoming Mindfulness-Based Stress Reduction Workbook, New Harbinger Publications, February 2010 by Bob Stahl, Ph.D. & Elisha Goldstein, Ph.D.):

When practicing mindful eating you can choose to intentionally be aware of the food you are eating during any meal or snack. Begin each meal by carefully noticing your food choices before you eat them. Notice the colors of the food, the shapes, and the fragrance.

You can also reflect for a moment on the number of people who may have been involved in bringing the food to your table; the farmers, truckers, grocery workers, and others who’ve made it possible. In this way, you deepen your appreciation for the interconnectedness we all truly share. Below are five mindful reflections inspired by Thich Nhat Hanh that I’ve found to be meaningful and supportive when sitting down to eat.

  • May we receive this food as a gift from the earth, the sky and all the living beings and all their hard work that made it possible for me to nourish this body and mind.
  • May we eat with mindfulness and gratitude so as to be worthy to receive it.
  • May we recognize and transform our unskillful ways, especially our greed, and learn to eat with moderation.
  • May we keep our compassion alive by eating in such a way that we reduce the suffering of living beings, preserve our planet and reverse the process of global warming.
  • May we accept this food so that we may nurture our strength to be of service to others.

In practicing mindful eating, we bring the first bite to our lips, open them, and take the food into our mouths. Pay careful attention to what happens next. How does it feel in your mouth? Notice whether there are thoughts, judgments, or stories arising. Try to keep the focus on the direct sensation unfolding as you begin to chew. Notice the taste. Is it sweet, sour, earthy, bitter, or something else? Is the texture smooth, grainy, chewy? Does the taste change as you continue chewing? Notice how the mouthful disappears, how swallowing happens. Just acknowledge this as it occurs and let it be.

The Center for Mindful Eating is also doing a lot of professional work to help people change their relationship to food, which is often entangled with issues such as stress, anxiety, depression, and addiction.

Try this practice with your next snack or meal. Share with us below what you noticed or what thoughts came up. Your interaction below provides a living wisdom for us all to benefit from.


Science Daily - High Population Density Triggers Cultural Explosions

This is not surprising. Culture and collective humanity undoubtedly had a huge impact in human evolution. Collective intelligence is always more powerful than individual intelligence.

High Population Density Triggers Cultural Explosions

ScienceDaily (June 4, 2009) — Increasing population density, rather than boosts in human brain power, appears to have catalysed the emergence of modern human behaviour, according to a new study by UCL (University College London) scientists published in the journal Science.

High population density leads to greater exchange of ideas and skills and prevents the loss of new innovations. It is this skill maintenance, combined with a greater probability of useful innovations, that led to modern human behaviour appearing at different times in different parts of the world.

In the study, the UCL team found that complex skills learnt across generations can only be maintained when there is a critical level of interaction between people. Using computer simulations of social learning, they showed that high and low-skilled groups could coexist over long periods of time and that the degree of skill they maintained depended on local population density or the degree of migration between them. Using genetic estimates of population size in the past, the team went on to show that density was similar in sub-Saharan Africa, Europe and the Middle-East when modern behaviour first appeared in each of these regions. The paper also points to evidence that population density would have dropped for climatic reasons at the time when modern human behaviour temporarily disappeared in sub-Saharan Africa.
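For the computationally inclined, here is a toy version of the kind of social-learning simulation the paper describes. It is my own illustrative sketch, with assumed parameter names and values, not the UCL team's actual model: each generation every learner imitates the most skilled person in a random sample of contacts, copies are on average a little worse than the model but occasionally better, and whether skill is maintained depends on how many contacts the population's density allows.

    import random

    def mean_skill(pop_size, generations=200, loss=1.0, noise_scale=0.4, sample_frac=0.2):
        """Toy skill-transmission model: larger, better-connected populations tend to
        keep (or improve) a complex skill, while small isolated ones slowly lose it."""
        skills = [10.0] * pop_size
        contacts = max(1, int(pop_size * sample_frac))  # assumption: density ~ number of contacts
        for _ in range(generations):
            new_skills = []
            for _ in range(pop_size):
                model = max(random.sample(skills, contacts))  # imitate the best person you meet
                # copying is usually lossy, but a lucky few overshoot their model
                new_skills.append(max(0.0, model - loss + random.expovariate(1.0 / noise_scale)))
            skills = new_skills
        return sum(skills) / pop_size

    if __name__ == "__main__":
        random.seed(1)
        for n in (20, 100, 500):
            print(f"population {n:4d}: mean skill after 200 generations = {mean_skill(n):8.1f}")

Run it and the small group's skill erodes while the large group's climbs; the point, as in the paper, is that the same learning rule gives very different cultural outcomes depending only on how many people are exchanging ideas.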

Adam Powell, AHRC Centre for the Evolution of Cultural Diversity, says: "Our paper proposes a new model for why modern human behaviour started at different times in different regions of the world, why it disappeared in some places before coming back, and why in all cases it occurred more than 100,000 years after modern humans first appeared.

"By modern human behaviour, we mean a radical jump in technological and cultural complexity, which makes our species unique. This includes symbolic behavior, such as abstract and realistic art, and body decoration using threaded shell beads, ochre or tattoo kits; musical instruments; bone, antler and ivory artefacts; stone blades; and more sophisticated hunting and trapping technology, like bows, boomerangs and nets.

Professor Stephen Shennan, UCL Institute of Archaeology, says: "Modern humans have been around for at least 160,000 to 200,000 years but there is no archaeological evidence of any technology beyond basic stone tools until around 90,000 years ago. In Europe and western Asia this advanced technology and behaviour explodes around 45,000 years ago when humans arrive there, but doesn't appear in eastern and southern Asia and Australia until much later, despite a human presence. In sub-Saharan Africa the situation is more complex. Many of the features of modern human behaviour – including the first abstract art – are found some 90,000 years ago but then seem to disappear around 65,000 years ago, before re-emerging some 40,000 years ago.

"Scientists have offered many suggestions as to why these cultural explosions occurred where and when they did, including new mutations leading to better brains, advances in language, and expansions into new environments that required new technologies to survive. The problem is that none of these explanations can fully account for the appearance of modern human behaviour at different times in different places, or its temporary disappearance in sub-Saharan Africa."

Dr Mark Thomas, UCL Genetics, Evolution and Environment, says: "When we think of how we came to be the sophisticated creatures we are, we often imagine some sudden critical change, a bit like when the black monolith appears in the film 2001: A Space Odyssey. In reality, there is no evidence of a big change in our biological makeup when we started behaving in an intelligent way. Our model can explain this even if our mental capacities are the same today as they were when we first originated as a species some 200,000 years ago.

"Ironically, our finding that successful innovation depends less on how smart you are than how connected you are seems as relevant today as it was 90,000 years ago."


Adapted from materials provided by University College London, via EurekAlert!, a service of AAAS.

Thursday, June 04, 2009

Barack Obama's Cairo Speech - June 04, 2009

Here is the whole speech in 6 parts, for those who want more than cherry-picked sound bites on the cable news networks.

I have to say that this is why I voted for Obama: the compassionate vision, the worldcentric perspective, the post-conventional morality.

Part 1:


Part 2:


Part 3:


Part 4:


Part 5:


Part 6:



Why "The Singularity" Scares the Bejesus Out of Me

More and more, I am seeing research and advancements that suggest that Ray Kurzweil isn't as far off as I would like to believe he is with the whole singularity thing. For those who don't know about this, here is a definition of the technological singularity that Kurzweil promotes.

The technological singularity is the theoretical future point which takes place during a period of accelerating change sometime after the creation of a superintelligence.[1]

In 1965, I. J. Good first wrote of an "intelligence explosion", suggesting that if machines could even slightly surpass human intellect, they could improve their own designs in ways unforeseen by their designers, and thus recursively augment themselves into far greater intelligences. The first such improvements might be small, but as the machine became more intelligent it would become better at becoming more intelligent, which could lead to an exponential and quite sudden growth in intelligence.

In 1993, Vernor Vinge called this event "the Singularity" as an analogy between the breakdown of modern physics near a gravitational singularity and the drastic change in society he argues would occur following an intelligence explosion. In the 1980s, Vinge popularized the concept in lectures, essays, and science fiction. More recently, some prominent technologists such as Bill Joy, founder of Sun Microsystems, voiced concern over the potential dangers of Vinge's singularity.(Joy 2000)
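A loose, back-of-the-envelope way to see why "explosion" is the right word (my own illustration, not Good's or Vinge's math): if intelligence merely grows in proportion to itself you get ordinary exponential growth, but if each gain in intelligence also speeds up the process of improvement itself, the simplest toy equation blows up in finite time.

    \frac{dI}{dt} = k\,I     \;\Rightarrow\;  I(t) = I_0\, e^{kt}
    \frac{dI}{dt} = k\,I^{2} \;\Rightarrow\;  I(t) = \frac{I_0}{1 - k\, I_0\, t}

The second curve diverges as t approaches 1/(k I_0) - a mathematical "singularity," which is where the metaphor comes from.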

Today I just came across an article about a joint effort between Google and NASA to develop such a technological advancement. Here is the article, which is reporting on the new Singularity University.

NASA & Google Join Forces to Research Singularity -the "Intellegence Revolution" (VIDEO)

It is the best of times. Anyone who complains about science not delivering its promises simply doesn't comprehend how incredible this information age truly is: you can go to the mall RIGHT NOW and buy devices that would have reshaped the world ten years ago and are reshaping it today, and technology isn't slowing down - it's accelerating exponentially. There are incredible innovations just around the corner, and that's the thinking behind the creation of Singularity University:

An advanced academic institution sponsored by leading lights including NASA and Google (so it couldn't sound smarter if Brainiac 5 traveled back in time to attend the opening ceremony). The "Singularity" is the idea of a future point where super-human intellects are created, turbo-boosting the already exponential rate of technological improvement and triggering a fundamental change in human society - after the Agricultural Revolution, and the Industrial Revolution, we would have the Intelligence Revolution.

Real AI effects are closer than you might think, with entirely automated systems producing new scientific results and even holding patents on minor inventions. The key factor in singularity scenarios is the positive-feedback loop of self-improvement: once something is even slightly smarter than humanity, it can start to improve itself or design new intelligences faster than we can, leading to an intelligence explosion designed by something that isn't us.

The Singularity University proposes to train people to deal with the accelerating evolution of technology, both in terms of understanding the directions and harnessing the potential of new interactions between branches of science like artificial intelligence, genetic engineering and nanotechnology.

Inventor and author Raymond Kurzweil is one of the forces behind SU, which we presume will have the most awesomely equipped pranks of all time ("Check it out, we replaced the Professor's chair with an adaptive holographic robot!"), and it isn't the only institution he's helped found. There's also the Singularity Institute for Artificial Intelligence, whose sole function is based on the exponential AI increases predicted. The idea is that the first AI created will have an enormous advantage over all that follow, upgrading itself at a rate they can never catch up on simply because it started first, so the Institute wants to work to create a benevolent AI to guard us against all that might follow.

Make no mistake: the AI race is on, and Raymond wants us to win.

This is all very interesting and may well produce some technology that improves our lives for the better - but to be honest, this shit scares the bejesus out of me.

As near as I can tell, reaching the singularity puts our society at risk of technological disaster. When you have a few people evolved enough to create that technology (the cutting edge of human development in the intellectual/cognitive line), you have many more people still at lower moral developmental stages (possibly including the inventors themselves - there is no reason to believe that cognitive development is paired with moral development) who would use that technology to generate power and wealth, which is necessarily an egocentric drive.

We could seriously end up with an elite class who have access to the technology to extend life, avoid illness, increase intelligence, and so on - but those people likely won't be the most developmentally advanced, nor the most morally developed or compassionate. They will simply be the richest.

So the downside is that there won't be compassionate use of the technology, but rather egocentric use, which will likely lead to catastrophe.

It's the same principle as allowing tribal/egocentric cultures to acquire nuclear weapons - the developmental maturity to handle the technology simply is NOT there.

Suppose, as Kurzweil has argued, that nanobots capable of healing disease and extending life become a reality in the next couple of decades. Who will be able to afford these things? Not me, and likely not you. Only the richest of people will have access to this technology. How will this impact our culture?

And suppose these nanobots not only prolong life and eliminate disease but also increase intelligence or strength. Do we then have a "super race" of techno-humans, or transhumans? And if (or when) we do, how do we deal with this?

Where is the moral intelligence to deal with the technological advances we see escalating at an ever-increasing rate? How do we prevent these advances from being weaponized?

Where is the spiritual intelligence to make sense of this in a world where the divide between rich and poor is greater than it has ever been, and where the balance between egocentric and worldcentric worldviews tilts heavily toward the lesser, more power-oriented worldview rather than the greater, more compassionate one?

Is anyone else even asking these questions?


Ode Magazine - The placebo effect is not all in your mind

Interesting article. The more I learn, the more amazing it all becomes.

The placebo effect is not all in your mind

What the placebo effect tells us about the healing power of the brain.

David Servan-Schreiber | May 2009 issue


In the lab, Takeo couldn’t stand it anymore. The itching was driving him crazy. He watched his right arm turn red and wondered why he’d decided to take part in this experiment. He knew he was allergic to poison ivy (Rhus radicans). So what was the point of re-exposing himself? An hour later, Takeo refused to believe what Yujiro Ikemi, founder of the Institute of Psychosomatic Medicine at Kyushu University in Fukuoka, Japan, was telling him: The Rhus radicans extract hadn’t been applied to his right arm (which, nevertheless, had continued to swell) but to his left, which was showing no symptoms.

What made his right arm swell so much wasn’t poison ivy at all, but a harmless leaf. Takeo, like half the participants of this experiment, was reacting to the idea of the allergy, not the physical reality.

Modern medicine, which doesn’t always understand this power of the mind over the body, calls it the “placebo effect.” This refers to the cultural and relational factors that make someone who’s sick feel better when a doctor prescribes treatment, regardless of its biological impact. Nowadays, doctors think they know everything about the placebo effect. They were taught that 30 percent of sick people treated with placebos show signs of improvement. But they’re also taught that this improvement is subjective and temporary—because the illness continues to take its course.

Yet after studying the placebo effect, some scientists wonder whether it may be one of the strongest driving forces in medicine. A study published in Clinical Psychology Review in 1993 concludes that several types of placebos are effective in treating illnesses such as stomach ulcers, angina pectoris and herpes 70 percent of the time. In addition, rare but famous cases testify to the effectiveness of placebos in reducing cancerous tumors or regenerating the immune cells of AIDS sufferers.

The part of our brains known as the hypothalamus directs the distribution of essential hormones and operates the diffuse network of nerves controlling the function of the internal organs. The most intriguing mechanism is that proposed by pharmacologist Candace Pert, author of Molecules of Emotion: The Science Behind Mind-Body Medicine. She demonstrated that neuropeptides—molecules that help transmit messages among the brain’s neurons—affect the behavior of nearly all the body’s cells. This means what we refer to as our mind isn’t located just in the brain but throughout the body. It also implies that, driven by the comings and goings of these molecular messengers, the mind constitutes an immense communications network encompassing the functions of the organism.

So what is the placebo effect? Everything we don’t know about the capacity of the brain to heal the body. Therein, undoubtedly, lies the secret of the shamans and other healers. Their rituals, chants and restorative acts address the most archaic parts of the brain, those that regulate our organism and can participate in its healing.

Scientific medicine has lost this knowledge, replacing it with mechanical principles that allow the illness to be cured without speaking to the sufferer’s spirit. In practice, you can take advantage of these mind-body connections. Find a doctor whose personality is comforting and who knows how to listen to your life story. Ask this doctor to explain your symptoms and the proposed treatment. And finally, invite him or her to describe the stages that will guide you from illness to well-being.

David Servan-Schreiber is a French psychiatry professor and the author of Healing without Freud or Prozac and Anticancer.


You Are What Your Mother Worried About

We've known for a long time that trauma in one generation translates to the next - the trauma legacy. Much of it is a form of cultural transmission (often through the attachment process). Now we are learning that it is also biological/genetic. Epigenetics is changing our understanding of the trauma legacy.

You Are What Your Mother Worried About

By: Matt Palmquist | May 15, 2009 | 11:40 AM (PDT)


For the first time, a study of rats has shown that when a mother experiences some form of trauma even before her pregnancy begins, it will still influence her offspring's behavior.

And there are strong implications for humans, especially mothers who have experienced the effects of war, natural disasters or social upheaval.

"The findings show that trauma from a mother's past, which does not directly impact her pregnancy, will affect her offspring's emotional and social behavior. We should consider whether such effects occur in humans, too," said study author Micah Leshem of the University of Haifa, in a press release announcing the study.

The results are published in the journal Developmental Psychology, and build on previous research that has explored the effects of trauma that a mother experiences during the course of her pregnancy. Until now, however, pre-conception adversity had not been examined.

The researchers — including Leshem and Alice Shachar-Dadon, also of the University of Haifa, and Jay Schulkin of the Georgetown University School of Medicine — chose to investigate rats because they are social mammals whose brains behave similarly to those of humans.

One group of rats was put through a series of stress-inducing tests two weeks before mating, giving the female time to recover before she got pregnant; the second group was similarly treated over the course of a week prior to mating; and the third, a control group, experienced no adversity at all. When the rats' offspring reached maturity (at 60 days), the researchers examined their emotional behavior, including anxiety and depression, and social behavior.

The researchers found that the offspring of stressed mothers exhibited less social contact, interacting with each other less frequently than the control mothers' offspring did. There were also important differences in behavior related to gender: the female rats showed more symptoms of anxiety, while the males exhibited less. And the rats whose mothers became pregnant immediately after being stressed were the most hyperactive, indicating that the time period in which adversity is experienced, relative to conception, is also important.

"Everyone knows that smoking harms the fetus and therefore a mother must not smoke during pregnancy," Leshem said. "The findings of the present study show that adversity from a mother's past, even well before her pregnancy, does affect her offspring, even when they are adult. We should be prepared for analogous effects in humans: For example, in children born to mothers who may have been exposed to war well before becoming pregnant."