Ten statisticians every psychologist should know about

As psychology students past and present will be only too aware, statistics are a key part of every psychology undergrad course and they also appear in nearly every published journal article. And yet have we ever stopped to recognise the statisticians who have brought us these wonderful mathematical tools? As psychologist Daniel Wright puts it: "Statistical techniques are often taught as if they were brought down from some statistical mount only to magically appear in [the software package] SPSS."

To help address this oversight, Wright has compiled a list of ten statisticians he thinks every psychologist should know about. The list is strict in the sense that it only includes statisticians, whilst omitting psychologists, such as Jacob Cohen and Lee Cronbach, who have made significant contributions to statistical science in psychology.

Wright divides his list in three, beginning with three founding fathers of modern statistics. First up is Karl Pearson, best known to psychologists for the Pearson correlation and Pearson's chi-square test. A socialist who turned down a knighthood in 1935, Pearson published his first momentous work, The Grammar of Science, in 1892, and founded the world's first university statistics department at UCL in 1911.

Ronald Fisher was the author of Statistical Methods for Research Workers, which Wright describes as "one of the most important books of science." Fisher was also instrumental in the development of p values in null hypothesis significance testing.

Together with Pearson's son, Egon, Jerzy Neyman produced the framework of null and alternative hypothesis testing that dominates stats to this day. He also created the notion of confidence intervals. Neyman and Fisher were fierce critics of each other's theories. After a brief spell at UCL with Fisher, Neyman later moved to Berkeley, where he set up the stats department - now one of the top such departments in the world.
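To make those contributions concrete, here is a minimal sketch in Python of the two ideas in action; the synthetic data, group labels, effect size and the normal-approximation interval are illustrative assumptions of mine, not anything from Wright's article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_a = rng.normal(100, 15, 40)   # hypothetical test scores, control group
group_b = rng.normal(108, 15, 40)   # hypothetical test scores, intervention group

# A Fisher-style p value from a two-sample t test
t, p = stats.ttest_ind(group_a, group_b)
print(f"t = {t:.2f}, p = {p:.4f}")

# A Neyman-style 95% confidence interval for the mean difference
# (normal approximation, for brevity)
diff = group_b.mean() - group_a.mean()
se = np.sqrt(group_a.var(ddof=1) / 40 + group_b.var(ddof=1) / 40)
print(f"difference = {diff:.1f}, 95% CI = [{diff - 1.96*se:.1f}, {diff + 1.96*se:.1f}]")
```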

Wright also lists three of his statistical heroes: John Tukey of post-hoc test fame, who made major contributions to robust methods and graphing (and who coined the terms ANOVA, software and bit); Donald Rubin, who has conducted influential work on effect sizes and meta-analyses; and Brad Efron, who developed the computer-intensive bootstrap resampling technique.
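To give a flavour of Efron's bootstrap, here is a minimal sketch on made-up data; the choice of NumPy, the statistic (the mean) and the percentile interval are my own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
# Illustrative data: 30 hypothetical reaction times in milliseconds
sample = rng.normal(loc=500, scale=80, size=30)

# Efron's bootstrap: resample the data with replacement many times and
# recompute the statistic of interest each time
boot_means = [rng.choice(sample, size=sample.size, replace=True).mean()
              for _ in range(10_000)]
low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {sample.mean():.1f} ms, 95% bootstrap CI = [{low:.1f}, {high:.1f}]")
```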

Wright devotes the last section of his list to four statisticians who have gifted psychology particular statistical techniques: David Cox and the Box-Cox transformation; Leo Goodman and categorical data analysis; John Nelder and the Generalised Linear Model; and Robert Tibshirani and the lasso variable-selection technique.
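As a rough illustration of two of those gifts, here is a hedged sketch using SciPy's Box-Cox transform and scikit-learn's lasso on synthetic data; the data, parameter values and library choices are mine, not Wright's.

```python
import numpy as np
from scipy.stats import boxcox
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Box-Cox: estimate a power transform that makes skewed, positive data more normal
skewed = rng.lognormal(mean=0.0, sigma=0.8, size=200)
transformed, lam = boxcox(skewed)
print(f"estimated Box-Cox lambda: {lam:.2f}")

# Lasso: L1-penalised regression that shrinks irrelevant coefficients to exactly zero
X = rng.normal(size=(200, 10))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=200)   # only two predictors matter
coefs = Lasso(alpha=0.1).fit(X, y).coef_
print("predictors kept by the lasso:", np.flatnonzero(coefs))
```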

"The list is meant to introduce some of the main statistical pioneers and their important achievements in psychology," Wright concludes. "It is hoped learning about the people behind the statistical procedures will make the procedures seem more humane than many psychologists perceive them to be."

What do you think of Wright's list? Is there anyone he's overlooked?
_________________________________

Wright, D. B. (2009). Ten statisticians and their impacts for psychologists. Perspectives on Psychological Science, 4(6), 587-597. [Draft pdf via author website].



Brands leave their mark on children's brains

The idea may be "unpalatable", but companies seeking an edge over their rivals should ensure that children are exposed to their brands as early in life as possible. That's according to Andrew Ellis and colleagues, whose new research shows that the classic "age-of-acquisition" effect in psychology applies to brand names as much as it does to everyday words.

Ellis's team found that student participants were quicker to recognise brand names they had encountered from birth. This was demonstrated by presenting students with a range of real and fictional brand names and asking them to indicate as quickly as possible whether a brand was real. If a brand had been experienced from birth, the students were quicker to recognise it as real than if it had been encountered from age five and up. A second experiment showed that students were also quicker at accessing information about early-encountered brands compared with late-encountered brands, as indicated by the speed with which they said a product was or was not made by a given brand.

These findings resemble classic "age-of-acquisition" effects, in which people are more proficient at processing words they encountered earlier in life. Research has shown that this effect is not explainable purely in terms of greater cumulative exposure to early-encountered words. One alternative proposal is that words (and presumably brands too) encountered early in life shape the maturing brain in such a way that a life-long advantage is maintained for processing those early words.

Ellis's team's final experiment was perhaps the most striking. In this case, participants aged between 50 and 83 years were quicker to recognise early brands over newer, current brands, even if the early brands were long since defunct.

Combined with prior research showing that people generally feel more favourable towards words and pictures that they find easier to process - a phenomenon called the "fluency effect" - Ellis and his colleagues said their findings have serious implications for brand success. "The evidence suggests that mere exposure to brands in childhood will make for more fluent recognition of those brand names in adulthood that will persist through to old age," they said.

The Digest asked the researchers for some examples of the brands used in the research, but they've been sworn to secrecy by their sponsors, Unilever, for reasons of commercial sensitivity.
_________________________________

Ellis, A., Holmes, S., & Wright, R. (2009). Age of acquisition and the recognition of brand names: On the importance of being early. Journal of Consumer Psychology. DOI: 10.1016/j.jcps.2009.08.001



Adaptations for the visual assessment of formidability: Part II

In Part I of this series, I summarized the experiments and findings of Aaron Sell and colleagues' paper "Human adaptations for the visual assessment of strength and fighting ability from the body and face". In Part II, I evaluate their claims.

The evidence Sell et al. present seems compelling with regard to proposition (i): adults appear to be able to make remarkably accurate estimates of upper-body strength from even degraded cues such as static images of faces. As I noted in Part I, however, propositions (ii) (that this ability is an adaptation) and (iii) (that upper-body strength determines formidability) are more doubtful. I will assess the evidence for each of these claims, starting with the latter.

Concluding that the truth of (i) implies people can visually estimate fighting ability – the likelihood of an individual prevailing in combat – requires us to assume (iii): that upper-body strength is a good proxy for formidability. Unfortunately, Sell and his colleagues provide only indirect, theoretical reasons for supposing this is true: namely, the greater sexual dimorphism of upper-body strength relative to lower-body strength, and the fact that the driving force of certain weapons is largely a function of upper-body strength (p. 576). These considerations, however, seem far from decisive, and while it is certainly plausible that upper-body strength is a very (or the most) important component of fighting ability, rigour clearly requires direct empirical evidence. Other likely components of formidability – speed, stealth, skill, bravery, etc. – are either orthogonal to, or even negatively correlated with, high upper-body strength. There are doubtless multiple complex tradeoffs between the different components of fighting ability, and thus there are likely multiple local optima in ‘formidability space’.

The point of this argument is that without an empirical determination of the magnitude of the correlation between formidability and strength, Sell et al.’s conclusion rests on an (admittedly plausible) assumption. More importantly, however, it is at least possible that uncontrolled-for components of formidability may introduce confounds or complications that could influence the correlation between perceived and actual strength in either direction. For example, there may be a semi-independent ability to estimate fighting skill, and, depending on the direction of the correlation between upper-body strength and this skill, it may lead us to under- or overestimate the accuracy of visual assessment of fighting ability. The problems around claim (iii), however, are comparatively minor; the major weakness of Sell et al.’s paper lies with their claim that the ability to visually estimate formidability evolved by natural selection.

An adaptationist claim like (ii) is significantly more complex than other types of propositions because it entails assertions about the past and about design (Symons, 1992: 140-141). As Richard Burian has explained, when one asserts some trait is an adaptation, “one is claiming not only that the feature was brought about by differential reproduction among alternative forms, but also that the relative advantage of this feature vis-à-vis its alternatives played a significant causal role in its production” (1983: 294). In other words, the assertion that the ability to estimate formidability is an adaptation entails that it evolved over deep time by natural selection, and that the ‘function’ of this psychological trait and its neurological substrate is to detect formidability. To say some feature is an adaptation, then, is a compound claim involving multiple independent propositions, each of which requires substantiation. Sell et al., it seems to me, fall short of this evidentiary standard, not least because they never mount an explicit defense of (ii), despite the fact that the onus is on them and that it is a crucial aspect of their paper. There are, nonetheless, a number of arguments that can be extracted from the paper (or advanced on behalf of the authors). In rough order from least to most persuasive, these are (a) that (i) was previously unknown and that Sell et al. predicted its existence from evolutionary considerations, (b) that comparative data indicate that such assessments are widespread and perhaps even homologous across taxa, (c) that accurate visual formidability assessment is at a minimum not highly culturally bound, and perhaps universal, and (d) the functional goodness-of-fit between the ‘design problem’ and its ‘solution’.

It seems highly significant, firstly, that Sell and his colleagues predicted the existence of a previously unknown trait – i.e. (i) – from general comparative and evolutionary-psychological considerations. It is important to be careful here, though, because it is entirely possible for (i) to be true but for (ii) to be false (but obviously not vice versa). Some philosophy of science should clarify the situation. Hans Reichenbach (1938) usefully distinguished between the “context of discovery” (the creative process of using background knowledge to invent new hypotheses and theories) and the “context of justification” (the evidence-driven process of testing hypotheses and subjecting them to peer evaluation). For example, Friedrich Kekulé von Stradonitz reportedly first imagined the six-carbon ring structure of benzene after having a dream of an ouroboros (the context of discovery). It does not follow from this, however, that benzene actually had anything to do with snakes, or that testing the idea (the context of justification) involved an ancient symbol. Similarly, even if Sell et al. predicted the existence of an ability to make formidability estimates from evolutionary theory, it does not necessarily follow that the trait evolved. Propositions (i) and (ii) are logically and epistemically independent, and each needs to be tested against related but different sets of evidence. The fact that (i) was predicted rather than retrodicted from evolutionary theory gives us no more than a prima facie reason to think it evolved, and (a) is thus weak evidence for (ii).

Sell et al. cite a large and growing body of literature that documents parallels between human and non-human conflict, including, importantly, evidence that non-human animals can visually detect formidability. If this trait is homologous across species, including humans – and that is a gargantuan if – we can be confident (ii) is true, since homology suggests that the emergence and persistence of the trait is due to natural selection. It should be clear, however, that building a convincing phylogenetic case for such a widespread homology would be a mammoth undertaking, and no one, as far as I know, has yet done so. The fact that there seems to be a preliminary case for homology is at best suggestive; no conclusions can reasonably be drawn until much more science is done. In other words, were (b) true we could reasonably infer (ii), but we simply do not have enough evidence to conclude (b) is in fact true, so, on current evidence, it provides minimal support.

The logic of argument (c) is the following: given that traits that reliably emerge in developmentally normal individuals are likely (though not necessarily) adaptations, de...

Sell, A., Cosmides, L., Tooby, J., Sznycer, D., von Rueden, C., & Gurven, M. (2009). Human adaptations for the visual assessment of strength and fighting ability from the body and face. Proceedings of the Royal Society B: Biological Sciences, 276(1656), 575-584. DOI: 10.1098/rspb.2008.1177



Drug company funded events for health professionals: the state of play in Australia

The links between the pharmaceutical industry and doctors are many and tangled. Drug companies are keen to schmooze doctors and, directly or not, persuade clinicians to prescribe their drug instead of a similar one by a competitor. One way that drug companies try to influence doctors is by sponsoring events, such as conferences or [...]...

Robertson, J., Moynihan, R., Walkom, E., Bero, L., & Henry, D. (2009). Mandatory Disclosure of Pharmaceutical Industry-Funded Events for Health Professionals. PLoS Medicine, 6(11). DOI: 10.1371/journal.pmed.1000128



Alvin: World Usability Day | Making Life Easy!

World Usability Day | Making Life Easy! - http://www.worldusabilityday.org/


Dark Energy: Where did the Light go? (Part 3) [Starts With A Bang]

Though the Sun is gone, I have a light. -Kurt Cobain

Last time we visited dark energy, we discussed its initial discovery. This came about from the fact that supernovae observed with a certain redshift (i.e., moving away from us) appear to be systematically fainter than we were able to explain.


But we weren't satisfied with simply saying that there must be dark energy. We asked a lot of critical questions about why these supernovae might appear so faint.

First off, we asked the question, "Could these supernovae from far away be different than the type Ia supernovae we have today?"


Unfortunately, the answer is a resounding no. So long as atoms work exactly the same way, they require the same pressure to collapse at all places and times in the Universe. The process of forming a Type Ia supernova -- having a white dwarf accrete mass until it can no longer support itself and explodes -- should be independent of location and time.

Well, if the supernovae are constant, could the environments that they form in be different than the environments today? Of course they could. So, is there any way to make them appear fainter without them actually being fainter, and without having to resort to dark energy? Sure, you might say, block some of that light! All you need is some dust, like so.

[Image: the dark nebula Barnard 68 (VLT), a cloud of dust blotting out the light of the stars behind it]

What a simple idea, right? Problem solved?

Not so fast. Dust, in real life, is made up of real particles (atoms, molecules, grains, etc.), with real sizes. This means it affects light differently at different wavelengths: not just red, green, and blue, but X-rays, ultraviolet, infrared, and more. But the supernova light isn't dimmed more in one spectral band than any other; it's dimmed equally at all wavelengths!

So, real dust is out. But what if we invented some new type of dust that absorbed light the same at all wavelengths? We can give it a name: grey dust. We have no idea what would cause it, but it's a lot more believable that there's some new kind of dust out there than that there's a whole new type of energy pervading the Universe.

Well, if this grey dust were there, then the light from distant supernovae would simply continue to appear dimmer and dimmer the farther away they were. If the Universe had dark energy, by contrast, the supernovae should start to appear relatively brighter beyond a certain distance. Take a look at the graph below to compare some different theories with the data.

[Figure: supernova brightness versus distance for several models compared with the data, including grey dust (top line) and a Universe with only normal matter (bottom line)]

As you can see, grey dust (the top line) is just as inconsistent with the data as a Universe containing only normal matter (the bottom line).

So you can't simply blame it on a trick of the light. In fact, if we look at the most modern supernova data, it clearly favors dark energy significantly over even a flat, low-density Universe.

[Figure: distance modulus residuals versus redshift for the Union (2008) supernova compilation, clearly favouring dark energy]

Other "light-blocking" schemes, such as photon-axion oscillations, suffer from the same problem; they don't give the right turn-over as shown above. If we've got the right laws of gravity, there's pretty much no way around dark energy.

But we don't like relying on only one source of data. Supernovae are nice, but what happens when we look at all the other evidence? Does that tell us there must be dark energy too, or could it be that the supernova data just cannot be trusted? Seems like a job for part 4, and so I'll see you then!



The dual-tasking meditation master

I recently read an article in the latest Scientific American Mind magazine discussing the cell mechanisms underlying meditative states. The author briefly mentioned the fact that expert meditators were able to avoid the attentional blink that lay people are prone to experiencing when barraged with rapidly presented visual stimuli.

This brought up a question for me. Would expert meditators perform better on dual tasks compared with age-matched subjects?

I believe the answer is in the affirmative. My reasoning behind this hypothesis has to do with the fact that meditation not only strengthens attentional abilities but fosters neural efficiency as well (dual-tasking is not about doing two things simultaneously, but more about doing one thing at a time at an extremely fast pace, thus creating an illusion that one is doing two things at once). A 2007 study by Farb et al. has shown that meditation activates the anterior cingulate cortex, a region central to switching your attention. With the development of my dual-task paradigm underway, I hope to show that daily meditation practice can have beneficial effects when it comes to multi-tasking. If my prediction proves to be true, this ancient practice developed thousands of years ago will not only better our physical and emotional well-being but also help us keep afloat in the fast-paced era of technology.

Farb, N., Segal, Z., Mayberg, H., Bean, J., McKeon, D., Fatima, Z., & Anderson, A. (2007). Attending to the present: mindfulness meditation reveals distinct neural modes of self-reference. Social Cognitive and Affective Neuroscience, 2(4), 313-322. DOI: 10.1093/scan/nsm030



Guess the Dow, Win Chow! [The Quantum Pontiff]

Last month a local restaurant group, Chow Foods---among whose restaurants is one of our favorite Sunday breakfast spots, The Five Spot---ran a contest/charity event: "Chow Dow." The game: guess the value of the Dow Jones Industrial Average at the close of the market on October 29th, 2009. The winner would be the closest bet that did not go over the closing value. The prize was the value of the Dow in gift certificates to the Chow restaurants: i.e. approximately $10K in food (or as we would say in Ruddock House at Caltech: "Eerf Doof!" We said that because it fit nicely with another favorite expression, "Eerf Lohocla!", this latter phrase originating in certain now obscure rules enforced by administrative teetotalers.) I love games like this, and I especially love games where the rules are set up in an odd way. Indeed, what I found amusing about this game was that, as a quick check of the rules on the Chow website showed, you could enter your guesses at any time up until October 28th. Relevant also: a maximum of 21 bets per person, with a suggested donation of $1 per guess. So what strategy would optimize your probability of winning, assuming that you are going to enter 21 times?
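Before reading on, here is one way to get a feel for the question: a toy Monte Carlo sketch in Python. The starting price, the volatility, the crude "a guess lands just under the close" win proxy, and the two candidate strategies are all illustrative assumptions of mine, not anything from the post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy assumptions (mine, not the post's): the Dow sits near 9,750 with about
# 15 trading days to go, and daily log-returns are N(0, 1.5%).
N_SIMS, start, daily_vol, days = 100_000, 9750.0, 0.015, 15
closes = start * np.exp(rng.normal(0.0, daily_vol * np.sqrt(days), N_SIMS))

def win_proxy(guesses, closes, window=50.0):
    """Crude proxy for winning: at least one guess lands within `window`
    points below the close without going over it."""
    guesses = np.asarray(guesses, dtype=float)[:, None]
    hits = (guesses <= closes) & (closes - guesses <= window)
    return hits.any(axis=0).mean()

ladder = 9300 + 50 * np.arange(21)        # 21 guesses spread 50 points apart
cluster = np.full(21, 0.98 * start)       # 21 nearly identical guesses
print("ladder :", win_proxy(ladder, closes))
print("cluster:", win_proxy(cluster, closes))
```

Under those toy assumptions, spreading the 21 guesses out in a ladder does far better than clustering them around a single prediction, which is presumably part of what makes the odd rules fun to game.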

Below the fold: my strategy, the amazing power of the X-22 computer, and....chaos!



Digging Deeper Into p66Shc and Enhanced Longevity

Mitochondria, you will recall, are the power plants of our cells, churning out stored energy in the form of ATP molecules, and pollution in the form of damaging free radicals or reactive oxygen species (ROS). Mitochondria have their own DNA, separate from the DNA in the nucleus of our cells, a legacy of their origin as free-roaming bacteria. Free radicals are very reactive, which means that they can tear apart the biochemical machinery of cells by reacting with crucial components. This free radical pollution is at the heart of the mitochondrial free radical theory of aging, which presents a large component of the aging process as essentially a runaway feedback loop: mitochondria damage themselves via their own free radicals, making them produce even more free radicals. This in turn leads to cells that are overtaken by that pollution and throw free radicals out into the body, causing widespread harm as the years pass. Thus mitochondria are considered to be important: changes in genes that alter the operation of mitochondria can cause dramatic shifts in life span in mice. Differences in mitochondrial biochemistry are correlated with differences in life span between similar species. Mitochondria are involved with cellular programmed death mechanisms, ...

Tomilov, A., Bicocca, V., Schoenfeld, R., Giorgio, M., Migliaccio, E., Ramsey, J., Hagopian, K., Pelicci, P., & Cortopassi, G. (2009). Decreased superoxide production in macrophages of long-lived p66Shc-knockout mice. Journal of Biological Chemistry. DOI: 10.1074/jbc.M109.017491

