Wednesday, 26 August 2009

Polycomb group proteins in Drosophila

In the earlier post titled 'Role of polycomb and trithorax in developmental regulation' we saw that the main function of the Polycomb group proteins (PcGs) is to repress target gene expression, their major targets being the homeobox (Hox) genes. These proteins bind to Polycomb Response Elements (PREs), their target sequences on the DNA. The recruitment of the PcGs to these sites is linked to covalent histone modifications in the surrounding chromatin. Here we look into this aspect in some detail.

To date, three distinct PcG complexes have been discovered in Drosophila: Polycomb repressive complexes 1 and 2 (PRC1 and PRC2) and the Pho repressive complex (PhoRC). All three contain multiple subunits encoded by PcG genes that are crucial for Hox gene silencing. PRC2 has histone methyltransferase activity, specifically methylating H3K27. PRC1 inhibits nucleosome remodelling and transcription and brings about chromatin compaction. PRC1 and PRC2 have various catalytic and non-catalytic subunits which play a role in silencing target sites. PhoRC contains Pho and dSfmbt. The Pho subunit has a distinct sequence-specific DNA-binding activity which is essential for targeting. dSfmbt binds to H3 or H4 tail peptides that are mono- or dimethylated at H3K9 or H4K20, an interaction that is crucial for repression.

The above figure depicts a speculative model for long-range interactions between the PRE-tethered PcG complexes and the methylated nucleosomes in the flanking chromatin. PRC1, PRC2 and PhoRC are locally bound to the PREs, whereas trimethylation at H3K27, H3K9 and H4K20 is observed over an extended domain encompassing the promoter and the coding regions.

Left: In the case of PRC1, the Pc chromodomain (red triangle) would dock onto nucleosomes that are trimethylated at H3K27 and, through such bridging interactions, bring them into proximity with the other PRE-bound PcG proteins. This may permit the other PRC1 subunits to block remodelling of the target nucleosomes or facilitate methylation of the neighbouring hypomethylated (pink) nucleosomes by PRC2.

Right: In the case of PhoRC, the MBT (malignant brain tumor) repeats of dSfmbt (pink triangle) would interact with nucleosomes mono- or dimethylated at H3K9 or H4K20. This bridging interaction would permit PRC2 to efficiently trimethylate H3K27 in hypomethylated nucleosomes.

It is also possible that as yet unidentified HMTases which trimethylate these histones are also localized at the PREs. Similarly, proteins with specificity for binding to other methyl-lysine modifications in histones might also be localized to the PREs. Various proteins that associate with the PcG complexes and are strictly essential for silencing have been purified. Pcl (Polycomb-like) has been shown to be associated with PRC2, and mutations in this protein affect PRC2 function. Another protein, Zeste, is a component of PRC1 and has DNA-binding activity.

As described above, biochemical purification of PRC1, PRC2 and PhoRC suggests that these three complexes are separate entities. However, since they colocalize at the PREs, various studies are aimed at finding physical interactions between subunits from the different complexes, which may explain how PRC1 and PRC2 are localized to the PREs. Different models have been put forward based on these studies. In particular, Wang et al. (2004) reported that E(z) and Esc (subunits of PRC2) directly interact with Pho, leading to the proposal that Pho directly tethers PRC2 to the PREs. The PRE-tethered PRC2 would then locally trimethylate H3K27, thereby creating binding sites for the chromodomain of the PRC1 subunit Pc. More recent studies, however, challenge this model. First, quantitative ChIP studies by two different labs suggest that PREs at the Ubx gene are in fact devoid of nucleosomes (V. Pirrotta). Moreover, PREs constitute hypersensitive sites, providing additional evidence that PRE DNA is not packaged into nucleosomes. Second, studies by Mohd-Sarip et al. (2002) suggested that Pho can directly interact with PRC1 subunits, and that in vitro Pho and PRC1 can co-assemble on a naked PRE DNA template in the absence of nucleosomes. This assembly adopts a conformation that is difficult to reconcile with that of a nucleosome core particle. Thus the available evidence suggests that not only PhoRC, but also PRC1 and PRC2, localize to the PREs through interactions with Pho and other DNA-binding proteins, and not through covalent histone modifications or interactions with nucleosomes.

This conclusion then raises a major question with regard to the role of histone trimethylation in PcG silencing. Quantitative ChIP analyses that compared the Ubx gene in its off and on states in Drosophila larvae suggest that trimethylation in the promoter and coding region is essential for PcG silencing. In the 'off' state, extensive trimethylation is present throughout the upstream control, promoter and coding regions, but in the 'on' state this methylation is restricted to the upstream control region and is not seen in the promoter and coding regions. This led to the proposal that trimethylation at H3K9, H3K27 and H4K20 in the promoter and coding regions is required to demarcate the chromatin interval that is targeted for repression by the PRE-tethered PcG protein complexes. In this context, an important insight into the relationship between the PREs and histone modifications of the target genes comes from studies with PRE reporter genes in Drosophila, in which the PRE DNA was flanked with FRT (Flp recombinase target) sites and could thus be deleted from the reporter gene. This study showed that excision of the PRE DNA from the silenced reporter gene caused loss of PcG silencing, even when the excision was done late in development. Thus, although the PcG proteins appear to repress transcription through methylation in the promoter and coding regions, silencing depends on the continuous presence of the PREs and the PcGs tethered to them.

Although much progress has been made towards understanding the mechanism of PcG silencing, much remains to be learned. It is still not understood how the PcG proteins are targeted to the PREs, because Pho binding sites alone do not make a PRE. ChIP studies may help identify additional DNA-binding PcG proteins. There may also be many additional PcG complexes; PRC1, PRC2 and PhoRC may represent only the most stable or most abundant ones. Finally, it seems likely that these findings on PcG silencing in Drosophila will prove important for understanding the mechanism of PcG silencing in mammals, given the recent discovery that mammalian PcG proteins seem to act as global repressors of developmental control genes in embryonic stem cells.


Müller, J., & Kassis, J. (2006). Polycomb response elements and targeting of Polycomb group proteins in Drosophila. Current Opinion in Genetics & Development, 16(5), 476-484. DOI: 10.1016/j.gde.2006.08.005

Saturday, 22 August 2009

Evidence for Systemic Spread prior to the Establishment of Primary Tumors

It is widely believed that metastasis is a late event in cancer progression. This view is based on several clinical and experimental observations. First, most cancer patients die from metastasis and not the primary disease. Second, early surgery is often the only cure. Third, somatic genetic changes accumulate during local progression (Fearon and Vogelstein, 1990), which was extrapolated to systemic progression. Fourth, repeated rounds of in vivo selection led to cell lines with increased metastasis formation (Kang et al., 2003; Minn et al., 2005). However, in a recent study titled Systemic Spread Is an Early Step in Breast Cancer, published in Cancer Cell in 2008, Hüsemann, Geigl, Schubert et al. report that tumor cells can disseminate systemically from the earliest epithelial alterations in HER-2 and PyMT transgenic mice and from ductal carcinoma in situ in women.

The mouse model they work with is the BALB/c mouse transgenic for the activated rat HER-2/neu gene (BALB-Neu T mice). This model mimics the progression and gene expression profiles of human breast cancer. Females hemizygous for the rat HER-2 gene under the control of the MMTV (mouse mammary tumor virus) promoter develop invasive mammary cancer, while their HER-2-negative siblings (wild-type BALB/c mice) remain tumor free. Typically, in the BALB-Neu T model, the mammary epithelium starts to express the oncogene at about weeks 3 to 4 (coinciding with the onset of puberty). Hyperplasia can be detected microscopically at weeks 7-9. Progression to in situ carcinomas occurs between weeks 14-18, and at weeks 23-30 invasive cancers become apparent (see Fig. 1a).

Fig. 1a; Left Panel: Whole mount of the mammary gland at week 9, showing absence of tumor in the branching ductal tree; Middle Panel: Histological section at week 9, showing side buds displaying the morphology of atypical ductal hyperplasia; Right Panel: Histology of invasive cancer at week 30 (Adapted from Hüsemann, Geigl, Schubert et al. (2008))

The principal goal was to determine when cells expressing the HER-2 transgene disseminate. The authors chose the lung and bone marrow as the sites in which to look for metastasis, since the HER-2 receptor is not expressed in either of these two organs. In addition to HER-2, they also used anti-cytokeratin (CK) antibodies to check whether disseminated cells were epithelial in origin. Surprisingly, in BALB-Neu T mice, CK+ and HER-2+ cells became detectable at weeks 4-9, when meticulous analysis could only detect atypical ductal hyperplasia (ADH) (see Fig. 1b).

Fig. 1b; Left Panel: Increase in tumor area at the primary site. Triangles indicate mean values, whiskers indicate 95% confidence intervals and the solid line indicates the best-fitted curve; Right Panel: Number of CK+ cells (red dots) and HER-2+ cells (blue dots) per 5000 bone marrow cells. Triangles, whiskers and solid line as in the left panel. Note the presence of disseminated cells in the bone marrow even when the primary tumor area is zero. (Adapted from Hüsemann, Geigl, Schubert et al. (2008))

Further, the mammary origin of the lung micrometastases was confirmed by demonstration of mammary-specific alpha-casein and lactalbumin transcripts (see Fig. 1c).

Fig. 1c; Left Panel: Lung micrometastasis at week 27, detected using an anti-HER-2 antibody; Right Panel: Eight HER-2+ samples and two samples of normal lung tissue, analyzed for mammary-gland-specific transcripts. "+" indicates normal mammary gland and "-" indicates mock control (Adapted from Hüsemann, Geigl, Schubert et al. (2008))

Despite exponential growth at the primary tumor site, the number of CK+ and HER-2+ cells in the bone marrow rose only marginally over time. Moreover, the cells positive for HER-2 and those positive for CK were not congruent: not all tumor cells expressed both markers (see Fig. 1d).

Fig. 1d; Left Panel: CK+, HER-2- cell; Middle Panel: CK-, HER-2+ cell; Right Panel: CK+, HER-2+ cell (Adapted from Hüsemann, Geigl, Schubert et al. (2008))

This suggests either the existence of heterogeneous tumor cell populations that disseminate to distant sites, or different cellular states of the disseminated tumor cells (DTCs).

More Evidence for Early Dissemination

Could the HER-2+, CK+ cells have disseminated from extra-mammary tissues expressing the transgene? The authors took mammary gland fragments from 3-12-week-old transgenic mice (displaying only atypical ductal hyperplasia) and transplanted them into 3-week-old wild-type siblings. The bone marrow of the wild-type recipients was screened at different time points (see Fig. 2).

Fig. 2; Left Panel: Week 14 post-transplantation, CK+ cells in the bone marrow of wild-type siblings; Right Panel: HER-2+ cells in the bone marrow (Adapted from Hüsemann, Geigl, Schubert et al. (2008))

Although CK+ and HER-2+ cells were observed in the bone marrow, their levels were lower than in transgenic BALB-Neu T mice, but definitely above the rate of rare false positives. Additionally, just as in the transgenic mice, there was no significant increase in disseminated cells from the ADH stage to invasive cancer in the wild-type siblings.

The malignant nature of these DTCs was established by comparative genomic hybridization (CGH) (see Fig. 3).

Fig. 3; Top Panel: Karyogram (left) and CGH profile (right) of a sorted mouse metaphase of a single HER-2+ cell that disseminated from a transplanted mammary gland and was subsequently isolated from the bone marrow of a recipient wild-type mouse; Bottom Panel: Sorted mouse metaphase of a single leukocyte, showing a balanced CGH profile; Green and red bars indicate genomic gains and losses, respectively. (Adapted from Hüsemann, Geigl, Schubert et al. (2008))

Evidence that early disseminated cells can grow into metastases

The onset of metastasis relative to primary tumor growth was assessed. Histological sections of the lungs were analyzed, and micrometastases could be detected from weeks 20-21 onward, a time point at which mostly in situ carcinomas are present at the primary sites (see Fig. 4).

Fig. 4; Progression of lung metastasis in BALB-Neu T mice, with and without removal of the primary tumor. Left Panel: Increase in tumor area over time in BALB-Neu T mice (same as Fig. 1b); Right Panel: Size of the largest lung metastasis detected in individual mice. Blue squares indicate the average size of metastases from non-operated mice at various time points and red triangles indicate the average size from operated animals 10-15 weeks after surgery. Whiskers indicate 95% confidence intervals (Adapted from Hüsemann, Geigl, Schubert et al. (2008))

Since metastases need time to grow, their increase in size paralleling that of the primary lesion supports the conclusion that, at least in some cases, founder cells of metastasis had disseminated earlier and had started to proliferate.

Other experiments conducted by the group demonstrate that bone-marrow-disseminated cells that do not grow into metastases can be released from growth arrest. Also, there does not appear to be an association between the number of disseminated cells and the stage of the tumor: large tumors do not necessarily seed more.

Overall, the paper presents some compelling evidence for the occurrence of dissemination even before the primary tumor is established. This would indeed force the field to rethink its current strategies for combating cancer.


Hüsemann, Y., Geigl, J., Schubert, F., Musiani, P., Meyer, M., Burghart, E., Forni, G., Eils, R., Fehm, T., & Riethmüller, G. (2008). Systemic Spread Is an Early Step in Breast Cancer. Cancer Cell, 13(1), 58-68. DOI: 10.1016/j.ccr.2007.12.003


Designed and Designoid Objects

Part 2 of the five-part Royal Institution Christmas Lectures for Children, 1991. With sheer clarity and brilliance, Richard Dawkins puts forward the concept of objects that are designed and objects that merely "look designed".

Friday, 21 August 2009

Look Mom, Three Hands - The Classical Rubber Hand Illusion

Most illusions are not only fun to experience but are also interesting to study in depth for what they can reveal about perceptual processes. One of the most interesting illusions discovered in recent times is the rubber hand illusion. It was first reported in a paper in Nature in 1998, titled "Rubber hands 'feel' touch that eyes see", by Matthew Botvinick and Jonathan Cohen. So how does it work? A subject is seated with the left arm resting on a table, while a standing screen is positioned beside the arm to hide it from the subject's view. A life-sized rubber model of a left hand and arm is placed on the table, directly in front of the subject. The experimenter uses two paintbrushes to stroke the rubber hand and the real, hidden hand, synchronizing the timing of the strokes as closely as possible.

In the original study by Botvinick and Cohen, subjects were asked to complete a two-part questionnaire that asked for an open description of their experience, and also to affirm or deny the occurrence of nine specific perceptual effects (see Fig. 1).

Fig. 1: Questionnaire includes nine statements presented in random order. Subjects indicated their response on a seven-step visual analogue scale ranging from 'agree strongly (+++)' to 'disagree strongly (---)'. Points indicate mean response and bars indicate response range. Underlined questions show a significant tendency to evoke an affirmative response (Adapted from Botvinick and Cohen (1998))

Most subjects indicated that they seemed to feel the touch not of the hidden brush but of the viewed brush, as though the rubber hand had sensed the touch. It was hypothesized that the illusion may arise from a spurious reconciliation of visual and tactile inputs that distorts position sense (proprioception). In a second experiment, subjects were exposed to the illusion for a prolonged period and were then probed for distortion of proprioceptive information. Before and after the viewing period, subjects completed a series of three intermanual reaches: with eyes closed, the right index finger was drawn along a straight edge until it was judged to be aligned with the index finger of the hidden left hand. The authors found that the subjects' reaches after experiencing the illusion were displaced towards the rubber hand, with the magnitude of displacement varying in proportion to the reported duration of the illusion (see Fig. 2).

Fig. 2: Results of the reaching experiment. The x-axis indicates the percentage of the 30-min viewing period during which the illusion was experienced. The y-axis indicates the displacement of the three reaches made after the viewing period from the three made before. Data are fitted with a least-squares regression line (Adapted from Botvinick and Cohen (1998))
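The least-squares fit in Fig. 2 is easy to reproduce. The sketch below fits a regression line to hypothetical (duration, drift) pairs; the numbers are invented for illustration and are not Botvinick and Cohen's data.

```python
# Minimal sketch of the Fig. 2 analysis: ordinary least-squares regression of
# proprioceptive drift on illusion duration. All data points are hypothetical.

def least_squares(xs, ys):
    """Return (slope, intercept) of the ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Percentage of the 30-min viewing period during which the illusion was felt
duration_pct = [10, 25, 40, 55, 70, 85]
# Hypothetical displacement (cm) of post-illusion reaches towards the rubber hand
drift_cm = [0.4, 0.9, 1.3, 2.0, 2.4, 3.1]

slope, intercept = least_squares(duration_pct, drift_cm)
print(f"drift ~ {slope:.3f} * duration + {intercept:.3f}")
```

A positive fitted slope would reproduce the paper's qualitative finding: the longer the illusion was experienced, the larger the reach displacement.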

In more systematic explorations, Tsakiris and Haggard (2005) showed that the drift of the perceived position of the real left hand towards the fake rubber hand depends not only on the synchrony of strokes, but also on the position of the rubber hand (spatial congruency) as well as on the visual characteristics of the hand (top-down body effects) (see Fig. 3 for the experimental setup).

Fig. 3: Participants saw, in different conditions, (a) a rubber hand in a congruent position, (b) a rubber hand in an incongruent position, or (c) a wooden stick. The participant's left hand was out of view for the whole duration of the experiment (Adapted from Tsakiris and Haggard (2005))

The congruent posture elicits the maximum proprioceptive drift, while placing the rubber hand in an incongruent position or replacing it with a wooden stick does not (see Fig. 4).

Fig. 4: Mean proprioceptive drift towards the rubber hand. Error bars indicate standard error. Asterisks indicate significant differences between synchronous and asynchronous stimulation (Adapted from Tsakiris and Haggard (2005))

The body is distinguished from other objects as belonging to the self by its participation in inter-modal perceptual correlations. In the experiments of Botvinick and Cohen, subjects who referred the tactile sensation to the rubber hand also reported experiencing the rubber hand as belonging to themselves. Indeed, eight out of ten subjects employed terms of ownership in their free descriptions (Botvinick and Cohen (1998)).

The rubber hand illusion presents a very intriguing case. It shows that certain forms of inter-modal correlation may be sufficient for self-attribution, even in the face of contradictory signals from other sensory modalities.


Botvinick M, & Cohen J (1998). Rubber hands 'feel' touch that eyes see. Nature, 391 (6669) PMID: 9486643

Tsakiris M, & Haggard P (2005). The rubber hand illusion revisited: visuotactile integration and self-attribution. Journal of experimental psychology. Human perception and performance, 31 (1), 80-91 PMID: 15709864

Wednesday, 19 August 2009

Role of polycomb and trithorax in developmental regulation

During development, patterns of differential gene expression, which define the determined state of cells, need to be maintained over many cell generations. A set of genes called the Polycomb and trithorax group genes (PcG and trxG, respectively) have been identified in Drosophila that seem to exert such a memory function. In Drosophila, the earliest development is controlled by the expression of maternal genes, while final segment specificity is governed by the master regulator homeobox (Hox) genes. The Hox genes encode transcription factors which regulate many downstream genes, and these factors have to be expressed in appropriate patterns during development. The expression of the Hox genes is regulated initially by activators and repressors encoded by the gap and pair-rule genes. These factors, however, have a very short half-life and hence decay once Hox gene expression is initialized. It is during this period that the PcG and trxG proteins recognize the transcriptionally repressed and active states (respectively) of the Hox genes and maintain these states throughout development.
The Polycomb and trithorax group genes therefore seem to act antagonistically. Polycomb group genes are involved in chromatin-based gene silencing, while trithorax group genes counteract this silencing to maintain gene activity. Since they maintain Hox gene expression states throughout development, they may serve as molecular memory systems central to the developmental process.
The exact mechanism by which these genes act is still not clear. However, a pathway has been proposed in which the proteins are recruited to their target sites on the DNA, called Polycomb and Trithorax Response Elements (PREs and TREs). Once at these sites, they regulate transcription by modulating chromatin structure, in particular via post-translational modification of histones and by regulating the three-dimensional organization of the PREs and TREs.
Various questions, such as how the PcG proteins initially recognize the repressed state of their target (Hox) genes, remain unanswered. How these proteins maintain the silenced state of chromatin and transmit it through many cycles of cell division is intriguing too.

In the next post we shall look at the post translational histone modifications that happen once these proteins reach their target sites on the DNA.

Sunday, 16 August 2009

The Art of Evolving

Irreducible complexity is an argument that was first put forward by Michael Behe, a biochemist and proponent of intelligent design. Behe himself defines an irreducibly complex system as a collection of well-matched, interacting parts that together contribute to the functioning of the system, such that the removal of any one part causes the system to cease functioning. Most proponents of intelligent design and creationism like to argue in favor of irreducible complexity using the eye as an example. Even on the surface, the argument appears ridiculous and ignorant. Consider an individual suffering from a cataract. The crystalline lens is affected to varying degrees (from slight to complete opacity), and vision too is affected to varying degrees. A slight cataract may prevent the individual from seeing as clearly as someone with no cataract, but he or she can still see.

So what does evolution have to say? Well, having an eye is not a jackpot; it isn't a case of black and white. There are shades of gray. Most evolutionary biologists argue that having something is better than having nothing: having 10% of a complete eye is better than having no eye, and having 51% of an eye is better than having 49% of an eye. So how could the eye have evolved? Obviously the earliest ancestors had no eyes (in the sense that we understand an eye), but they may have had a single layer of photosensitive cells (see Fig. 1a). The only thing such a layer of cells would have been able to do is distinguish light from dark. Not much, but it could still have assisted the animal in detecting whether there were predators in the neighborhood. Euglena, for example, has flat patches of photoreceptors.

The next stage in evolution would have led to an indentation forming in the photoreceptor layer, which gradually deepened to give rise to a cup-shaped structure (see Fig. 1b). This was better: the animal could now distinguish not only light from dark, but also the direction from which the light was coming, since a shadow is formed. Cup-shaped light-sensitive spots are seen in Planaria. As evolution proceeded, the cup got deeper and deeper, and with even the slightest increase in depth the animal was able to make out, with ever-increasing accuracy, the direction of light. Later, the cup may have started to close at the open end, ultimately forming a roughly spherical structure with a hole at one end (see Fig. 1c). This structure would have been equivalent to a pinhole camera. One animal that does have a pinhole camera for an eye is Nautilus, a sea mollusk. Now, the resolution of a pinhole camera depends on the size of the aperture: the smaller the aperture, the sharper the image. But if the aperture is small, less light passes in and the image is dim. Hence, in a pinhole camera, one quality is always traded off against the other.

In any case, Nautilus has a pinhole camera whose aperture is open: sea water can constantly flow in and out. So, as the process continued, the aperture would have been covered with a thin layer of transparent material (which at first wouldn't have contributed much beyond protecting the eye), which may have been the earliest form of the cornea (see Fig. 1d). The cornea, as we know it today, has a refractive power of about 43 diopters and contributes to focussing light (along with the lens). Gradually, as evolution went on, the cornea may have gone from being just a sheet of transparent protective material to gaining refractive properties, allowing sharper images to be formed (even with a large aperture). Although the cornea contributes the most to the focussing power of the eye, its focus is fixed. What "tunes" the focus in response to objects at different distances is the curvature of the lens. And thus was born the lens (see Fig. 1e)!

We see how evolution beautifully explains the gradual formation of a complex structure, and puts to rest two arguments: that of irreducible complexity and that of the Watchmaker.

Unconscious Cognition 2-Going Beyond Zero Awareness

In the first part, "Unconscious Cognition 1 - Simple Dissociation", we established two sets of assumptions for the zero-awareness criterion. Under the exhaustiveness assumption, the direct measure D is a strictly monotonic function of conscious information c and a weakly monotonic function of unconscious information u, while the indirect measure I is a weakly monotonic function of both c and u. Under these conditions, D(c,u) = 0 necessitates c = 0, and I(c,u) > 0 then implies I(0,u) > 0, which in turn implies u > 0. In such a case, I is a perfect measure of unconscious processing. The second assumption was the exclusiveness assumption, under which I is a weakly monotonic function of u alone and is unaffected by c; the exclusiveness assumption thus abolishes the need for a direct measure altogether.

An interesting way to circumvent these assumptions is to let awareness vary over experimental conditions. It may then be possible to establish a double dissociation, which consists of finding an experimental manipulation that changes D and I in opposite directions (see Fig. 2a). In particular, any pair of experimental conditions that leads to opposite orderings of data points in the direct and indirect measures gives evidence for a double dissociation. One example could be a priming experiment with two (or more) masking conditions, where the priming effect (indirect measure) increases while prime identification performance (direct measure) decreases over experimental conditions. Two measures of information going in opposite directions cannot be monotonically driven by a single information source (see Schmidt and Vorberg (2006) for a formal proof).

Double dissociations have surprising features. First, they require D to be non-constant: to obtain a double dissociation, awareness must vary over experimental conditions, so that there is non-zero awareness of the prime under at least some conditions. Second, the assumptions are less stringent. The only assumption made here is that both D and I are weakly monotonic in c (see Fig. 2b). One can even drop the weak monotonicity assumption on u, allowing c and u to produce arbitrary interactive effects on D and I (for instance, c and u could be mutually inhibitory).
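The logic of a double dissociation can be made concrete with a toy model. In the sketch below, D and I are hypothetical weakly monotonic functions of c and u (the functions and numbers are invented, not taken from Schmidt's paper); across two invented masking conditions, awareness falls while unconscious information rises, producing opposite orderings of the two measures.

```python
# Toy model of a double dissociation. D and I are hypothetical weakly
# monotonic functions of conscious (c) and unconscious (u) information.

def D(c, u):
    return 2.0 * c            # direct measure: driven by c, ignores u

def I(c, u):
    return 0.5 * c + 1.0 * u  # indirect measure: weakly monotonic in both

# Invented (c, u) levels per condition: awareness drops from condition A
# to condition B while unconscious information rises.
cond_a = (0.8, 0.2)   # e.g. weak masking: prime fairly visible
cond_b = (0.3, 0.9)   # e.g. strong masking: prime barely visible

# Opposite orderings of D and I across conditions: this pattern cannot be
# produced by a single information source driving both measures
# monotonically, so at least two sources must be at work.
assert D(*cond_a) > D(*cond_b) and I(*cond_a) < I(*cond_b)
```

The assertion holds because D tracks the falling c while I is dominated by the rising u, which is exactly the signature a double dissociation looks for.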

An example of Simple Dissociation

There are numerous experiments demonstrating simple dissociation. In one such experiment (Vorberg, Mattler, Heinecke, Schmidt and Schwarzbach (2003, 2004)), participants were asked to make speeded keypress responses to the direction of an arrow-shaped masking stimulus that was preceded by an arrow-shaped prime. The mask had a dual purpose: it acted as the target of the response and, at the same time, reduced the visibility of the prime by metacontrast masking (a form of visual backward masking). As the stimulus onset asynchrony (SOA) between prime and mask increased, the priming effect (indirect measure) also increased, with primes pointing in the same direction as the mask shortening the response times and primes pointing in the opposite direction lengthening them. Strikingly, this priming effect was independent of visual awareness of the prime. This was determined by using stimulus conditions that produced different time courses of metacontrast masking: participants were unable to perform better than chance when asked to report the direction of the prime.

An example of Double Dissociation

In a second experiment by the same group, all four pairings of short-duration (14 ms) and long-duration (42 ms) primes and masks were compared, yielding very different kinds of masking functions. When 14 ms primes were combined with 42 ms masks (14:42), prime identification performance was low and increased only slightly with SOA. When mask duration was reduced to 14 ms (14:14), performance was better, and when a 42 ms prime was paired with a 14 ms mask (42:14), performance was nearly perfect. The 14:14 condition, however, yielded an effect called type-B masking, in which prime identification performance markedly decreases with prime-mask SOA and then increases again, while the priming effect increases monotonically all throughout, producing a strong double dissociation.


Schmidt, T. (2007). Measuring unconscious cognition: Beyond the zero-awareness criterion Advances in Cognitive Psychology, 3 (1), 275-287 DOI: 10.2478/v10053-008-0030-3

Saturday, 15 August 2009

Unconscious Cognition 1-Simple Dissociation

Attempts to demonstrate unconscious processing of visual stimuli are very old and riddled with controversy. The controversy does not so much concern the existence of unconscious processing (most researchers seem convinced of this), but rather the question of how to demonstrate unconscious processing in a given experiment. To demonstrate it, one has to make sure that a critical stimulus was completely outside of awareness (the so-called zero-awareness criterion). Schmidt (2007) proposes two lines of attack for establishing unconscious processing beyond the zero-awareness criterion. The paper deals with different types of dissociation between measures of awareness and measures of processing.

Simple Dissociations

To demonstrate that a critical stimulus was processed unconsciously, one usually produces some dissociation between different behavioral measures of performance. This is done by comparing two measures obtained from different tasks. One measure (called the direct measure, D) signals the observer's awareness of a critical stimulus. For example, consider a visual perception experiment where the task is to detect the offset of a vernier (a vernier is simply a set of two vertical lines, one below the other, where the lower line can be offset either to the right or to the left of the upper line). Before the vernier is shown (say for 25 ms), a prime is flashed for a short period of time (say 15 ms). This prime could be, for example, an arrow pointing to the right or left. A forced-choice prime discrimination task ("Was the arrow pointing to the right or to the left?") would measure whether the observer was aware of the stimulus, and hence would comprise the direct measure. The second measure (called the indirect measure, I) would indicate that the primes, even when not consciously detected, are involved in a priming effect, and hence affect the reaction times of responding to whether the target vernier is offset to the right or left. For example, if the subject cannot consciously report the direction of the prime, but reaction times in congruent cases (the arrow points to the right and the vernier is also offset to the right) are consistently shorter than in incongruent cases (the prime points to the right but the vernier is offset to the left), then this priming effect would comprise the indirect measure.
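As a concrete illustration of how the two measures are quantified, the toy sketch below computes an indirect measure (a priming effect in ms) and a direct measure (prime discrimination accuracy) from invented trial data; all numbers are hypothetical.

```python
# Toy computation of the two measures in a priming experiment.
# All trial data are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

# Indirect measure I: priming effect = mean incongruent RT - mean congruent RT
rt_congruent_ms = [412, 398, 405, 420, 401]    # prime and vernier offset agree
rt_incongruent_ms = [448, 455, 440, 452, 461]  # prime and vernier offset disagree
priming_effect = mean(rt_incongruent_ms) - mean(rt_congruent_ms)

# Direct measure D: accuracy in the forced-choice prime discrimination task
# (1 = correct report of the prime's direction, 0 = incorrect).
prime_reports = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
direct_accuracy = mean(prime_reports)

print(f"priming effect: {priming_effect:.1f} ms")              # I > 0
print(f"prime discrimination accuracy: {direct_accuracy:.2f}") # chance = 0.5
```

With these invented numbers the indirect measure is positive (44.0 ms) while the direct measure sits exactly at chance (0.50), the pattern a simple dissociation looks for.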

Schmidt and Vorberg (2006) examined the assumptions required by the zero-awareness criterion and other approaches. They start by assuming that the direct and indirect measures may depend on two sources of information, labeled conscious (c) and unconscious (u). In other words, D = D(c,u) and I = I(c,u), where information is non-negative. The dependency is weakly monotonic, which means that if either type of information increases, the measures can only increase or remain constant. An additional constraint is that if D and I are to be modeled as functions of c and u, then both must be functions of the same underlying conscious and unconscious information. Thus the direct and indirect tasks must be designed to use identical stimuli, identical responses, and identical stimulus-response mappings. Schmidt gives an example of this mismatch in the study by Dehaene et al. (1998), where the indirect task was to determine as quickly as possible whether a target digit was numerically larger or smaller than 5. The target was preceded by a masked prime digit. Response times were much shorter when the prime was consistent with the target (both less than 5 or both greater than 5) than when the two were inconsistent (one larger than 5 and the other smaller). The optimal direct task in this experiment would have been to ask the subject "Was the prime greater than or smaller than 5?", because then both measures would have tapped into the same sources of information (conscious or unconscious). Instead, Dehaene et al. chose two different direct tasks: one where the subject was asked to detect primes against an empty background, and another where the subject was asked to discriminate primes from random strings of letters, neither of which addressed the critical question of whether the prime was larger or smaller than 5.

Given that D-I mismatch can be avoided, how can the null model of purely conscious processing be disproved? The traditional way is the zero-awareness criterion, which produces what is called a simple dissociation of direct and indirect measures (zero D in the presence of non-zero I; see Fig. 1a). In other words, I(c,u) > 0 implies that c > 0, or u > 0, or both. But does D(c,u) = 0 imply c = 0, and hence that I(c,u) > 0 only because u > 0? No, because D is only a weakly monotonic function of c: c may well have increased while D was simply not sensitive enough to pick up the change. To resolve this problem, one must make the stronger exhaustiveness assumption (see Fig. 1b) that D is a strictly monotonic function of c, which means that D detects any change in c, no matter how small. Under this assumption, D(c,u) = 0 implies c = 0, irrespective of whether u is zero, since D is only weakly monotonic in u anyway. And now, if I(c,u) > 0, then I(0,u) > 0, which finally implies u > 0.
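The gap between weak and strict monotonicity can be illustrated with toy functional forms. These particular functions are my own invented examples, not ones from the paper — the point is only that a weakly monotonic D can have a "blind spot" for small amounts of conscious information:

```python
# Toy illustration of why weak monotonicity of D in c is not enough.
# D_weak has a detection threshold: it is weakly monotonic in c, but
# cannot register conscious information below that threshold.

def D_weak(c, u):
    return max(0.0, c - 0.2)   # insensitive whenever c < 0.2

def D_strict(c, u):
    return c                   # strictly monotonic in c: any c > 0 shows up

def I(c, u):
    return c + u               # indirect measure picks up both sources

# Purely conscious processing (u = 0) that D_weak nevertheless misses:
c, u = 0.1, 0.0
print(D_weak(c, u), I(c, u))    # D = 0 yet I > 0: a spurious "simple dissociation"
print(D_strict(c, u), I(c, u))  # D > 0: exhaustiveness removes the false positive
```

Under the weakly monotonic D, the pattern D = 0 with I > 0 appears even though u = 0, which is exactly the objection to the zero-awareness criterion; the strictly monotonic D blocks this counterexample.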

The exhaustiveness assumption is important. In its absence, one can always argue that only conscious processing c is occurring, and that while I is sensitive enough to detect it (I > 0), D is not (D = 0). Finally, there is an alternative set of assumptions that abolishes the need for a direct measure altogether: the indirect measure may be assumed to be a monotonic function of u alone, unaffected by c. In that case, I(c,u) = I(u) > 0 implies u > 0 directly (see Fig. 1c).
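This last alternative is the simplest case of all. Again as an invented toy form (not from the paper), an indirect measure that depends only on u makes the inference immediate:

```python
# If the indirect measure can be assumed to depend on u alone,
# a positive I implies unconscious processing, with no direct task needed.

def I_exclusive(c, u):
    return u   # assumption: I is entirely unaffected by conscious information c

print(I_exclusive(0.7, 0.0))  # purely conscious processing leaves I at zero
print(I_exclusive(0.0, 0.3))  # so any I > 0 can only have come from u
```

The burden simply shifts from designing a sensitive direct measure to justifying the assumption that c makes no contribution to I at all.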

In the next post, we shall look at double dissociations and beyond.

Schmidt, T. (2007). Measuring unconscious cognition: Beyond the zero-awareness criterion. Advances in Cognitive Psychology, 3(1), 275-287. DOI: 10.2478/v10053-008-0030-3