Traditional meta-analysis aggregates the results of several studies, weighting each by its precision (the inverse of its sampling variance); when there is between-study heterogeneity, a random-effects (RE) model adds the estimated between-study variance to each study's sampling variance before weighting.
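The weighting scheme above can be sketched as follows, using the DerSimonian-Laird estimator of the between-study variance as one common RE choice (the function name and the example numbers are hypothetical, not from any particular library):

```python
# Sketch of a DerSimonian-Laird random-effects meta-analysis.

def random_effects(effects, variances):
    """Pool study effects, weighting each by 1/(v_i + tau^2)."""
    k = len(effects)
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    mu_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q measures between-study heterogeneity
    q = sum(wi * (yi - mu_fe) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                # between-study variance
    # tau^2 is added to EVERY study's variance: a uniform penalty
    w_re = [1.0 / (v + tau2) for v in variances]
    mu_re = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return mu_re, tau2

# Three hypothetical studies: effect estimates and sampling variances
mu, tau2 = random_effects([0.30, 0.10, 0.50], [0.01, 0.02, 0.04])
```

Note that `tau2` enters every study's weight identically, which is exactly the uniform penalty discussed next.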
However, each study may also be a biased estimate of the outcome of interest. RE meta-analysis treats all studies alike in this respect: the greater the heterogeneity among studies, the more each study is uniformly penalized. My interest, and recent statistical developments, point instead toward estimating the bias of each study and using that estimate in two ways: 1) weighting each study by its accuracy (its total expected error, not just its sampling variance), and 2) correcting its estimate for the bias.
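As a minimal sketch of these two ideas, suppose we have a (subjective or empirical) bias estimate b_i for each study. A study's expected squared error is variance + bias², so one natural "accuracy weight" is the inverse of that quantity; the estimate itself can also be shifted by the bias. The function and the per-study bias values below are hypothetical illustrations, not an established method:

```python
# Sketch of bias-aware pooling: debias each estimate, and weight by
# accuracy = 1 / (variance + bias^2) instead of 1 / variance.

def bias_adjusted_pool(effects, variances, biases):
    """Pool bias-corrected effects, weighting each by its accuracy."""
    corrected = [y - b for y, b in zip(effects, biases)]         # idea 2
    w = [1.0 / (v + b ** 2) for v, b in zip(variances, biases)]  # idea 1
    return sum(wi * yi for wi, yi in zip(w, corrected)) / sum(w)

# Same hypothetical studies; suppose we judge the third to be biased upward
est = bias_adjusted_pool([0.30, 0.10, 0.50], [0.01, 0.02, 0.04],
                         [0.0, 0.0, 0.15])
```

Keeping b_i in the weight even after subtracting it reflects residual doubt about the bias estimate itself; a fully trusted bias estimate would instead leave the weight at 1/v_i after correction.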
Now that this framework is clear, the question becomes: how do we estimate bias and correct for it? I am particularly interested in people's subjective judgments of bias. Can they tell whether a given experiment or piece of evidence will be biased, and if so, how do they integrate this information into their judgments and decisions? There are also empirical methods for estimating bias, such as quality scores (e.g., for proper randomization) and meta-epidemiology.