This isn't limited to psychology, but the paper was written about psychology studies, so that's what goes in the title. The same problems can apply to almost any study dealing with people.

Razib Khan at Discover highlights a Psychological Science paper(1) by Simmons and Simonsohn of Penn and Nelson of Berkeley that outlines ways to bring more rigor to studies. They write:

In this article, we accomplish two things. First, we show that despite empirical psychologists' nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.
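The "unacceptably easy" part is simple to check for yourself. Below is a minimal Monte Carlo sketch in Python of one researcher degree of freedom the paper discusses: measuring two correlated outcome variables and reporting whichever test, including the test on their average, happens to come out significant. The code is an illustration of the idea, not the authors' actual simulation; the sample size and correlation are arbitrary choices.

```python
# Sketch: two groups, two outcome variables correlated at r = 0.5,
# and NO true effect anywhere. A researcher who tests outcome 1,
# outcome 2, and their average, then reports the "best" result,
# gets a false-positive rate well above the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_group, alpha = 10_000, 20, 0.05
cov = [[1.0, 0.5], [0.5, 1.0]]  # correlation between the two outcomes
false_positives = 0

for _ in range(n_sims):
    a = rng.multivariate_normal([0, 0], cov, size=n_per_group)
    b = rng.multivariate_normal([0, 0], cov, size=n_per_group)
    p1 = stats.ttest_ind(a[:, 0], b[:, 0]).pvalue              # outcome 1
    p2 = stats.ttest_ind(a[:, 1], b[:, 1]).pvalue              # outcome 2
    p3 = stats.ttest_ind(a.mean(axis=1), b.mean(axis=1)).pvalue  # average
    if min(p1, p2, p3) < alpha:  # report whichever test "worked"
        false_positives += 1

print(f"False-positive rate: {false_positives / n_sims:.3f}")
```

Each individual test is perfectly valid at the .05 level; it is the undisclosed freedom to pick among them after the fact that inflates the error rate.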

Khan discusses the recommendations, which seem obvious, but they must not be, or they wouldn't merit an article. Among them: decide the rule for terminating data collection before data collection begins, and report that rule in the article; list all variables collected; and, if an analysis includes a covariate, report the statistical results of the analysis without the covariate as well. There are more, so give him a read.
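The first recommendation, the pre-set stopping rule, is worth a sketch of its own. The simulation below (again illustrative only; the batch size and ceiling are arbitrary choices, not taken from the paper) peeks at the p-value after every ten observations and stops as soon as it dips below .05, which is exactly the "collect more data if it isn't significant yet" habit the rule is meant to prevent.

```python
# Sketch of optional stopping: add 10 subjects per group at a time,
# test after each batch, and stop at the first p < .05 or at n = 100.
# There is no true group difference, yet repeated peeking pushes the
# false-positive rate far above 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, batch, n_max, alpha = 10_000, 10, 100, 0.05
false_positives = 0

for _ in range(n_sims):
    a, b = [], []
    while len(a) < n_max:
        a.extend(rng.standard_normal(batch))  # null: same distribution
        b.extend(rng.standard_normal(batch))
        if stats.ttest_ind(a, b).pvalue < alpha:  # peek after each batch
            false_positives += 1
            break

print(f"False-positive rate with optional stopping: {false_positives / n_sims:.3f}")
```

Fixing the stopping rule in advance, and disclosing it, removes this particular path to a spurious result.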

The problem of false positives - Razib Khan, Discover

NOTE:

(1) Citation: Simmons JP, Nelson LD, Simonsohn U. 'False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant.' Psychol Sci. 2011;22(11):1359-1366. First published October 17, 2011. doi:10.1177/0956797611417632