Excellent piece from Daniel Drezner. Below are his recommendations. Read the full discussion here.
1) If you can't read the abstract, don't bother with the paper. Most smart people, including academics, don't like to admit when they don't understand something they've read. This provides an opening for those who purposefully write obscurantist or jargon-filled papers. If you're befuddled after reading the abstract, don't bother with the paper -- a poorly worded abstract is the first sign of bad writing. And bad academic writing is commonly linked to bad analytic reasoning.
2) It's not the publication, it's the citation count. If you're trying to determine the relative importance of a paper, enter it into Google Scholar and check out the citation count. The more a paper is cited, the greater its weight among those in the know. Now, this doesn't always hold -- sometimes a paper is cited along the lines of, "My findings clearly demonstrate that Drezner's (2007) argument was, like, total horses**t." Still, for papers that are more than a few years old, the citation count is a useful metric (a scripted version of this lookup is sketched after this list).
3) Yes, peer review is better. Nothing Megan McArdle wrote is incorrect. That said, peer review does perform some useful vetting functions, so the reader doesn't have to do all of that quality control alone. If nothing else, it's a useful signal that the author thought the paper could pass muster with critical colleagues. Now, there are times when a researcher will bypass peer review to get something published sooner. Even then, in international relations, scholars who publish in non-refereed journals usually have a version of the paper intended for peer review.
4) Do you see a strawman? It's a causally complex world out there. Any researcher who doesn't test an argument against viable alternatives isn't really interested in whether he's right or not -- he just wants to back up his gut instincts. A "strawman" is the most extreme caricature of the opposing argument, presented as if it were the viable alternative. If the rival arguments sound absurd when you read about them in the paper, it's probably because the author has no interest in presenting the sane version of them -- which means you can ignore the paper.
5) Are the author's conclusions the only possible conclusions to draw? Sometimes a paper can rest on solid theory and evidence, but then jump to policy conclusions that seem a bit of a stretch (click here for one example). If you can reason your way to different policy conclusions from the same theory and data, then don't take the author's conclusions at face value. To use some jargon, sometimes a paper's positive conclusions are sound, even if the normative conclusions derived from them are a bit wobbly.
6) Can you falsify the author's argument? Conduct this exercise when you're done reading a research paper -- can you picture the findings that would force the author to say, "You know what, I can't explain this away -- it turns out my hypothesis was wrong"? If you can't picture that, then you can discard what you're reading as a piece of agitprop rather than a piece of research.
7) Fraudulent papers will still slip through the cracks. Trust is a public good that permeates all scholarship and reportage. Peer reviewers assume that the author is not making up the data or plagiarizing someone else's ideas; we assume this because if we didn't, peer review would be virtually impossible. Every once in a while, an unethical author or reporter will exploit that trust and publish something that's a load of crap. The good news on this front is that the people who do this can't stop themselves from doing it on a regular basis, and eventually they make a mistake and get caught. So the previous rules of thumb won't always work. The publishing system is imperfect -- but "imperfect" does not mean the same thing as "fatally flawed."
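A note on the citation-count check in point 2: the lookup can be scripted. Below is a minimal sketch, assuming the third-party Python package scholarly (which scrapes Google Scholar, since Scholar has no official API) and its documented search_pubs call and num_citations field; the title used is only an example query, and repeated automated requests may be rate-limited by Google.

```python
# Minimal sketch: fetch a paper's Google Scholar citation count.
# Assumes the third-party "scholarly" package (pip install scholarly),
# which scrapes Google Scholar; heavy automated use may be rate-limited.
from scholarly import scholarly


def citation_count(title: str) -> int:
    """Return the citation count of the best Google Scholar match for `title`."""
    results = scholarly.search_pubs(title)  # generator of matching publications
    best_match = next(results)              # first result is the best match
    return best_match["num_citations"]      # count as reported by Google Scholar


if __name__ == "__main__":
    # Example query only -- substitute the paper you're evaluating.
    print(citation_count("All Politics Is Global: Explaining International Regulatory Regimes"))
```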