So, you’ve conducted a systematic review of the evidence on your topic, and you’re not sure whether you can—or should—combine the data quantitatively. It’s a common question, and there’s no “right” answer. However, the following considerations can help guide you toward the best decision.

How many studies?
One of the most frequently asked questions in this field is, "How many studies do I need to perform a meta-analysis?" Unfortunately, there is no straightforward answer, and it's often more a question of how many participants or events are included than how many studies. Technically, a meta-analysis requires only two studies, but such a small meta-analysis would be fraught with potential problems, some of them very serious: with so few studies, between-study heterogeneity cannot be estimated reliably, and a single outlying study can dominate the pooled result.

Was your review truly systematic?
Sure, you can perform a meta-analysis using whatever studies you happened to come across, but you will be at a very high risk of biased results! Appropriate meta-analysis requires a thorough, rigorous search for data—both published and unpublished—and careful review and data abstraction. If you haven’t searched multiple databases and the gray literature, you should not attempt to summarize the body of evidence for a topic.

Effect measures and types of data
Before you go to the grocery store, you probably think about what dishes you plan to make, and you make your selections accordingly. For example, if you're going to buy apples, the variety you choose may depend on whether you're baking a pie or putting them in a fruit salad. Before you undertake a meta-analysis, it is similarly important to decide on the most appropriate effect size to report. For dichotomous data, it will usually be a ratio, but which kind: an odds ratio, a relative risk, or (for time-to-event data) a hazard ratio? For continuous data, can you compute a weighted mean difference with your available data, or do you need to standardize the mean difference because your included studies report different (but similar) measures of the same construct?
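To make the distinction concrete, here is a minimal sketch in plain Python using hypothetical numbers. It illustrates how the common effect measures are computed from a single study's results, not a full meta-analysis:

```python
import math

# Hypothetical 2x2 table: events and totals in each arm.
events_t, n_t = 15, 100   # treatment arm
events_c, n_c = 30, 100   # control arm

risk_t = events_t / n_t                  # 0.15
risk_c = events_c / n_c                  # 0.30

relative_risk = risk_t / risk_c          # ratio of risks: 0.50
odds_ratio = (events_t / (n_t - events_t)) / (events_c / (n_c - events_c))
# Ratio of odds: ~0.41. The OR diverges from the RR when events are
# common, which is why the choice of measure matters before pooling.

# Standardized mean difference (Cohen's d) for continuous outcomes
# reported on different scales across studies. Hypothetical means/SDs:
m_t, sd_t = 52.0, 10.0
m_c, sd_c = 48.0, 12.0
pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                      / (n_t + n_c - 2))
smd = (m_t - m_c) / pooled_sd            # ~0.36 standard deviations

print(f"RR = {relative_risk:.2f}, OR = {odds_ratio:.2f}, SMD = {smd:.2f}")
```

Note that the two dichotomous measures disagree even on the same data; deciding which one to pool is part of the "recipe" you should settle before shopping for studies.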

Study designs
In the supermarket, you cannot put both apples and bananas into a single bag to weigh and price, because apples and bananas are fundamentally different. Similarly, different study designs have very different qualities (think about the many differences between a case-control study and a randomized controlled trial!), and it is often unwise to pool different study designs in a single meta-analysis. Some designs can be pooled with little to no concern (combining prospective and retrospective cohort studies is often fine), but it is rarely acceptable to pool randomized controlled trials with observational studies.

Heterogeneity
Even when you are buying only apples, there are varieties that differ in price, taste, firmness, and other qualities. In a meta-analysis, even when your included studies share the same design and report the same effect measures, there are always underlying differences in the study populations (some studies might include only women while others include both sexes), conduct (co-interventions may vary greatly across studies of the same primary intervention), and reporting. Together, those differences are called "heterogeneity," and too much of it can yield misleading results. Standard statistics, most commonly Cochran's Q and the I² index, can identify the presence and extent of heterogeneity in a meta-analysis (see the sketch below). Ways to explore and address heterogeneity include sensitivity analyses and meta-regression.
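Here is a minimal sketch, in plain Python with hypothetical effect estimates and standard errors, of how Q and I² are computed from fixed-effect inverse-variance weights:

```python
# Hypothetical per-study effect estimates (e.g., log odds ratios)
# and their standard errors:
effects = [0.10, 0.35, -0.05, 0.42, 0.20]
std_errors = [0.12, 0.15, 0.20, 0.10, 0.18]

weights = [1 / se**2 for se in std_errors]        # inverse-variance weights
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)

q = sum(w * (y - pooled)**2 for w, y in zip(weights, effects))  # Cochran's Q
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100
# I^2: rough share of the total variability attributable to
# between-study differences rather than chance.

print(f"pooled = {pooled:.3f}, Q = {q:.2f} on {df} df, I^2 = {i_squared:.0f}%")
```

By common rules of thumb, I² values around 50% or more are often read as substantial heterogeneity, though any fixed threshold is a simplification and the statistic should always be interpreted alongside the studies themselves.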

Risk of bias
Just as you wouldn't put a damaged apple into your shopping bag with the high-quality ones, including studies at high risk of bias in a meta-analysis is not advised, as the results can be very misleading. At the very least, studies at high risk of bias should be separated from those at lower risk in a sensitivity analysis or meta-regression, as sketched below.
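A minimal sketch of that kind of sensitivity analysis, continuing the plain-Python style above with hypothetical data: pool all studies, then re-pool only the studies at low risk of bias and compare the two estimates.

```python
# Hypothetical (effect estimate, standard error, risk-of-bias rating):
studies = [
    (0.10, 0.12, "low"),
    (0.35, 0.15, "high"),
    (-0.05, 0.20, "low"),
    (0.42, 0.10, "high"),
    (0.20, 0.18, "low"),
]

def pooled_estimate(rows):
    """Fixed-effect inverse-variance pooled estimate."""
    weights = [1 / se**2 for _, se, _ in rows]
    return sum(w * y for w, (y, _, _) in zip(weights, rows)) / sum(weights)

overall = pooled_estimate(studies)
low_risk = pooled_estimate([s for s in studies if s[2] == "low"])
print(f"all studies: {overall:.3f}; low-risk only: {low_risk:.3f}")
# A large shift between the two estimates suggests the pooled result
# is being driven by the studies at high risk of bias.
```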

Making decisions at the start of your project (your "recipe") will help you determine whether meta-analysis is appropriate, and the considerations above will help ensure you end up with a high-quality result.