MetaReview

How to Do a Meta-Analysis: Complete Step-by-Step Tutorial

A practical 9-step guide to conducting a meta-analysis, from defining your research question to publishing your results. No coding required.

What Is a Meta-Analysis?

A meta-analysis is a statistical method that combines the quantitative results of multiple independent studies addressing the same research question into a single pooled estimate. It sits at the top of the evidence hierarchy in evidence-based medicine, above individual randomized controlled trials (RCTs), cohort studies, and expert opinions.

Meta-analysis is not the same as a systematic review. A systematic review is the broader research process of identifying, evaluating, and synthesizing all relevant evidence on a topic. A meta-analysis is the statistical technique used within a systematic review to quantitatively combine results. You can have a systematic review without a meta-analysis (a narrative synthesis), but you should never perform a meta-analysis without a rigorous systematic review underpinning it.

When Should You Do a Meta-Analysis?

When NOT to do a meta-analysis: If the included studies measure fundamentally different constructs (comparing apples to oranges), use different outcome definitions that cannot be reconciled, or show extreme heterogeneity (I² > 90%) with no identifiable explanation, a narrative synthesis is more appropriate than forcing a numerical pooling.

The Evidence Hierarchy

| Level | Evidence Type | Strength |
|---|---|---|
| 1 | Systematic reviews and meta-analyses of RCTs | Highest |
| 2 | Individual randomized controlled trials | High |
| 3 | Cohort studies | Moderate |
| 4 | Case-control studies | Moderate-Low |
| 5 | Case series / Case reports | Low |
| 6 | Expert opinion / Editorials | Lowest |

Key takeaway: A well-conducted meta-analysis provides the highest level of evidence because it synthesizes all available data, increases statistical power, improves precision of effect estimates, and can resolve conflicting findings across individual studies.

Table of Contents: 9 Steps to a Complete Meta-Analysis (Plus Resources)

  1. Define Your Research Question (PICO Framework)
  2. Develop a Literature Search Strategy
  3. Study Selection and Screening
  4. Data Extraction
  5. Choose the Right Effect Size
  6. Statistical Analysis (Models, Heterogeneity, Forest Plots)
  7. Assess Publication Bias
  8. Sensitivity Analysis
  9. Report Results Following PRISMA 2020
  10. Free Tools for Meta-Analysis
  11. Common Mistakes to Avoid
  12. Frequently Asked Questions
Step 1: Define Your Research Question (PICO Framework)

Every meta-analysis begins with a clearly formulated research question. The PICO framework is the gold standard for structuring clinical questions:

| Element | Meaning | Example |
|---|---|---|
| P (Population) | Who are the patients or participants? | Adults with type 2 diabetes mellitus |
| I (Intervention) | What treatment or exposure is being studied? | GLP-1 receptor agonists (semaglutide, liraglutide) |
| C (Comparator) | What is the control or alternative? | Placebo or standard care |
| O (Outcome) | What outcome are you measuring? | Change in HbA1c, body weight, adverse events |

Example PICO question: "In adults with type 2 diabetes (P), does treatment with GLP-1 receptor agonists (I) compared to placebo (C) lead to greater reduction in HbA1c (O)?"

Setting Inclusion and Exclusion Criteria

Your PICO question directly determines your eligibility criteria. Before searching the literature, specify precisely which study designs, populations, interventions, comparators, outcomes, publication years, and languages you will accept.

Register Your Protocol

Before starting your search, register your protocol on PROSPERO (International Prospective Register of Systematic Reviews). Registration demonstrates that your review was planned before results were known, reducing the risk of outcome reporting bias. PROSPERO registration is free and increasingly required by journals.

Important: PROSPERO registration must be completed before data extraction begins. If you register after analysis, it provides no protection against reporting bias, and reviewers will note this.
Step 2: Develop a Literature Search Strategy

A comprehensive, reproducible search strategy is the backbone of any meta-analysis. The goal is high sensitivity (recall) -- it is better to retrieve too many irrelevant articles than to miss relevant ones.

Which Databases to Search

At minimum, search these three databases:

  - PubMed/MEDLINE
  - Embase
  - Cochrane Library (CENTRAL)

Depending on your topic, also consider: Web of Science, Scopus, PsycINFO (psychology), CINAHL (nursing), ClinicalTrials.gov (unpublished trial data), and conference proceedings.

Building Your Search String

Translate each PICO element into search terms. Combine synonyms with OR and PICO elements with AND:

Example PubMed search:
("diabetes mellitus, type 2"[MeSH] OR "type 2 diabetes" OR "T2DM") AND ("GLP-1 receptor agonists"[MeSH] OR "glucagon-like peptide-1" OR "semaglutide" OR "liraglutide" OR "dulaglutide") AND ("randomized controlled trial"[pt] OR "controlled clinical trial"[pt])
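A query like this is just synonyms joined with OR inside each PICO block, and blocks joined with AND. As a sketch (with abbreviated, illustrative term lists), the assembly can be automated so every database gets a structurally identical string:

```python
# Sketch: assembling a boolean query from PICO term lists.
# Synonyms are ORed within a block; blocks are ANDed together.
# The term lists below are abbreviated for illustration.

def build_query(term_blocks):
    groups = ["(" + " OR ".join(terms) + ")" for terms in term_blocks]
    return " AND ".join(groups)

population = ['"type 2 diabetes"', '"T2DM"']
intervention = ['"semaglutide"', '"liraglutide"']
design = ['"randomized controlled trial"[pt]']

print(build_query([population, intervention, design]))
```

Keeping the term lists in one place also makes it easy to rerun the identical search later and document it for PRISMA.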

Search Tips for Better Results

  - Combine controlled vocabulary (MeSH terms) with free-text keywords so you capture both indexed and very recently published records.
  - Use truncation (e.g., diabet*) to catch word variants.
  - Avoid language or date filters at the search stage unless your protocol pre-specifies them.

Document Everything

Record the exact search string, database, date of search, and number of results for each database. PRISMA 2020 requires this level of transparency, and reviewers will ask for it.

MetaReview tip: MetaReview has a built-in PubMed search that lets you search by keywords, date range, article type, and language directly within the tool. You can send search results straight to your screening pipeline.
Step 3: Study Selection and Screening

Following the PRISMA 2020 flow diagram, study selection proceeds in distinct phases:

Phase 1: Deduplication

After searching multiple databases, you will have duplicate records. Use reference management software (Zotero, EndNote, or MetaReview's built-in deduplication) to identify and remove duplicates. Typically, 20-40% of combined results are duplicates.
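The core of deduplication is matching records that differ only in punctuation, casing, or trailing characters. A minimal sketch (real reference managers also match on DOI, year, and authors; the records below are invented):

```python
# Sketch: title-based deduplication of records merged from several databases.

def normalize(title):
    """Lowercase and strip punctuation/whitespace so near-identical titles match."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        key = normalize(rec["title"])
        if key not in seen:          # keep the first copy of each title
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Semaglutide and HbA1c: an RCT", "source": "PubMed"},
    {"title": "Semaglutide and HbA1c: An RCT.", "source": "Embase"},  # duplicate
    {"title": "Liraglutide in T2DM", "source": "Cochrane CENTRAL"},
]
print([r["source"] for r in deduplicate(records)])
```

Always record how many duplicates were removed; the PRISMA flow diagram requires the count.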

Phase 2: Title and Abstract Screening

Rapidly screen each unique record based on its title and abstract against your inclusion criteria. At this stage, be inclusive -- if in doubt, keep it for full-text review. Two independent reviewers should screen all records separately.

Phase 3: Full-Text Review

Retrieve the full text of all potentially eligible articles. Read each one carefully against your complete inclusion/exclusion criteria. Record the specific reason for excluding each article (PRISMA 2020 requirement).

Inter-Rater Agreement

Calculate Cohen's kappa coefficient to quantify agreement between the two reviewers:

| Kappa Value | Level of Agreement |
|---|---|
| < 0.20 | Poor |
| 0.21 - 0.40 | Fair |
| 0.41 - 0.60 | Moderate |
| 0.61 - 0.80 | Substantial |
| 0.81 - 1.00 | Almost perfect |

Disagreements should be resolved through discussion or by consulting a third reviewer. Aim for kappa ≥ 0.80.
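Kappa corrects the raw agreement rate for the agreement expected by chance alone. A minimal sketch with hypothetical screening counts:

```python
# Sketch: Cohen's kappa for two independent screeners, from a 2x2 agreement
# table. The counts below are hypothetical.

def cohens_kappa(both_in, r1_only, r2_only, both_out):
    """Chance-corrected agreement between two raters on include/exclude calls."""
    n = both_in + r1_only + r2_only + both_out
    p_o = (both_in + both_out) / n                   # observed agreement
    p_in1 = (both_in + r1_only) / n                  # reviewer 1 inclusion rate
    p_in2 = (both_in + r2_only) / n                  # reviewer 2 inclusion rate
    p_e = p_in1 * p_in2 + (1 - p_in1) * (1 - p_in2)  # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# 100 abstracts: 40 included by both, 5 + 5 disagreements, 50 excluded by both
print(round(cohens_kappa(40, 5, 5, 50), 2))  # 0.8
```

Note that just ten disagreements out of 100 already pulls kappa to the edge of the 0.80 target, which is why screening criteria should be piloted on a sample before full screening begins.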

MetaReview tip: MetaReview offers AI-powered screening using PICO keyword matching and large language model (LLM) deep screening. It can screen hundreds of abstracts in minutes, providing inclusion/exclusion recommendations with confidence scores -- dramatically reducing screening time while maintaining quality.
Step 4: Data Extraction

Data extraction is where you systematically pull the quantitative and qualitative information needed from each included study. Accuracy here is critical -- errors in data extraction propagate directly into your meta-analysis results.

What to Extract

Your extraction form should capture:

| Outcome Type | Data to Extract | Example |
|---|---|---|
| Binary (dichotomous) | Events and total N, for both intervention and control groups | Deaths: 15/200 (treatment) vs 30/198 (control) |
| Continuous | Mean, standard deviation (SD), and N for both groups | HbA1c change: -1.2 (SD 0.8, n=150) vs -0.4 (SD 0.7, n=148) |
| Time-to-event (survival) | Hazard ratio (HR), 95% CI, or data to reconstruct them | HR = 0.72 (95% CI: 0.58-0.89) |

Double Extraction

Two reviewers should independently extract data from every study. After extraction, compare the results and resolve any discrepancies. This catches transcription errors, misread tables, and misinterpreted outcome definitions. Studies have shown that single-reviewer extraction has an error rate of 10-30%.

Handling Missing Data

Warning: Never fabricate or impute data without documenting the method used. If critical effect size data is truly unavailable and cannot be calculated, the study may need to be excluded from the quantitative synthesis (but should still be described narratively).
MetaReview tip: MetaReview's PDF data extraction feature can automatically extract effect size data (events, means, SDs, sample sizes) from uploaded research papers, reducing manual entry and potential transcription errors.
Step 5: Choose the Right Effect Size

The effect size measure you choose determines how results are combined and interpreted. Choosing the wrong effect size is one of the most common mistakes in meta-analysis. Here is a decision framework:

Decision Tree

Start from the type of your outcome variable:

  - Binary (yes/no): use OR or RR
  - Continuous, same measurement scale in all studies: use MD
  - Continuous, different scales measuring the same construct: use SMD
  - Time-to-event (survival): use HR

Effect Size Comparison Table

| Effect Size | Data Type | When to Use | Null Value | Interpretation Example |
|---|---|---|---|---|
| OR (Odds Ratio) | Binary | Case-control studies; logistic regression outputs | 1.0 | OR = 2.5: the odds of the event are 2.5 times higher in the intervention group |
| RR (Risk Ratio) | Binary | RCTs and cohort studies (preferred over OR) | 1.0 | RR = 0.70: 30% relative risk reduction in the intervention group |
| MD (Mean Difference) | Continuous | Same outcome scale across all studies | 0 | MD = -5.3 mmHg: blood pressure is 5.3 mmHg lower in the intervention group |
| SMD (Standardized Mean Difference) | Continuous | Different scales measuring the same construct | 0 | SMD = -0.50: a medium effect favoring the intervention (Cohen's conventions) |
| HR (Hazard Ratio) | Time-to-event | Survival analysis, Cox regression data | 1.0 | HR = 0.65: 35% reduction in the instantaneous hazard of the event |

Common pitfall: Do not mix different effect size types in the same meta-analysis. If some studies report OR and others report RR, you must either convert them to a common metric (possible under certain conditions) or choose one type and recalculate from raw data where available.

For a deeper dive into effect size selection, including formulas and conversion methods, see our dedicated guide: Choosing Effect Sizes: OR, RR, MD, SMD Guide.
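Each of these measures is a simple function of the raw extracted data. A sketch using the worked numbers from the data extraction table in Step 4 (deaths 15/200 vs 30/198; HbA1c change -1.2 (SD 0.8, n=150) vs -0.4 (SD 0.7, n=148)):

```python
import math

# Sketch: computing each effect size from raw extracted data.

def risk_ratio(e1, n1, e2, n2):
    return (e1 / n1) / (e2 / n2)

def odds_ratio(e1, n1, e2, n2):
    return (e1 / (n1 - e1)) / (e2 / (n2 - e2))

def mean_difference(m1, m2):
    return m1 - m2

def smd(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

print(risk_ratio(15, 200, 30, 198))         # ~0.495: risk roughly halved
print(odds_ratio(15, 200, 30, 198))         # ~0.454: OR exaggerates relative to RR here
print(smd(-1.2, 0.8, 150, -0.4, 0.7, 148))  # ~-1.06: a large effect in Cohen's terms
```

Note how the OR and RR differ on the same 2x2 data; this is exactly why the two must not be pooled together. (Published meta-analyses typically apply Hedges' g, which adds a small-sample correction to Cohen's d.)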

Step 6: Statistical Analysis

This is the computational core of your meta-analysis. Three key decisions must be made: the analytical model, heterogeneity assessment, and how to visualize results.

Fixed-Effect vs. Random-Effects Model

| Feature | Fixed-Effect Model | Random-Effects Model |
|---|---|---|
| Assumption | All studies estimate the same single true effect | Each study estimates its own true effect; these effects follow a distribution |
| Source of variation | Within-study sampling error only | Within-study error + between-study variance (τ²) |
| Weighting | Based on study precision (inverse variance) | Adjusted weights that account for between-study heterogeneity |
| Confidence intervals | Narrower (can be falsely precise if heterogeneity exists) | Wider (more conservative, typically more realistic) |
| When to use | Studies are clinically and methodologically homogeneous; I² < 25% | Studies differ in populations, settings, or methods (most real-world scenarios) |

Practical advice: The random-effects model (DerSimonian-Laird method) is the default choice for most meta-analyses because studies in practice almost always differ in their populations, settings, and methods. Use the fixed-effect model only when you have strong reason to believe all studies are estimating exactly the same underlying effect.
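Both models reduce to a weighted average of study effects; they differ only in the weights. A minimal sketch of inverse-variance pooling with the DerSimonian-Laird τ² estimate, on invented log risk ratios (not data from any real review):

```python
import math

# Sketch: fixed-effect and DerSimonian-Laird random-effects pooling.
# yi are hypothetical log risk ratios; sei their standard errors.

def pool(yi, sei):
    wi = [1 / se**2 for se in sei]                        # fixed-effect weights
    fixed = sum(w * y for w, y in zip(wi, yi)) / sum(wi)
    q = sum(w * (y - fixed)**2 for w, y in zip(wi, yi))   # Cochran's Q
    c = sum(wi) - sum(w**2 for w in wi) / sum(wi)
    tau2 = max(0.0, (q - (len(yi) - 1)) / c)              # DL between-study variance
    wi_re = [1 / (se**2 + tau2) for se in sei]            # random-effects weights
    rand = sum(w * y for w, y in zip(wi_re, yi)) / sum(wi_re)
    return fixed, rand, tau2

yi = [-0.36, -0.51, -0.11, -0.45]     # log RRs from four hypothetical studies
sei = [0.12, 0.20, 0.15, 0.25]
fixed_est, random_est, tau2 = pool(yi, sei)
print(round(math.exp(fixed_est), 2), round(math.exp(random_est), 2))  # pooled RRs
```

With mild heterogeneity the two estimates nearly coincide; as τ² grows, the random-effects weights flatten and small studies gain relative influence, widening the confidence interval.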

Understanding Heterogeneity

Heterogeneity refers to variability in study results beyond what is expected from sampling error alone. Three key statistics quantify it:

| Statistic | What It Measures | Interpretation |
|---|---|---|
| I² | Percentage of total variability due to true heterogeneity | 0-25% low, 25-50% moderate, 50-75% substantial, >75% considerable |
| Cochran's Q | Whether observed differences in results are compatible with chance alone | p < 0.10 suggests significant heterogeneity (a liberal threshold because the test has low power) |
| τ² (tau-squared) | Absolute between-study variance | Expressed in squared units of the effect size; larger values mean more heterogeneity |
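I² is derived directly from Cochran's Q and its degrees of freedom (one less than the number of studies), truncated at zero:

```python
# Sketch: I² computed from Cochran's Q and the number of studies k.

def i_squared(q, k):
    """Percent of total variability attributable to true heterogeneity."""
    df = k - 1
    return max(0.0, (q - df) / q) * 100

print(i_squared(30.0, 10))  # 70.0 -> substantial heterogeneity
print(i_squared(5.0, 10))   # 0.0  -> Q below its df: no excess variability
```

Because I² is a percentage, it says nothing about the absolute magnitude of heterogeneity; that is what τ² is for.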

Reading a Forest Plot

The forest plot is the signature visualization of a meta-analysis. Each study appears as a row: the square marks the study's effect estimate (sized in proportion to its weight), the horizontal line through it is the 95% confidence interval, and the diamond at the bottom represents the pooled effect. A vertical reference line marks the null value (1.0 for ratio measures, 0 for difference measures); if the diamond does not cross this line, the pooled result is statistically significant.

Subgroup Analysis

When heterogeneity is substantial (I² > 50%), subgroup analysis can help identify its sources. Divide studies into groups based on pre-specified characteristics such as study design, population, dosage, or follow-up duration, and pool each group separately.

Use the Q-between test (interaction test) to determine if the effect truly differs between subgroups (p < 0.05).

Caution: Subgroup analyses should be pre-specified in your protocol, not generated after seeing the data. Post-hoc subgroup analyses are exploratory and should be labeled as such. Too many subgroup analyses increase the risk of false-positive findings.
MetaReview tip: MetaReview supports all four effect sizes (OR, RR, MD, SMD), both fixed-effect and random-effects models, and automatically calculates I², Q test, and τ². It generates publication-quality forest plots, subgroup forest plots, and funnel plots -- all without writing a single line of code.
Step 7: Assess Publication Bias

Publication bias occurs because studies with positive or statistically significant results are more likely to be published than those with null or negative findings. This means the available literature may overestimate the true effect, and your meta-analysis could inherit that bias.

Funnel Plot

A funnel plot graphs each study's effect size (x-axis) against its precision, typically standard error (y-axis, inverted). In the absence of publication bias, the points form a symmetric inverted funnel: large, precise studies cluster near the top around the pooled effect, while smaller studies scatter evenly and more widely at the bottom.

Asymmetry in the funnel plot -- typically a gap in the bottom-right or bottom-left corner -- suggests that small studies with unfavorable results may be missing.

Statistical Tests for Publication Bias

| Test | Method | When to Use | Significance Threshold |
|---|---|---|---|
| Egger's test | Linear regression of effect size on standard error | Continuous outcomes (MD, SMD); works well with ≥10 studies | p < 0.10 |
| Begg's test | Rank correlation between effect size and variance | Binary outcomes (OR, RR); less powerful than Egger's | p < 0.10 |
| Peters' test | Regression of effect size on the inverse of total sample size | Binary outcomes; less affected by mathematical coupling than Egger's | p < 0.10 |
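The regression behind Egger's test is simple: each study's standardized effect (y/se) is regressed on its precision (1/se), and an intercept far from zero signals funnel-plot asymmetry. A sketch on invented log ORs constructed so that small studies (large SE) report systematically larger negative effects (a full implementation would also compute a t-test p-value for the intercept):

```python
# Sketch: the regression at the core of Egger's test, via plain OLS.

def egger_intercept(effects, std_errors):
    x = [1 / se for se in std_errors]                    # precision
    y = [e / se for e, se in zip(effects, std_errors)]   # standardized effect
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx                               # OLS intercept

effects = [-0.6, -0.5, -0.4, -0.35, -0.3, -0.1, 0.0]     # hypothetical log ORs
std_errors = [0.40, 0.35, 0.30, 0.25, 0.20, 0.12, 0.08]
print(round(egger_intercept(effects, std_errors), 2))    # clearly below zero
```

Here the strongly negative intercept reflects the built-in asymmetry: the less precise the study, the larger its reported effect.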

Trim-and-Fill Method

If publication bias is detected, the trim-and-fill method provides an adjusted estimate. It works by:

  1. Identifying asymmetrically unmatched studies on the funnel plot
  2. "Trimming" them and recalculating the pooled effect
  3. "Filling" in the hypothetically missing studies on the opposite side
  4. Recalculating the pooled estimate with the augmented dataset

The adjusted estimate shows what the pooled effect might be if publication bias were absent. A large shift from the original estimate is concerning.

Limitation: Funnel plot asymmetry can be caused by factors other than publication bias, including genuine heterogeneity, methodological differences between small and large studies, or chance. Always interpret publication bias tests alongside clinical and methodological context. Formal tests require at least 10 studies to have adequate power.
Step 8: Sensitivity Analysis

Sensitivity analysis tests the robustness of your meta-analysis results. The question it answers: "Would the conclusions change if we made different analytical decisions?"

Leave-One-Out Analysis

The most common sensitivity analysis method. Procedure:

  1. Remove one study from the meta-analysis
  2. Recalculate the pooled effect with the remaining studies
  3. Repeat for every study
  4. Compare all results to the original pooled estimate

If removing any single study causes the pooled effect to change direction or statistical significance (e.g., from significant to non-significant, or from favoring the intervention to favoring the control), that study is influential and must be discussed explicitly.
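The four-step procedure above is a short loop in code. A sketch on hypothetical log risk ratios, using a simple fixed-effect (inverse-variance) pool for brevity:

```python
# Sketch: leave-one-out sensitivity analysis on invented log risk ratios.

def pooled(yi, sei):
    wi = [1 / se**2 for se in sei]
    return sum(w * y for w, y in zip(wi, yi)) / sum(wi)

yi = [-0.40, -0.30, 0.50]   # the third study is large, precise, and discordant
sei = [0.30, 0.35, 0.10]

full = pooled(yi, sei)
for i in range(len(yi)):
    loo = pooled(yi[:i] + yi[i + 1:], sei[:i] + sei[i + 1:])
    flag = "  <- influential (direction flips)" if (loo < 0) != (full < 0) else ""
    print(f"without study {i + 1}: {loo:+.3f}{flag}")
```

In this constructed example, dropping the third study flips the pooled effect from positive to negative, which is exactly the kind of finding that must be reported and discussed.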

Other Sensitivity Analyses

Common variants include re-running the analysis after excluding studies at high risk of bias, switching between fixed-effect and random-effects models, and repeating the analysis with an alternative effect measure to confirm that conclusions are not an artifact of analytical choices.

MetaReview tip: MetaReview includes built-in leave-one-out sensitivity analysis that automatically highlights any study whose removal causes a directional change in the pooled result -- making it easy to spot influential studies at a glance.
Step 9: Report Results Following PRISMA 2020

The PRISMA 2020 (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement provides a 27-item checklist for transparent reporting. Most medical journals require PRISMA compliance.

Key Sections of a Meta-Analysis Manuscript

  1. Introduction: Rationale, objectives, PICO question
  2. Methods: Protocol registration, eligibility criteria, databases searched, search strategy, screening process, data extraction methods, effect size and model choices, heterogeneity assessment approach, sensitivity and subgroup analyses planned
  3. Results:
    • PRISMA flow diagram (identification, screening, eligibility, included)
    • Characteristics of included studies (Table 1)
    • Risk of bias assessment
    • Pooled effect size (95% CI), p-value
    • Heterogeneity statistics (I², Q, τ²)
    • Forest plot (main analysis)
    • Subgroup analyses with forest plots
    • Sensitivity analysis results
    • Publication bias (funnel plot + Egger's test)
  4. Discussion: Summary of evidence, comparison with existing literature, strengths and limitations, implications for practice and research

Writing the Results Paragraph: Template

Here is a standard template for reporting your primary meta-analysis result:

"A total of k studies involving n participants were included in the meta-analysis. Using a random-effects model, [intervention] was associated with a significantly [higher/lower] [outcome] compared to [comparator] (OR/RR/MD = X.XX, 95% CI: X.XX to X.XX, p = X.XXX). Substantial heterogeneity was observed across studies (I² = XX%, Q = XX.XX, p = X.XXX, τ² = X.XX). Visual inspection of the funnel plot and Egger's regression test (p = X.XX) suggested no significant publication bias."

PRISMA 2020 Checklist Highlights

| Section | Key Items to Report |
|---|---|
| Title | Identify the report as a systematic review, meta-analysis, or both |
| Registration | Registration number and registry name (e.g., PROSPERO CRD42025xxxxx) |
| Search strategy | Full search strings for all databases (typically in a supplementary file) |
| Study selection | PRISMA flow diagram with numbers at each stage |
| Effect measures | Specify the effect measure (OR, RR, MD, SMD, HR) and why it was chosen |
| Synthesis methods | Model (fixed/random), software used, method for pooling |
| Certainty assessment | GRADE framework for overall quality of evidence (optional but recommended) |

For a complete PRISMA 2020 flow diagram guide, see: PRISMA 2020 Flow Diagram Guide.

MetaReview tip: MetaReview automatically generates a publication-ready results paragraph in English covering main analysis, subgroup results, and sensitivity analysis conclusions -- saving significant writing time.

Free Tools for Meta-Analysis

Choosing the right software can make or break your meta-analysis experience. Here is an honest comparison of the main options available today:

| Feature | MetaReview | RevMan (Cochrane) | R (meta/metafor) | Stata | Covidence |
|---|---|---|---|---|---|
| Price | Free | Free (Cochrane authors) / Paid | Free | Paid ($$$) | Paid ($$) |
| Installation | None (browser-based) | Desktop download required | Install R + packages | Desktop license | None (browser-based) |
| Coding required | No | No | Yes (R scripts) | Yes (do-files) | No |
| Effect sizes | OR, RR, MD, SMD | OR, RR, MD, SMD | All types + custom | All types + custom | No statistical analysis |
| Forest plot | Yes (SVG, publication-quality) | Yes | Yes (customizable) | Yes (customizable) | No |
| Funnel plot | Yes | Yes | Yes | Yes | No |
| Subgroup analysis | Yes | Yes | Yes | Yes | No |
| Sensitivity analysis | Leave-one-out | Limited | Full suite | Full suite | No |
| Literature search | Built-in PubMed search | Cochrane Library | No | No | Import only |
| AI screening | Yes (LLM-powered) | No | No | No | No |
| PDF data extraction | Yes (AI-powered) | No | No | No | No |
| Auto-generated results text | Yes | No | No | No | No |
| Best for | Researchers who want an all-in-one free tool | Cochrane review authors | Statisticians who want full control | Biostatisticians with Stata access | Screening and collaboration only |

Our recommendation: If you want to go from research question to forest plot without installing software, writing code, or paying for a license, MetaReview is the best free option available. It covers the entire meta-analysis workflow in one browser tab: literature search, AI-powered screening, PDF data extraction, statistical analysis, forest plot generation, and auto-generated results text.

For a detailed feature-by-feature comparison, see: Meta-Analysis Software Comparison.

Common Mistakes to Avoid

After reviewing thousands of published meta-analyses and their peer review feedback, these are the most frequent errors that lead to rejection or revision requests:

1. Mixing Incompatible Effect Sizes

Combining OR from one study with RR from another without proper conversion produces meaningless pooled estimates. Always convert to a common metric or recalculate from raw data.

2. Ignoring Heterogeneity

Reporting a pooled effect with I² = 85% and no attempt to explore or explain the heterogeneity is a red flag for reviewers. High heterogeneity demands subgroup analysis, meta-regression, or a narrative approach.

3. Cherry-Picking Studies

Excluding studies without pre-specified, transparent criteria is scientific misconduct. Every exclusion must be documented with a clear reason. This is why protocol registration on PROSPERO matters.

4. Not Registering Your Protocol

Without prospective registration, reviewers cannot verify that your methods, outcomes, and analyses were not changed after seeing the results. PROSPERO registration takes 30 minutes and prevents months of reviewer questions.

5. Using Fixed-Effect Model When Random-Effects Is Appropriate

If studies come from different populations, settings, and time periods, a fixed-effect model will underestimate the uncertainty. When in doubt, use random-effects.

6. Insufficient Database Coverage

Searching only PubMed is not sufficient. Cochrane recommends at least three databases. Missing Embase alone can mean missing 20-30% of relevant studies.

7. No Sensitivity Analysis

Failing to perform and report sensitivity analysis (at minimum, leave-one-out) leaves your conclusions unverified. Reviewers expect to see evidence that results are robust.

8. Post-Hoc Subgroup Analyses Presented as Confirmatory

Subgroup analyses not specified in the protocol should be explicitly labeled as exploratory. Treating data-driven subgroups as definitive findings is misleading.

9. Ignoring Publication Bias With Fewer Than 10 Studies

Formal tests (Egger's, Begg's) lack statistical power with fewer than 10 studies. Acknowledge this limitation rather than claiming "no publication bias detected" based on an underpowered test.

10. Extracting Unadjusted Instead of Adjusted Estimates

For observational studies, always prefer the most adjusted (multivariable) estimates. Unadjusted estimates may be confounded and produce biased pooled results.

Bottom line: Most of these mistakes are preventable with careful planning, protocol registration, and adherence to PRISMA 2020 guidelines. A well-designed protocol written before any data collection begins is the single best safeguard against all of these errors.

Start Your Meta-Analysis Now

MetaReview is a free online tool. Go from data entry to a publication-quality forest plot in under 5 minutes. No installation, no coding, no cost.

Open MetaReview - It's Free


Frequently Asked Questions

What is the difference between a systematic review and a meta-analysis?

A systematic review is the entire process of systematically identifying, evaluating, and synthesizing all relevant research on a topic. It follows a structured protocol with explicit inclusion/exclusion criteria. A meta-analysis is specifically the statistical method used within a systematic review to quantitatively pool results from multiple studies into a single effect estimate. You can conduct a systematic review without a meta-analysis (presenting a narrative synthesis), but a meta-analysis should always be embedded within a systematic review framework. Think of systematic review as the research method and meta-analysis as the statistical technique.

How many studies do I need for a meta-analysis?

There is no absolute minimum, but practical considerations matter. With 2 studies, you can technically compute a pooled estimate, but the result will be driven almost entirely by sample size differences and provides limited insight. With 5 or more studies, heterogeneity statistics (I², Q) become more meaningful. With 10 or more studies, you can reliably perform publication bias tests (Egger's, Begg's) and funnel plot analysis. Most reviewers consider 5 studies a reasonable minimum for a credible meta-analysis, and will accept fewer only if the topic is narrow and the studies are high-quality.

What software can I use for meta-analysis for free?

MetaReview is a completely free, browser-based meta-analysis tool that requires no installation, no account, and no coding knowledge. It supports OR, RR, MD, and SMD effect sizes, fixed and random-effects models, forest plots, funnel plots, subgroup analysis, leave-one-out sensitivity analysis, and auto-generated results paragraphs. Other free options include the R statistical language with the "meta" and "metafor" packages, which are powerful but require programming skills. RevMan is free for Cochrane review authors but requires desktop installation. OpenMeta-Analyst is another free option but is no longer actively maintained.

How do I interpret a forest plot?

A forest plot displays each study as a row. The square represents the study's effect estimate (e.g., OR, RR, or MD), with the square size proportional to the study's weight. The horizontal line through the square is the 95% confidence interval. The diamond at the bottom represents the pooled (combined) effect. A vertical reference line shows the null effect (1.0 for ratio measures like OR/RR, or 0 for difference measures like MD/SMD). If a study's confidence interval crosses this null line, that study alone did not find a statistically significant effect. If the diamond does not touch the null line, the pooled result is statistically significant.

What does I-squared heterogeneity mean?

I² tells you what percentage of the observed variation across study results is due to genuine differences between studies (true heterogeneity) rather than random sampling variation. An I² of 0% means all variation is due to chance; an I² of 75% means three-quarters of the observed variability reflects true differences in underlying effects. The Cochrane Handbook provides rough benchmarks: 0-40% might not be important, 30-60% may represent moderate heterogeneity, 50-90% may represent substantial heterogeneity, and 75-100% indicates considerable heterogeneity. When I² is high, explore sources through subgroup analysis or meta-regression rather than simply reporting the pooled estimate.

Can I do a meta-analysis without knowing statistics or coding?

Yes. Point-and-click tools like MetaReview are designed for researchers who do not have programming or advanced biostatistics training. You enter your extracted data (event counts, sample sizes, means, standard deviations), select your effect size type and model, and the tool computes everything: pooled estimates, confidence intervals, heterogeneity statistics, forest plots, funnel plots, and sensitivity analyses. That said, understanding what these statistics mean and how to interpret them is essential for writing a defensible manuscript. We recommend reading the relevant chapters of the Cochrane Handbook for Systematic Reviews even if you use a no-code tool.

How long does it take to complete a meta-analysis?

A realistic timeline for a focused meta-analysis is 3 to 12 months from protocol registration to manuscript submission. Protocol development and PROSPERO registration takes 1-2 weeks. The literature search typically takes 1-3 weeks. Screening can take 2-8 weeks depending on volume (tools like MetaReview's AI screening can compress this significantly). Data extraction takes 2-6 weeks for 15-30 studies. Quality assessment takes 1-2 weeks. Statistical analysis and figure generation can be done in 1-3 days using the right tools. Writing the manuscript takes 2-4 weeks. Peer review and revisions add another 2-6 months. The most common bottleneck is screening and data extraction, which together account for roughly half of the total time.

What is the PRISMA checklist?

PRISMA stands for Preferred Reporting Items for Systematic Reviews and Meta-Analyses. The PRISMA 2020 update consists of a 27-item checklist covering everything that should be reported in a systematic review or meta-analysis: title, abstract, rationale, objectives, protocol registration, eligibility criteria, information sources, search strategy, selection process, data extraction, effect measures, synthesis methods, risk of bias, results of syntheses, reporting biases, certainty of evidence, and conclusions. It also includes a standardized flow diagram template. Most biomedical journals require authors to submit a completed PRISMA checklist alongside their manuscript. The checklist is freely available at prisma-statement.org.