Life cycle assessments have become the currency of sustainability claims. Companies wave them like badges of honor, declaring their products greener than competitors based on complex studies most people never read closely.
But LCAs are not objective truth machines. They're models built on thousands of choices—what to include, what to exclude, how to measure, where to draw boundaries. Two perfectly valid LCAs of the same product can reach opposite conclusions depending on these methodological decisions.
This doesn't mean LCAs are useless. It means they require interpretation skills. Understanding how to read them critically separates genuine sustainability insights from sophisticated greenwashing. Here's how to develop that eye.
Assumption Archaeology
Every LCA buries its most consequential decisions in methodology sections that few readers examine. The conclusions you see on page one were largely predetermined by choices made on page forty.
System boundaries define what gets counted. Does a packaging LCA include the energy to transport materials to the factory? The manufacturing equipment itself? End-of-life processing? Narrower boundaries tend to favor incumbent products, while broader ones can reveal hidden impacts. When an LCA suspiciously excludes a phase where a product performs poorly, that's a red flag.
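To see how much boundary choices move the headline number, here is a toy sketch in Python. The phase names and per-phase figures are invented purely for illustration, not drawn from any real study.

```python
# Hypothetical per-unit impacts (kg CO2e) for a packaging product, by life-cycle phase.
# All figures are illustrative, not from any real LCA.
phases = {
    "raw_materials": 1.10,
    "inbound_transport": 0.25,
    "manufacturing": 0.80,
    "distribution": 0.30,
    "end_of_life": 0.45,
}

# A "cradle-to-gate" boundary stops at the factory gate.
cradle_to_gate = ["raw_materials", "inbound_transport", "manufacturing"]
# A "cradle-to-grave" boundary counts every phase.
cradle_to_grave = list(phases)

def total(boundary):
    return sum(phases[p] for p in boundary)

print(f"cradle-to-gate:  {total(cradle_to_gate):.2f} kg CO2e")   # 2.15
print(f"cradle-to-grave: {total(cradle_to_grave):.2f} kg CO2e")  # 2.90
# The same product reports 2.15 or 2.90 kg CO2e depending only on where the boundary sits.
```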
Allocation methods determine how impacts get divided among co-products. When a refinery produces gasoline and asphalt simultaneously, how do you split the environmental burden? Mass-based allocation, economic allocation, and system expansion can yield wildly different results. The choice often reflects what answer the study's sponsors preferred.
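A minimal sketch of how the allocation choice shifts the split. The tonnages, prices, and total burden below are hypothetical placeholders, chosen only to show the mechanics.

```python
# Hypothetical refinery co-producing gasoline and asphalt; total process burden 100 t CO2e.
total_burden = 100.0  # t CO2e for the whole process

products = {
    # name: (mass in tonnes, price in $/tonne) -- illustrative values only
    "gasoline": (60.0, 800.0),
    "asphalt":  (40.0, 150.0),
}

def allocate(key_fn):
    """Split the total burden in proportion to key_fn(mass, price) for each co-product."""
    keys = {name: key_fn(mass, price) for name, (mass, price) in products.items()}
    denom = sum(keys.values())
    return {name: total_burden * k / denom for name, k in keys.items()}

mass_based = allocate(lambda mass, price: mass)          # split by tonnes produced
economic   = allocate(lambda mass, price: mass * price)  # split by revenue

print("mass-based:", {k: round(v, 1) for k, v in mass_based.items()})
print("economic:  ", {k: round(v, 1) for k, v in economic.items()})
# Asphalt carries 40 t CO2e under mass allocation but only about 11 t under economic
# allocation: its reported footprint depends on a modelling choice, not on the process.
```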
Data sources matter enormously. Primary data from actual facilities differs from industry averages, which differ from generic databases. Check whether data represents the specific product being assessed or broader categories. A study using decade-old generic data for a rapidly evolving technology tells you little about current reality.
Takeaway: The methodology section is where LCAs are won or lost. If you haven't read it, you haven't actually read the study—you've just read its marketing materials.
Uncertainty Recognition
LCA results look precise. A product might claim 2.3 kg CO2-equivalent per unit. That specificity suggests scientific certainty. It's usually an illusion.
Sensitivity analyses test how results change when key assumptions vary. A robust LCA reports these prominently. If switching from one reasonable allocation method to another flips the comparative ranking between products, the original conclusion wasn't very solid. Look for studies that acknowledge this uncertainty honestly rather than burying it in appendices.
The language of comparison reveals a lot. Phrases like "significantly lower impact" or "clearly superior" should be backed by statistical analysis showing the difference exceeds the uncertainty range. If Product A scores 2.3 ± 0.8 and Product B scores 2.1 ± 0.9, claiming B is definitively better is misleading. Their confidence intervals overlap substantially.
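A quick way to sanity-check such a claim is to ask how often B would actually beat A given the stated uncertainty. The sketch below treats the ± figures as standard deviations of independent normal distributions, which is an assumption for illustration; LCA reports rarely say precisely what their ranges mean.

```python
import random

# Reported results from the example above: A = 2.3 +/- 0.8, B = 2.1 +/- 0.9 kg CO2e per unit.
# Assumption: the +/- values are standard deviations of independent normal distributions.
A_MEAN, A_SD = 2.3, 0.8
B_MEAN, B_SD = 2.1, 0.9

random.seed(42)
trials = 100_000
b_wins = sum(
    random.gauss(B_MEAN, B_SD) < random.gauss(A_MEAN, A_SD) for _ in range(trials)
)

print(f"P(B < A) ~= {b_wins / trials:.2f}")
# Roughly 0.57 under these assumptions: barely better than a coin flip,
# so "B is clearly superior" is not supported by the reported numbers.
```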
Pay attention to which impact categories show clear differences and which are too close to call. A product might genuinely have lower carbon emissions while its water consumption is higher, with substantial uncertainty in both directions. Selective reporting of only the favorable categories is common.
Takeaway: Treat single-number results as the middle of a range, not fixed facts. The width of that range often matters more than the number itself.
Comparative Validity
The most common LCA misuse is comparing apples to oranges while insisting they're both fruit. Methodological differences between studies can make comparisons meaningless, even when both studies are individually sound.
Functional unit alignment is essential. Comparing one study measuring "impact per kilogram of material" with another measuring "impact per year of service life" produces nonsense. A lightweight material that wears out quickly might look good per kilogram but terrible per decade of use. Ensure the functional units answer the same question about the same service.
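The arithmetic behind that flip is simple enough to sketch. The masses, lifetimes, and impact factors below are invented solely to show how the ranking can invert when the functional unit changes.

```python
# Hypothetical comparison of two materials; all figures are illustrative.
# Material X: light but short-lived. Material Y: heavier but durable.
materials = {
    "X": {"impact_per_kg": 2.0, "mass_kg": 0.5, "life_years": 1.0},
    "Y": {"impact_per_kg": 3.0, "mass_kg": 0.6, "life_years": 4.0},
}

for name, m in materials.items():
    per_unit = m["impact_per_kg"] * m["mass_kg"]      # kg CO2e per unit produced
    per_year = per_unit / m["life_years"]             # kg CO2e per year of service
    print(f"{name}: {m['impact_per_kg']:.1f} kg CO2e/kg, "
          f"{per_unit:.2f} per unit, {per_year:.2f} per year of service")

# X wins per kilogram (2.0 vs 3.0) but loses per year of service (1.00 vs 0.45):
# the ranking flips depending on which functional unit the study chose.
```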
Scope consistency matters equally. If Study A includes end-of-life recycling credits while Study B stops at the factory gate, comparing their totals is invalid. You'd need to either add the missing scope to Study B or remove it from Study A—which requires access to disaggregated data that isn't always available.
Geographic and temporal mismatches create further problems. An LCA using German electricity grid data doesn't translate directly to Chinese manufacturing contexts. A study from 2015 doesn't reflect current renewable energy penetration. The most defensible comparisons come from studies explicitly designed to compare the options in question, using identical boundaries, data sources, and allocation methods throughout.
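A small illustration of the geographic and temporal effect: the same hypothetical 500 kWh of manufacturing electricity scored with different grid emission factors. The factors are rough, illustrative values, not official figures.

```python
# Hypothetical manufacturing step consuming 500 kWh of electricity per unit.
KWH_PER_UNIT = 500.0

grid_factors = {  # kg CO2e per kWh -- rough, illustrative values only
    "Germany (recent, more renewables)": 0.35,
    "Germany (circa 2015)": 0.55,
    "China (coal-heavy grid)": 0.60,
}

for grid, factor in grid_factors.items():
    print(f"{grid}: {KWH_PER_UNIT * factor:.0f} kg CO2e per unit")
# The identical process scores anywhere from 175 to 300 kg CO2e depending only on
# which grid, and which year of grid data, the study used for its electricity.
```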
Takeaway: Two LCAs can only be compared if they were designed to be compared. Different studies of similar products are starting points for investigation, not ready-made answers.
Reading LCAs critically isn't about dismissing them. It's about extracting the insights they actually support rather than the broader claims often draped over them.
Start with the methodology. Check the boundaries, allocation choices, and data sources. Read the sensitivity analysis to understand the confidence level. Before comparing studies, verify their methods align enough to make comparison meaningful.
Most sustainability professionals don't need to conduct LCAs. But everyone making decisions based on them needs to read them as arguments rather than verdicts—constructed cases that deserve scrutiny before acceptance.