Every year, millions of people spend money on astrology apps, homeopathic remedies, and crystal healing. Meanwhile, scientists develop vaccines, predict eclipses, and discover new particles. Most of us feel there's something fundamentally different between these activities—but articulating exactly what that difference is proves surprisingly tricky.

Philosophers call this the demarcation problem: where do we draw the line between genuine science and pseudoscience? The answer matters enormously. It shapes what we teach in schools, what treatments doctors recommend, and which policies governments adopt. Yet after a century of debate, no simple criterion has emerged. Understanding why reveals something profound about how science actually works.

Falsifiability Criterion: Why Popper's Test Captures Something Essential But Proves Insufficient

In the 1930s, the philosopher Karl Popper proposed an elegant solution: real science makes risky predictions that could prove it wrong. Einstein's general relativity predicted that starlight would bend around the sun by a precise amount—a prediction Arthur Eddington's 1919 eclipse expedition famously confirmed. If the observations had shown otherwise, the theory would have been falsified. Compare this to Freudian psychoanalysis, which Popper noticed could explain any behavior after the fact but never stuck its neck out with a risky forecast.

This falsifiability criterion captures something genuinely important. Good scientific theories don't just accommodate whatever happens—they rule things out. A theory that explains everything actually explains nothing, because it gives us no way to distinguish a world where it's true from one where it's false. The best science constantly invites refutation.

Yet falsifiability alone can't do all the work we need. Astrology makes testable predictions—horoscopes regularly fail—but astrologers don't abandon the practice. They add auxiliary hypotheses or blame the individual practitioner rather than the system. Scientists sometimes do this too. When Uranus didn't move as Newton's laws predicted, astronomers didn't reject Newtonian physics—they postulated a new planet, Neptune, and found it. The logic of falsification is the same in both cases, yet one move seems legitimate and the other doesn't. Something else must be going on.

Takeaway

A theory that can't be proven wrong isn't brave—it's empty. The willingness to make predictions that could fail is what gives scientific claims their bite.

Methodological Features: How Practices Like Peer Review and Experimentation Characterize Science

Perhaps science is defined not by a single logical criterion but by a cluster of methodological features. Real sciences use controlled experiments that isolate variables. They express relationships mathematically, allowing precise predictions. They submit findings to peer review, where other experts scrutinize methods and conclusions. They replicate results across independent laboratories.

These practices create a system of checks that pseudosciences typically lack. When a pharmaceutical company claims a new drug works, regulators demand double-blind trials with placebo controls and statistical analysis. When an astrologer claims planetary positions affect personality, no comparable rigor applies. The methodological infrastructure makes a difference.

But even this cluster approach faces problems. Some unquestionably scientific fields—like evolutionary biology or cosmology—can't run controlled experiments in the usual sense. Darwin couldn't replay evolution under laboratory conditions. Cosmologists can't create universes to test hypotheses. These sciences rely on different methods: natural experiments, historical traces, mathematical modeling. Meanwhile, some pseudosciences enthusiastically adopt scientific trappings—parapsychologists conduct controlled trials, creationists publish in their own "peer-reviewed" journals. The presence of methodological features doesn't automatically confer legitimacy.

Takeaway

Science isn't defined by any single method but by an ecosystem of practices—experimentation, peer review, replication—that work together to catch errors and expose wishful thinking.

Social Practices: Why Scientific Communities Matter as Much as Logical Criteria

Here's a different angle: maybe what makes science scientific isn't just the logic or the methods, but the community that produces it. Philosophers and historians of science like Thomas Kuhn, along with sociologists of knowledge, noticed that science happens within institutions—universities, journals, funding agencies, professional societies—that enforce norms and standards. These communities reward certain behaviors: sharing data, acknowledging errors, building on others' work, subjecting claims to collective scrutiny.

This social dimension explains something the logical criteria miss: why the same method can be legitimate in one context and suspicious in another. When astronomers postulated Neptune to save Newtonian predictions, they were operating within a community with a track record of success, established standards of evidence, and accountability mechanisms. The hypothesis was published, scrutinized, and ultimately vindicated by observation. Astrologers making similar moves operate in a community that lacks these checks.

The social view has unsettling implications. It suggests science is partly defined by who does it and under what institutional conditions, not just by abstract logical properties. But this also rings true. A lone genius proposing revolutionary ideas in a garage isn't doing science yet—not until the ideas enter the community, face criticism, and survive testing. Science is a collective enterprise, and its authority derives partly from that collectivity.

Takeaway

Science isn't just a method—it's a social practice. The institutions, norms, and communities that surround scientific work are part of what makes it trustworthy.

The demarcation problem resists simple solutions because science isn't one thing. It's falsifiable claims, yes, but also experimental controls, peer review, mathematical precision, and communities committed to self-correction. These features cluster together in genuine science and come apart in pseudoscience.

Perhaps that's the deepest insight: there's no magic formula that instantly separates real science from pretenders. Instead, we must look at the whole package—the predictions, the methods, the institutions, and the attitudes. Science earns its authority not through any single criterion but through the hard, ongoing work of getting reality to push back.