I want to begin by admitting that I’m an amateur when it comes to epistemology. I do have a master’s degree in philosophy, but epistemology wasn’t my area of focus. Some of you reading this will know more about the subject than I do. And to be honest, I’m a little nervous about the comments. There’s a good chance that if you engage with what I’m about to say in any real depth, I won’t understand you and it will be my fault that I don’t.
Okay, with that admission out of the way…
We’ve long assumed that knowledge requires three criteria: (1) belief, (2) truth, and (3) justification. In other words, to know something is to believe it, for it to be true, and to have good reason for believing it. That’s the classical definition: justified true belief (JTB).
And just real quick, if you’re wondering why knowledge can’t be defined by just the first two criteria, it’s because believing something that happens to be true is more like getting lucky than knowledge. Imagine I say it’s raining in Adelaide, but I have no reason for thinking so. I didn’t check my weather app or ask anyone who lives there. If it turns out that it is raining, I was right, but only by chance. That’s not knowledge. To genuinely know something, you need more than belief and truth; you need a reason for thinking it’s true. You need justification.
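If it helps to see all of that compressed into one line, here’s the classical analysis written out as a schema. The shorthand is mine, introduced only for this post: B_S p stands for “S believes that p” and J_S p for “S is justified in believing that p.”

```latex
% The classical (JTB) analysis of knowledge, written as a biconditional.
% B_S p : "S believes that p";  J_S p : "S is justified in believing that p".
S \text{ knows that } p
  \;\iff\;
  \underbrace{p}_{\text{truth}}
  \;\wedge\;
  \underbrace{B_S\, p}_{\text{belief}}
  \;\wedge\;
  \underbrace{J_S\, p}_{\text{justification}}
```

Gettier’s cases, coming up next, are best read as attacks on the right-to-left direction of that biconditional: all three conjuncts can hold while we still hesitate to say that S knows.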
Okay …
Along Comes Gettier
Now, for a long time, this three-part definition held up well. But then, in 1963, Edmund Gettier came along and broke everything in three pages with his paper “Is Justified True Belief Knowledge?” You can read that paper here.
Gettier presented scenarios where someone has a belief that is both true and justified, yet we still hesitate to call it knowledge. Why? Because the belief turns out to be true by accident.
One of the most well-known examples (not from Gettier himself, but usually credited to Bertrand Russell and often used to illustrate Gettier’s point) is the case of the stopped clock. A man glances at a clock that has stopped working, sees that it says 2:00, and forms the belief that it is 2:00. And it just so happens to be 2:00. His belief is true. He used a normally reliable method: checking the time on a clock. And yet, the method failed. The belief was correct purely by coincidence.
Can We Save “Knowledge”?
Now, some have tried to save the classical definition by saying, “Well, that wasn’t really justified. The clock was broken, so the belief was faulty from the start.” But that kind of move just shifts the problem. If we start redefining justification every time we hit a weird case, we risk making it so strict that it no longer resembles what anyone would call a “justified belief.”
Others, like Alvin Goldman, proposed ditching the concept of justification entirely. Maybe knowledge isn’t about having reasons, but about using processes that generally lead to truth. This is called reliabilism: if your true belief comes from a trustworthy process (like vision, memory, or scientific inference), it counts as knowledge.
But again, the clock case poses a problem. Even if the process is usually reliable, it clearly failed here. So are we back to calling this knowledge, even though it was true by luck?
Still others have suggested that knowledge is less about having the right reasons or processes, and more about the person doing the knowing. This is what’s known as virtue epistemology: the idea that knowledge is a kind of intellectual success rooted in intellectual virtue (careful thinking, honesty, openness to evidence). On this view, knowing isn’t about checking boxes; it’s about doing something well. Like an archer hitting the bullseye, not by accident, but through skill.
That’s compelling. But even here, questions linger. How do we measure intellectual virtue? And isn’t it still possible to do everything right and end up wrong, or to be right for the wrong reasons and still, somehow, stumble into truth?
An (Initially) Unsettling Realization
Which brings me to a more unsettling thought.
If a belief like “it’s 2:00” can be true, feel justified, come from a reliable process, and still be the product of a broken clock—what else might we be getting wrong without realizing it? Maybe the deeper problem is that we can always be deceived. Even our best faculties (sight, memory, reason, etc.) can betray us. And if that’s the case, maybe knowledge (at least in the strong, philosophical sense) is impossible. Or if not impossible, then impossible to know if and when you have it.
David Hume once said, “A wise man proportions his belief to the evidence.” That strikes me as a sane and honest approach. The question isn’t whether I can be absolutely certain about what I believe, but whether I have good reasons for believing it—and whether I’m open to changing my mind if those reasons fall apart.
Some might find it unsettling—even scandalous—that we can’t achieve a God’s-eye view of the world. But honestly, what’s strange isn’t that we can’t see things with perfect clarity. It’s that we ever thought we should.
Maybe that’s why I find myself leaning toward fallibilism—the view that we can still know things, even while admitting we might be wrong. That kind of knowledge isn’t rigid or absolute, but humble and revisable. And that, to me, feels much closer to the way real life works.
So no, I’m not sure we need to cling too tightly to the word knowledge, at least not in the abstract, capital-K sense. What matters more is the posture we take toward the truth. That we pursue it carefully, honestly, and with a readiness to revise our beliefs when the evidence calls for it.
At least, that’s what I think I know.