When we ask what it means to live a good life, we are engaging in one of the oldest human traditions. This inquiry typically splits into two distinct branches: practical ethics and meta-ethics. Practical ethics deals with the 'what': is it right to eat meat, or should we support euthanasia? Meta-ethics, however, is the more challenging, foundational layer that asks what 'good' means in the first place. Without a clear definition of our terms, we are essentially trying to play a game of football where half the players think they can use their hands and the other half believe only feet are allowed.
Alex O'Connor highlights that most people operate on broad intuitions. We feel that certain things are right or wrong, but these intuitions often crumble under scrutiny. If we define 'good' as the maximization of well-being, we must then answer why well-being matters more than any other metric. If we can't ground these definitions, we find ourselves talking past one another. The goal of ethical study isn't just to win arguments; it is to build a consistent framework that can withstand the most rigorous mental stress tests.
Objective Truth versus Subjective Preference
A primary friction point in modern thought is the tension between Objective Ethics and subjective morality. To claim that morality is objective is to say that certain actions are wrong regardless of what anyone thinks about them. Even if a regime like Nazi Germany had won the war and convinced the entire world that their actions were righteous, an objectivist would argue those actions remained fundamentally evil. This implies a universal truth that exists outside of human opinion.
Finding the 'anchor' for this objectivity is where things get difficult. Historically, religion provided this anchor through Divine Command Theory, suggesting that morality is grounded in the authority of a supernatural being. However, secular philosophers like Sam Harris attempt to ground objectivity in the landscape of well-being. The challenge, as noted by critics like Jordan Peterson, is that even if we all prefer well-being, that preference alone doesn't necessarily make it an 'objective' truth in the same way gravity is a truth. If morality is purely subjective, a matter of personal or cultural taste, we lose the ability to meaningfully condemn atrocities, as we've reduced moral horror to a mere difference in opinion.
The Consequentialist Trap: When Outcomes Dictate Rightness
Many of us are closeted utilitarians. We believe the right action is the one that produces the best results. This is Consequentialism. On the surface, it seems rational: why wouldn't we want to minimize suffering and maximize pleasure? However, this path leads to the 'Rash Doctor' problem. Imagine a doctor chooses a treatment with a 99% chance of death because the 1% chance of success offers 100% health, whereas the alternative offers 85% health with 99% certainty. If the doctor gambles and wins, did he do the 'right' thing?
A pure consequentialist might say yes because the outcome was better. But our intuition screams that the doctor was reckless. This forces us to move toward probabilistic utilitarianism, where we judge actions based on their expected outcomes rather than their actual ones. But even then, we run into the 'Utility Monster' or the problem of the minority. If the suffering of one person produces immense pleasure for ten others, does the math check out? Most of us recoil at this, suggesting that there must be something more to morality than just a ledger of pleasure and pain.
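To see why expected outcomes, not actual ones, carry the judgment, here is a minimal sketch of the Rash Doctor comparison in Python. The health scores, and the assumption that death and the safe option's rare failure both count as zero, are illustrative choices, not part of the original argument.

```python
# Expected-value comparison for the 'Rash Doctor' problem.
# Illustrative scoring: full health = 100, death = 0, and the
# safe treatment's 1% failure case is also scored as 0.

def expected_outcome(outcomes):
    """Sum of probability * value over every possible result."""
    return sum(p * value for p, value in outcomes)

# The gamble: 1% chance of full health, 99% chance of death.
gamble = [(0.01, 100), (0.99, 0)]

# The safe option: 99% chance of 85% health, 1% chance of failure.
safe = [(0.99, 85), (0.01, 0)]

print(f"Gamble expected health: {expected_outcome(gamble):.2f}")  # 1.00
print(f"Safe expected health:   {expected_outcome(safe):.2f}")    # 84.15
```

On expected value the safe option wins by a wide margin, which matches the intuition that the doctor was reckless even though his gamble happened to pay off.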
Deontology and the Power of the Rule
When consequentialism fails our intuition, we often turn to Deontology, a framework most famously championed by Immanuel Kant. Deontology argues that some actions are inherently right or wrong, regardless of their consequences. Murder is wrong because it violates a moral rule, not because it makes people sad. This provides a shield against the 'tyranny of the majority.'

However, deontology has its own pitfalls. If it is always wrong to lie, are you obligated to tell a murderer the location of their victim? This rigidity often forces philosophers to create 'Rule Utilitarianism,' a hybrid where we follow rules that, if generally adopted, would maximize well-being. We are constantly descending a 'tree of exceptions,' refining our theories every time a new thought experiment exposes a flaw. This iterative process is how we move from primitive impulses to a sophisticated moral compass.
The Ghost in the Machine: Free Will and Responsibility
Perhaps the most unsettling aspect of ethics is its dependence on Free Will. Most of us believe that you can only be held morally responsible for something if you could have acted otherwise. If you are pushed and knock someone onto a train track, you aren't a murderer because you had no choice. But what if free will is an illusion? If our actions are the result of prior causes, biological and environmental, then the traditional concept of moral responsibility begins to evaporate.
Harry Frankfurt challenged this with his famous cases. Imagine a neuroscientist installs a chip in your brain that will force you to vote for Candidate A if you try to vote for Candidate B. If you choose Candidate A on your own, the chip does nothing. You couldn't have acted otherwise, yet you seem responsible for your choice. These 'Frankfurt Cases' suggest that responsibility might be tied to intent rather than the ability to choose differently. This has massive implications for how we view justice and personal growth. If we are 'meat computers,' we may need to shift our focus from retribution to rehabilitation.
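The logic of a Frankfurt case is compact enough to sketch in code. The following Python snippet is only an illustration of the structure; the function name and the way 'traces to intent' is flagged are my own framing, not Frankfurt's.

```python
# A Frankfurt-style case: a counterfactual intervener (the chip)
# guarantees the outcome, but only fires if the agent's own
# deliberation would have gone the other way.

def frankfurt_vote(agents_own_choice: str) -> tuple[str, bool]:
    """Return (actual vote, whether the vote traces to the agent's own intent)."""
    forced_outcome = "A"                 # the chip always secures Candidate A
    if agents_own_choice == forced_outcome:
        return forced_outcome, True      # chip stays dormant; the agent acts on their own
    return forced_outcome, False         # chip fires; the outcome no longer reflects intent

print(frankfurt_vote("A"))  # ('A', True)  -- same outcome, rooted in the agent's intent
print(frankfurt_vote("B"))  # ('A', False) -- same outcome, forced by the chip
```

Both runs end with a vote for Candidate A, so the agent never could have done otherwise; what varies is whether the outcome flows from their own intent, which is exactly where Frankfurt locates responsibility.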
Knowledge and the Gettier Problem
Before we can act on what is good, we must know what is true. But what is knowledge? For centuries, it was defined as 'Justified True Belief.' If you believe it's raining, and it is actually raining, and you saw it through a window, you 'know' it's raining. Then came Edmund Gettier, who destroyed this definition with a two-page paper.
He proposed cases where someone has a justified true belief that is only true by luck. Imagine seeing a girl bobbing over a hedge and believing she is on a horse. You are justified in this belief. It turns out she is on her father's shoulders, but there is a horse standing in the field behind her. Your belief ('there is a girl and a horse over there') is true and justified, but you didn't really 'know' it. This matters because it shows that even our most 'rational' conclusions can be built on shaky foundations. In the realm of personal growth, we must constantly ask: do I know this to be true, or am I just lucky that my assumptions haven't failed me yet?
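The classical definition and its leak can be put in a few lines of code. This sketch treats the three JTB conditions as simple flags; the class and field names are illustrative stand-ins, not a formal epistemology.

```python
# 'Justified True Belief' as a naive checklist, applied to the hedge case.

from dataclasses import dataclass

@dataclass
class BeliefState:
    believed: bool        # the subject holds the belief
    true: bool            # the proposition is in fact true
    justified: bool       # the subject has good grounds for it
    luck_dependent: bool  # justification and truth connect only by accident

def is_knowledge_jtb(s: BeliefState) -> bool:
    """The classical test: knowledge = belief + truth + justification."""
    return s.believed and s.true and s.justified  # note: luck is never checked

# The hedge case: 'there is a girl and a horse over there' -- believed,
# justified by what you saw, and true only because a horse happens to be
# standing in the field behind the girl on her father's shoulders.
hedge_case = BeliefState(believed=True, true=True, justified=True, luck_dependent=True)

print(is_knowledge_jtb(hedge_case))  # True -- yet intuitively this isn't knowledge
```

The checklist certifies the hedge case as knowledge because it has no way to see the luck involved, which is precisely the gap Gettier's two pages exposed.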
Bridging the Gap: From Armchair to Action
The ultimate test of any ethical theory is not how it sounds in a pub, but how it changes your behavior. Peter Singer provides a brutal wake-up call with his 'Drowning Child' analogy. If you would ruin a £30 pair of shoes to save a child from a shallow pond, why wouldn't you give £30 to save a child from malaria? Distance and directness make no moral difference, yet we treat them as worlds apart.
Living in alignment with our discoveries is the hallmark of a resilient mindset. O'Connor's own transition to Veganism serves as a case study. Once he realized he had no logical rebuttal to the argument from animal suffering, he was forced to change his life. As Albert Camus suggested, once we determine something to be true, it must determine our actions. If we ignore our own moral conclusions because they are inconvenient, we are essentially cheating ourselves. Growth happens when we close the gap between what we know and what we do.