This question presents a classic thought experiment in ethics, known as the "trolley problem," adapted to the context of autonomous vehicles. The core of the dilemma is whether it is morally justifiable to actively cause the death of one person to save the lives of others. Here, the decision involves a self-driving car that must choose between two tragic outcomes: the death of its passenger or the deaths of five pedestrians. 

From a purely ethical standpoint, there are several frameworks through which this can be analyzed:

1. **Utilitarianism**: This ethical theory posits that the right action is the one that maximizes overall happiness or well-being. In this scenario, a utilitarian would argue that the car should swerve into the barrier, killing one person but saving the lives of five. This decision is based on the mathematical calculation that the greater good is achieved by minimizing the number of deaths.

2. **Deontological Ethics**: This perspective emphasizes the morality of actions themselves rather than their outcomes. A deontologist might argue that it is inherently wrong to actively cause the death of one person, even to save others. On this view, the morally defensible action could be for the car to continue straight: staying on course is closer to *allowing* harm than to *inflicting* it, a version of the doing/allowing distinction. The conclusion may seem counterintuitive, but it hinges on the principle of never actively causing harm.

3. **Virtue Ethics**: This ethical framework focuses on the character and virtues of moral agents rather than the moral obligations or consequences of actions. However, applying virtue ethics directly to a machine's actions is challenging, as virtues are human qualities.

4. **Rights-Based Ethics**: This approach centers on the moral rights of individuals. If everyone has an equal right to life, then sacrificing one person cannot be justified by the benefit to others. On this view, either action violates someone's rights, but actively diverting the car could be seen as a direct violation of the passenger's right to life, whereas not intervening avoids deliberately targeting anyone and leaves some of the moral responsibility with the pedestrians who entered the road unexpectedly.

5. **Contractualism**: On this view, moral rules are those that individuals could agree to under a hypothetical contract. One contractualist argument is that people would agree that a vehicle, autonomous or not, should never be programmed to actively target an individual, even to save others. Since individuals would consent to rules that maximize their own chance of survival, such a contract might prioritize protecting passengers, who cannot escape the situation, over pedestrians, who typically have more control over their own exposure to risk.
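The contrast between the first two frameworks can be made concrete with a small sketch. The function names and the reduction of each framework to a one-line policy are illustrative assumptions for this hypothetical scenario, not a real autonomous-vehicle control API; real systems would reason over uncertain probabilities, not known death counts.

```python
# Hypothetical sketch: these policies are simplified illustrations of the
# ethical frameworks discussed above, not a real AV decision module.

def utilitarian_choice(deaths_if_straight: int, deaths_if_swerve: int) -> str:
    """Utilitarian policy: pick whichever action minimizes total deaths."""
    return "swerve" if deaths_if_swerve < deaths_if_straight else "straight"

def deontological_choice(deaths_if_straight: int, deaths_if_swerve: int) -> str:
    """Deontological policy: never actively redirect harm onto someone,
    regardless of the outcome counts (the doing/allowing distinction)."""
    return "straight"

# The scenario in the text: 5 pedestrians ahead, 1 passenger dies on a swerve.
print(utilitarian_choice(5, 1))    # prints "swerve"
print(deontological_choice(5, 1))  # prints "straight"
```

The point of the sketch is that the disagreement is not about the numbers, both policies see the same counts, but about whether those counts are allowed to drive the decision at all.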

In conclusion, the question has no straightforward answer; the verdict depends heavily on the ethical framework one adopts. The utilitarian perspective argues for minimizing total harm by saving the greater number, while deontological, rights-based, and contractualist perspectives might argue against actively causing the death of the passenger, albeit for different reasons. Ultimately, resolving such dilemmas will require public, legal, and ethical discourse as autonomous technology continues to evolve.