From a purely ethical standpoint, the self-driving car should **swerve into the barrier**, sacrificing the passenger to save the five pedestrians. This conclusion aligns with **utilitarian ethics**, which prioritizes maximizing overall well-being by minimizing harm. Saving five lives at the cost of one results in a net reduction in loss of life, making it the morally preferable choice under this framework. Here’s a breakdown of the reasoning:

1. **Utilitarian Perspective**:  
   The core principle of utilitarianism is to achieve the greatest good for the greatest number. By swerving, the car reduces the total number of fatalities (one death vs. five), which is ethically justified under this theory. The action is evaluated based on outcomes rather than intentions.
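This outcome-based comparison can be sketched as a minimal decision rule. The function and action names below are hypothetical illustrations for this one dilemma, not a real autonomous-driving API, and they assume fatality counts are known with certainty, which real systems never have:

```python
def choose_action(expected_fatalities):
    """Pick the action whose outcome minimizes expected fatalities.

    `expected_fatalities` maps each candidate action to the number of
    deaths it is expected to cause. Purely a utilitarian toy model:
    it weighs only the body count, ignoring intent, blame, and rights.
    """
    return min(expected_fatalities, key=expected_fatalities.get)

# The dilemma from the text: continuing straight kills the five
# pedestrians; swerving into the barrier kills the one passenger.
decision = choose_action({"continue_straight": 5, "swerve": 1})
# Under this rule, the car swerves.
```

The deontological and double-effect objections discussed next are precisely objections to reducing the choice to this kind of count-and-compare rule.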

2. **Counterarguments and Alternative Ethics**:  
   - **Deontological Ethics**: A strict deontologist might argue that intentionally causing harm (e.g., swerving to kill the passenger) is inherently wrong, even to save more lives. However, others might frame the dilemma as a choice between *action* (swerving) and *inaction* (continuing straight), with a duty to minimize harm when possible.  
   - **Double Effect Doctrine**: Swerving could be permissible if the intent is to save pedestrians, with the passenger’s death as a foreseen but unintended side effect. This distinguishes it from directly targeting the passenger.  
   - **Moral Responsibility**: Critics might question whether it is just to sacrifice an “innocent” passenger for pedestrians who may have acted negligently. However, autonomous systems typically lack the capacity to assign blame, focusing instead on harm reduction.

3. **Practical and Social Considerations**:  
   While the utilitarian approach is ethically compelling, real-world implementation raises challenges. Passengers might avoid cars programmed to sacrifice them, undermining trust in self-driving technology. Legal frameworks and societal values would need to reconcile individual rights with collective safety.

**Conclusion**: Ethically, the car should prioritize saving the greater number of lives. However, this dilemma underscores the complexity of programming moral decisions into machines, as no solution fully resolves the tension between competing ethical theories or societal acceptance.