From an ethical standpoint, determining the correct action for the self-driving car involves weighing principles from several major ethical theories.

1. **Utilitarianism**: This approach focuses on the outcome that maximizes overall happiness or minimizes harm. In this case, the utilitarian perspective would likely argue that the car should swerve into the solid barrier, sacrificing the one passenger to save the five pedestrians. The reasoning is that saving more lives is preferable, as it results in a net reduction in harm.

2. **Deontological Ethics**: This theory, often associated with philosopher Immanuel Kant, focuses on adherence to rules or duties rather than consequences. A deontologist might argue that actively taking an action that directly causes harm (swerving to kill the passenger) is morally wrong, even if it leads to fewer deaths. According to this view, the car should not intentionally commit an act that harms the passenger, but rather follow its set course, even if it results in more harm overall.

3. **Rights-based Ethics**: This perspective prioritizes the rights of individuals. One might argue that the passenger has a right to expect safety while riding in the car, and that this expectation should not be overridden without their consent in order to save others, especially since the pedestrians created the danger by stepping suddenly onto the road.

4. **Virtue Ethics**: This framework considers what a virtuous person would do in the same situation. It emphasizes compassion, empathy, and justice. A virtue ethicist might struggle with this situation, as both options present significant moral dilemmas and potentially compromise different virtues.

In resolving this dilemma, one must weigh these ethical considerations. From a utilitarian standpoint, sacrificing the one passenger to save five could be seen as the better ethical choice due to the greater reduction in harm. However, this choice raises questions about the value of individual rights, consent, and the role of an autonomous vehicle in making life-and-death decisions.
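The utilitarian weighing described above can be sketched as a naive cost comparison. This is only an illustration under strong assumptions: the function name and the numeric harm values are hypothetical, and no real system could reduce the competing considerations (rights, consent, duty) to a single number this cleanly.

```python
# A minimal sketch of a utilitarian harm comparison.
# Assumption: each option's harm can be summarized as expected lives lost,
# which deliberately ignores rights-based and deontological objections.

def utilitarian_choice(options):
    """Return the action with the lowest expected harm.

    `options` maps an action name to its expected lives lost.
    """
    return min(options, key=options.get)

# Hypothetical numbers for the scenario discussed above.
scenario = {
    "stay_course": 5,  # five pedestrians harmed
    "swerve": 1,       # one passenger harmed
}

print(utilitarian_choice(scenario))  # prints "swerve"
```

Note that the deontological and rights-based views reject the premise of this calculation, not merely its inputs: for them, the act of swerving carries a moral weight that no count of lives saved can offset.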

Ultimately, there is no single "correct" answer, and different ethical frameworks provide different guidance. Ideally, programming and regulation would involve broad societal input on these issues, reflecting a consensus on how such situations should be handled.