Towards AI-augmented sustainability assessments: integrating large language models in the case of product social life cycle assessment
Abstract: Purpose
As the importance of social responsibility gains recognition, demand for robust social life cycle assessment (S-LCA) has grown. This study presents a novel approach that leverages artificial intelligence (AI) to augment expert-led processes and enhance the efficiency, scalability, and accuracy of S-LCA.
Methods
The method utilizes advanced natural language processing (NLP) capabilities and large language models (LLMs) to partially automate the evaluation of social factors, such as community engagement, labor practices, and human rights considerations, against pre-defined S-LCA criteria. The method is applied in parallel with a standard manual reference-scale S-LCA on a carton product case study in Finland. Results obtained from the AI-augmented assessments are then compared with those derived from the manual method.
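To make the Methods description concrete, the following is a minimal, hypothetical sketch of one step in such a pipeline: composing a task instruction for an LLM and mapping its free-text rating of a social topic onto a numeric reference scale. The scale labels, function names, and the stubbed model call are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of reference-scale scoring via an LLM.
# The three-level scale below is an illustrative assumption.
REFERENCE_SCALE = {
    "non-compliant": -1,  # evidence of practices below basic requirements
    "compliant": 0,       # meets basic societal expectations
    "proactive": 1,       # exceeds expectations / best practice
}

def build_prompt(topic: str, evidence: str) -> str:
    """Compose a task instruction asking the model to rate one social topic."""
    return (
        f"Rate the organisation on the topic '{topic}' using exactly one of "
        f"{list(REFERENCE_SCALE)} based on this evidence:\n{evidence}"
    )

def score_response(llm_reply: str) -> int:
    """Map the model's free-text reply to a numeric reference-scale score.

    Labels are checked in dict order, so 'non-compliant' is matched
    before the substring 'compliant'.
    """
    reply = llm_reply.lower()
    for label, score in REFERENCE_SCALE.items():
        if label in reply:
            return score
    raise ValueError(f"Unrecognised rating in reply: {llm_reply!r}")

# Stubbed model call so the sketch runs without an API; a real pipeline
# would send build_prompt(...) to an LLM endpoint instead.
def fake_llm(prompt: str) -> str:
    return "Rating: compliant, as the evidence shows adherence to labour law."

if __name__ == "__main__":
    prompt = build_prompt("labor practices", "Audit found no violations in 2023.")
    print(score_response(fake_llm(prompt)))  # prints 0
```

In a real application, the prompt would also carry the system instructions and contextual materials the paper's Recommendations section mentions, and the parsed scores would feed the same aggregation used in the manual assessment for comparison.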
Results and discussion
The comparative analysis reveals a 50% agreement rate between the AI and manual assessment outcomes. We find that three driving factors explain the differences in the remaining outcomes. First, outcomes differed where the human evaluators drew on tacit knowledge unavailable to the AI. Second, the human evaluators inherently weighed negative evidence more heavily than positive evidence. Third, outcomes differed where the assessment of a topic was highly sensitive to stakeholder perspective and the human and AI evaluators assumed differing perspectives. Depending on the factor of difference, in some cases the AI provided a more objective and fair assessment than the human evaluator, while in others the human evaluator provided a more contextualized and nuanced assessment than the AI.
Conclusions
This study contributes to the emerging field of AI-supported assessments by presenting a practical framework for integrating LLMs into S-LCA. The findings aim to inform stakeholders, researchers, and policymakers about the potential benefits and limitations of incorporating AI in evaluation processes that traditionally entail a degree of subjective judgement. The insights gained from the comparative analysis provide valuable considerations for the ongoing development and adoption of AI-assisted approaches in S-LCA and similar evaluation contexts.
Recommendations
For future applications of automation in S-LCA and similar evaluation contexts, we suggest mitigations to minimize these factors of difference, including priming the AI pipeline with contextual materials and explicitly defining the desired nuances in system and task instructions.
External IDs: doi:10.1007/s11367-025-02508-w