Abstract: The carbon emissions generated during the inference phase of large language models (LLMs) are a growing concern. We propose a method to minimize the number of tokens in input prompts used by medical students interacting with LLMs. Using English physical examination course materials, we applied translation and paraphrasing to recommend the most token-efficient prompt. We found that English baseline prompts contained fewer tokens than Korean ones, and that our proposed paraphrased forms significantly reduced token counts compared to the baseline prompts.
External IDs: doi:10.3233/shti251219