Abstract: Large Language Models (LLMs) have gained significant attention for generating program code from natural language without direct programming effort, e.g., through dialog-based interaction with ChatGPT. In the field of Service-Oriented Computing, the potential of LLMs' capabilities is yet to be explored. LLMs may solve significant challenges such as automated service discovery and automated service composition by bridging the gap between the availability of suitable services, e.g., in a registry, and their actual composition without explicit semantic annotations or modeling. We analyze the classical approach to service composition and how LLMs have recently been employed in code generation and service composition. As a result, we show that classical solution approaches usually require extensive domain modeling and computationally expensive planning processes, resulting in long composition times. To ground the research on LLMs for service composition, we identify six representative service composition scenarios from the literature and perform experiments with ChatGPT and GPT-4 as notable, representative applications of LLMs. Finally, we frame open research challenges for service composition in the context of LLMs. With this position paper, we emphasize the importance of researching LLMs as the next step in automated service composition.