Abstract: Assistive technology (AT) is an important real-world context that provides fertile ground for search and recommendation research. Identifying suitable equipment that benefits disabled users is a crucial challenge connected to all 17 UN Sustainable Development Goals. AT information retrieval tools should be as comprehensive, flexible, and accessible as possible to end users without specialist knowledge. Recent advances in AI/ML offer new opportunities to help users find AT, facilitating state and international legislative objectives that require inclusive design. This work contributes a new collection for AT retrieval research derived from a production database. We also explore the evaluation of a state-of-the-art visual-linguistic model using information gain, with the contribution of each modality assessed through ablation. Preliminary experiments suggest that multimodal representation of AT items is superior to either text or images alone for generating text-based features. For abstractions such as item “goals”, there is little difference between text-only and image-only data, while extended product descriptions benefit from the saliency of text. Furthermore, human assessment deviates only slightly from an LLM’s featurisation of products. Together, these findings provide new resources for downstream application development and for benchmarking future information retrieval exercises for AT.