AIML Special Presentation: Trustworthy AI
Out-of-distribution (OOD) detection aims to let a well-trained classifier tell what it does NOT know, instead of wrongly recognising an unknown object as a known one. For example, given a well-trained flower-recognition model, we want it to tell users "I don't know" when shown a car image, rather than claiming the car is a kind of flower. In this talk, Dr Liu presented one advance in OOD detection theory and two recent OOD scores: one based on an in-distribution prior and the other based on the pre-trained vision-language model CLIP.
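The abstract does not detail the two scores themselves, but most OOD scores share a common score-and-threshold paradigm: compute a scalar confidence for each input and reject ("I don't know") when it falls below a threshold. As a minimal, generic illustration only (not Dr Liu's method), the sketch below uses the classic maximum-softmax-probability baseline; the function names and the threshold value are hypothetical choices for this example.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # Maximum softmax probability: high for confident (likely
    # in-distribution) inputs, lower for unfamiliar ones.
    return softmax(logits).max(axis=-1)

def predict_or_reject(logits, threshold=0.5):
    # Return the class index for confident inputs, or -1
    # ("I don't know") when the score falls below the threshold.
    scores = msp_score(logits)
    preds = logits.argmax(axis=-1)
    return np.where(scores >= threshold, preds, -1)

# A confident "flower" prediction vs. a flat, uncertain one
# (e.g. a car image shown to a flower classifier).
logits = np.array([[8.0, 0.5, 0.2],    # confidently class 0
                   [0.4, 0.3, 0.5]])   # near-uniform: reject
print(predict_or_reject(logits, threshold=0.5))  # → [ 0 -1]
```

The scores presented in the talk replace this softmax confidence with better-calibrated quantities (e.g. ones informed by an in-distribution prior, or by CLIP's image-text similarities), but the reject-below-threshold decision rule is the same.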