Responsible AI Research Centre

What is Responsible AI?

According to the International Organization for Standardization (ISO), responsible artificial intelligence (AI) denotes international efforts to align AI with societal values and expectations, including addressing ethical concerns around bias, transparency, and privacy. Responsible AI seeks to ensure that AI is developed and deployed in the interests of everyone, regardless of gender, race, faith, demographic, location, or net worth.

Principles of responsible AI 

Responsible AI is the practice of developing and using AI systems in a way that provides benefits to individuals, groups, and the wider society while minimising the risk of negative consequences. Given their increasing importance in our society and economy, AI systems must be trusted to behave and make decisions in a responsible manner.  

Values and ethics must be hard-wired into the very design of AI from the beginning. 

While there isn’t a fixed, universally agreed-upon set of principles for Responsible AI, the Australian Department of Industry, Science and Resources has identified a set of AI ethics principles to ‘create a foundation for safe and responsible AI use.’

Responsible AI Research Centre (RAIR) 

The Responsible AI Research Centre (RAIR) will combine the expertise of the Australian Institute for Machine Learning (AIML) with CSIRO’s Data61 to attract top research talent to South Australia and establish cutting-edge initiatives in responsible AI. Researchers will address responsible AI for both national and international impact.

RAIR will be a testament to Australia’s growing reputation as a world leader in the field of responsible AI and AI safety research. The Centre is focussed on four distinct themes:

Theme 1: Tackling misinformation

This theme will explore how to develop methods that attribute AI-generated content to trusted data sources, in order to avoid misinformation and misuse.

Theme 2: Safe AI in the Real World

This theme will explore the foundational science questions that underpin how AI interacts with the physical world, linking to areas including robotics.

Theme 3: Diverse AI

This theme will explore new directions for developing AI systems that can accurately assess their own knowledge limitations and reliably express uncertainty, helping to reduce AI hallucinations.     

Theme 4: AI that can explain its actions

This theme will develop AI that understands cause-and-effect relationships, beyond mere correlations, particularly in complex and dynamic environments.

RAIR will also focus on global engagement and on expanding research investment within Australia, extending to the Asia-Pacific region and beyond.

Relevant AIML scholarships 

- The Ethics of Healthcare AI scholarship supports a full-time PhD student pursuing research on the responsible and ethical evaluation of machine learning or artificial intelligence systems in healthcare settings.

More RAIR PhD scholarships are on the way; please check back soon for updates.