A tale of two species
Balancing new technology and ethical considerations
by Rashina Hoda

It was a busy day in February. I was in my office at Monash University, squeezing in some emails with one hand and a quick bite of lunch with the other. Yeah, a typical day for an academic. That’s when I came across an email sent to me by a PhD student from another Australian university who wanted to know about a research paper I had written. They sent me the title of the paper, the abstract, and the author list. 


In November 2022, ChatGPT was made freely available. By January 2023, it had gained more than one hundred million users, becoming the fastest-growing software application in history.

Besides its impressive natural language processing capabilities and human-like conversational manner, it is also prone to the kind of confident false claims I had experienced firsthand, a phenomenon commonly referred to as ‘hallucination’. Simply put, it will share made-up information.

People have played around with its capabilities and have discovered that it translates better than comparable Google and Microsoft products. It can pass the bar exam, but it can also assist hackers by writing malware and phishing emails at scale, in combination with other AI models.

ChatGPT is an example of this new breed of AI called Generative AI. Unlike search engines such as Google, which find and regurgitate existing information, Generative AI focuses on creating new content. Another example is DALL-E, where you can describe what you wish to visualise and it will create a high-resolution image for you. More and more companies are rushing to add these ‘magical genies’ to their own product bottles, and their stock prices are rising with the announcements.

Despite appearances, AI is not a new concept. Its recent resurgence through software such as ChatGPT has led to much excitement, as well as serious concerns. On the one hand, it promises great advances in areas such as digital health and access to high-quality personalised education. On the other, leading AI figures are voicing grave concerns over its unchecked growth.

In May 2023, Geoffrey Hinton, often referred to as one of the godfathers of AI, resigned from Google in order to raise alarm bells about the potential risks of ‘strong AI’ – the type of AI that can truly think for itself instead of simply staying within the confines of what human developers have programmed it to do.

So, what can we do? We need to come up with responsible ways of harnessing the capabilities of these new software systems. That’s right. AI chatbots such as ChatGPT are also fundamentally software systems. AI, like other software, is developed by people in software teams.

My job as a researcher involves studying software teams and designing human-centred software. I like to study how software teams approach the engineering of software systems, including the AI ones. Here are three examples of this work.

First, in one of my research projects, we have been listening to practitioners – those who actually design and develop AI – about what they think and do about its ethical aspects:

  • do they consider ethics when designing AI systems?
  • which ethical principles or guidelines do they follow?
  • are they aware of Australia’s AI Ethics Principles?

We conducted a survey of one hundred AI practitioners and asked them about their perceptions of, and the challenges related to, AI ethics. We found that they were most aware of the principles of ‘privacy protection and security’, followed by ‘reliability and safety’. However, only a small percentage were aware of all the principles, and ‘workplace rules and policies’ were by far the most common reason for their awareness. They also reported a number of challenges, of which human-related ones were the most common: a lack of knowledge and understanding of ethical AI, the subjectivity surrounding ethics, and the biased nature of human beings were highlighted as key issues.

While no one will disagree that ethics are important, encouraging software students and industry developers to think about ethics can be an uphill task. This is primarily because a lesson in responsible AI can easily turn into a boring lecture on dos and don’ts. To inject some fun into this critical task, we developed an interactive Ethical AI Quiz that software teams can complete to assess their awareness. Because ethics is hardly ever black and white, our Quiz keeps track of ideal and less desirable responses, and provides constructive feedback so that respondents can also learn in the process.

Second is a research project I am leading where we are working with colleagues from health to co-design intelligent software solutions with healthcare practitioners, patients, and carers to improve their virtual healthcare experience. This is where I have experienced firsthand how challenging it is to balance technical opportunities with ethical considerations. It’s easier said than done. Doing ‘the right thing’ in developing responsible AI systems often means that software teams need to prioritise user privacy and security over what may be technically ‘cool’ and possibly simpler to implement.

Finally, there is another project with national and international collaborators in which we have reimagined what agile project management should look like. Current management approaches focus on creating business value. We propose a framework in which AI techniques can be used to boost productivity and effectiveness while also balancing human values and considerations such as employee well-being and ethical practice. We call this combination of human-centred heart and AI-powered mind ‘Augmented Agile’.

These are some examples of the work I am excited to be leading at the Faculty of IT at Monash University in the area of Responsible AI. There are many more endeavours in this space, including major work being done by research teams at CSIRO’s Data61 in Australia, and by others internationally.

We live in a time when our reliance on technology is turning into dependence, when a sophisticated robot called Sophia has been granted legal citizenship of a country, and when human researchers are willing to share co-authorship rights with AI systems like ChatGPT. We may not be in the midst of an existential crisis because of AI, but we are certainly on the verge of an identity crisis for humanity.

Charles Dickens could never have imagined how pertinent these words would be in the twenty-first century: ‘It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness … it was the spring of hope, it was the winter of despair.’

Here’s the bottom line: AI is here to stay, and it will only become more powerful. Right now, there is hope, foreboding, and plenty of hype around AI. How responsibly we as humans develop, interact with, and co-exist with AI will decide where this tale of two species leads us.


This is an edited version of a talk originally broadcast on ABC Radio National’s Ockham’s Razor.
