
Mayowa Babalola: AI is an ‘ethical time bomb’

Mayowa Babalola, The West Australian

Artificial Intelligence illustration. Credit: Don Lindsay/The West Australian

In boardrooms across Australia, generative AI (the use of artificial intelligence to create new content, such as text, images, music, audio and video) is being hailed as the next big thing, promising to revolutionise how we work, create and innovate.

This powerful tool has the potential to streamline work processes, enhance productivity and unlock new dimensions of creativity. But beneath the hype lies a darker reality that demands urgent attention.

As a professor of business ethics and organisational psychology, I believe there are five critical challenges that could derail the AI revolution — and potentially many businesses.

Firstly, the great job displacement. The automation capabilities of generative AI are reshaping our workforce in profound ways. Bloomberg recently reported that AI is driving more layoffs than many companies are willing to admit.

This trend represents a fundamental shift in the labour landscape. But it isn’t just about numbers; it’s about people. The psychological impact of the ensuing job insecurity cannot be overstated.

Decades of research in organisational psychology have demonstrated that job insecurity and workplace anxiety can significantly impair individual well-being and slash productivity by up to 30 per cent.

Leaders must proactively address this challenge and act now to comprehensively reskill and upskill their workforce. In doing so, leaders can create a workforce that is adaptable and resilient in the face of technological change.

Secondly, intellectual property. The content creation capabilities of generative AI raise complex questions about intellectual property rights, as the increasingly sophisticated outputs produced by AI tools blur the lines between original human creation and AI-generated content.

Professor Mayowa Babalola is an organisational psychologist and the Stan Perron Chair in Business Ethics at The University of Western Australia Business School. Credit: Supplied

The question then becomes, who owns what? This isn’t a hypothetical question. For instance, in 2023, Getty Images sued an AI company for copyright infringement, opening a Pandora’s box of legal issues.

To address these concerns, organisations across Australia must establish clear ethical guidelines and robust monitoring systems for AI-generated content, or they risk costly legal battles. Ultimately, this is not just a legal necessity; it’s an ethical imperative.

Thirdly, the creativity killer. While generative AI excels in many domains, it still falls short in the nuanced problem-solving and contextual understanding that characterises human creativity.

In this respect, there’s a real risk that over-reliance on AI tools could inadvertently suppress our innate creative capabilities — the very thing that drives innovation.

How can businesses strike the right balance? The answer lies in fostering genuine human-AI collaboration rather than endorsing a culture where we outsource the creative process to AI systems.

Doing so will not only preserve our creative instinct but also allow us to leverage the unique strengths of both human intuition and AI’s computational power.

Fourthly, the ethics timebomb. As AI systems become increasingly involved in decision-making processes that impact human lives, particularly in critical areas such as healthcare, finance and human resources, we face unprecedented ethical challenges.

For context, AI decision-making is already impacting our lives, from loan approvals to medical diagnoses. But when AI gets it wrong, who is accountable?

With the EU’s recent AI Act setting a global precedent, Australian businesses need to get ahead of the curve on AI governance.

Organisations should establish transparent AI decision-making processes, drawing inspiration from existing regulations while adapting them to the unique challenges posed by AI in their various industries.

Moreover, there has never been a more critical time for comprehensive ethics training. Leaders must invest in programs that equip their teams with the knowledge and skills to navigate the complex ethical challenges of AI in the workplace.

And finally, we have the hidden mental health crisis. While it may not be obvious, the rapid integration of AI technologies is creating a high-pressure environment for many employees.

A recent study published in the Journal of Applied Psychology reveals a disturbing trend: the more employees interact with AI in the pursuit of work goals, the more they experience feelings of loneliness, leading to increased insomnia and even alcohol consumption.

This hidden cost of AI adoption could be the biggest threat to workplace productivity and well-being.

So, leaders have a responsibility to continue to prioritise mental health in this new AI-driven era, which means providing resources and ongoing support for mental well-being and creating safe spaces where employees can express their concerns without fear of reprisal.

Let’s face it: AI is shaking up the workplace in ways we never imagined. But despite the opportunities, we must tread carefully, as the decisions we make now will define the nature of work for future generations.

Leaders also face a critical decision: embrace ethical AI practices or risk being left behind. It’s not just about technology; it’s about creating a future where both humans and AI can thrive and safeguarding our fundamental human values.

Professor Mayowa Babalola is an organisational psychologist and the Stan Perron Chair in Business Ethics at The University of Western Australia Business School.
