
Why ethics matter in the race to develop AI

Earlier this year, a relatively obscure chatbot garnered significant global attention.

DeepSeek, a Chinese artificial intelligence (AI) startup, shot to international prominence, climbing app download rankings and rattling US tech stock valuations. The surge followed the release of its latest model, which the company claimed matched the capabilities of technology developed by OpenAI, the creators of ChatGPT, at substantially lower cost.

Alongside the public attention came political concern. Echoing earlier scrutiny of Chinese technology platforms such as TikTok and Huawei, Western governments voiced apprehension that the DeepSeek platform could be used to harvest data for intelligence purposes.

The concern was particularly pronounced in Australia, where Federal Science Minister Ed Husic pointed to unanswered questions about how the platform manages data and privacy. The platform was subsequently banned from all federal government systems and devices. The episode underscores the need for rigorous ethical consideration to keep pace with rapid advances in AI, particularly for business executives who are under pressure to adopt AI technologies while navigating ethical obligations and mitigating potential risks.

What is AI ethics?

AI ethics is a set of principles and values that guide the development, deployment and use of AI systems to ensure they align with human values and do not cause harm. It's about making sure AI technologies are fair, transparent and accountable, and that they respect human rights.

Why is AI ethics important?

Ethical guidelines are crucial in all aspects of business, particularly when dealing with a new concept or technology. In the rush to implement a game-changing tool such as AI, it is easy, and even tempting, for businesses to overlook moral questions in favour of quick results, but there are inherent dangers in taking that path. Without ethical guidelines, AI can reinforce biases, violate privacy and lead to unfair or harmful outcomes.

At their core, AI ethical principles guide developers, policymakers and businesses in creating platforms that are transparent, fair and aligned with human values, ensuring their responsible and beneficial use in society.

Australia’s AI ethics principles

The issue is so important that the Australian Government has published ‘8 Artificial Intelligence Ethics Principles’ designed to ensure the technology is safe, secure and reliable. The principles are:

  1. Human, societal and environmental wellbeing
  2. Human-centred values
  3. Fairness
  4. Privacy protection and security
  5. Reliability and safety
  6. Transparency and explainability
  7. Contestability
  8. Accountability

Of course, like any government initiative, the ‘8 Principles’ approach is up for debate. A pilot project involving major Australian businesses highlighted their relevance and underscored the growing expectation of ethical AI practices, with participants reporting that the principles aligned with public expectations. However, the principles are voluntary, which limits their enforceability, and there is now talk that mandatory safeguards may be needed to address the risks associated with AI.

7 steps to implement AI ethically

Artificial intelligence is not only here to stay – it is one of the key tools that will shape the business world in the coming years. Organisations cannot wait for governments to mandate ethical principles; instead, they must lead the way, implementing the technology in a manner that positively shapes its future and supports sustainable and inclusive growth.

To help guide your own AI ambitions, here are seven steps for harnessing the power of AI for good.

  1. Develop a code of ethics – “What do we stand for?” is a simple question to ask but not always an easy one to answer. Creating a code of ethics helps answer it, guiding AI development through clear principles around fairness, transparency and accountability. The code should be developed in collaboration with relevant stakeholders such as employees, customers and industry experts, ensuring it reflects the values and needs of all involved. Ultimately, it all comes back to that original question: “What do we stand for?”

  2. Monitor the AI system – AI systems can deteriorate over time, potentially reinforcing biases and making incorrect decisions. Regular monitoring helps detect and mitigate unintended consequences while ensuring the platform remains aligned with societal values. It also builds trust with customers by allowing businesses to identify and address errors before they cause reputational or legal harm. A minimal monitoring sketch follows this list.

  3. Educate employees – if an organisation’s workforce is its greatest asset, those same employees are arguably the most important factor in implementing AI ethically. Ensuring they understand the risks, responsibilities and best practices associated with the technology enables them to make informed decisions and to prevent misuse, whether deliberate or accidental. Invest in training that helps staff identify biases, protect data privacy and apply fairness principles in AI development and deployment.

  4. Address privacy concerns – data collection has long been a feature of the internet age, but AI introduces new considerations, particularly around scale. AI systems often rely on large volumes of data to perform well, which can raise privacy concerns, especially when data collection practices are not clearly communicated. Approaches such as encryption, anonymisation and meaningful user consent help address these concerns; see the pseudonymisation sketch after this list. These practices are not only ethically sound, they can also enhance a business’s reputation by building trust with users.

  5. Anticipate risks – effective business leaders understand the value of anticipating and managing potential risks, and the same applies to artificial intelligence. While AI systems can deliver significant benefits, they may also inadvertently reflect biases or raise privacy concerns. By proactively assessing these risks and putting thoughtful safeguards in place, organisations can support responsible AI use while building trust and long-term value.

  6. Conduct ethical reviews – the best way forward is to know where you have been, hence the need for businesses to regularly review their AI systems and ensure they align with fairness, transparency and accountability principles. Along with promoting compliance with regulations and societal expectations, ethical reviews encourage continuous improvement and allow businesses to refine AI models and mitigate unintended consequences. 

  7. Partner with ethical providers – for most businesses, the AI journey will be largely shaped by their technology partners, so it makes sense to align with providers that share your values. An organisation can invest in all of the above steps, but they will count for little if it partners with a provider that lacks the same commitment to high ethical standards and compliance. At Probe Group, we have embedded ‘Ethics and data responsibility’ as one of our four pillars of sustainability, and we know that prioritising strong data governance, privacy protection and responsible data handling helps our clients engage with and satisfy their customers ethically and responsibly.
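
To make step 2 concrete, here is a minimal monitoring sketch in Python, using only the standard library. It compares a model's approval rates across groups and raises an alert when the gap exceeds a threshold. The records, group labels and threshold are hypothetical placeholders; a real system would read its own logged decisions and use fairness metrics chosen for its context.

```python
from collections import defaultdict

# Hypothetical example data: each record is (group_label, model_decision),
# e.g. loan approvals logged from a deployed model.
DECISIONS = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

PARITY_THRESHOLD = 0.2  # assumed alert threshold; tune per use case


def selection_rates(decisions):
    """Approval rate per group: approvals / total decisions."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, decision in decisions:
        totals[group] += 1
        approvals[group] += decision
    return {g: approvals[g] / totals[g] for g in totals}


def parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)


rates = selection_rates(DECISIONS)
gap = parity_gap(rates)
print(f"Selection rates: {rates}")
if gap > PARITY_THRESHOLD:
    print(f"ALERT: parity gap {gap:.2f} exceeds threshold {PARITY_THRESHOLD}")
```

In practice, a check like this would run on a schedule against recent production decisions, with alerts routed to whoever owns the model.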
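Step 4's anonymisation advice can also be sketched in a few lines. The example below pseudonymises a direct identifier with a keyed hash before the record is used for analysis; the field names and the ANON_PEPPER environment variable are assumptions for illustration. Note that pseudonymisation is weaker than full anonymisation, since records can still be linked by the pseudonym.

```python
import hashlib
import hmac
import os

# Hypothetical secret "pepper" held outside the dataset; in practice,
# load this from a secrets manager and never hard-code it.
PEPPER = os.environ.get("ANON_PEPPER", "replace-me").encode()


def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (pseudonym).

    Keyed hashing (HMAC) means someone without the key cannot reverse
    the mapping by brute-forcing likely identifiers.
    """
    return hmac.new(PEPPER, identifier.encode(), hashlib.sha256).hexdigest()


record = {"email": "jane@example.com", "age_band": "30-39", "spend": 120.50}
safe_record = {**record, "email": pseudonymise(record["email"])}
print(safe_record)
```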


Learn more about how Probe Group prioritises robust governance practices to ensure ethical conduct in all our operations, or find out more in the Probe Group Responsible AI Policy.
