Earlier this year, a relatively obscure chatbot garnered significant global attention.
DeepSeek, a Chinese artificial intelligence (AI) startup, shot to international prominence, rapidly climbing app download rankings and rattling US tech stock valuations. The surge was driven by the release of its latest model, which the company claimed could match the capabilities of technology developed by OpenAI, the creator of ChatGPT, at a substantially lower cost.
Alongside this public attention, political concerns emerged. Echoing previous scrutiny of Chinese technology companies such as TikTok and Huawei, Western governments raised fears that the DeepSeek platform could be used to harvest data for intelligence purposes.
The concern was particularly pronounced in Australia, where Federal Science Minister Ed Husic pointed to unanswered questions about how the platform handles data and privacy. The platform was subsequently banned on all federal government systems and devices. That decision underscores the need for rigorous ethical consideration as AI technology advances at pace, particularly for business executives who are under pressure to adopt AI while also navigating their ethical obligations and mitigating potential risks.
AI ethics is the set of principles and values that guide the development, deployment and use of AI systems to ensure they align with human values and do not cause harm. It's about making sure AI technologies are fair, transparent and accountable, and that they respect human rights.
Ethical guidelines are crucial in all aspects of business, particularly when dealing with a new concept or technology. In the rush to implement a game-changing tool such as AI, it is easy (or even tempting) for businesses to overlook moral questions in favour of quick results, but there are inherent dangers in taking that path. Without ethical guidelines, AI can reinforce biases, violate privacy and lead to unfair or harmful outcomes.
At their core, AI ethical principles guide developers, policymakers and businesses in creating platforms that are transparent, fair and aligned with human values, ensuring their responsible and beneficial use in society.
The issue is so important that the Australian Government has developed ‘8 Artificial Intelligence (AI) Ethics Principles’ to ensure the technology is safe, secure and reliable. The principles are:

1. Human, societal and environmental wellbeing
2. Human-centred values
3. Fairness
4. Privacy protection and security
5. Reliability and safety
6. Transparency and explainability
7. Contestability
8. Accountability
Of course, like any government initiative, the ‘8 Principles’ approach is up for debate. A pilot project involving major Australian businesses highlighted the principles’ relevance and underscored the growing expectation for ethical AI practices, with participants reporting that the principles aligned with public expectations. However, their voluntary nature limits their enforceability, and there is now talk that mandatory safeguards may be needed to address the risks associated with AI.
Artificial intelligence is not only here to stay – it is one of the key tools that will shape the business world in the coming years. Organisations cannot wait for governments to mandate ethical principles; they must lead the way by implementing the technology in a manner that positively shapes its future and supports sustainable and inclusive growth.
To help guide your own AI ambitions, here are seven steps that will help harness the power of AI for good.
Learn more about how Probe Group prioritises robust governance practices to ensure ethical conduct in all our operations, or find out more in the Probe Group Responsible AI Policy.