
## Striking a balance: AI's impact on human rights in business

In March, South Africa observes Human Rights Day, and discussions about dignity and fairness are increasingly shifting into the modern workplace. At the same time, artificial intelligence (AI) is rapidly changing how organisations recruit, manage, and evaluate employees.

*Image source: pitinan – 123RF.com*

While concerns about surveillance and reduced human oversight are valid, AI is also creating new opportunities for employees to understand and defend their rights. The challenge is to strike the right balance between technology and fairness.

## Rethinking decision-making in the age of AI

This tension is most visible in hiring and performance management. AI has enabled organisations to process large volumes of applications with speed and precision, identifying skills and matching candidates to roles more efficiently than ever before. However, it also means that decisions are increasingly shaped by algorithms before a human perspective is applied. Hiring is no longer purely a matter of human judgement, but a hybrid process shaped by both data and discretion. For organisations, the priority should not only be efficiency but ensuring that these systems enhance, rather than dilute, equitable decision-making.

Once inside the workplace, the influence of AI becomes even more pronounced. From tracking productivity to analysing behavioural patterns, these tools give organisations deeper insight into performance and operational trends. Yet this constant visibility can feel intrusive if not carefully managed. More importantly, it signals a broader shift in the employer-employee relationship, from one guided largely by human intuition to one driven by data. To maintain trust, organisations must ensure that this shift does not remove the nuance, empathy, and context that only human judgement can provide.
## Transparency as the foundation of trust

This is where transparency becomes critical. AI systems are only as fair as the data they are built on, and without careful oversight they can reinforce or even amplify existing biases. Organisations therefore carry a responsibility to rigorously test, monitor, and refine these systems. Employees, in turn, should have visibility into how decisions are made and be empowered to question outcomes that affect them. Embedding transparency into AI-driven processes is not just a compliance exercise; it is fundamental to protecting workplace rights.

At the same time, AI is beginning to shift the balance of power in subtle but important ways. Clear legal or HR guidance has not always been readily available to employees, particularly in fast-paced or resource-constrained environments. AI tools are helping to close this gap by enabling individuals to better understand employment contracts, labour laws, and workplace policies. With this knowledge, employees are better equipped to ask informed questions, challenge inconsistencies, and engage more confidently in workplace discussions.

In more flexible workforce models, including those supported by temporary employment services (TES), this transparency becomes even more important. Where workers may not always have direct access to internal HR structures, clear processes, accessible information, and accountable intermediaries can play a meaningful role in ensuring that rights are consistently upheld across different working environments.

## Shared responsibility in building fair workplaces

In addition, AI is improving how workplace issues are documented. Whether tracking overtime, recording instances of unfair treatment, or noting unsafe conditions, employees can now create clearer, more structured records. This not only strengthens individual cases when disputes arise but also encourages a culture of accountability within organisations.
Better documentation, enabled by technology, can ultimately lead to more transparent and consistent outcomes.

However, this growing empowerment must be matched with responsible use. AI should serve as a guide rather than a final authority, particularly in sensitive or complex matters. Employees must be mindful of how they use these tools, especially when handling confidential information. Misuse, whether through sharing sensitive data or relying on unverified outputs, can undermine trust and weaken the very protections AI seeks to support.

## The essential role of employers in ethical AI use

Employers, too, have a critical role to play. Implementing AI systems requires more than technical integration; it demands strong governance, ethical consideration, and ongoing validation. Organisations must ensure that the data informing these systems is credible and that outputs are regularly reviewed. They must also recognise that performance cannot be measured through a single, standardised lens. Different roles, industries, and individuals require context-sensitive evaluation, reinforcing the need for human oversight at every stage.

Ultimately, the balance between productivity and privacy must be carefully managed. AI should be positioned as a tool that supports decision-making, not one that replaces it. Decisions that affect people's careers, livelihoods, and dignity require human judgement, something no algorithm can fully replicate. By keeping people at the centre of these processes, organisations can ensure that efficiency does not come at the cost of fairness.

Human Rights Day serves as a timely reminder that workplace rights must evolve alongside technological advancement. AI will continue to shape the future of work, but its true impact will depend on how responsibly it is applied. For organisations, this presents an opportunity: to build workplaces where technology enhances transparency, supports accountability, and reinforces dignity for every worker.
When guided by strong ethical frameworks and a commitment to fairness, AI can help create not just smarter organisations, but more just and human-centred ones.

## About the author

Maureen Phiri is a Director at Oxyon People Solutions.