Artificial intelligence is no longer the exclusive domain of tech behemoths or conglomerates with substantial financial resources. Now, the question is: How can I make AI work for me?
One advantage of consumer-facing chatbots is that machine-learning algorithms, by storing and analysing data from previous conversations, can enable them to answer increasingly complex questions in real time.
They can serve as a first line of support, which frees up employees’ time and allows them to focus on more complicated situations that require further escalation and human intervention.
NAVIGATING THE ETHICAL WATERS OF GENAI
While AI-powered tools can grow smarter with more time and data, they should still be approached with caution. Not everyone has jumped onto this tempting bandwagon, and for good reason.
As impressive as AI has proved to be so far, the large language models underlying most AI-powered applications are still rife with problems, including copyright issues, bias, and factual inaccuracies.
While AI certainly can increase productivity, the content it generates still requires the critical touch of human expertise for review and editing. After all, you do not want your social media pages filled with plagiarised content.
Another key concern is data privacy — for example, entering proprietary information into ChatGPT may put your or your clients’ sensitive information at risk.
Keep in mind that AI is a double-edged sword. Without a clear framework and proper ethical training, you could be putting your business at risk.
The Infocomm Media Development Authority has released principles for businesses and AI providers to follow in the Implementation and Self-Assessment Guide for Organisations, which provides suggestions and strategies for fostering trust in AI while maintaining a clear understanding of its usage and limitations.
STAYING AHEAD OF THE CURVE
A big part of using AI responsibly is ensuring that our workforce can keep up with dynamic changes.
The first step in navigating any form of disruption is to establish an organisational culture that is adaptable and responsive to change. This can be achieved by nurturing an atmosphere of open and transparent communication.
Given that employees will naturally wonder how large-scale decisions will impact them, a resilient and empathetic culture will allow for effective responses to their fears.
Equally important is for management to understand the capabilities and limitations of any new technology. As mentioned, generative AI software carries risks of data privacy breaches, inaccuracies, and potential copyright infringement. Putting in place rules and guidelines around the use of AI at work, including mandatory fact-checking and the protection of client information, will help to minimise such risks.
Every organisation, regardless of size and scale, will need to look into reskilling and upskilling initiatives that prepare its current workforce for such changes. Singapore’s SkillsFuture, for example, has had great results in matching talent with new roles emerging from the advent of GenAI through its upskilling programmes.
Another hurdle to overcome is workers’ potential reluctance to invest time in such training. One possible strategy is to ensure that they see tangible benefits from it, such as promotions or higher wages.
With the right training and understanding of AI’s potential, Singapore can cultivate a well-prepared talent pipeline, equipping the current and future workforce with the skills needed to thrive in an ever-changing job market. This will be the key to creating a digital ecosystem where every business can evolve, advancing Singapore’s progress as a unified society.
ABOUT THE AUTHORS:
Laurence Liew is Director of AI Innovation at AI Singapore, and Adviser to SGTech’s AI Chapter. Benjamin Mah is Co-chair of SGTech’s Talent Steering Committee.