The promises and perils of AI


03/12/2024


I’ve been thinking a lot about Artificial Intelligence lately. It has become ubiquitous remarkably quickly. I recently visited a training facility that used AI vision to detect when people had forgotten to put on their PPE. NZISM uses AI tools to summarise meetings, analyse data and assist with various tasks as a matter of course. And as a member of the HASANZ scholarship panel, I saw a surprising number of applications veer into the uncanny prose valley, strongly suggesting that AI had done most of the writing for the applicant (not a good strategy for getting a scholarship).


My view is that artificial intelligence will significantly change most jobs, and health and safety is no exception. I think those who understand and use AI well are likely to have a significant advantage over those who use it badly or not at all.

For this reason, I’ve been keen to get different perspectives on AI in front of NZISM members, but with a note of caution attached. It’s easy to get swept away by promises of massive productivity increases and of subcontracting your drudgery to the machines. As professionals, it’s important to understand when and how to use AI to complement our skills and knowledge rather than to substitute for them.

I want to recommend a recent episode of Dave Provan and Drew Rae’s Safety of Work podcast entitled “Does ChatGPT give good safety advice?”. They discuss an interesting 2023 study in Safety Science by Oviedo-Trespalacios et al. called ‘The risks of using ChatGPT to obtain common safety-related information and advice’.

In the study, nine experts in different areas of safety asked ChatGPT 3.5 for safety advice linked to their areas of expertise (such as mobile phone use while driving, crowd safety, and addressing burnout risks in high-pressure jobs). The experts then critiqued the advice ChatGPT gave.

The experts (the study’s authors, along with Provan and Rae) raised useful cautions about the advice ChatGPT gave:

  • The advice ChatGPT gave in the study didn’t contain significant errors or hallucinations (where an AI makes up facts or data), but it lacked sources and didn’t flag areas where the evidence is developing or disputed. The model’s false confidence could easily lead users astray.
  • ChatGPT doesn’t appear to give greater weight to ‘better’ sources, such as information from regulators or peer-reviewed studies.
  • The way a question is asked also influences the answer ChatGPT gives: questions framed as “what can I do” generated individual-focused answers rather than systemic ones.
  • ChatGPT’s answers also lack cultural specificity and tend to be US-centric.

Drew Rae’s advice sums it up neatly:

Policymakers such as your risk managers should refrain from using ChatGPT as a source of expert safety information and advice. Hell, yes. Do not use ChatGPT to write a training course, to write a policy document, or something like that. The lack of traceability, the inability to synthesize knowledge, the inability to spot nuance when there's a debate or conflicting evidence, the inability to update it with recent evidence, not only are you going to be immediately generating stuff that is bad, you'll be creating an ecosystem of data which is going to continue to get worse over time if you were to use it for that. Please don't.

A key feature that distinguishes professionals is the steps they take to ensure they provide the best advice possible. NZISM’s Code of Ethics requires our members to “Provide advice, express an opinion, or make statements in an honest, objective, impartial and efficient way and consider the reasonably foreseeable consequences of that advice” and to “Ensure work carried out by others under their direction is performed competently with honesty and integrity and is accurately reported.”

Other professionals are learning the perils of leaning too heavily on ChatGPT. Lawyers in the US and Canada have been censured by the courts for filing arguments that cited case law hallucinated by AI.

Are these problems fundamental and insurmountable? Perhaps not. AI technology is developing exceptionally quickly, and the long lead-in time of peer-reviewed research means that it will usually be assessing outdated models (OpenAI has released both GPT-4 and GPT-4o since the Oviedo-Trespalacios study was conducted). Small-scale studies suggest that, in controlled settings, AI can outperform doctors at diagnosing illnesses.

However, until AI can address these problems with the advice it provides, I advise extreme caution. Uncritical use of AI for safety advice is likely to breach your professional obligations as an adviser. We will keep a careful eye on these developments and bring good information to our members.

Ngā mihi

Jeff Sissons

NZISM CEO