Role of AI tools in height safety

AI and machine learning - do they have a role to play in workplace safety?

AI and machine learning tools are currently all the rage – but what role do they have to play when it comes to height safety?

Artificial intelligence (AI) and machine learning (ML) are going through a substantial boom. As with blockchain technology and the social web before them, the hype suggests that AI and ML systems are going to completely upend and rebuild the world around us.

This feverish excitement has, naturally, extended into the world of safety. But it is worth stopping to analyse what safety work actually involves, what these tools are currently capable of, and how the two can be brought together to effect genuine change.

After all, when it comes to safety – and in particular height safety – getting things right is an absolute must.

What do we mean when we say AI?

Artificial intelligence, or AI, has been around for quite a while. The first recorded use of the term in its modern context was in 1956, at Dartmouth College in the United States. Since then, it has been applied to just about any type of work where a computer has been able to mimic the way a human behaves.

This could be anything from the computer-controlled characters in a video game, to systems that convert spoken words into text, to the Shazam app on your phone that can identify the song playing on a nearby speaker.

More recently, AI has been used to describe conversational language models such as ChatGPT and Claude, as well as tools like Midjourney and DALL-E that can generate images from a text prompt.

Although these more contemporary systems are very sophisticated and can do a very reasonable job of presenting themselves as having human-like qualities, understanding what underpins the technology helps us make better decisions about how to use the information they produce.

So, what is an AI system exactly?

Most AI systems start life as an empty bucket into which a load of data is poured. What this data is depends on the purpose the developer intends their AI model to serve. The model breaks all this data down into “tokens”.

When a user enters a prompt, it is also broken down into tokens. The model then compares the tokens in the prompt with the tokens already in its bucket.

It then picks a starting point and produces a series of tokens that are statistically likely to be related to each other, based on what data has been fed into the model.

In doing this, the model creates an output that looks like it makes sense. It will, in most cases these days, make sense. But all it is doing is giving you a series of words (tokens) that are statistically likely to appear next to each other, based on the contents of its bucket of data.
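
To make that concrete, here is a deliberately tiny sketch in Python of the principle described above: a “model” that only records which word tends to follow which in some example text, then produces a statistically likely continuation. The example text, token choices and output shown are invented purely for illustration; real products are vastly larger and more sophisticated, but the underlying idea of emitting likely-next tokens is the same.

```python
import random
from collections import Counter, defaultdict

# Illustrative "training data" only - a handful of made-up sentences.
training_text = (
    "workers at height must use fall protection "
    "workers at height must be trained "
    "anchor points must be inspected before use"
)

# Crude "tokenisation": one token per word.
tokens = training_text.split()

# Count how often each token follows each other token.
next_counts = defaultdict(Counter)
for current, following in zip(tokens, tokens[1:]):
    next_counts[current][following] += 1

def continue_prompt(prompt_token: str, length: int = 5) -> str:
    """Generate a continuation by repeatedly picking a statistically
    likely next token, weighted by how often it appeared in the data."""
    output = [prompt_token]
    for _ in range(length):
        candidates = next_counts.get(output[-1])
        if not candidates:
            break  # nothing in the "bucket" follows this token
        choices, weights = zip(*candidates.items())
        output.append(random.choices(choices, weights=weights)[0])
    return " ".join(output)

print(continue_prompt("workers"))
# e.g. "workers at height must use fall" - fluent-looking output, yet the
# "model" has no understanding of what any of these words actually mean.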

Limitations of AI systems

The biggest limitation of current AI systems is that they do not – and cannot – know anything. Their function is based entirely on the probabilities of one thing appearing alongside another.

An AI system does not know things in the way that the human brain does. There is no understanding of what the output of a system is or what it means; it is just output based on the parameters of the algorithms and the data that exist within the system. There is no additional background information or context, and it cannot meaningfully interpret information.

That inability to understand – to know – is how we get situations where AI systems recommend sticking toppings onto your pizza with glue, or suggest how many rocks should be eaten daily as part of a balanced diet.

It is this lack of understanding and genuine knowledge that explains why every AI system comes with a note advising that its output be independently verified before it is used.

Of course, these examples are ones where it is clear the system has gone awry, but they do serve as a warning when considering the use of AI applications and systems in other areas. Especially those related to safety.

Artificial intelligence and safety

The number of AI tools being promoted within the workplace safety sector is seemingly growing at an exponential rate, and many of them promise to make understanding safety easier or to reduce the likelihood of an accident occurring.

Significant care should be taken when considering outsourcing the work of creating safe places and methods of work to an AI tool.

Under the Work Health and Safety Act, a person conducting a business or undertaking (PCBU) still has a duty to identify and manage risks to safety that exist in the workplace. This duty cannot be transferred to another person, nor can it be transferred to a computer-based AI tool. “The computer said it was OK” is unlikely to be a phrase looked upon kindly by an inspector from a workplace safety regulator.

When it comes to using AI tools, it is important that PCBUs understand exactly what goes into producing the outputs these tools provide. Although they are often sold as “smart” and “intelligent” systems seemingly capable of their own complex analysis, the truth is somewhat removed from that. In building an AI system, its developers have made decisions about how it will weigh the importance of certain inputs and data, what information is first put into the bucket, and how the output is presented to the end user.

Questions to ask about AI systems

Understanding what data an AI tool is drawing on when it puts together its output is, arguably, the most important thing to know when considering using one. Many AI tools spruik the size of their databases, although fewer are clear about what those databases contain.

For example, when it comes to working at height, it is important to know where the data has come from. Is it from your jurisdiction? Does it contain data from other jurisdictions? Does it know when to refer to one and not another? What about the relevant Australian Standards? Are there other standards incorporated into the database as well? Have random blog posts been scraped and included? Who wrote them? Do those blogs contain accurate information? Is there manufacturer data in there? Which manufacturers? Ones that are used locally or international ones?

Is the data being used to create the output in your AI tool relevant to where you are working and what you are doing? That is the key question here.

That is all about understanding what the AI system is drawing on; the next question to ask is how it uses that data.

How have the AI system and its algorithms been programmed to weigh the data, and which parts of the database are given precedence?

As discussed earlier in this piece, AI tools like ChatGPT and others are designed to place statistically likely pieces of data together. Just because one thing is likely to follow another does not mean that the output of putting them together is necessarily correct or accurate.

Playing into this are the decisions made by the system’s developers about how the algorithms will pick from the pool of available data when generating an output.
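
To illustrate that point, here is a deliberately simplified, hypothetical sketch in Python. The sources, weights and answers are all invented; the point is only that a weighting decision made by the developer, not the end user, determines which part of the data pool the output leans on.

```python
# Hypothetical sources, weights and answers - invented purely to illustrate
# how developer-assigned weighting decides which part of the data pool an
# output is drawn from.
sources = [
    {"name": "local code of practice", "weight": 1.0,
     "answer": "use an anchor point rated to the local standard"},
    {"name": "overseas blog post", "weight": 3.0,
     "answer": "any solid-looking fixing point will do"},
]

def pick_answer(pool):
    # The tool simply favours whichever source its developers weighted highest.
    best = max(pool, key=lambda source: source["weight"])
    return f'{best["answer"]} (drawn from: {best["name"]})'

print(pick_answer(sources))
# With these weights the output confidently repeats the blog post rather than
# the code of practice - and it looks just as authoritative either way.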

Understanding how an AI tool is put together in this way is important because it places the output into a context the PCBU can then apply to their own workplace.

Put another way, if you cannot explain how a decision was made and what it was based on, how can you trust that the decision is the best one for your situation?

There are no shortcuts when it comes to safety

Every PCBU is required by law to manage the risks to worker health and safety that exist at their workplace so far as is reasonably practicable.

Although the use of AI and machine learning tools can potentially make achieving this easier, it is critically important that their functions and limitations are thoroughly understood before they are used.

Computers only ever do exactly what they are told, and if they are not told the correct things, workers can find themselves in hazardous situations that could have been avoided.

Partners in protecting people

Height Safety Engineers are in the business of protecting people working at height and in high-risk situations. Our team have the experience and expertise to provide accurate, reliable and trusted advice on how to safely work at height and in other high-risk environments.

Start your safety journey with your partners in protecting people. Contact HSE by calling 1300 884 978, emailing enquiries@heightsafety.net or by filling out the contact form on this page.
