Automation Bias in RPA: Lesley Case

Grace-Lanser
Community Team

Robotic Process Automation (RPA) is a continuously developing industry, with the scope of use cases continuing to expand. However, constant innovation comes with the challenge of maintaining a high standard of risk management and governance.

As a developer for the NHS, Lesley Case knows how important it is to understand the risks in automation. She has shared her insight into the risk of automation bias and the steps she takes to combat it.

What is Robotic Process Automation (RPA)?

RPA is a software technology that enables the automation of highly repetitive rule-based processes by emulating human actions on computer systems through keyboard presses and mouse clicks. These automations can be referred to as ‘software robots’, ‘Digital Workers’ or simply as ‘bots’.
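To make this concrete, here is a minimal sketch of RPA-style keyboard-and-mouse emulation using the open-source pyautogui Python library; the screen coordinates and the values typed are purely illustrative placeholders, not a real process.

```python
# A minimal sketch of RPA-style UI automation using the open-source
# pyautogui library. The coordinates and data entered are hypothetical;
# a real Digital Worker would locate fields more robustly (e.g. via the
# application's UI tree or image matching).
import pyautogui

pyautogui.click(420, 310)        # click into a reference-number input box
pyautogui.write("REF-0001234")   # type a value, as a human would
pyautogui.press("tab")           # move to the next field
pyautogui.write("2024-05-01")
pyautogui.press("enter")         # submit the form
```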

RPA has become increasingly popular in recent years, and while there is plenty of documentation about its general risks, the risk of automation bias in RPA is not as widely discussed.

What is automation bias?

Automation bias is an overreliance on automated systems or technology. When humans favour automation over their own judgement or critical thinking skills, it becomes easier to make mistakes. For example, accepting a suggestion from a spell-checking programme even when the suggestion is clearly incorrect, or blindly following the GPS and driving your car off a 100-foot cliff.

What are the risks of automation bias in RPA?

Automation Complacency

Automation complacency describes how high automation reliability can lead human users to disengage from monitoring the performance of the automations.

While it is best practice for RPA teams to provide reports to end-users detailing what the bot has completed and what it has not, the end-user is ultimately responsible for checking that the completed activities have been processed as expected. There can be an unfounded comfort in expecting a bot to break if something has changed, thereby preventing it from processing incorrectly, but this is simply not the case. It is possible for bots to ‘invisibly break’: a process or computer application can change in a way that causes the bot to behave differently rather than stop. An example of this could be a home number field changing to a mobile number field. The bot can still see the field it is interested in, but the label gives the field a new context that is clear to a human but not to the bot.
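To make this ‘invisible break’ concrete, here is a hypothetical Python sketch (all field ids and data are made up): the bot finds the field by its unchanged technical id, so it keeps ‘succeeding’ after the relabel, while a defensive variant fails loudly instead.

```python
# Hypothetical sketch of an 'invisible break'; field ids and data are
# made up. Before the change, the label on field 'phone_1' read
# 'Home number'. The application later relabelled it 'Mobile number',
# but the technical id the bot relies on stayed the same.
form = {"phone_1": {"label": "Mobile number", "value": ""}}

def bot_fill_phone(form, home_number):
    # The bot only checks that the field exists, not what its label
    # says, so it writes a home number into the mobile field silently.
    form["phone_1"]["value"] = home_number

def bot_fill_phone_safely(form, home_number):
    # A defensive version also asserts the label text, turning the
    # silent misfire into a visible, fixable failure.
    field = form["phone_1"]
    assert field["label"] == "Home number", f"Unexpected label: {field['label']}"
    field["value"] = home_number

bot_fill_phone(form, "020 7946 0000")         # 'succeeds', but the data is wrong
bot_fill_phone_safely(form, "020 7946 0000")  # raises AssertionError instead
```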

Automation Misuse

Automation misuse is an overreliance on automations, or the implementation of an automation when it is inappropriate.

It can be easy to ‘set and forget’ a bot, particularly when end-users have no visibility of the automation. This can result in failures to monitor the outputs, the assumption that the bot is processing more than it is, or the belief that the automation will never break.
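One counter to ‘set and forget’ is a scheduled sanity check on the bot's throughput. The sketch below is a minimal illustration; the baseline figure and the two callables are assumptions standing in for whatever run statistics and alerting your RPA platform provides.

```python
# Sketch of a daily sanity check to counter 'set and forget'. The
# baseline and both callables are hypothetical stand-ins for however
# your RPA platform exposes run statistics and notifications.
EXPECTED_MIN_PER_DAY = 50  # illustrative baseline agreed with end-users

def daily_health_check(get_processed_count, send_alert):
    processed = get_processed_count()
    if processed < EXPECTED_MIN_PER_DAY:
        # A sudden drop often means the bot is 'invisibly broken',
        # not that there was genuinely less work to do that day.
        send_alert(f"Bot processed only {processed} items today; "
                   f"expected at least {EXPECTED_MIN_PER_DAY}. Please review.")

# Example wiring with dummy implementations:
daily_health_check(get_processed_count=lambda: 12, send_alert=print)
```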

Out-of-the-loop performance problem

The out-of-the-loop performance problem refers to the loss of knowledge and skills as a result of automation. In RPA this is particularly visible where 100% of a process is automated and the human workforce is no longer performing the work. Robots do break, and without humans who can still complete the process manually, the consequences can be critical.

How can automation bias be mitigated in RPA?

Take a ‘human-centred approach’ that allows the bots to work alongside humans rather than replacing them.

Have open conversations with end-users, educate them on the capabilities of the bots, and set clear expectations.

Put ownership and accountability back onto the end-users by having them validate the work the bot has completed, and keep this up throughout the bot's lifespan.

Provide transparency of the bots’ performance through user-friendly reporting. This can help end-users to have visibility of what the bots are doing and how often they’re doing it.
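As a minimal illustration of such reporting, the hypothetical sketch below turns run statistics into a plain-language summary an end-user can act on; the data structure is assumed, not taken from any particular platform.

```python
# Hypothetical end-of-day summary giving end-users visibility of the
# bot's activity. The 'runs' structure is illustrative; a real report
# would be fed from your RPA platform's run logs.
def build_summary(runs):
    completed = sum(r["completed"] for r in runs)
    referred = sum(r["referred_to_human"] for r in runs)
    failed = sum(r["failed"] for r in runs)
    return (f"Digital Worker summary: {completed} items completed, "
            f"{referred} referred to a human, {failed} failed. "
            f"Please spot-check a sample of the completed items.")

print(build_summary([{"completed": 120, "referred_to_human": 8, "failed": 2}]))
```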

Follow the 80/20 rule and aim to automate the easy 80%. Think carefully before trying to automate 100% of a process. By keeping the more complex variations in the hands of humans, it is possible to ensure that humans retain the required skills and knowledge.
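One way to make the 80/20 split explicit is a triage step before the bot ever sees a case. In this hypothetical sketch the rules, field names, and thresholds are illustrative assumptions, not part of any real process.

```python
# Hypothetical triage step for the 80/20 rule: the bot only takes cases
# that match simple, well-understood rules; everything else is routed to
# a human work queue so the team's skills and knowledge are maintained.
def route_case(case, bot_queue, human_queue):
    is_simple = (
        case.get("type") == "standard"
        and case.get("amount", 0) <= 1000
        and not case.get("has_free_text_notes", False)
    )
    (bot_queue if is_simple else human_queue).append(case)

bots, humans = [], []
route_case({"type": "standard", "amount": 250}, bots, humans)  # -> bot
route_case({"type": "standard", "amount": 250,
            "has_free_text_notes": True}, bots, humans)         # -> human
```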

Where bots are making decisions, these should be treated as advice rather than direct instructions. A human should always make the final decision. Transparency in how the bot reached a decision enables humans to determine whether it is correct, or whether there is additional context that needs to be considered.
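As a sketch of this ‘advice, not instruction’ pattern, the hypothetical example below returns a recommendation together with the rules that fired, leaving the final decision to a human; the field names and thresholds are made up.

```python
# Illustrative 'advice, not instruction' pattern: the bot returns a
# recommendation along with the rules that fired, and a human always
# makes the final call. Field names and thresholds are made up.
def recommend(case):
    reasons = []
    if case["days_overdue"] > 30:
        reasons.append("more than 30 days overdue")
    if case["previous_reminders"] >= 2:
        reasons.append("two or more reminders already sent")
    return {
        "recommendation": "escalate" if reasons else "no action",
        "reasons": reasons,  # shown to the human reviewer, not acted on blindly
    }

print(recommend({"days_overdue": 45, "previous_reminders": 2}))
# The human reviews 'reasons' and confirms or overrides the recommendation.
```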

Finally, raise awareness of these potential risks in your RPA solutions by sharing this article. Anything we've missed? Let us know in the comments below!

#InsideRPA #Product

@LesleyCase