29-07-25 10:00 AM
As RPA evolves beyond rule-based automation, integrating AI (and especially Agentic AI) is becoming a key differentiator. But many teams hit roadblocks early. Here are 5 common mistakes I’ve seen while combining Blue Prism with AI capabilities such as GPT models, ML APIs, or custom NLP tools:
1. Thinking AI is a "Magic Box"
AI isn’t plug-and-play. Without clear objectives (e.g., extract entities, classify tickets), teams often get poor results and blame the model.
2. No fallback mechanism
When an AI model fails or returns a low-confidence result, there’s often no backup logic in the process. Every AI integration should include confidence scoring + a rule-based fallback (see the first sketch after this list).
3. Using AI where simple logic works better
If a regex or decision stage can do the job, don’t call an AI model. It adds latency, complexity, and cost unnecessarily (second sketch below).
4. Ignoring data drift
Model accuracy degrades over time as production data drifts away from what the model was trained on. RPA teams rarely monitor performance post-deployment, so failures stay silent (third sketch below).
5. Not involving business users
AI output needs business validation—especially in subjective areas like sentiment or recommendations. Feedback loops are essential.
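To make point 2 concrete, here’s a minimal Python sketch of the confidence-plus-fallback pattern. The ai_classify call, the 0.8 threshold, and the keyword rules are all illustrative placeholders, not a real API; in Blue Prism the same branching would live in a decision stage on the returned confidence:

```python
# Fallback pattern: trust the model only above a confidence threshold;
# otherwise fall through to deterministic rules, then to a human queue.

CONFIDENCE_THRESHOLD = 0.8  # tune per process; placeholder value


def ai_classify(text: str) -> tuple[str, float]:
    """Placeholder for the real model/API call; returns (label, confidence)."""
    raise NotImplementedError  # stubbed out here so the fallback path runs below


def classify_with_fallback(ticket_text: str) -> str:
    try:
        label, confidence = ai_classify(ticket_text)
    except Exception:
        label, confidence = None, 0.0  # treat a model failure like low confidence

    if label is not None and confidence >= CONFIDENCE_THRESHOLD:
        return label

    # Rule-based fallback: simple keyword routing
    text = ticket_text.lower()
    if "invoice" in text or "payment" in text:
        return "billing"
    if "password" in text or "login" in text:
        return "access"

    return "manual_review"  # human-in-the-loop queue as the final safety net


print(classify_with_fallback("I forgot my password again"))  # -> access
```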
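For point 3, extracting a value with a known format is the classic example: a regex does it deterministically, instantly, and for free, with no model call at all (the INV- number format here is just an example):

```python
import re

# Deterministic extraction: if the target follows a known format, a regex
# is faster, cheaper, and more predictable than calling an AI model.
INVOICE_PATTERN = re.compile(r"\bINV-\d{6}\b")  # example format: INV-123456


def extract_invoice_number(text: str) -> str | None:
    match = INVOICE_PATTERN.search(text)
    return match.group(0) if match else None


print(extract_invoice_number("Payment received for INV-204817 on 12 May."))
# -> INV-204817
```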
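And for point 4, you don’t need full MLOps tooling to get a first drift signal: log the model’s confidence on every transaction and alert when a rolling average falls well below your go-live baseline. A sketch, with illustrative thresholds:

```python
from collections import deque

# Lightweight drift signal: compare recent average model confidence
# against the baseline measured at deployment time.
BASELINE_CONFIDENCE = 0.90   # measured during UAT / go-live; placeholder
ALERT_MARGIN = 0.10          # alert if the recent average drops this far below baseline
WINDOW_SIZE = 500            # number of recent transactions to average over

recent_confidences: deque[float] = deque(maxlen=WINDOW_SIZE)


def record_prediction(confidence: float) -> None:
    """Call this after every model prediction in the process."""
    recent_confidences.append(confidence)
    if len(recent_confidences) == WINDOW_SIZE:
        rolling_avg = sum(recent_confidences) / WINDOW_SIZE
        if rolling_avg < BASELINE_CONFIDENCE - ALERT_MARGIN:
            # In a real process: raise an incident / notify the CoE
            print(f"Drift alert: rolling confidence {rolling_avg:.2f} "
                  f"vs baseline {BASELINE_CONFIDENCE:.2f}")
```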
Have you faced any of these? Or have tips of your own? Let’s discuss. Would love to hear your experiences!
30-07-25 02:07 PM - edited 30-07-25 02:09 PM
@SouravSaha thank you so much for writing this up and sharing! As you know, I've been experimenting with LLMs + RPA so I'll share some of my reactions here:
1. Thinking AI is a "magic box"
I smiled so much when I read this. I think everyone goes into developing with AI expecting rainbows and sunshine, but when we get to the level of precision required for any automation in production we need to be much more grounded. Working with LLMs in particular requires knowledge of prompting methodologies and best practices, really good use of system instructions, and a feel for how to tweak model tolerances like temperature (I've added a small example after this list). You can get amazing results once you implement those, but it does take some learning and isn't quite as "magic" as folks expect. Even wizards have to go to school!
2. No fallback mechanism
YES! I like to see AI as members of a team. They need supervision, performance reviews and improvement plans. Sometimes they make mistakes and need to be coached. And when they make those mistakes, someone needs to know so that they can be addressed. HITL (human-in-the-loop)!
3. Using AI instead of simple logic
I have made this mistake. A simple digital worker setup (or in my case, an Excel formula 😂) can sometimes do what AI can do, faster, better and with greater accuracy.
4. Data drift
I haven't encountered this and think it's fascinating. Do you have any resources to share where I can learn more about it?
5. Not involving business users
Yep - this is basic automation best practice and applies to anything impacting business users. Even if you're only developing with RPA, it's so vital. We, as humans, have a responsibility to reduce risk, increase accuracy and build trust - never neglect good comms and proper understanding of who your work will impact.
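To illustrate point 1 above: here's roughly what "good system instructions + dialled-down tolerances" looks like in practice. This is just a sketch using the OpenAI Python client; the model name and prompt wording are examples, and it assumes an API key is already configured.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# System instructions pin the model to the task and output format;
# temperature=0 dials sampling randomness down for repeatable automation runs.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    temperature=0,
    messages=[
        {
            "role": "system",
            "content": (
                "You classify IT support tickets. Respond with exactly one "
                "word from this list: billing, access, hardware, other. "
                "If unsure, respond with: other."
            ),
        },
        {"role": "user", "content": "I can't log in to the HR portal."},
    ],
)

print(response.choices[0].message.content)  # expected: "access"
```

Pinning the output format and zeroing the temperature is what turns "magic" into something you can actually put behind a decision stage in a production process.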
Love it Sourav, thanks again for contributing 🙂
30-07-25 04:26 PM
Thanks so much for your detailed and insightful response @Michael_S!
Your analogies (especially the wizard school one 😄) perfectly capture the learning curve with LLMs—couldn’t agree more that it’s not as “magic” as it first seems.
Also, totally with you on the Excel formula moment 🤣 — sometimes the simplest tool wins.
As for data drift, really glad you found that point interesting! Here are a few resources I’ve come across: