Memory Management in Long-Running Blue Prism Processes

I was working with some long-running Blue Prism processes recently and noticed that memory usage can grow significantly over time, sometimes to the point of impacting performance. That is where memory management becomes important.

Here are a few best practices that help keep memory usage under control:

  1. Close applications and release system resources when not in use.
  2. Use Cleanup stages inside loops to free memory.
  3. Avoid keeping unnecessary data in memory.
  4. Process data in batches instead of loading everything at once.
  5. Use Work Queues for large datasets rather than holding large collections in memory.
  6. Load only the required range or rows from Excel files.
  7. Where possible, split large files into manageable chunks for processing.
  8. Regularly monitor Digital Worker memory usage during UAT and hypercare.
  9. Restart workers in scheduled downtime if required.
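Points 4–7 all come down to the same pattern: stream the data in fixed-size chunks so only one chunk is in memory at a time. As a rough illustration outside Blue Prism, here is a minimal Python sketch of that idea (the file name and chunk size are invented for the example; in a real automation the chunk would typically be handed to a Work Queue):

```python
import csv

def read_in_chunks(path, chunk_size=500):
    """Yield lists of rows so that only one chunk is held in memory."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        chunk = []
        for row in reader:
            chunk.append(row)
            if len(chunk) == chunk_size:
                yield chunk
                chunk = []
        if chunk:  # final partial chunk
            yield chunk

# Each chunk is processed and then discarded, so memory stays flat
# no matter how large the input file is:
# for rows in read_in_chunks("big_input.csv"):
#     process(rows)
```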

Would love to know how you handle high memory consumption in long-running automations.

Best regards,
Sourav S
Consultant - Automation Developer
WonderBotz

Unless there is a memory leak (and most versions of Blue Prism don't have one), session length on its own should have no impact on memory. If memory is increasing over time, it is definitely worth investigating, but that is not normal behaviour. We've had sessions run for days straight with no issues, though we normally cut them off before that just in case. The real problem I've seen is that the session log of a single long session can grow so large that it becomes impossible to view from inside Blue Prism.

I'll add a few things as well. Note that these only matter if you actually run into memory issues; they aren't necessary for most automations. To be clear, people reading this should not be doing all of it all the time. It is not worth the extra effort in complex designs until you actually have a problem, or anticipate one based on past issues.

1. In processes and objects, put any large collections on the Main Page/Initialise page, make them global, and re-use those same global collections so they get overwritten as each new operation is performed. This can hurt readability and makes logical errors more likely if you aren't careful, but it is a solid way to reduce memory usage. It is especially important in the Collection Manipulation object and any other object where large collections are passed in or out. You can also add an action that specifically clears the collections in the object.
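Re-using one global collection instead of creating fresh ones at each step maps roughly onto re-using a single buffer in ordinary code. A minimal Python sketch of the contrast (function and variable names are illustrative, not Blue Prism APIs):

```python
# Anti-pattern: each step builds a brand-new list, so several large
# intermediate lists can be alive in memory at the same time.
def transform_copying(rows):
    upper = [r.upper() for r in rows]
    trimmed = [r.strip() for r in upper]
    return trimmed

# Re-use pattern: one shared buffer is overwritten in place, mirroring
# a global collection that every action writes back into.
shared_buffer = []

def transform_in_place(rows):
    shared_buffer.clear()        # the "clear collections" action
    shared_buffer.extend(rows)
    for i, r in enumerate(shared_buffer):
        shared_buffer[i] = r.upper().strip()
    return shared_buffer
```

Both produce the same result; the second simply caps the number of large collections alive at once, at some cost to readability.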

2. If you are having memory issues, limit the number of object instances created by avoiding nested calls to objects. To be clear, I'm not saying you should always avoid nested object calls; in fact they are very useful, but they can be avoided in certain automations.

3. If using surface automation, be careful about how many images get saved into the application model. You would have to save quite a few to cause a memory issue, but it's possible. You can mitigate this by making the images as small as possible. The way I do this is to shrink the window I'm spying first (assuming the application window is resizable), and then spy it with surface automation. Some apps won't allow this, but I have found that many do. The smaller the image, the less memory it uses when the object is loaded.

4. Finally, there is no reason to have a really long-running session. Even using just the Scheduler, you can set up the schedule and tasks so that the automation runs for a while, stops, and then starts back up again. We use CTWO, which makes this super easy, but if you only have the Scheduler, you can pass a Stop After Time input to the process so it doesn't run forever. Say you tell it to stop after 4 hours; then you set up the task to run every 4.5 hours or so. This forces the memory to be released every 4 hours regardless of how bad the memory management in the automation is.
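The Stop After Time pattern amounts to a work loop that checks elapsed time before pulling the next item, then exits cleanly so the operating system reclaims everything on restart. A hedged Python sketch of that loop (the function names are invented for illustration; in Blue Prism this logic lives in the process diagram):

```python
import time

def run_until(deadline_seconds, get_next_item, work):
    """Process items until none remain or the time budget is spent.

    Exiting the loop (rather than killing it mid-item) means each item
    completes, and a scheduled restart begins with a fresh process.
    """
    started = time.monotonic()
    processed = 0
    while time.monotonic() - started < deadline_seconds:
        item = get_next_item()
        if item is None:  # queue is empty
            break
        work(item)
        processed += 1
    return processed
```

With a 4-hour budget and a task scheduled every 4.5 hours, the half-hour gap leaves room for the in-flight item to finish before the next run starts.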


Dave Morris, 3Ci at Southern Company