daniel.leach (Staff)

Automation Use Case: Extracting data from an end-of-life legacy application

Note: This Blueprint is submitted by the SS&C Blue Prism Professional Services team, and documents a project they delivered as part of a paid service. For more information, contact your account team!

 


SS&C Blue Prism Professional Services saved business-critical patient records before a legacy application was shut down.

  • Products Used: SS&C Blue Prism Cloud
  • Target Department: Outpatients
  • Sector: Healthcare

 


The organization needed to extract thousands of patient records before a legacy application was shut down within months. This was vital: once the application was decommissioned, these records, containing important patient information, would be lost.

We (the Blue Prism Professional Services team) set out to meticulously document the process, define the scope and success criteria, and ensure that our digital workers would be fully equipped for seamless automation. We worked closely with Subject Matter Experts (SMEs) from the client's Digital Transformation Team to get every detail right.

Key concerns were:

  1. Speed of execution – the deadline was tight due to the fast-approaching shutdown of the legacy application.
  2. Stability – if the automation threw too many exceptions, it would have a compounding effect on point 1.

Deploying our best-practice design methodology was a must to alleviate these concerns – as well as judicious use of retries and correct use of conditional waits!
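
As a generic illustration of those two patterns (not the project's actual Blue Prism stages), here is a minimal Python sketch; condition and action are hypothetical stand-ins for a Wait stage's element check and the step it guards:

```python
import time

def wait_until(condition, timeout=30.0, poll=0.5):
    """Conditional wait: poll until the condition holds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll)
    return False

def with_retries(action, condition, attempts=3):
    """Bounded retries: wait for the precondition, act, and retry on timeout."""
    for attempt in range(1, attempts + 1):
        if wait_until(condition):
            return action()
        print(f"Attempt {attempt} timed out; retrying")
    raise TimeoutError("Step never became ready")
```

The key point is that waits are conditional (they end as soon as the element appears) and retries are bounded, so a slow screen costs seconds rather than a failed case.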

 


The design phase

We kicked things off with an immersive deep-dive session alongside business process SMEs, where we uncovered every step, exception, and scenario the automation would need to handle. The session was recorded and used to craft the initial Process Definition Document (PDD). One exception we discovered during the deep dive was that certain records could not be extracted via email export, so we agreed that these would be handled manually. Once created, the PDD was shared with the SMEs for validation and with the client for approval.


Next step – the developer walkthrough. Here, the developer steps into the client environment, executing the documented steps while business SMEs observe in real time. This crucial checkpoint ensures everything is in place—from a properly configured environment to essential build data and credentials. No surprises, no gaps—just a seamless transition to the build phase.


With the walkthrough complete, the developer drafts the Solution Design Document (SDD), marking the official kick-off of the automation build. At this point, the PDD is signed off by both the customer and SS&C Blue Prism Professional Services, locking in the scope of what’s to be delivered.

 

Building the automation

Guided by the signed-off PDD, the build phase kicks off—transforming plans into reality. We don’t just follow a blueprint here; the developer has the flexibility to make smart design choices that maximize efficiency and deliver the best possible outcome for the customer.


One example in this project was choosing the best method of importing a large amount of data. We decided to use OLEDB and work with the data in chunks, so as to avoid overtaxing memory.
As we constructed the automation in SS&C Blue Prism, we continuously refined the Solution Design Document (SDD), ensuring it evolved alongside (and accurately documented) the build, so that anyone working with this automation in the future can understand it easily.


The completed automation undergoes a rigorous peer review by another skilled developer. This quality checkpoint ensures the solution adheres to best practices, coding standards, and optimal efficiency—leaving no room for errors or overlooked details. In this case, peer review uncovered some elements spied using Active Accessibility that could be sped up by including the Match Index attribute, which tells Blue Prism to stop searching as soon as a matching element is found rather than scanning the whole element tree. This increased the overall speed of the automation and conserved memory.

 

Testing and improving

We divide the testing phase into two key parts: internal/functional testing and customer-led User Acceptance Testing (UAT). This ensures the automation is not only technically sound but also meets real-world business needs before going live.


First, the developer executes a thorough internal test plan, using customer-provided test data to validate every scenario. Each outcome is meticulously logged, defects are identified and fixed, and the process is fine-tuned to ensure reliability and accuracy. In this project, testing revealed that we needed to increase a conditional timeout to give a popup time to appear, and to add a "close screen" action so the process could recover cleanly from a particular Business Exception.
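
As a rough illustration of that recovery pattern (not the project's actual code), here is a minimal Python sketch; BusinessException, process_case, and close_screen are hypothetical stand-ins for the Blue Prism exception type and the stages involved:

```python
class BusinessException(Exception):
    """Stand-in for a Blue Prism business exception."""

def run_case_with_recovery(process_case, close_screen, max_attempts=2):
    """Attempt a case; on a business exception, close the stuck screen
    so the retry starts from a known application state."""
    for attempt in range(1, max_attempts + 1):
        try:
            return process_case()
        except BusinessException:
            close_screen()  # the recovery step added after internal testing
            if attempt == max_attempts:
                raise
```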


Once internal testing is complete, the final test results and internal testing plan are handed over to the customer, setting the stage for UAT. We walked the customer's Center of Excellence (CoE) through the automation in action, providing a detailed overview of how it operates, its expected outcomes, and how to run it from the Control Room; the customer was pleased with the finished automation. With this knowledge, the customer was equipped to conduct UAT, with the developer on standby to provide support if needed.


To keep things on track, daily UAT calls are scheduled, giving the customer team a platform to discuss progress and raise any defects. During testing, a memory leak was found which would accrue over time and, after 9 hours, cause exceptions. Following a thorough investigation using the Log Memory Usage option (found in the Blue Prism Control Room under Resources – Management), we concluded that the legacy application itself was causing the build-up of memory. The workaround was simply to log the Windows account out every few hours, then back in again.
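
To show the shape of that workaround (the real solution used Blue Prism scheduling rather than code like this), here is a minimal Python sketch; the four-hour limit and the logout/login helpers are hypothetical stand-ins:

```python
import time

SESSION_LIMIT_SECONDS = 4 * 60 * 60  # assumed "every few hours" interval
session_started = time.monotonic()

def maybe_recycle_session(logout, login):
    """Between cases, log the Windows account out and back in once the
    session has been up long enough for the legacy app's leak to matter."""
    global session_started
    if time.monotonic() - session_started > SESSION_LIMIT_SECONDS:
        logout()   # releases the memory accumulated by the legacy application
        login()
        session_started = time.monotonic()
```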

Once User Acceptance Testing was successfully completed, the customer gave the final sign-off, marking the official closure of the testing phase. With everything validated and approved, it was time for the most exciting part: GO-LIVE! 🎉

 

Moving to production

The focus now is on deploying, validating, and seamlessly integrating the automation into the production environment.


The developer creates a release package and imports the automation into the production environment, ensuring a smooth and controlled deployment.


Next, a comprehensive environment check is carried out to confirm that everything is configured correctly. This includes:

  • Ensuring digital workers are properly set up
  • Verifying access to file shares, credentials, and applications
  • Making any necessary environmental adjustments

With the automation in place, key scenarios are executed to ensure that everything runs smoothly in the live production environment. This is a crucial step, as even minor differences between development and production versions of applications could impact performance. Any required tweaks are made on the spot to guarantee optimal functionality.


Once the automation is validated by business process SMEs, the daily volume ramp-up begins. The developer works closely with the business to define scheduling strategies and ensure that the automation is scaled up effectively.


Before you know it, the automation is running at full speed, and we have officially entered Business-as-Usual (BAU)! 🎉 At this stage, the automation is handed over to the customer’s CoE, with SS&C Blue Prism Professional Services providing Hypercare for the first week to ensure a smooth transition and immediate support for any early optimizations.

And just like that—mission accomplished! The automation is now fully operational, delivering tangible business benefits.

 


The main challenge we faced was dealing with such a huge dataset. Reading in 90k rows from Excel would have caused memory issues, so we handled this in two ways:

  • Firstly, we changed the Excel document to a CSV, which meant less metadata to handle and more options for importing the data.

  • Secondly, we used an OLEDB action to import the data, which meant we did not have to open the document (thereby adding it to memory unnecessarily). It also let us work in chunks rather than handling everything at once: we created a loop that would take a chunk of 200 rows, add it to a collection, then take another chunk. The Digital Worker would then update a separate config file, created specifically to record the row number it was up to (see the sketch below).
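
The project used Blue Prism's OLEDB action against the CSV; as a rough, language-neutral illustration of the same chunk-and-checkpoint pattern, here is a Python sketch (the file names, checkpoint format, and helper functions are our assumptions, not the delivered solution):

```python
import csv
import json
from pathlib import Path

CHUNK_SIZE = 200                          # chunk size used in the project
DATA_FILE = Path("patient_records.csv")   # hypothetical input file
CHECKPOINT_FILE = Path("progress.json")   # hypothetical config/checkpoint file

def load_checkpoint() -> int:
    """Return the last row number processed (0 when starting fresh)."""
    if CHECKPOINT_FILE.exists():
        return json.loads(CHECKPOINT_FILE.read_text())["last_row"]
    return 0

def save_checkpoint(row: int) -> None:
    """Record progress, mirroring the Digital Worker's config file."""
    CHECKPOINT_FILE.write_text(json.dumps({"last_row": row}))

def process_chunk(rows: list) -> None:
    """Placeholder for the real per-chunk work (extraction and export)."""
    print(f"processing {len(rows)} rows")

last_done = load_checkpoint()
row_number = last_done
with DATA_FILE.open(newline="", encoding="utf-8") as f:
    chunk = []
    for row_number, row in enumerate(csv.DictReader(f), start=1):
        if row_number <= last_done:   # skip rows a previous run already handled
            continue
        chunk.append(row)
        if len(chunk) == CHUNK_SIZE:  # process and checkpoint one chunk at a time
            process_chunk(chunk)
            save_checkpoint(row_number)
            chunk = []
    if chunk:                         # final partial chunk
        process_chunk(chunk)
        save_checkpoint(row_number)
```

Checkpointing after each completed chunk means an interrupted run can resume from the recorded row rather than starting over, which mattered with a deadline this tight.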

Our discovery exercise at the start was important, since it revealed a more efficient way of navigating to each file within the legacy application. If the robot had followed the team's manual route, expanding a menu tree of options, the spying and logic would have been far more difficult.

 


The team was able to extract the information ahead of the deadline, which meant the integrity of the data was maintained for the department.

 

  • If this automation use case inspired you: Reward the author by clicking the Like button
  • If you would like to know more: Reply to this post and ask your question
  • If you're still looking for examples of intelligent automation use cases: Browse or search our use case library