30-10-19 12:53 AM
We've been doing RPA for a couple of years now and have a decent number of automated processes. The one thing I notice across all of our production runs is that there are tons of error messages. As I see it, unless the error is due to the genuinely unexpected unavailability of some dependent application (something that is, or should be, very rare), an error message indicates a problem that likely needs to be fixed by changing Blue Prism code. Whether the fix is extending the wait time before or after a step, waiting for and checking an element's existence before attempting to interact with it, or handling an unexpected application scenario such as a popup that was never seen before, all of these things require a code change.
Sometimes coders try things in a loop X number of times before they give up. Again, leaving aside the possibility of a system being offline unexpectedly, is there ever a valid reason to do this instead of adjusting the wait times where you check for the existence of an element? If retrying X times is sometimes genuinely necessary (and I doubt that it is with our generally light processes, which consume few resources on the BOT), I would think that a well-written process should only log an error if the final attempt fails, along the lines of the sketch below.
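To make that concrete, here is a minimal sketch of the retry pattern I mean. It's illustrative Python, not Blue Prism (in BP this would be a loop stage with a retry counter and a recover/resume block), and the names `with_retries`, `attempts`, and `wait_seconds` are mine, not anything from the product:

```python
import logging
import time

logger = logging.getLogger("process")

def with_retries(action, attempts=3, wait_seconds=5):
    """Retry an action, logging an error only if the FINAL attempt fails.

    Illustrative only: in Blue Prism this would be modelled as a loop
    stage plus recover/resume, not Python code.
    """
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except Exception:
            if attempt == attempts:
                # Only the last failure earns an error log entry.
                logger.exception("Failed after %d attempts", attempts)
                raise
            time.sleep(wait_seconds)  # quiet pause between attempts
```

The intermediate failures never hit the log, so the log only grows when something is actually wrong.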
Instead, I see errors everywhere and it seems, at least for RPA, to be accepted as the norm.
Does RPA really need to be like this? Do I need to realign my expectations, or is this achievable with good code that's been reworked until it is bulletproof?
Instead of code that is being reworked and improved, I just see a growing base of BOTs making our Prod environment slow to a crawl while the errors continue to grow. The performance of the Control Room is so bad that no one can even see the errors except me, since I query the database directly, which is the only way to see logs without Blue Prism going belly up when you access the Control Room. Roughly like this:
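A minimal sketch of that kind of query, assuming a SQL Server back end and a v6-style schema where session logs land in BPASessionLog_NonUnicode; the connection string and the table/column names are assumptions that vary by version, so check them against your own database:

```python
import pyodbc

# Connection string and table/column names are assumptions for
# illustration (v6-style schema); adjust for your environment.
conn = pyodbc.connect("DSN=BluePrismProd")
cur = conn.cursor()
cur.execute("""
    SELECT TOP 100 sessionnumber, stagename, result, startdatetime
    FROM BPASessionLog_NonUnicode
    ORDER BY startdatetime DESC
""")
for row in cur.fetchall():
    print(row.sessionnumber, row.stagename, row.result, row.startdatetime)
```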
That said, the Admin team is slowly trying to identify and reduce excessive logging, but they still haven't addressed a large and fast-growing log table, well over 100 million records. It's unfortunate that BP runs into memory issues if you try to archive more than a couple of handfuls of error messages at a time. I think we should just delete them right in the database, in batches, and be done with it; see the sketch below.
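If we did go the direct-delete route, it would at least have to happen in small batches so one giant transaction doesn't kill the transaction log. A rough sketch, again assuming SQL Server and the same assumed table name, and definitely not a supported Blue Prism procedure (the supported route is the built-in archiving):

```python
import pyodbc

# Deletes old session-log rows in small batches so a single huge
# transaction doesn't blow up the transaction log. Table name, cutoff
# column, and DSN are assumptions (v6-style schema).
BATCH_DELETE = """
    DELETE TOP (10000) FROM BPASessionLog_NonUnicode
    WHERE startdatetime < DATEADD(month, -6, GETDATE())
"""

conn = pyodbc.connect("DSN=BluePrismProd")
conn.autocommit = True  # commit each batch as it completes
cur = conn.cursor()
while cur.execute(BATCH_DELETE).rowcount > 0:
    pass  # keep going until no rows older than the cutoff remain
```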
I think we are growing too fast, and on a very shaky foundation: both the tool and, probably, our code too, judging by all of the errors I see from the product and our processes.
Your thoughts?