17-02-24 08:48 AM
Good morning all,
We are currently working on a project that requires us to spy a website and download files from it. Everything worked fine until the automation began to fail. On closer inspection, we found that the web interface had changed (i.e., the website had been redesigned by its owner), and the element the bot used to click to download files had been moved to a different section.
In addition, Google Chrome (the browser this automation was built with) has recently been updated (twice now), changing how downloads are handled and eventually breaking our automation.
These two incidents made us re-spy the new interface to get our automation up and running again.
Now, my questions are:
1. What is/are the best approach(es) to automating a website that changes frequently due to UI redesigns and browser updates?
2. Is there an alternative to creating an end-to-end automation of a webpage?
What we want to achieve is a solution that does not break easily and is robust to changes in webpage UIs and browser upgrades. Or should we reconsider which aspects are currently being automated?
We are open to suggestions. Your kind feedback is deeply appreciated.
Thanks and best regards,
------------------------------
Kingsley David
------------------------------
28-02-24 05:02 PM
We had similar issues in one of our projects. As Micheal suggested, using relative XPaths works wonders. As an RPA developer, if HTML/CSS is new to you, you can go through the video linked below for a quick tutorial on how to write relative XPaths.
XPath Tutorial for RPA Developers - YouTube
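To illustrate the idea, here is a minimal sketch in Python with Selenium (your RPA tool may differ; the URL and the "Download" link text below are hypothetical). An absolute XPath encodes the entire page structure and breaks on any redesign, while a relative XPath anchored on stable visible text or attributes survives layout changes:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/reports")  # hypothetical page

# Brittle: an absolute XPath such as
#   /html/body/div[2]/div/table/tbody/tr[1]/td[4]/a
# breaks whenever any ancestor element changes.

# More robust: a relative XPath anchored on stable visible text,
# independent of where the link sits in the page layout.
download_link = driver.find_element(
    By.XPATH, "//a[contains(normalize-space(.), 'Download')]"
)
download_link.click()

driver.quit()

Anchoring on an id or data-* attribute that developers are unlikely to rename is safer still, since visible text can change with copy edits.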
------------------------------
Susamay Halder
Consultant
Bruce Power
+1(437)217-1086
------------------------------