Hi Team,

The Problem: Our team maintains an ODI for particular processes and a global ODI for all processes. However, we face challenges keeping these documents updated, as developers spend their time on developing and testing the automations. As a result, these documents are not always up to date, which sometimes leads to duplicate objects being developed.

The Idea: I would like to request an ODI tab where we can see all the objects developed for a given application. This would allow us to actively reuse already developed objects. For this to work, the application name should be requested while developing in the modeler, so that all objects are grouped under the same application on the ODI tab. Further, it should be easy to check the actions and their inputs/outputs for each object, along with their descriptions.

Do let me know if you have doubts or any other solution that can help us. Thanks
The new idea is to enable setting the field type for each column in a table (e.g. text, date, number, upload, list) and to enable defining a regex for the fields in the table.
I need this idea because in one of the processes I work on, I need to create a form for requesting medical insurance, and in this form the person can add any number of family members without a fixed limit. We can use the table for this, because there is no limit on the number of rows we can add, but we cannot add a regex or a type to the fields in a table.
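As a sketch of what per-column typing plus regex validation could look like (the column names and rules below are hypothetical, and Python is used only to illustrate the rule model, not the product's actual implementation):

```python
import re
from datetime import datetime

# Hypothetical column rules: each table column gets a type and an optional regex.
COLUMN_RULES = {
    "member_name": {"type": "text", "regex": r"^[A-Za-z ,.'-]+$"},
    "birth_date":  {"type": "date"},                  # expects dd/mm/yyyy
    "policy_no":   {"type": "number", "regex": r"^\d{8}$"},
}

def validate_cell(column, value):
    """Return True when a cell satisfies its column's type and regex."""
    rule = COLUMN_RULES[column]
    if rule["type"] == "number" and not value.isdigit():
        return False
    if rule["type"] == "date":
        try:
            datetime.strptime(value, "%d/%m/%Y")
        except ValueError:
            return False
    regex = rule.get("regex")
    return regex is None or re.fullmatch(regex, value) is not None
```

With rules like these, each newly added table row could be validated cell by cell, no matter how many rows the requester adds.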
When debugging in Process Studio (or Object Studio), it's rather tedious to go to the debug speed toolbar button, open it and drag the speed slider to a new position. It would be great to have a keyboard shortcut (e.g. CTRL+ and CTRL-) to speed up or slow down.
This probably sits somewhere in the connection between the Process and the Control Room, but it would be nice to be able to mark some startup parameters as required (they must be populated before the Process can be started).
Use Case: We have automations against the Epic EHR application. Epic claims to NEVER have any downtime, but the application often has "silent pauses" or similar, which are just as disruptive to our processes.
Idea: Have a 'Pause' option alongside the 'Request Stop' option. The 'Pause' option would take start and stop DateTime parameters. When executed, the Process, upon reaching its stop-request check, would hold until the Pause window expired. The exact mechanics in the Process could be addressed in a couple of different ways.
So, rather than a full Stop/Restart, the Process is able to pick back up where it left off.
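One way the mechanics could work, as a minimal sketch (parameter names are hypothetical, and this stands in for whatever safe checkpoint the Process already has, such as its stop-request check):

```python
from datetime import datetime

def in_pause_window(now, pause_start, pause_end):
    """True while 'now' falls inside the requested pause window."""
    return pause_start <= now <= pause_end

# At each safe checkpoint the process would hold instead of stopping:
#   while in_pause_window(datetime.now(), pause_start, pause_end):
#       time.sleep(60)   # resumes automatically once the window expires
```

The key point is that the Process never terminates: it simply loops at a known-safe point until the pause parameters expire, then picks up where it left off.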
Status: Under Consideration. Submitted by stepher on 10-03-23 03:12 PM.
In our 'Path to Prod', there have been many instances when we needed to upload an updated version of a Document Type and the Document Form Definition (especially in our QA and Prod instances). Currently, if the new and current forms have the same name, Decipher creates a copy. I would like the option to overwrite/replace the current form.
When a Session Variable is changed during runtime, there should be an option to retain its value. This would be helpful when working on a process that runs more than once a day and needs the output of the previous run.
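Until such an option exists, one workaround is to persist the value at the end of a run and reload it at the start of the next. A sketch (the file location and variable name are hypothetical, not a Blue Prism feature):

```python
import json
from pathlib import Path

STATE_FILE = Path("last_run_state.json")   # hypothetical location

def save_session_value(name, value):
    """Persist a session-variable value so the next run can pick it up."""
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    state[name] = value
    STATE_FILE.write_text(json.dumps(state))

def load_session_value(name, default=None):
    """Reload the value saved by the previous run (or a default)."""
    if not STATE_FILE.exists():
        return default
    return json.loads(STATE_FILE.read_text()).get(name, default)
```

A built-in "retain value" option would make this bookkeeping unnecessary.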
It is in fact the age of the dark background, and while maybe that is a step too far for Blue Prism (though I would love it), I would at least love to have something that isn't the white light of Blue Prism Studio/Control Room burning into my retinas ~5 hours a day. Some ability to control the background would be nice, even just to mute the white a little. Even better would be a proper dark mode.
There are already ideas for centrally managing licenses.
Additionally, it would be nice to have one central management view of all RRs, regardless of which environment they are connected to.
Another idea, related to the one above, is the possibility to change the configuration of the RRs. This might cover connection definitions and LoginService configuration.
If one central control room for all RRs in all environments is not feasible in the near future, it would still be nice to be able to configure RRs centrally, at least within one environment.
Currently (at least for v6.9) a BP environment is basically identified by the connection name. The connection name is free text that can be changed at any time.
When working with multiple teams in multiple environments, it is important to know exactly which environment we are working in and talking about.
Ambiguous names like 'My BP 123' or 'old/new BP' can be avoided to some degree with naming conventions and by using scripts instead of manual configuration.
But still, changing the name of the connection, or keeping the name but changing the port, can lead to confusion.
It would be nice to have the possibility to specify a name that identifies a BP instance/environment and is stored in the DB. This could then be displayed in the client. The name could also be added during archiving, reducing the risk of corrupting the target archiving structure.
We are using 6.9 and more recent versions of BP might have different possibilities.
In 6.9 the configuration of archiving is limited to specifying a target directory.
There is no option to define the preferred RR that archiving should run on, no option to specify the Windows account that should be used, and no way to specify the date/time when automatic archiving should start.
This causes certain challenges in a multi RR, multi BP instance, multi Windows account environment.
.) The target directory is only set per RR. We have to set the directory on each RR to ensure the files are saved in the same place.
Each BP instance/environment has to have its own target directory, otherwise restore will very likely fail or be wrong. When moving a RR from one environment to another, the local archiving directory also has to be set manually on the machine.
.) Preferably, all archiving files should be saved in one central location rather than locally on each RR. This can be achieved by having a network share mapped on all RRs.
Since we cannot know which user is logged on to a RR when archiving kicks in, all robot Windows accounts have to have the share mapped and have to have full access to those folders.
This raises security concerns, as logs needed for audit purposes can be altered by anyone who happens to have access to a robot Windows account.
.) There is no information and no check that the target directory is the correct one, since nothing uniquely and easily identifies the environment.
If archiving went to the wrong target directory, correcting the error takes a huge amount of manual effort, basically walking the entire directory tree, as the RR sits at the very end of the structure: Year - Month - Day - Process name - Runtime Resource name.
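To illustrate why a wrong target directory is so costly to fix: the RR name is the leaf of the archive tree, so every correction means traversing paths like the one this sketch builds (folder names are illustrative):

```python
def archive_path(root, year, month, day, process_name, resource_name):
    """Build the archive leaf folder: Year/Month/Day/Process/Runtime Resource."""
    return "/".join([root, f"{year:04d}", f"{month:02d}", f"{day:02d}",
                     process_name, resource_name])
```

Because the environment appears nowhere in this path, two environments archiving into the same root are indistinguishable, which is exactly the risk described above.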
It is always much more difficult to configure something like archiving on workstations, as they tend to be more volatile than servers. The best solution would be to execute archiving as part of the application server service.
But the RR-side way of archiving could also be greatly improved by:
- storing the target directory centrally in the repository instead of locally on the RR
- being able to specify the RR that should be used
- being able to create schedules (intervals, start times, ...) to run archiving
Another general improvement would be to add environment information to the archiving folders/files: either just for informational purposes, when the connection name is used, or as a new piece of information, a non-changeable name for the BP instance/repository. (I will create another idea for this identifier.)
The level of success a company will have from an RPA investment depends heavily on how the bots are deployed. If the creation and delivery of bots is slow, error-prone, and unreliable, it will not be possible to harness the full potential of Blue Prism's capabilities. We have tried a couple of POCs to achieve the real benefits of CI/CD for RPA, but could not get the expected outcomes, and the solution added overhead.

Possible CI/CD implementations we tried:
1. Jenkins and GitHub (as mentioned in the DX document)
2. Azure Pipelines and Azure Repos (Azure DevOps)
3. A Blue Prism bot itself as the pipeline, with SharePoint as the repository

Blue Prism should provide a better in-built solution to achieve CI/CD for RPA, making release management more flexible (e.g. by allowing package creation outside BP using an AutomateC command). My LinkedIn post for getting inputs from industry: https://www.linkedin.com/posts/jigneshjk_cicd-rpa-automation-activity-6710137382743621632-lOaj
It would be great if Blue Prism could provide a one- or two-month free trial or license for its cloud products. Most of us don't know how they look or how they work, and not every organization has a license for cloud products. So it would be great to see this in future.
Right now, if you wish to publish a process, you have to open it, double-click the Info stage, and then tick the "Publish to control room" checkbox. A submenu option could be provided to publish without opening the process. For example: in the Process tree, right-click any process and click Publish.
Status: Duplicate. Submitted by ChristopherJank on 06-08-19 12:51 PM.
I want the option to have a button at Import that sets all radio buttons to "Don't import this business object". It's a pain when you want to import just one object from a full backup and need to click the radio button 800 times for "no import".
Currently it is possible to enable logging for all stages only via the Edit menu: Edit > All Stages > Enable Logging. Idea/suggestion: it would be helpful (time-saving) if the same could be done for a process via the command prompt using AutomateC.exe, e.g. AutomateC.exe /ProcessName /enablelogs. To enable/disable logs for many processes at once, especially across different test environments, command-line operations are simple and time-saving.
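A sketch of how such a switch could then be scripted over many processes. Note that /enablelogs and /disablelogs are the idea's proposed syntax, not existing AutomateC.exe options; this just builds the command strings:

```python
def build_logging_commands(process_names, enable=True):
    """Build one AutomateC.exe call per process.

    The /enablelogs and /disablelogs switches are hypothetical: they are the
    syntax this idea proposes, not switches AutomateC.exe supports today.
    """
    switch = "/enablelogs" if enable else "/disablelogs"
    return [f'AutomateC.exe /process "{name}" {switch}' for name in process_names]
```

A batch or PowerShell wrapper could then run one such command per process in each test environment.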
Status: Needs More Info. Submitted by Murali_KrishnaB on 16-11-19 05:57 PM.
We are currently using Blue Prism v6.4.2 and have observed that event logs in Event Viewer carry the same Event ID, i.e. 0 (zero), for every error level (Critical, Error, Warning, and Information). It would be really helpful if the Event IDs differed per log level, making it easy to configure monitoring tools like BMC Patrol, Splunk, and others to set up automated alerts.
In order to be compliant with GDPR, we need to delete personal data in the Blue Prism database after a given time. But we want to keep the queue item itself for statistical use in our benefit realization model. As of now, my understanding is that you can't change the data in an item without locking it, and you can't lock items that are marked as Completed/Exception. It would be nice if there were a built-in function that let us update/delete the data field of a Completed/Exception work queue item, so that we don't need to use SQL on the database or delete the queue item completely via the internal work queue object.
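Conceptually, the requested function would blank the personal fields of a finished item while keeping its statistical metadata. A sketch of the idea (field names are hypothetical, and a plain dict stands in for a work-queue item's data; this is not the actual work-queue API):

```python
def redact_item_data(item, personal_fields=("Name", "SSN", "Address")):
    """Blank personal fields of a completed/exception item, keep the rest.

    'item' is a plain dict standing in for a work-queue item's data collection.
    Returns a new dict so statistical fields (status, timings, ...) survive
    for benefit-realization reporting while personal data is removed.
    """
    return {k: ("" if k in personal_fields else v) for k, v in item.items()}
```

Exposed as a built-in action that works on Completed/Exception items, this would satisfy GDPR erasure without touching SQL or deleting the item.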
Currently the maximum number of tabs (i.e. documents that can be opened inside the CV) is set to 15. The initial thought was to optimize performance when processing attachments at runtime, prior to us introducing the node clustering feature in HTML CV 1.4.1 to support higher volumes of transactions. We currently do not have an option to change the maximum number of attachments that can be opened. Comment from the PS team: Document Splitter (based on the same Snowbound Virtual Viewer code) does allow the number of tabs to be varied; this was at the explicit request of Lincoln Financial, who partially funded that effort. But, IMO, many more than 15 tabs quickly becomes unmanageable from a user perspective. I believe the count of 15 was directly ported from the desktop CV code.
Currently the only way to rename a resource pool is to delete the old pool, create a new pool with the new name, and then add all the resources to it again. It would be great to have the functionality to rename an existing Resource Pool directly.