Walter.Koller
Level 11
Status: New

I am writing based on our experiences with BP v6.9, so some of my points might already have been resolved or improved in more recent versions.

I think I already created at least one idea on this topic, but since I am once again spending a significant amount of time on archiving, I am compiling a list of possible improvements (again).

.) Make automateC /archive behave like UI archive

automateC does not archive _debug sessions; the UI does.

automateC skips resources with AttributeID=0; the UI does not.
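
For illustration, a minimal sketch of such a command-line archiving call (assuming a standard install path; /archive, /dbconname and /user are real AutomateC switches, but any further parameters, e.g. for the age or date range of logs, should be taken from the output of AutomateC /help for your version):

```powershell
# Sketch: trigger archiving from the command line. Age/date-range parameters are
# deliberately omitted; check AutomateC /help for the options your version supports.
# Unlike the UI, this run skips _debug sessions and resources with AttributeID = 0.
& "C:\Program Files\Blue Prism Limited\Blue Prism Automate\AutomateC.exe" `
    /archive `
    /dbconname "connectionA" `
    /user admin secret
```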

.) Make archiving less random

The current archiving approach is to randomly pick one resource that seems to be idle (I am not sure whether upcoming schedules are considered when choosing, so that they do not end up blocked/cancelled).

I assume there has to be an active Windows session (i.e. someone logged on) for archiving to run? That means all resources and all potentially used users have to have access to the archive folder. (Logs are normally considered sensitive and should not be easily accessible, and any modification must be prevented... normally.)

It cannot be ensured that the correct archiving target directory is used, as this setting is per machine and not per connection. E.g. I log in as userA with connectionA and set the archive target to folderA. When userB later logs in to the same machine using connectionB, those logs will be archived to folderA instead of folderB. Every resource has to be configured manually to use the correct target folder. If it is not set manually and explicitly, the target folder could also be some directory under MyDocuments, which is user-specific, and when several logins are in use, finding the archives again is time consuming. (E.g. 10 resources with 12 users results in up to 120 possible locations where those logs could be.)

The 'one machine, one archiving target' limitation also makes setting up 'archiving nodes' impossible. The idea would be to provide one dedicated resource, with one specific user, that handles archiving for several BP instances. But the resource used is random, and it is not possible to run archiving with different targets on the same machine.

.) Make archiving a server task or enable it for use in processes

I think the most common approach to archiving is to have the server take care of it (no need for a login, one defined machine doing the archiving, ...).

Alternatively, it would help a lot to be able to create a process that triggers archiving, so we could specify the when, where and what ourselves. (Currently this is not possible, since automateC behaves differently from archiving in the UI.)
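
In the meantime, the closest workaround is to schedule the command-line call outside of BP. A minimal sketch (assuming the ScheduledTasks PowerShell module available on current Windows versions; paths and credentials are placeholders):

```powershell
# Sketch: a Windows scheduled task on one dedicated machine that runs AutomateC
# archiving nightly, so at least the "when" and "where" are under our control.
# The "what" still cannot match the UI (see the automateC differences above).
$action  = New-ScheduledTaskAction `
    -Execute "C:\Program Files\Blue Prism Limited\Blue Prism Automate\AutomateC.exe" `
    -Argument '/archive /dbconname "connectionA" /user admin secret'
$trigger = New-ScheduledTaskTrigger -Daily -At "02:00"
Register-ScheduledTask -TaskName "BP Archiving" -Action $action -Trigger $trigger
```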

.) Make BP aware of archiving

Meaning: show when, where, and what archiving actually did. Currently there is no easy way to even see whether archiving took place at all. Archiving should be visible in the logs / Control Room.

.) Provide feedback during archiving

When we start archiving, we know nothing about the progress, the time remaining, or whether it got stuck. It would be great to have some kind of feedback in the UI and in automateC, because right now we are waiting for hours, hoping that archiving is still working and might finish correctly.

.) Improve archiving performance

Single days can take minutes or even hours to archive, depending on the size of the table and the amount of logs being processed. During this time, process execution is slowed down significantly. We saw process steps taking 4x the usual time while archiving tasks were running in the background.

.) Make archiving more reliable

The more processes/days/... are selected to be archived in the same session, the more likely it is that there will be a (timeout) error. This is strange, since logs are archived at the most granular level, so there should be no difference between starting archiving 30 times with one day each and once with 30 days, but obviously there is.
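
Because of this, we ended up splitting large runs into many small ones. A sketch of that day-by-day workaround (note: the /fromdate and /todate parameters used here to restrict a run to a single day are hypothetical placeholders; substitute whatever date-range options AutomateC /help lists for your version):

```powershell
# Sketch: archive 30 days in 30 separate AutomateC calls instead of one big call,
# to reduce the chance of a timeout. /fromdate and /todate are HYPOTHETICAL
# placeholders for the real date-range options; check AutomateC /help.
$automateC = "C:\Program Files\Blue Prism Limited\Blue Prism Automate\AutomateC.exe"
for ($i = 60; $i -gt 30; $i--) {
    $day = (Get-Date).AddDays(-$i).ToString("yyyy-MM-dd")
    & $automateC /archive /fromdate $day /todate $day /dbconname "connectionA" /user admin secret
    if ($LASTEXITCODE -ne 0) { Write-Warning "Archiving failed for $day" }
}
```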

Also, a more detailed error description would be helpful, so we can understand why we ran into a timeout and how to avoid it in the future.

.) Provide means to archive audit logs

Audit logs are even more important than execution logs. Although they use much less space, they still grow over time, and there is no feature to clean up those logs in an organized way. We would have to run manual delete SQL scripts... which our IT security department does not like so much.
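
For illustration, a sketch of the kind of manual cleanup this forces on us (the table and column names, BPAAuditEvents / eventdatetime, match our v6.x schema but should be verified against yours; server, database, and retention period are placeholders, Invoke-Sqlcmd requires the SqlServer module, and you should back up first):

```powershell
# Sketch: manually delete audit events older than one year, straight from the
# database. BPAAuditEvents/eventdatetime are from our v6.x schema; verify first.
$sql = @"
DELETE FROM BPAAuditEvents
WHERE eventdatetime < DATEADD(year, -1, GETUTCDATE());
"@
Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Database "BluePrism" -Query $sql
```

This is exactly the kind of unaudited direct database access that a built-in archiving feature should make unnecessary.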