01-12-23 02:32 PM
Hello, we have noticed that on some occasions a bottleneck occurs once Machine Learning has been turned on for a week or more. Are there recommended specs for the app server when ML is active?
We were aware that processing would be slightly slower with ML, but we weren't expecting the memory spike and bottleneck. If this were to happen again, what is the recommended course of action for moving documents through the IDP steps?
The bottleneck occurred at the step "ClassVerify performed - ready to Capture", and the only thing we could do was wait for the documents to proceed to the next step, "Capture".
Thanks
05-12-23 08:22 AM
Hi Ben,
The additional ML functionality will increase processing time; how much will depend on the size of the model, the number of DFD fields, the amount of DFD configuration, and the available resources.
I'm sure you will have already reviewed the sizing guide, so my advice would be to add further Capture clients on a new server. Beyond using a higher-spec server, the other way to increase Decipher's performance is to scale horizontally. Decipher can intelligently manage document processing across multiple servers, allowing you to add multiple automation clients to flex with your environment's needs.
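If it helps in the meantime, below is a rough monitoring sketch you could run on the app server to spot when memory climbs towards the point where documents start queuing between steps. It is a generic example (Python with the psutil library), not Decipher functionality, and the threshold and polling interval are just placeholder values to tune for your environment.

# Generic memory-watch sketch (not part of Decipher).
# Polls overall RAM usage on the app server and logs a warning when it
# crosses a threshold, so ML-driven spikes can be spotted before
# documents back up between IDP steps.
import time
import logging

import psutil  # third-party: pip install psutil

MEMORY_WARN_PERCENT = 85    # placeholder threshold; tune for your server
POLL_INTERVAL_SECONDS = 60  # placeholder polling interval

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def watch_memory() -> None:
    """Log a warning whenever used RAM exceeds the configured threshold."""
    while True:
        used_percent = psutil.virtual_memory().percent
        if used_percent >= MEMORY_WARN_PERCENT:
            logging.warning("App server memory at %.1f%% - ML load may be "
                            "causing a bottleneck", used_percent)
        else:
            logging.info("App server memory at %.1f%%", used_percent)
        time.sleep(POLL_INTERVAL_SECONDS)

if __name__ == "__main__":
    watch_memory()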
Thanks