20-06-23 02:54 PM
Hi,
We are interested in the NLP features of Decipher and have noticed that they require a physical NVIDIA GPU to run.
This will be a bit of a problem, since servers normally don't come with a dedicated GPU, especially one as powerful as an NVIDIA card, because most servers aren't expected to render anything.
Additionally, almost all of our servers are virtual and we are currently running an 'into the cloud' program, so it will be difficult to argue for a physical machine with a dedicated GPU.
Even our workstations don't have dedicated GPUs, since integrated graphics are sufficient for daily work.
Is there any option to get NLP to run in a virtual environment?
Are there any plans to add a non-GPU version in the future?
Our robots will run somewhere in the basement and will have all night to 'think' about how to put the documents into a structure we can use.
22-06-23 10:40 AM
Hi Walter,
I use a virtual server on Azure Labs with a GPU and it works well, so it is possible, but it will depend on what setup you're using. It's worth noting that you'll need a GPU on every server where the Data Capture Client is installed.
Due to the complexity of the NLP algorithm, it's unlikely we'll ever be able to support it on a CPU. I think most document-processing NLP services need a GPU, simply because of the differences in how the chips are constructed. A CPU could do it, but it would take substantially longer.
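If you do try a virtual machine, a quick way to confirm the GPU is actually visible inside it is a small check like the one below (just a generic sketch that assumes PyTorch is installed for the test; it isn't part of Decipher itself):

    # Check whether a CUDA-capable NVIDIA GPU is exposed to this machine
    import torch
    if torch.cuda.is_available():
        print("GPU found:", torch.cuda.get_device_name(0))
    else:
        print("No CUDA GPU visible to this machine")

If that reports no GPU, the NLP component won't have anything to run on, whatever the virtualization layer.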
Thanks