Hi Tom,
I have a process that works through 75,000+ records a day in a single queue, and I initially ran into the same issues as you. The key for my process was to reduce the number of columns in the data capture before the data makes it to the collection. The collection is where your memory errors generally occur: collections are held in memory, so they cannot handle large volumes of data the way the work queue can, which is SQL-driven and stored in the database.
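To illustrate the column-trimming idea (Blue Prism stages are built visually, so this Python sketch is only a model of the concept; `KEEP_COLUMNS`, `trim_record`, and the sample field names are all hypothetical, not from my actual process):

```python
# Hypothetical sketch: trim columns before a record ever reaches the collection,
# so each row held in memory is as small as possible.
KEEP_COLUMNS = {"AccountID", "Amount", "Status"}  # assumed field names

def trim_record(record: dict) -> dict:
    """Drop every column the process never uses downstream."""
    return {col: val for col, val in record.items() if col in KEEP_COLUMNS}

raw_records = [
    {"AccountID": 1, "Amount": 10.0, "Status": "New",
     "Notes": "long free text", "LegacyRef": "X-42"},  # extra columns to drop
]
collection = [trim_record(r) for r in raw_records]
```

The fewer columns each row carries, the less memory the collection needs while a batch is being built.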
My process also captures the 75,000 records in increments of 5,000, and I perform some initial data cleansing (several loop stages) before sending them on to the queue. Reducing the columns was the first step. We also recently discovered that the log files and work queue tables on the Blue Prism server needed to be cleaned out and put on a regular maintenance schedule, and we added more RAM to the server to help with all of the data "crunching".
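Again purely as a sketch of the batching approach (the chunk size matches mine, but `cleanse_batch` and `add_to_queue` are stand-ins for the cleansing loop stages and the Add To Queue action, not real Blue Prism calls):

```python
# Hypothetical sketch: work the day's records in fixed-size batches so the
# in-memory collection never holds all 75,000 rows at once.
def cleanse_batch(batch):
    # stand-in for the initial data-cleansing loop stages
    return [r for r in batch if r.get("Status")]

def add_to_queue(batch):
    # stand-in for Blue Prism's Add To Queue action; the queue itself is
    # SQL-backed, so loading 5,000 items at a time is not a memory concern
    pass

def process_in_batches(records, chunk_size=5000):
    for start in range(0, len(records), chunk_size):
        batch = cleanse_batch(records[start:start + chunk_size])
        add_to_queue(batch)
```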
However, adding all 5,000 records to the queue at once has never been a problem for me, so that issue may come down to resource limitations on your workstation.