About handling remote file streams, you (Johan) gave us a few interesting pointers.
Using a FileTransfer object whose reference is held server-side in a ConcurrentHashMap, and pushing bytes repeatedly in chunks (always passing the reference key), will of course work.
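To make sure I understood the idea, here is a minimal sketch of what I think you meant. The class and method names (FileTransferService, beginTransfer, pushChunk, endTransfer) are my own hypothetical ones, not from your code — the point is only the ConcurrentHashMap of transfer keys and the repeated chunk-push calls:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the server-side registry described above:
// one open transfer per reference key, fed by repeated remote calls.
public class FileTransferService {

    // The key handed back to the client identifies its open transfer.
    private final ConcurrentHashMap<String, OutputStream> transfers =
            new ConcurrentHashMap<>();

    // Called once by the client to open a transfer; returns the reference key.
    public String beginTransfer(String targetPath) throws IOException {
        String key = UUID.randomUUID().toString();
        transfers.put(key, new FileOutputStream(targetPath));
        return key;
    }

    // Called repeatedly by the client, one chunk per remote call.
    public void pushChunk(String key, byte[] chunk, int length) throws IOException {
        OutputStream out = transfers.get(key);
        if (out == null) {
            throw new IOException("Unknown transfer key: " + key);
        }
        out.write(chunk, 0, length);
    }

    // Called once after the last chunk has been pushed.
    public void endTransfer(String key) throws IOException {
        OutputStream out = transfers.remove(key);
        if (out != null) {
            out.close();
        }
    }
}
```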
Still, I'm wondering what happens when N clients call the same remote method at the same time...
If the method takes some time to process, wouldn't it be better to return immediately and hand the long-running work to one thread per client (preferably from a pool)? Otherwise, since file handling over the Internet can take a while (for example when one client is pushing a big file), the other clients will either have to wait or get a timeout. Which of the two happens is what I wanted to know when I asked whether you were queuing the calls.
The thing is that I already wrote an implementation where I managed a thread pool in the plugin's Remote service on the server. That worked because each thread received its own RemoteInputStream and could return its own result object. I handle threads on the server side because the plugin is designed to process lots of documents sent by concurrent users. It worked well because, once the process had started, the client and the thread were directly connected to each other (through the remote stream). Now that I know RemoteInputStream is not an option, since it will not use the tunnel, I'm looking for alternatives.
With your idea, the client has to call the server repeatedly to push more bytes, but I don't see how to combine that with threads...
The client always has to call the Remote service (the one you exported), which is the unique entry point to the thread pool, but I don't want that entry point (the server plugin) to be the one doing the actual work; I would like a worker thread to do the processing. Somehow I have trouble merging the two ideas.
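Here is the kind of merge I had in mind, as a rough sketch — I'm not sure it's the right approach, which is why I'm asking. All names (ChunkedTransferEntryPoint, begin/push/end, process) are hypothetical. The exported entry point only enqueues each chunk into a per-transfer BlockingQueue and returns immediately; a worker from the pool drains the queue and does the long-running work, so the entry point is never the one processing:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: the remote entry point hands chunks to a pooled
// worker through a queue instead of processing them itself.
public class ChunkedTransferEntryPoint {

    private static final byte[] POISON = new byte[0]; // end-of-stream marker

    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final ConcurrentHashMap<String, BlockingQueue<byte[]>> queues =
            new ConcurrentHashMap<>();

    // Remote call: open a transfer; a worker starts draining the queue at once.
    public void begin(String key) {
        BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(16);
        queues.put(key, queue);
        pool.submit(() -> {
            try {
                byte[] chunk;
                while ((chunk = queue.take()) != POISON) {
                    process(chunk); // the long-running work lives in the worker
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                queues.remove(key);
            }
        });
    }

    // Remote call: returns as soon as the chunk is enqueued; blocks briefly
    // only when the queue is full, which throttles a client pushing a big file.
    public void push(String key, byte[] chunk) throws InterruptedException {
        queues.get(key).put(chunk);
    }

    // Remote call: signal end of stream to the worker.
    public void end(String key) throws InterruptedException {
        queues.get(key).put(POISON);
    }

    // Placeholder for the actual document processing.
    protected void process(byte[] chunk) {
    }
}
```

The bounded queue would also give some back-pressure: a slow worker makes the pushing client wait on its own queue without blocking the other clients' calls. Does that match what you were suggesting, or is there a simpler way?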
Any thoughts?