Sending multiple files over a single TCP connection

Started by 2B||!2B, October 13, 2020, 08:53:15 AM


2B||!2B

I would like to send multiple files at the same time. Normally the approach would be a distinct connection per file; this is how FTP works.
But if several files were to be sent over a single socket, how would I go about this?
Because receiving the next file chunk has to wait until the current chunk has finished being written to disk, this creates a problem: if the hard drive is choked, other messages such as chat messages slow down too.

Example:
recv called with an 8 KB buffer -> receive 8 KB of the file -> async WriteFile -> wait until WriteFile finishes writing the chunk to disk -> recv again.

In the example above, if WriteFile took a long time to finish for whatever reason, it would create an unnecessary delay for other messages sent over the same socket, because recv is paused until that particular chunk is written to disk.

Any suggestions, ideas?

hutch--

You could try sending them as a ZIP file or a TAR ball like the Unix guys. If you don't want to send them separately as normal, you need some technique to differentiate between the files; a ZIP or similar archive will do that for you.

2B||!2B

hutch,
Thanks for that. However, that still doesn't solve the issue I mentioned. Should I just open a new socket per file? Is this a good approach?

hutch--

You have multiple choices here. You could send them serially, one after the other. Depending on the bandwidth you have, you can try multiple sockets, but there will probably be no gain, as the bandwidth will simply be shared between the sockets. However you transfer multiple files across a network, you still need some mechanism to distinguish each file from the following file.

Generally a ZIP or TAR type file will be more efficient, as it both compresses the data and avoids the spacing losses between each file; for large data that is how it is usually done. I would be looking for a technique where the compression/decompression is done on the fly.

Vortex

Quote: aPLib v1.1.1 - compression library

Compression library based on the algorithm used in aPACK. Includes both 32-bit and 64-bit libraries, and source code for decompression in C and asm.

http://ibsensoftware.com/download.html

2B||!2B

Thanks hutch and Vortex.
You are right, sending multiple files will also depend on bandwidth. But the main goal is not to improve speed; it is to keep receiving other data while a file transfer is in progress, because the socket itself pauses recv until the write to disk finishes. So if I opened a new socket, it would only serve this file and would be handled by a different thread.
Very informative, and a good method of compression.
Does this lib provide any better compression ratio than the RtlCompressBuffer API?

jj2007

Quote from: 2B||!2B on October 17, 2020, 01:41:46 PM
Does this lib provide any better compression ratio than the RtlCompressBuffer API?

Certainly; RtlCompressBuffer is notably inefficient. I use aPLib in UnzipFiles because it's tiny (5.5k) and fast.

DebugBSD

Quote from: 2B||!2B on October 17, 2020, 01:41:46 PM
But the main goal is not to improve speed; it is to keep receiving other data while a file transfer is in progress, because the socket itself pauses recv until the write to disk finishes.

I think the best method is using some kind of finite state machine where you send different files through the same socket, without needing to close and (re)open the connection between client and server. You only need to split the files into equal-sized chunks (using a page size) and send all of them through the same socket using some kind of round-robin scheduling algorithm. In that case you will need to implement a small protocol that tracks the availability of all pages, so that each file can be reconstructed on the client once every page has been received.

Have a nice day!
Guille
Happy Hacking!