
Explorer "copy files"

Started by KeepingRealBusy, September 18, 2013, 01:57:22 PM


Antariy

Quote from: KeepingRealBusy on September 20, 2013, 01:46:17 PM
I thought that too, until it started copying the larger files (at the end of the file list), but the speed did not increase very much (from 15.6 KBS to about 18.5 KBS), whereas when copying the 131072 files that were all the same size (6,152 KB) the copy was running at 56 KBS. I just don't know.

Ah... then I don't know the reason either. Just a thought: maybe the system cache "overran" - it was flooded by the previous files, and Windows was in no hurry to flush it... BTW, you can check that with the program in my signature - RAM Clear. When you copy a lot of files and hit a slowdown somewhere in the middle, look at the free-memory indicator; if it's low, then it's obviously a caching problem with the lazy flusher. Just "for fun" you may try starting an "optimization" of, let's say, half of the memory without interrupting the copy. Everything will probably seem to "hang", but after the optimization the copy should go much faster for some time - until the system cache gets flooded again. Explorer uses a strange "try to buffer in memory things that are a lot bigger than memory" strategy ::)
(Just a note: RAM Clear needs to be switched into Expert mode and then into Advanced algorithm mode to free more than 1.5-2 GB of RAM.)

Antariy

Quote from: KeepingRealBusy on September 20, 2013, 02:04:26 PM
Win7 seems to copy the files sequentially; at least the files have increasing file names and they are displayed in order as the copy proceeds (only some of the names are shown - it happens too rapidly to display them all).

Then the problem is probably the too-greedy, too-lazy buffering that Explorer implements.
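
(Not from the thread, just a sketch: if the lazy flusher really is the problem, one experiment would be to copy through a write-through destination handle, so the writes are pushed to disk immediately instead of piling up behind the lazy writer. The helper name, flag choice and lack of error handling are my assumptions.)

#include <windows.h>

/* Hypothetical helper: open the copy destination so that each WriteFile
   goes to disk before it returns, instead of sitting in the lazy-writer
   queue. (FILE_FLAG_NO_BUFFERING would bypass the cache completely, but
   it needs sector-aligned buffers, so it is left out of this sketch.)  */
HANDLE OpenDestWriteThrough(const wchar_t *path)
{
    return CreateFileW(path,
                       GENERIC_WRITE,
                       0,                 /* no sharing while we copy     */
                       NULL,
                       CREATE_ALWAYS,
                       FILE_ATTRIBUTE_NORMAL | FILE_FLAG_WRITE_THROUGH,
                       NULL);
}

Write-through is slower per file, but free memory should stay roughly steady during the copy, which would make it easy to tell whether the cache is the culprit.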

Magnum

What if you zipped all those files, sent one big one, and then decompressed it?

I am going to test that out myself because I am curious.

Andy

Linux is an option too, if you have it.

I found a lot of things are faster using it.

Take care,
                   Andy


KeepingRealBusy

Quote from: Magnum on September 21, 2013, 12:10:49 AM
What if you zipped all those files, sent one big one, and then decompressed it?

I am going to test that out myself because I am curious.

Andy

Linux is an option too, if you have it.

I found a lot of things are faster using it.

Andy,

Thank you for the suggestions.

Two or more things come to mind. There would be a huge amount of time involved both in zipping the files and later in unzipping them; remember, we are talking about 500+ GB of data. I do not know if WinZip even supports such a large zip, and I also do not know what Win7 uses in its "send to compressed folder" function and whether it would support such large inputs. Then finally, how does the unzip function really work? Will it create each file at its known size and then copy the content, or will it create the file and then keep appending one huge block after another until the file is all copied? The latter can lead to fragmentation.

I do not have Linux, and do not plan to learn to use it.

Dave.

KeepingRealBusy

Quote from: jj2007 on September 20, 2013, 10:08:11 AM
Quote from: KeepingRealBusy on September 20, 2013, 08:25:51 AM
so the other half is stuck with Office 2003.

Which is better than more recent versions IMHO ;-)

My office nerds tell me that Linux is a lot faster in copying files than Windows Explorer. Have you thought of writing your own file copy routine? Fat buffer for reading in two gigs of files etc? With so many files, it might be worth a try.

http://superuser.com/questions/213954/why-is-windows-explorer-a-lot-slower-than-filezilla-when-doing-ftp-transfers

How about good old Command-Line Xcopy? With S: being the source and T: the target:
xcopy /K /R /E /I /S /C /H /G /X /Y s:\*.* t:\

Or, even better: http://www.raymond.cc/blog/12-file-copy-software-tested-for-fastest-transfer-speed/2/

Scroll down to see "The Results and Findings". It seems FastCopy would be good for your case.
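
(A minimal sketch of the "fat buffer" copy idea above, in plain Win32 C - the 64 MB buffer size, the flags and the lack of retry logic are assumptions, not anything taken from Explorer or FastCopy.)

#include <windows.h>

/* Copy one file in large chunks: read a big block, write it out, repeat. */
BOOL FatBufferCopy(const wchar_t *src, const wchar_t *dst)
{
    const DWORD BUFSIZE = 64 * 1024 * 1024;   /* 64 MB chunks (assumption) */
    HANDLE hSrc = CreateFileW(src, GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
    HANDLE hDst = CreateFileW(dst, GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    BYTE  *buf  = (BYTE *)VirtualAlloc(NULL, BUFSIZE, MEM_COMMIT, PAGE_READWRITE);
    BOOL   ok   = (hSrc != INVALID_HANDLE_VALUE && hDst != INVALID_HANDLE_VALUE && buf != NULL);
    DWORD  got, put;

    while (ok && ReadFile(hSrc, buf, BUFSIZE, &got, NULL) && got)
        ok = WriteFile(hDst, buf, got, &put, NULL) && put == got;

    if (buf) VirtualFree(buf, 0, MEM_RELEASE);
    if (hSrc != INVALID_HANDLE_VALUE) CloseHandle(hSrc);
    if (hDst != INVALID_HANDLE_VALUE) CloseHandle(hDst);
    return ok;
}

A buffer in the tens of megabytes plus the sequential-scan hint keeps the disk streaming instead of seeking between many tiny requests.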

JJ,

I downloaded FastCopy and started examining the source code, and several things stood out. It creates a file structure for all of the files at once - I do not know if Windows can even support 131072 open files at the same time. Then it tries to copy the files concurrently with multiple threads. The copies are made (for any single file) as multiple block transfers, growing the file with each block. With multiple files being copied at the same time, this can only cause massive fragmentation.

No thank you.

Dave.

jj2007

Dave,
I didn't dig into the source code, sorry. However, the forums I studied put FastCopy at rank 1, followed by TeraCopy (the most popular tool). Among the strong points, low resource use was mentioned frequently.

Wilders:
Quote
The fastest for me is FastCopy and also, important, it doesn't make fragmented files like TeraCopy v2 and BurstCopy (tested with Defraggler)

KeepingRealBusy

JJ,

Maybe I have to study the code some more, to see if they can somehow create a full-sized file before copying the content. Maybe by creating the file, seeking to the last BYTE, writing a 0, then seeking back to the front and starting the copy. The CPP code is hard to follow, especially with the unreadable "commentary".
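
(For what it's worth, the "seek to the last byte and write a 0" idea would look roughly like this in plain Win32 C. The helper name is made up, and error handling is minimal.)

#include <windows.h>

/* Pre-size a freshly created file by touching its last byte, then rewind
   so the real copy can start at offset 0.                                */
BOOL PreallocateByLastByte(HANDLE hFile, LONGLONG size)
{
    LARGE_INTEGER pos;
    DWORD written;
    BYTE zero = 0;

    pos.QuadPart = size - 1;                 /* position of the last byte  */
    if (!SetFilePointerEx(hFile, pos, NULL, FILE_BEGIN))
        return FALSE;
    if (!WriteFile(hFile, &zero, 1, &written, NULL))  /* force allocation  */
        return FALSE;

    pos.QuadPart = 0;                        /* rewind for the actual copy */
    return SetFilePointerEx(hFile, pos, NULL, FILE_BEGIN);
}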

Dave.

Magnum

Dave,

What is the MTBF for your drives?

My thoughts on fragmentation.

I used to defrag once a week.

I went to once a month and did not notice any decrease in speed of my system.

Dave, you might not want to read below the line.

-------------------------------------------------------------------

For those who have been, or may be considering Linux.



Quote:
Originally Posted by yancek
"they don't get fragmented until a drive is almost full."

False.

They get fragmented (though admittedly not *that much* as on Windows). The key point is the I/O schedulers, which reorder operations in a smart manner. So the whole point is not the lack of fragmentation but the smart scheduling of the I/O, which prevents the stress typically associated with fragmented devices and makes the performance penalty a non-issue. However, this can vary from fs to fs. Reiserfs has known problems with fragmentation, but the rest of the fs's should be fine unless the disk is almost full, as you say. Even FAT will perform OK on Linux, unlike Windows XP and previous versions (I know nothing about later ones).

Quote:
If you just google "why you don't need to defragment linux" you will get 216,000 hits like this one:
Most of them are telling you the wrong argument, I assume, like the one you linked. I stumbled on that same thread long ago and posted a correction below, signed as "Jesgue", so you can search for that in the thread to see an extended explanation about elevators, or I/O schedulers.

That being said, you usually don't defragment Linux. You could always back a partition up, then format it, and restore the backup, which is like a poor man's defrag. That's not too appealing, I know.
Take care,
                   Andy


KeepingRealBusy

Andy,

I don't generally defrag very often, but in the case of these files I wanted the fastest access possible. According to the Win7 tools, none of my drives exceeds 3% fragmentation, and the 4 TB drives are under 1%.

My main objective with this thread was to rant about my stupidity in the initial copy-files fiasco, but Dave supplied me with a useful hint that I will follow up on, i.e. "Are these Seagate 4 TB drives safe to connect to my old XP system?"

Dave.

KeepingRealBusy

Quote from: dedndave on September 18, 2013, 05:44:40 PM
being as the drives are > ~2 tb, they are GUID Partition Table (GPT) formatted, rather than MBR
XP doesn't support GPT directly, although there are drivers available
but, that may explain the defrag issue

i will give you a heads-up....
don't connect a GPT drive to an XP system without GPT support
it is possible that you lose a bunch of data
and, you may have trouble getting it to be a 4 tb drive, again   :redface:

Dave,

You were right - it is only supported on XP SP3, according to the website (the packaging only said XP).

Dave.

dedndave

that's odd
i run XP SP3, and i had to install a driver to support GPT drives
perhaps what they mean is - they provide a driver, and it requires XP SP3 ?

KeepingRealBusy

Dave,

I think that is what they mean.

I have no need to do this; I already have several pairs of "too small" backup drives (750 GB) that will work quite well on the XP system.

Dave.

KeepingRealBusy

I am looking into downloading and using UltraDefrag.

Anyone know anything about it?

Dave.

dedndave

i use Puran Defrag
that's on a 32-bit system
i am very happy with it

Antariy

Quote from: KeepingRealBusy on September 21, 2013, 05:10:31 AM
Maybe I have to study the code some more, to see if they can somehow create a full-sized file before copying the content. Maybe by creating the file, seeking to the last BYTE, writing a 0, then seeking back to the front and starting the copy. The CPP code is hard to follow, especially with the unreadable "commentary".

SetFilePointer and SetEndOfFile maybe?
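
Something along these lines, maybe (just a sketch - the helper name is made up, and error handling is minimal):

#include <windows.h>

/* Pre-size a file with SetFilePointerEx + SetEndOfFile, then rewind.
   No data is written; the file is simply extended to "size" bytes.   */
BOOL PreallocateWithSetEndOfFile(HANDLE hFile, LONGLONG size)
{
    LARGE_INTEGER pos;

    pos.QuadPart = size;
    if (!SetFilePointerEx(hFile, pos, NULL, FILE_BEGIN))
        return FALSE;
    if (!SetEndOfFile(hFile))        /* extend the file to "size" bytes */
        return FALSE;

    pos.QuadPart = 0;                /* back to the start for the copy  */
    return SetFilePointerEx(hFile, pos, NULL, FILE_BEGIN);
}

Doing this once per file before copying should let the file system allocate the space in one piece, instead of growing the file block by block.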

Also, just a thought, Dave: maybe the slowdown while copying is caused by the Explorer file list in the destination window? Did you try closing the window of the folder you were copying the files into? I.e., when many files are copied, Explorer takes more memory to hold the listview in which the files are displayed, and it constantly updates the information about the files shown in the window. With that many files, Explorer probably spends a lot of resources on that (memory load and disk requests).