Mr. Ghisler,
As usual, thank you for your quick feedback.
But I didn't really understand your message. Are you expecting everyone to ask for the same thing? How could that happen? Are users supposed to discuss on the forum and agree on a single solution before you implement it? I really don't know; I'm quite new here.
However, I think that, on the contrary, most people have the same need. You produce a very good list of duplicated files, but selecting the files in it manually takes a lot of time. That's the big problem, don't you think?
Most users are doing by hand a task that takes hours and could easily be automated, in my humble opinion. While they select the files to delete by hand, I believe they make a few decisions and then apply those decisions to each group of files. At least, that's how I do it. (Or, to be honest, how I try to do it, because with hundreds of files it's practically impossible.)
These decisions seem quite simple. When the same file appears in 4 or 5 directories, we choose one of them and try to remember that decision for the following groups of duplicates. You could solve this case by asking the user which directory he prefers, couldn't you? When the duplicates are in the same directory, it's a bit harder: it's often the longest name, but more precisely it's the most meaningful name that we choose. OK, you're right, in this case it's practically impossible to implement a 100% reliable method for choosing the best filename, since the shortest could be the most meaningful.
Imagine you have 2 identical pictures, one named after the place, "haiti.jpg", and the other after the date, "12july2003.jpg": both pieces of information are useful, and no program could ever guess that it should delete one file and rename the other to "haiti_12july2003.jpg".
This is just an example to show that it's impossible to write a perfect algorithm. But if we wait for a perfect algorithm, I'm afraid we are condemned to select hundreds of duplicates by hand for years to come.
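(Just to make the decision rules above concrete, here is a small Python sketch of the kind of imperfect heuristic I mean. All the names in it, choose_keeper, preferred_dirs and so on, are hypothetical; it only shows the idea, not how you should implement it.)

import os

def choose_keeper(group, preferred_dirs):
    # group: full paths of files with identical content (one group).
    # preferred_dirs: directories ranked by the user, best first.
    # Rule 1: keep the copy that lives in the best-ranked directory.
    for directory in preferred_dirs:
        for path in group:
            if os.path.dirname(path) == directory:
                return path
    # Rule 2 (same-directory case): keep the longest file name,
    # on the rough guess that it is the most meaningful one.
    return max(group, key=lambda p: len(os.path.basename(p)))

def candidates_for_deletion(groups, preferred_dirs):
    # Everything except each group's keeper is a candidate. Only a
    # candidate: the user still reviews and deletes by hand.
    doomed = []
    for group in groups:
        keeper = choose_keeper(group, preferred_dirs)
        doomed.extend(path for path in group if path != keeper)
    return doomed

As the haiti.jpg example shows, rule 2 will sometimes keep the "wrong" file; that's precisely why the result should only ever be a selection, never an automatic deletion.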
The fact is that a lot of users just want one simple thing: to delete duplicates quickly, to free space on their hard drive, without losing time or data.
(This is typically the case with pictures and the like that you've copied to a portable device, and then back to your hard drive because you weren't sure you still had them, ...) For this, you've already done half of the work (quickly finding the duplicates), and the second half should be to quickly select files in the list that you show, I think.
Personally, I think TC should not have a fully automatic feature for deleting duplicated files. In fact, TC should only help make the selection in the duplicated files list faster. Even though I consider TC a very reliable tool, I would never let it delete my files without being able to look first at what it is going to delete.
That's why, once we have obtained the duplicated files list (Alt+F7 > search > feed to listbox), an additional tool to (un)select files in that list would be incredibly helpful. Of course, the most powerful option(*) of this tool would be to ensure that at least 1 file is always left selected (or unselected) in each group.
(*) An option, maybe shown as a checkbox, because some users might want to (un)select all the files in a group.
Other (un)selection options could use the date, regular expressions, or TC plugin information (EXIF, image width, ...).
So it would not be about deleting; it would be about (un)selecting files in groups (while ensuring 1 file stays (un)selected in each group).
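(Here is a second small Python sketch, this time of the (un)selection logic itself, with the safety option from (*): any kind of test marks the files, and the guard makes sure a whole group is never selected at once. Again, every name, select_in_groups, predicate and so on, is hypothetical; it's a sketch of the behavior I'd like, not a proposed implementation.)

import re

def select_in_groups(groups, predicate, keep_one_unselected=True):
    # groups: lists of paths, one list per group of identical files.
    # predicate: function(path) -> bool; could test the date, a
    #     regular expression, or plugin data (EXIF, image width, ...).
    # keep_one_unselected: the (*) checkbox; when True, at least one
    #     file per group always stays unselected and so survives DEL.
    selected = set()
    for group in groups:
        matches = [p for p in group if predicate(p)]
        if keep_one_unselected and len(matches) == len(group):
            # The test caught every copy in this group: spare one
            # (arbitrarily the first; the real tool would let the
            # user's preferences decide which).
            matches = matches[1:]
        selected.update(matches)
    return selected

# Example: select duplicates whose name ends in a copy suffix such
# as "photo (1).jpg", without ever selecting a whole group.
copy_suffix = re.compile(r"\(\d+\)\.[^.]+$")
# selected = select_in_groups(groups, lambda p: bool(copy_suffix.search(p)))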
Once that's done, we can fine-tune the selection by hand if we want to, and then it's our responsibility to press the DEL key or not.
That's what would solve the problem for a lot of users, in my humble opinion. It raises no safety concerns, and it's possible to implement.
Thank you
