Compare directories (on diff. drives) by content

English support forum

Moderators: white, Hacker, petermad, Stefan2

mmm
Member
Posts: 120
Joined: 2020-08-10, 12:32 UTC

Re: Compare directories (on diff. drives) by content

Post by *mmm »

georgeb wrote: ↑2023-01-09, 12:25 UTC
mmm wrote: ↑2023-01-09, 09:24 UTC
I requested the same Sync Dirs enhancement here:
viewtopic.php?p=417266&hilit=mmm+ghisler#p417266
The excerpt above needs my clarification:
The external link does not call for "consolidate files from many folders into one" being discussed in this thread.
It is a simple enhancement request for implementing an "Ignore names" option in Sync Dirs; driving factor being disk cleanup.

Hope it helps,
mmm
algol
Senior Member
Posts: 448
Joined: 2007-07-31, 14:45 UTC

Re: Compare directories (on diff. drives) by content

Post by *algol »

mmm wrote: 2023-01-10, 10:39 UTC The excerpt above needs my clarification:
The external link does not call for "consolidate files from many folders into one" being discussed in this thread.
It is a simple enhancement request for implementing an "Ignore names" option in Sync Dirs; driving factor being disk cleanup.
Thanks for setting the record straight. I had no intention to misrepresent your earlier request.

An "Ignore names" option in Sync Dirs would, in practice, mean searching for duplicates in moved/renamed locations as well. So apart from requesting such a feature - which I wholeheartedly support - did you also come up with any suggestions of your own on how to represent the duplicates found in "Sync Dirs", making them distinguishable from the other files for independent further processing ("cleanup")? How should these binary duplicates by content, regardless of their names, in your opinion be integrated into the current groups "Unique Left"/"Unique Right"?

Suffice it to say that even if you didn't care about "consolidate files from many folders into one", the proposal by @georgeb as discussed in this thread - in some kind of more universal approach - would be perfectly capable of handling both situations ("cleanup" AND "consolidation") in one comprehensive environment - the basics of which are already laid out in the "Sync Dirs" tool in its current form.
mmm
Member
Posts: 120
Joined: 2020-08-10, 12:32 UTC

Re: Compare directories (on diff. drives) by content

Post by *mmm »

Algol,
I am not going to participate in the implementation talk prior to seeing an "Approved" flag raised by TC ladies.

Best,
mmm
algol
Senior Member
Posts: 448
Joined: 2007-07-31, 14:45 UTC

Re: Compare directories (on diff. drives) by content

Post by *algol »

mmm wrote: 2023-01-11, 05:18 UTC I am not going to participate in the implementation talk prior to seeing an "Approved" flag raised by TC ladies.
...raised by TC ladies??? :? Not sure what that is supposed to mean? :?
HalbschuhTouri
Junior Member
Posts: 61
Joined: 2023-01-20, 09:33 UTC

Re: Compare directories (on diff. drives) by content

Post by *HalbschuhTouri »

Hello, I'm new to this forum. However, the suggestions about "SyncDirs" debated in this thread sound to me like a brilliant concept for dealing with renamed or moved files. I've been looking for that same kind of functionality for a very long time.

And to be honest I'm kind of wondering why the author of TC, Mr. Ghisler, seemingly wouldn't care to comment on such a technically mature proposal contrary to his otherwise expressed willingness to help on even minor occasions?
HalbschuhTouri
Junior Member
Posts: 61
Joined: 2023-01-20, 09:33 UTC

Re: Compare directories (on diff. drives) by content

Post by *HalbschuhTouri »

No relevant comments any more? Really? This proposal is far too good to let the thread sink into oblivion.
georgeb
Senior Member
Posts: 250
Joined: 2021-04-30, 13:25 UTC

Re: Compare directories (on diff. drives) by content

Post by *georgeb »

algol wrote: 2023-01-10, 17:18 UTC Suffice it to say that even if you didn't care about "consolidate files from many folders into one" the proposal by "@georgeb" as discussed in this thread - in some kind of a more universal approach - would be perfectly capable of handling both situations ("cleanup" AND "consolidation") in one comprehensive environment - the basics of which are already laid out in the "Sync Dirs"-tool in its current form.
Thank you. This is exactly what my proposal was all about!
oko
Senior Member
Posts: 200
Joined: 2007-05-03, 16:22 UTC

Re: Compare directories (on diff. drives) by content

Post by *oko »

The theme of sync-dirs and find-duplicates is interesting. I use both functions. I admit I haven't read or understood absolutely everything, because there is so much and it is sometimes complicated, but I have read some of it here and in other posts.

I'll put some of my thoughts here, maybe someone will find something inspiring in it.

1/ The syncdir tool is for synchronizing. Its purpose is not to find files or duplicates, but to compare and reconcile two sides. The 1:1 principle is exactly what is to be followed.

2/ Since syncdir principally compares rather than searches, the term "duplicate" is not used; "equal/not equal" is. Probably the word "unique" should not be used either, since some people tend to coin names like real-unique and non-real-unique. A better term from a comparative perspective would perhaps be paired/unpaired (non-pair). What has a pair is either equal or unequal. What has no pair (from that perspective it is unique, alone, single) cannot be judged equal or unequal, but it is also an important state for synchronization.
Syncdir has its own criteria (name, date, attributes, content) for what it considers unique and what not. But some people apply their own criteria for determining uniqueness, hence the misinterpretations. When a file differs in at least one property, it cannot be considered non-real-unique or not-true-unique. For example, if it has the same content but not the same name, it is unique according to the syncdir criteria. According to other criteria it could be described as, e.g., content-non-unique.

3/ Perhaps an "ignore filenames" option could be added to syncdir (with the option to turn it on or off), but the 1:1 concept still has to be kept: if syncdir cannot pair identical files by name, it would compare files by size/content and mark files identical in content but not in name as non-identical, but with a different (new) marker, not the red crossed-out equals sign. It would also add a new filter button to hide/show them from view. The files would be next to each other (on both sides in the same row), and marking such a pair for copying from right to left or left to right would mean copying only the name, not the whole file. If there were multiple identical files in a folder, only one would be paired by syncdir and the others would be handled as now (as unique).
I know this would not solve some situations, but it would solve others. It would be a bit of a step forward. E.g. a renamed file could be resolved by syncdir, which it cannot be today. It would also make it easier to see the folders where name changes were made, and possibly by what renaming scheme.
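The two-pass pairing described in point 3 can be sketched roughly as follows. This is a hypothetical illustration of the idea, not how TC is actually implemented; for simplicity, the folder contents are modeled as plain name→bytes maps instead of real files:

```python
def pair_folders(left: dict[str, bytes], right: dict[str, bytes]):
    """Pair files 1:1: first by name, then remaining files by identical
    size/content ('ignore filenames'); everything else stays unique.
    Returns (name_pairs, content_pairs, unique_left, unique_right)."""
    # 1) classic pairing by name, as syncdir does today
    name_pairs = [(n, n) for n in sorted(left.keys() & right.keys())]
    l_rest = {n: c for n, c in left.items() if n not in right}
    r_rest = {n: c for n, c in right.items() if n not in left}
    # 2) second pass: pair leftovers by size, then content -
    #    these would get the new marker, not the equals sign
    content_pairs = []
    for ln, lc in list(l_rest.items()):
        match = next((rn for rn, rc in r_rest.items()
                      if len(rc) == len(lc) and rc == lc), None)
        if match is not None:
            content_pairs.append((ln, match))  # same content, new name
            del l_rest[ln], r_rest[match]      # keep the 1:1 relation
    return name_pairs, content_pairs, sorted(l_rest), sorted(r_rest)
```

Note how each leftover file is consumed as soon as it is paired, so a second identical file on one side remains unique, matching the "only one would be paired" rule above.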

4/ I have a feeling that the reason some want a find-duplicates function inside syncdir is the user-friendly appearance/layout of syncdir. Therefore, it should be considered whether it would be better to improve the appearance of the find-duplicates results window, or to mimic the appearance of syncdir. Apparently the look/layout of syncdir is more natural to many users than the look of find-duplicates, so syncdir and find-duplicates could remain separate functions. Maybe if find-duplicates were separated from find-files into its own window with the appearance of syncdir (e.g. named "find-dups"), the right connectors could later be found to link find-dups and sync-dirs. Today it is hard to imagine, graphically and functionally, how duplicate assignment in syncdir should work; but once such a graphical representation existed in a separate find-dups function, better ideas would come up on how to link sync-dirs with find-dups.

5/ Find-duplicates can find duplicates in any location (and combination of locations) and select files according to various criteria to do with them whatever is desired. But if none of these options in find-duplicates suits someone - because their criteria are not fixed or cannot be automated, and they therefore need to manually see and assess the search results (e.g. decide which files to select) - no software solution will help, not even implementing it in syncdir. Just improve the appearance of find-duplicates to make manual assessment as pleasant as possible (orientation, simplicity, clarity, etc.).

Translated with DeepL.com (free version)
georgeb
Senior Member
Posts: 250
Joined: 2021-04-30, 13:25 UTC

Re: Compare directories (on diff. drives) by content

Post by *georgeb »

oko wrote: 2024-01-13, 23:09 UTC 1/ The syncdir tool is for synchronizing. Its purpose is not to find files or duplicates, but to compare and reconcile two sides. The 1:1 principle is exactly what is to be followed.
At least its primary purpose is not about searching. But - as I see it - IF binary duplicates exist among the files to be reconciled, then the question whether to keep all of them in the final version, or whether there are wrongly-moved and/or misspelled/wrongly-renamed copies among them that rather ought to be removed during reconciliation, is ABSOLUTELY ESSENTIAL.
oko wrote: 2024-01-13, 23:09 UTC 2/ What has a pair is either equal or unequal. What does not have a pair (from that perspective it is unique, alone, single) cannot be compared whether it is equal or unequal, but it is also an important state for synchronization.
Syncdir has its criteria (name, date attributes, content) for what it considers unique and what not unique.
Sorry, but I totally stand by my differentiation of "truly-unique" vs. "pseudo-unique", as "true uniqueness" in the end (and from a logical rather than merely formalistic point of view) can only be a question of binary content. Let me give you an example: if two twin sisters each ordered exactly the same car, and on arrival both vehicles of course received individual license plates (= names) - can we now say that these are both "truly-unique" (singular) vehicles? As I see it - of course not. A license plate at best makes them "singular" in a very formalistic sense, while in reality they remain non-singular, identical copies of the very same model.

And yet the real problem in our file-structure case is that there may well exist completely identical, equal pairs which are not even recognized as such by the current "SyncDirs"-tool - because they may happen to be located someplace else and are thus erroneously deemed "unique" (or, in other terms, erroneously called "singular") by the current "SyncDirs"-tool.
oko wrote: 2024-01-13, 23:09 UTC 3/ Perhaps a "ignore filenames" option could be added to syncdir (with option to turn this option on or off), but it still has to be kept 1:1 concept, so if syncdir can not assign/pair identical files by name, it would compare files by size/content and mark files identical in content but not identical in name as none-identical, but with a different (new) marker, not a red crossed out equals sign. And it would also add a new filter button to hide/show them from view. The files would be next to each other (on both side in the same row), and marking them for copying from right to left or left to right for those files would mean copying only the name, not the whole file. If there were multiple identical files in a folder, only one would be paired with the other by syncdir and the others would be handled as now (as unique).
And this is exactly where a binary-duplicates search (only necessary between the prima-vista "unique" columns on both sides) inevitably comes in through the back door! I have no intention to REPLACE the already existing binary-duplicates search from <Alt>F7 - for general purposes other than reconciliation - by moving that capability entirely to "SyncDirs". I would just argue in favor of introducing that same algorithm (as it already exists) to be performed only between the two columns prima-vista deemed "unique" by the current "SyncDirs".

To spare you reading through this whole thread I'd like to point you to a concise summary of my whole concept which I just happened to re-publish yesterday as a comment to a new but similar, most recent thread:
viewtopic.php?p=447920#p447920

What I have to completely disagree with is your notion of handling multiple identical files by pairing only two of them and treating the others as "unique" - when by then we would KNOW that they are exactly NOT!

My approach would rather be a right-click context-menu option: clicking on any of those names within the (separately selectable) group of binary duplicates (left/right) would open a new (pop-up?) page showing ONLY the binary duplicates OF ONE PARTICULAR FILE - yet again within the proven view-mode concept of "SyncDirs" - allowing for individual inspection and individual selection for further steps thereafter.
oko wrote: 2024-01-13, 23:09 UTC 4/ I have a feeling that the reason why some want to get a find-duplicate function into syncdir is the userfriendly appearance/layout of syncdir. Therefore, it should be considered whether it would be better to improve the appearance of the find-duplicates results window, or to mimic the appearance of syncdir. Apparently the look/layout of syncdir is more natural to many users than the look of find-duplicates. So syncdir and find-duplicates could remain separate functions.
Yes, it is certainly true that the "SyncDirs" way of representing results in a structured, tree-/path-oriented manner is next to optimal. It facilitates further individual inspection and selection of the results, including looking at images or listening to sounds before making a final decision about what to do with those files next, and even offers the possibility to delete some of those (unwanted, redundant?) copies right in place - a level of versatility and flexibility far above and beyond the possibilities of the <Alt>F7 find-duplicates results window. That said, the <Alt>F7 approach wouldn't have to go. It certainly has its merits, as long as the number of pairs/triples of duplicates found doesn't go into the thousands and as long as the desired duplicates can be selected by a rather formalistic approach (using Num+ folder selection) and do not need the individual inspection which that latter representation is unable to offer.
oko wrote: 2024-01-13, 23:09 UTC 5/ Find-duplicates can find duplicates in any location (and combination of locations) and select files according to various criteria to do with them what is desired. But if none of these options in find-duplicates suit someone, because their criteria are not fixed or cannot be automated and thus they need to manually see and assess the results of the search (e.g. decide which to select), no software solution will help...
No objection here, except that IF individual inspection is (at least in parts) needed before making a final decision then the constraints-exposing representation of results in the "SyncDirs"-mode, albeit unable to provide automated decisions, can be a huge step in speeding up the decision-making-process by the remarkable versatility it has to offer.
Dalai
Power Member
Posts: 9364
Joined: 2005-01-28, 22:17 UTC
Location: Meiningen (Südthüringen)

Re: Compare directories (on diff. drives) by content

Post by *Dalai »

oko wrote: 2024-01-13, 23:09 UTC 3/ Perhaps a "ignore filenames" option could be added to syncdir (with option to turn this option on or off), but it still has to be kept 1:1 concept [...]
How would that work? Let's assume that I have a file called 1.txt on the left side and three files called 1.txt, 2.txt and 3.txt on the right side, all of them having the same contents, dates, attributes and so on. How is that still a 1:1 relation when the names are ignored? It's not, it's a 1:n relation. And as soon as you break out of the 1:1 relation, comparing files gets really difficult because you would have to compare every file from one side with every file on the other side. Doing this by comparing file contents is ... pretty absurd is maybe the best way I can put it right now.
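For what it's worth, a content-based duplicate scan does not have to compare every file against every other file byte-by-byte. The usual trick - sketched below in Python as an illustration, not as TC's actual algorithm, with file contents again modeled as bytes - is to bucket by size first and only hash files that share a size:

```python
import hashlib
from collections import defaultdict

def duplicate_groups(files: dict[str, bytes]) -> list[list[str]]:
    """Group files with identical content without pairwise comparisons:
    bucket by size first, then hash only files that share a size."""
    by_size = defaultdict(list)
    for name, data in files.items():
        by_size[len(data)].append(name)
    by_digest = defaultdict(list)
    for same_size in by_size.values():
        if len(same_size) < 2:
            continue  # a file with a unique size has no content duplicate
        for name in same_size:
            digest = hashlib.sha256(files[name]).hexdigest()
            by_digest[digest].append(name)
    # keep only real groups, i.e. two or more files with identical content
    return sorted(sorted(g) for g in by_digest.values() if len(g) > 1)
```

In the 1.txt/2.txt/3.txt example above, this yields a single group of three names - the 1:n relation exists in the data, but it is discovered with one hash per candidate file rather than n² content comparisons.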

2georgeb
Not sure if it was in a discussion with you or someone else. In another topic I think I wrote that "unique" in this context can be thought of as "single". You could edit the TC language file (or create one overriding TC's internal language) and replace the word "unique" with "single" or something to avoid this confusion. If it's that important to you maybe make a suggestion to get this term changed in TC itself.

Regards
Dalai
#101164 Personal licence
Ryzen 5 2600, 16 GiB RAM, ASUS Prime X370-A, Win7 x64

Plugins: Services2, Startups, CertificateInfo, SignatureInfo, LineBreakInfo - Download-Mirror
oko
Senior Member
Posts: 200
Joined: 2007-05-03, 16:22 UTC

Re: Compare directories (on diff. drives) by content

Post by *oko »

to georgeb

What is true-unique and pseudo-unique for you?
the same content but different filename
the same content and same filename but different path (location)
the same content but different filename and different path (location)
the same content but different date
and so on.

In your example:
Nobody wants to get rid of copies of cars so that only one car of a model series remains in the world or in the town; so if I visit one sister, I do not want to search for other copies of her car :) I mention it only to note that such motivations do not exist outside of TC :) .
The two cars mentioned are unique; they cannot be considered the same (equal). If one of the cars has an accident, the police or insurance service knows exactly which of the two it was; it can be identified because one of its properties differs (the registration number), so it is unique (not pseudo-unique). You apply your own criterion, which is binary content, and you ignore the filename (the filename is not your criterion); therefore what is unique for you is not unique for syncdir. Of course you can stand by your naming of levels of uniqueness.

If a file was moved, there is a logical meaning/reason for it to be there (location is an important property of a file), so the file is unique (by its path) despite the fact that on the other disk the twin file is in its original location. It is not "erroneously deemed unique", it is unique. But if you limit the criteria, then you can say it is pseudo-unique; it seems, though, that you are questioning uniqueness as such, so content-non-unique, content-identical or similar would be clearer terms.

Syncdir has a more user-friendly layout than find-duplicates, but it still has limitations in showing the results of a duplicates search. The new window (as you mentioned) is needed. The vertical layout of files in find-files is sometimes better than the side-by-side layout in syncdir. Maybe a tree structure is missing to visualize where the duplicates are. For now you can use Ctrl+Arrow in the feedlist (right or left, according to where the feedlist is) to show a file in the opposite window in its real location and structure. So you can walk through the feedlist with the up/down arrows and press Ctrl+Right/Left arrow on items you want to inspect manually.
oko
Senior Member
Posts: 200
Joined: 2007-05-03, 16:22 UTC

Re: Compare directories (on diff. drives) by content

Post by *oko »

Dalai wrote: 2024-01-14, 11:23 UTC
oko wrote: 2024-01-13, 23:09 UTC 3/ Perhaps a "ignore filenames" option could be added to syncdir (with option to turn this option on or off), but it still has to be kept 1:1 concept [...]
How would that work? Let's assume that I have a file called 1.txt on the left side and three files called 1.txt, 2.txt and 3.txt on the right side, all of them having the same contents, dates, attributes and so on. How is that still a 1:1 relation when the names are ignored? It's not, it's a 1:n relation. And as soon as you break out of the 1:1 relation, comparing files gets really difficult because you would have to compare every file from one side with every file on the other side. Doing this by comparing file contents is ... pretty absurd is maybe the best way I can put it right now.
Note that if there is no file with the same filename, then TC would search for a file with the same size. So in your example, the file 1.txt would pair with the opposite 1.txt, and the files 2.txt and 3.txt would be considered right-unique (as now). But if there were no 1.txt on the right side, only 2.txt and 3.txt, TC would pair the left 1.txt with the right 2.txt (the 3.txt file would be considered right-unique). There would be neither an equals sign nor a not-equals sign, but a new color and sign (or no sign, an empty field), so the user knows he must decide whether to copy from right or left. As for the searching, TC would search in the compared folder only, not in all sync folders, would compare only files of the same size, and would stop comparing once the first content-identical file is found. The 1:1 relation would not be broken anyway.
Last edited by oko on 2024-01-14, 12:47 UTC, edited 1 time in total.
georgeb
Senior Member
Posts: 250
Joined: 2021-04-30, 13:25 UTC

Re: Compare directories (on diff. drives) by content

Post by *georgeb »

Dalai wrote: 2024-01-14, 11:23 UTC How would that work? Let's assume that I have a file called 1.txt on the left side and three files called 1.txt, 2.txt and 3.txt on the right side, all of them having the same contents, dates, attributes and so on. How is that still a 1:1 relation when the names are ignored? It's not, it's a 1:n relation. And as soon as you break out of the 1:1 relation, comparing files gets really difficult because you would have to compare every file from one side with every file on the other side. Doing this by comparing file contents is ... pretty absurd is maybe the best way I can put it right now.
If I may chime in here too: it is not as absurd as you may think. The binary-duplicates search - already narrowed down to the two branches of "unique" files (by the current "SyncDirs" definition) - would reveal the 4 files from your example as duplicates from the binary-contents point of view and therefore (in my concept) move all 4 of them to the two newly introduced categories of "only LOCALLY unique (left/right) - with duplicates somewhere else". And yes, that would be a 1:n relation. So what's wrong with that, if it factually exists anyway? The point is that (in my concept) those 4 identical files could be displayed together in a new, single and concise (pop-up?) window AND THEN THE USER GETS TO DECIDE. Perhaps these are all INTENTIONAL copies and should all rightfully remain in the revised, final version of the data - perhaps, however, they should not, now being identified as misplaced/misspelled stray documents. Whereas without identifying them as binary duplicates the user wouldn't even know about the special situation concerning those 4 files and would therefore get nothing to decide at all, as he would still be UNAWARE of the existing duplicates.

And if you really don't care about duplicates in your synchronization-effort at all - then simply omit that "second-level-scrutiny".
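The "second-level scrutiny" could be sketched roughly like this - a hypothetical illustration of the concept as described in this thread, not TC code, with file contents modeled as bytes and the "unique left"/"unique right" branches as name→bytes maps:

```python
import hashlib

def reclassify_uniques(unique_left: dict[str, bytes],
                       unique_right: dict[str, bytes]):
    """Search for binary duplicates only across the two 'unique' branches
    and move matches into new 'only locally unique' groups, leaving the
    rest as truly unique. Returns (local_left, local_right,
    truly_left, truly_right)."""
    def digests(files):
        return {n: hashlib.sha256(c).hexdigest() for n, c in files.items()}
    dl, dr = digests(unique_left), digests(unique_right)
    shared = set(dl.values()) & set(dr.values())  # content on both sides
    local_left  = sorted(n for n, d in dl.items() if d in shared)
    local_right = sorted(n for n, d in dr.items() if d in shared)
    truly_left  = sorted(n for n, d in dl.items() if d not in shared)
    truly_right = sorted(n for n, d in dr.items() if d not in shared)
    return local_left, local_right, truly_left, truly_right
```

A renamed copy such as a left-side report_old.txt matching a right-side report.txt would land in the two new "locally unique" groups, while files without a counterpart stay truly unique - and the user gets to decide what happens next.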

Dalai wrote: 2024-01-14, 11:23 UTC 2georgeb
Not sure if it was in a discussion with you or someone else. In another topic I think I wrote that "unique" in this context can be thought of as "single". You could edit the TC language file (or create one overriding TC's internal language) and replace the word "unique" with "single" or something to avoid this confusion. If it's that important to you maybe make a suggestion to get this term changed in TC itself.

@Dalai
to make that clear once and for all: I couldn't care less about pure terminology. Whether you call those files "unique" or "single" (when in fact they are neither) is pure semantics and therefore (almost) irrelevant. What I am after here is being able to IDENTIFY THE TRUE NATURE of the files under consideration! Simply renaming those categories to get rid of the looming possible-duplicates question pretty much reminds me of a well-known "strategy" among 5-year-olds: holding their hands before their faces while playing hide-and-seek and calling out "can't see me now"!
georgeb
Senior Member
Posts: 250
Joined: 2021-04-30, 13:25 UTC

Re: Compare directories (on diff. drives) by content

Post by *georgeb »

oko wrote: 2024-01-14, 12:20 UTC to georgeb

What is true-unique and pseudo-unique for you?
the same content but different filename
the same content and same filename but different path (location)
the same content but different filename and different path (location)
the same content but different date
or so on.
I would consider ALL OF THOSE EXAMPLES "pseudo-unique", as according to my definition "truly unique" would really mean what it says: a file having no binary duplicate whatsoever someplace else. It is purely the content that really matters.
oko wrote: 2024-01-14, 12:20 UTC In your example:
Nobody wants to get rid of copies of cars to remain only one model of car series in the world or in the town, so if I come to one sister I do not want to search for other copies of her car :) I mention it only that there are no such motivations than in tc :) .
The two cars mentioned are unique, it can not be considered as the same (equal). If one of car has accident police or assurance service knows exactly which car of two, it can be identified because has one of properties different (ev. number), so it is unique (not pseudo-unique). You put your own criteria which is binary content and you ignore filename (filename is not your criteria), therefore unique for you is not unique for syncdir. Of course you can stand with your naming of levels of uniqueness.
Sure, for police identification it is important to identify the one copy of that model that was involved in an accident, and no one would ever think of deleting (scrapping) one car just because another one exists. It is a clear-cut case of an INTENTIONAL copy. But to decide whether a copy is intentional or not, I have to know about the existence of the other copy first.
oko wrote: 2024-01-14, 12:20 UTC If file was removed, it has logical meaning/reason to be there (location is important property of file), so the file is unique (by its path) despite of the fact that on the other disk the twin file is in original location. It is not "erroneously deemed "unique" ", it is unique. But if you limit criteria then you can say it is pseudo-unique, but it seems like you question uniqueness, therefore content-not-unique, content-identical or so is clearer terms.
I initially encountered this problem when trying to reconcile/consolidate series of scientific measurements from distributed stations around the globe, where it is of crucial importance that every data record is shown (AND COUNTED) only ONCE, in its proper position (according to the systematic nomenclature). Misplaced or wrongly named copies of the very same data record would be counted multiple times (at least twice) and could therefore distort the final result of the measurement series in its entirety. But a more commonplace, and therefore more easily understood, example for the average user would be the consolidation of music archives from different sources.

Now there might exist good reasons for keeping different versions of the same title (perhaps different quality or live- vs. studio-versions). But usually - if you've got 2 or more binary-identical flac-copies in different locations and/or with different (misspelled?) names the final goal would of course be to keep only one version/copy thereof for the final/consolidated version - namely the one with the correctly spelled name in the correct systematic location (according to the prevailing naming-convention).

Those case-studies tell us that it makes absolutely no sense to cite examples where both (multiple) specimens/copies should rightfully co-exist (like identical car-models) with the intention of demonstrating that binary duplicates (with different file-names, locations or file-dates) are always "unique" in some formalistic sense - and therefore always meant to survive.

It clearly DEPENDS on the individual problem. But the point is: for the user being able to finally decide on a case-by-case-basis it is quintessential that this user KNOWS ABOUT THE (TRUE) BINARY-DUPLICATES-SITUATION IN THE FIRST PLACE, solely determined by contents and without being distracted by "pseudo-unique"-adornment-features (like license-plates, misspelled or different names, file-dates and so on) attached to the outside.
oko
Senior Member
Posts: 200
Joined: 2007-05-03, 16:22 UTC

Re: Compare directories (on diff. drives) by content

Post by *oko »

It looks like syncdir would work in 1:1 mode and, on user request, would be able to find duplicates in the background and display them in a special window that would allow certain file operations, e.g. deleting a duplicate selected by the user. But syncdir would not work in this 1:n or n:n mode. So the 1:1 principle would not be broken, and there is no need to solve how syncdir handles those relations. Synchronization would only work in the 1:1 mode, as it does now. The duplicate-searching-and-presenting extension in a special window would only serve the user (to look and manually perform file operations); it would not serve the synchronization process. Syncdir would still treat pseudo-unique files as unique and synchronize the old way.

If the duplicates are in different locations (have been moved or copied), how would their locations be displayed in the syncdir extension to make them clear? For the left duplicate the path would be obvious from the header, but for the right duplicates it would have to be listed next to each file (above it, or next to it as in the find-duplicates feedlist)?

What should be displayed, and how, to make it visually as easy as possible to inspect and decide what to do with content-duplicates? Filename, path, position in the tree, date, attributes? What is needed and what is not? Some users want more; for some it would be confusing.

You could create a graphical representation of your proposal; maybe it would be clearer than a thousand words. If we could see the final look, we could then look for ways to achieve it functionally.