Performance: BIGINT comparisons are faster than VARCHAR comparisons. But the big question is this: how are these columns compared to other columns or values? If, for instance, you frequently use [Column1_VarChar_10] in an inner join to another table with a similar VARCHAR column, then switching this one to BIGINT will hinder performance, because SQL Server will have to implicitly convert the VARCHAR side on every comparison. If, however, the other table already uses INT, BIGINT, or another numeric data type, then converting this one would improve the performance of that query (see the sketch below).
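As a rough illustration of the mismatch problem, here is a hedged sketch (the table and column names are hypothetical). Because VARCHAR has lower data type precedence than BIGINT, the VARCHAR side of the join gets a CONVERT_IMPLICIT applied to it, which typically prevents an index seek on that column:

```sql
-- Hypothetical schema: OrderHeader keeps the VARCHAR key,
-- OrderDetail has been converted to BIGINT.
-- The mismatched join forces an implicit VARCHAR -> BIGINT conversion
-- on oh.Column1_VarChar_10, which can turn index seeks into scans.
SELECT oh.OrderID, od.LineTotal
FROM   dbo.OrderHeader AS oh
JOIN   dbo.OrderDetail AS od
       ON oh.Column1_VarChar_10 = od.Column1_BigInt;  -- mismatched types

-- Inspect the actual execution plan (e.g. in SSMS) and look for a
-- CONVERT_IMPLICIT warning on the join predicate.
```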
Storage: BIGINT uses a fixed 8 bytes. VARCHAR(10) uses 2-12 bytes: 2 bytes of variable-length overhead plus 1 byte per character stored. It's impossible to say which is smaller without knowing more about the data (min, max, mean, median, etc.): values longer than 6 characters take more space as VARCHAR than as BIGINT, while shorter values take less.
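A quick way to compare per-value sizes is DATALENGTH (note it reports the bytes of the value itself; the roughly 2-byte variable-length overhead per VARCHAR column is not included):

```sql
-- Bytes for the value itself; add ~2 bytes of row overhead per VARCHAR column.
SELECT DATALENGTH(CAST('123'        AS VARCHAR(10))) AS varchar_3_chars,  -- 3
       DATALENGTH(CAST('1234567890' AS VARCHAR(10))) AS varchar_10_chars, -- 10
       DATALENGTH(CAST(1234567890   AS BIGINT))      AS bigint_always;    -- 8
```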
Sorting consideration: Since the fields are currently VARCHAR, they are sorted as strings, which is very different from numeric sorting. Any value beginning with a 1 ('100', '1000', '123456789', etc.) will be considered less than '2'. That is how the current index behaves today. If you change the fields to BIGINT, the sort order will change. This is true not only for the index but also for any query using an ORDER BY on one of those columns, which could have undesired effects on end-user reports. The example below shows the difference.
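A minimal demonstration of the two orderings (hypothetical values, SQL Server syntax):

```sql
-- String sort: '100', '1000', '123456789' all sort before '2'.
SELECT v
FROM   (VALUES ('2'), ('100'), ('1000'), ('123456789')) AS t(v)
ORDER BY v;                      -- returns 100, 1000, 123456789, 2

-- Numeric sort after conversion: natural ordering.
SELECT v
FROM   (VALUES ('2'), ('100'), ('1000'), ('123456789')) AS t(v)
ORDER BY CAST(v AS BIGINT);      -- returns 2, 100, 1000, 123456789
```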
Additional suggestion: An 8-column index is heavy. In this particular example, your key size will be 32-72 bytes. Since you said you only have 78 million rows, what about creating a surrogate INT (identity) key? That would be only 4 bytes long. Keep in mind the clustered index dictates the physical sort order of the table itself: if new values for one of your key columns arrive out of order, inserts will land in the middle of the table and cause page splits, greatly degrading insert performance. An identity surrogate key allows for fast inserts because each new value is always greater than the current maximum, so rows are simply appended at the end.
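A sketch of that design, assuming a hypothetical table name: the IDENTITY column becomes the clustered primary key, and the original columns move to a nonclustered index if you still need to seek on them:

```sql
-- Hypothetical table: a 4-byte ever-increasing surrogate key keeps
-- the clustered index append-only, so inserts land at the end of the table.
CREATE TABLE dbo.BigTable
(
    RowID              INT IDENTITY(1, 1) NOT NULL,
    Column1_VarChar_10 VARCHAR(10)        NOT NULL,
    -- ... the other seven key columns, plus remaining columns ...
    CONSTRAINT PK_BigTable PRIMARY KEY CLUSTERED (RowID)
);

-- If queries still filter on the original columns, cover them with a
-- nonclustered index instead of making them the clustered key:
CREATE NONCLUSTERED INDEX IX_BigTable_NaturalKey
    ON dbo.BigTable (Column1_VarChar_10 /* , ...other key columns */);
```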