Hello,
I looked at your example, and it is normal for each Excel file to reach around 300 MB, because Excel was designed for other purposes, not for building databases.
What I noticed in your example is that there is no normalization. In the sample alone I identified 11 entities that should be normalized: Persons, Detailed_Functions, Seniority, Countries, States, Cities, Organizations, Technologies, Languages, Industries, Linkedin_Specialities.
Once the information is normalized, the amount of storage needed will drop considerably. Imagine that instead of "English" repeated 10,000 times, you store an ID, say 1, repeated 10,000 times; or that instead of an organization name repeated 10,000 times, you store an ID of just one or two digits. Of course, this all happens in the back end: you will still see the same information you see now.
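To make this concrete, here is a minimal sketch of what two of those normalized tables could look like in T-SQL. The table and column names are my own assumptions, since I do not know your exact schema:

CREATE TABLE Languages (
    LanguageID   INT IDENTITY(1,1) PRIMARY KEY,
    LanguageName NVARCHAR(50) NOT NULL UNIQUE  -- 'English' is stored exactly once
);

CREATE TABLE Persons (
    PersonID   INT IDENTITY(1,1) PRIMARY KEY,
    FullName   NVARCHAR(200) NOT NULL,
    LanguageID INT NOT NULL
        REFERENCES Languages (LanguageID)      -- a 4-byte ID instead of repeated text
);

Every English-speaking person now stores a small integer instead of the string, and the other ten lookup entities (Countries, Organizations, and so on) follow the same pattern.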
Those Excel files are very large, and you run a serious risk of corrupting one or more of them. 30 million records belong in a SQL Server database, hosted on a powerful Windows Server. A desktop application can be written to maintain the data (insert, update, delete) and to produce reports that can be exported to Excel or .CSV files.
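As for still seeing the same information as now: a view can join the IDs back to their text, and the desktop application (or a standard tool like bcp) can export the result. A rough sketch, again with assumed names:

CREATE VIEW vw_PersonReport AS
SELECT p.FullName,
       l.LanguageName
FROM   Persons   AS p
JOIN   Languages AS l ON l.LanguageID = p.LanguageID;

-- Exporting the report to a .CSV file from the command line:
-- bcp "SELECT * FROM YourDb.dbo.vw_PersonReport" queryout report.csv -c -t, -S YourServer -T

Users query the view and get back exactly the readable text they are used to, while the deduplication stays invisible in the back end.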