It depends on what you were trying to do with the data. Hadoop (MapReduce) would never win on speed, but Spark can hold all that data in memory across multiple machines and run a variety of operations on it.
If all you wanted to do was filter the dataset for certain fields, you can likely do something faster programmatically on a single machine.
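For instance, a streaming filter in plain Python can chew through a file far larger than RAM on one machine, since only one row is in memory at a time. This is a minimal sketch with made-up field names (`country`, `user_id`, `amount`), not a reference to any particular dataset:

```python
import csv
import io

# Stand-in for a large CSV file on disk; in practice you'd pass
# open("data.csv") instead of this in-memory buffer.
data = io.StringIO(
    "user_id,country,amount\n"
    "1,US,50\n"
    "2,DE,75\n"
    "3,US,20\n"
)

def filter_rows(fileobj, country):
    # csv.DictReader streams the file row by row, so memory use stays
    # constant no matter how big the input is.
    for row in csv.DictReader(fileobj):
        if row["country"] == country:
            yield {"user_id": row["user_id"], "amount": row["amount"]}

result = list(filter_rows(data, "US"))
print(result)
```

For a one-off filter like this, the single-machine version skips all the cluster scheduling and serialization overhead that Hadoop or Spark would add.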