A Type 1 error is when you declare a result statistically significant when in fact it was just chance. That becomes likely if you do too many significance tests. If you run 100 significance tests on data where nothing real is going on, about 5 of them will come out positive at the 5% level just by chance (that's the expected number, not a guarantee, but it makes the point). Normally we look for too many significance tests within a single study, but the same logic applies to the whole scientific literature: with hundreds or thousands of studies done, there must be a lot of Type 1 errors out there.
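You can see this with a quick simulation. Here is a sketch (my own illustration, not from any of the studies discussed): run thousands of two-sample tests where both groups are drawn from the same distribution, so the null hypothesis is true every time, and count how many come out "significant" at the 5% level.

```python
import math
import random

random.seed(1)

N_TESTS = 10_000   # independent "studies", all with no real effect
N = 30             # observations per group

def two_sample_z_p(a, b):
    # Two-sided p-value for a z-test comparing two means,
    # assuming known unit variance in each group.
    diff = sum(a) / len(a) - sum(b) / len(b)
    se = math.sqrt(1 / len(a) + 1 / len(b))
    z = diff / se
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))); p = 2 * (1 - Phi(|z|))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

false_positives = 0
for _ in range(N_TESTS):
    a = [random.gauss(0, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]  # same distribution: null is true
    if two_sample_z_p(a, b) < 0.05:
        false_positives += 1

print(f"{false_positives} of {N_TESTS} tests significant at 5% "
      f"({100 * false_positives / N_TESTS:.1f}%)")
```

The count lands close to 5% of the tests, even though every single "significant" result is a false positive.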
Here is an argument against the use of P < 0.05, and here is a more extended discussion. I was intrigued to see that, using text-mining software, some researchers were able to search 12 000 000 abstracts for P values. They found that:
'Among the abstracts and full-text articles with P values, 96% reported at least 1 “statistically significant” result, with strong clustering of reported P values around .05 and .001.'