They can be made larger, but that would reduce the manufacturer's yield. The bigger a single chip is, the more surface area you lose whenever a defect occurs.
Imagine a scratch 175 millimeters long that runs straight through the middle of several chips, parallel to 2 of their 4 sides. If the chips are 50mm x 50mm each, you lose at most 12,500 sq mm of silicon (3 scratched clean through plus 2 clipped with 12.5mm scratches = 5 x 2500 sq mm). If they are, say, 60mm x 60mm, the same defect can cost you 14,400 sq mm (2 scratched clean through plus 2 clipped with 27.5mm scratches = 4 x 3600 sq mm).
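For concreteness, here's a minimal Python sketch of that worst-case arithmetic. The scratch length and die sizes are the numbers from the example above; the "clip a partial die at each end" placement is the assumed worst case, not a model of real defect patterns.

    def worst_case_loss(scratch_mm: float, die_mm: float) -> float:
        """Worst-case silicon lost (sq mm) to a straight scratch running
        parallel to a die edge: place it so it clips a partial die at
        each end in addition to the dice it crosses completely."""
        full, rem = divmod(scratch_mm, die_mm)
        # If the length is an exact multiple of the pitch, an offset
        # scratch still clips one extra die; otherwise it clips two.
        dice_hit = int(full) + (2 if rem else 1)
        return dice_hit * die_mm ** 2

    print(worst_case_loss(175, 50))  # 12500.0 -> 5 dice of 2500 sq mm
    print(worst_case_loss(175, 60))  # 14400.0 -> 4 dice of 3600 sq mm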
IBM's Z-series CPUs are pretty ridiculously enormous in this regard. The z196 is 512mm^2[0] (compare with about 260mm^2 for a top-of-the-line Intel Xeon), and it runs at 5.2GHz on a 45nm process. Because IBM can basically charge whatever they want for these monsters[1], their yield of working chips per wafer can be pretty terrible and they'll still make money.
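To see roughly how hard die area punishes yield, here's a sketch using the standard first-order Poisson yield model, Y = exp(-D * A). The defect density D below is an assumed illustrative number, not a real process figure for either fab.

    import math

    def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
        """Fraction of dice expected to come out defect-free under a
        Poisson defect model: Y = exp(-D * A)."""
        return math.exp(-defects_per_mm2 * die_area_mm2)

    D = 0.002  # assumed: 0.002 defects/sq mm (0.2 per sq cm)
    print(poisson_yield(512, D))  # ~0.36 -- barely a third of z196-sized dice work
    print(poisson_yield(260, D))  # ~0.59 -- well over half of Xeon-sized dice work

At the same assumed defect density, the bigger die loses nearly twice as many chips, which is exactly the trade IBM can afford and Intel can't.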
Intel, on the other hand, needs to be able to sell as many chips as possible per wafer, because they have vastly lower margins. This is also why they do stuff like fusing off dead cores and cache to produce working lower-end parts from dice that aren't 100% functional out of the factory.