back when we were buying hardware for a big (at the time) intranet. All the servers were bought off the same batch of Sun's production line; I recall our sysadmin saying he really wanted to do the same for the disks, i.e. a case of identical drives
With respect, I just went through a "code red" at a large, well-known cloud storage company caused by synchronized late-life death of hard disks all manufactured in the same batch. That's the second time in my career that I've been through the same phenomenon. Hard disks that are made together wear out together.
"Hewlett Packard Enterprise (HPE) has once again issued a warning to its customers that some of its Serial-Attached SCSI solid-state drives will fail after 40,000 hours of operation unless a critical patch is applied.
Back in November of last year, the company sent out a similar message to its customers after a firmware defect in its SSDs caused them to fail after running for 32,768 hours."
Can you imagine provisioning and deploying a rack or 3 full of shiny new identical drives, all in RAID6 or RAID10, so you couldn't possibly lose any data without multiple drives all failing at once...
(Evidence that the universe can and does invent better idiots...)
Your default assumption only works if every disk has a failure probability that is independent of the others'. Which is definitely not true if you buy all the disks from the same batch.
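To see how much the independence assumption matters, here's a rough back-of-the-envelope sketch (all numbers hypothetical): a 12-drive RAID6 array loses data if 3 or more drives die before a rebuild finishes. Modeling a same-batch defect crudely as a much higher per-drive failure probability within the same window:

```python
from math import comb

def p_at_least_k_fail(n, k, p):
    """Probability that at least k of n drives fail in a given window,
    assuming each drive fails independently with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: 12-drive RAID6, data loss at 3 concurrent failures.
n, k = 12, 3

healthy_mix = p_at_least_k_fail(n, k, 0.01)  # 1% per drive in the window
same_batch  = p_at_least_k_fail(n, k, 0.30)  # batch defect: 30% in the window

print(f"mixed drives: {healthy_mix:.5f}")  # tiny: roughly 1 in 5000
print(f"same batch:   {same_batch:.3f}")   # likely data loss
```

Real correlated failures aren't captured fully by just cranking up an independent per-drive probability, but even this crude model shows the RAID6 "triple failure is astronomically unlikely" argument evaporates once the drives stop being independent.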
Others have mentioned the problems with this strategy, but getting drives with the same firmware is done routinely to avoid having slightly different behavior in the RAID set.