Seagate prototype NVMe HDD

I don't think so. Enterprises use U.2 interfaces if needed... and really, what takes more space: NVMe connectors on a mobo, or 8 SATA ports off the side?
 
I've not heard anything about this. That would be quite a clunky cable.

I do think that there is potential though. I don't really understand why there hasn't been more development of Hybrid drives. Mechanical drives still have a huge lead in terms of capacity. While I'm mostly content with segregating my drives based on usage, there are still plenty of cases where users are better off with just one drive.
 
U.2 isn't the latest anymore, though; U.3 is.

There are 30TB U.3 enterprise SSDs now (I'm tempted to get one, a Micron 9400). What are the largest HDDs... 22TB? Granted, it's still easier for a user on consumer-grade motherboards/RAID cards to use spinning rust for pure capacity. I've not (yet) seen an 8- or 16-port U.2/U.3 RAID card.
 

Sure, users are best off with a drive that has the size they need, and these days 2.5" SSDs or NVMe drives cover 99% of those users.

For those who need a second drive: add another, build a NAS, or toss a 3.5" spinning-rust drive in your tower.

The market of, say, home users who need an 18TB spinning-rust drive with fast NVMe in front of it is very, very slim in reality.

This would be a niche product, expensive to produce and expensive to change or create an interface standard for...
 
There would be no reason for such a drive, with current trends, to even exist. You aren't going to give up four lanes of PCIe for a drive that doesn't even come close to half of SATA's peak transfer rate. Even if you had a spinner that could beat 600 MB/s, you would just move up to SAS, which can handle more. There is no need for NVMe on spinners today (and probably not tomorrow). PCIe signalling is enough of a problem when it's on the board and kept as short as possible; start adding up to 18" of cable to the equation and you get other issues (see the problems with PCIe risers, for example).
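
For a rough sense of the bandwidth mismatch, here's a back-of-the-envelope sketch in Python; the link and drive figures are approximate assumptions, not measurements:

# Rough throughput comparison: usable bandwidth of a PCIe 3.0 x4 link vs.
# SATA III vs. an assumed sustained rate for a large modern HDD.

PCIE3_X4_MB_S = 4 * 985   # ~985 MB/s per PCIe 3.0 lane after 128b/130b encoding
SATA3_MB_S = 600          # 6 Gbps link, ~600 MB/s after 8b/10b encoding
HDD_MB_S = 250            # assumed sustained rate for a big 3.5" HDD

print(f"HDD on SATA III:    {HDD_MB_S / SATA3_MB_S:.0%} of the link")
print(f"HDD on PCIe 3.0 x4: {HDD_MB_S / PCIE3_X4_MB_S:.0%} of the link")

Even with a generous sustained figure, the x4 link would sit well over 90% idle.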
 

I need (and have) 8TB and 16TB drives installed in my system for Retrospect dataset backups, videos, RAW-format photo files, etc. As for the NVMe interface, I'm not sure I would pay a lot extra for it.
Of course it would be a niche product, but that hasn't stopped Micron.
 
Maybe those PCIe lane and signalling issues are why Seagate hasn't commercialized this technology.
 

Certainly, some people do "need it," but there are other, cheaper options as well. And if you need both performance and massive drive space, you are going to spend more overall on SSDs plus the motherboards or add-in cards needed to support more drives in a system and RAID them... or you get a NAS and start doing 10/25/40 Gbps networking to a fast storage device.
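
For reference, a quick Python conversion of those link speeds into rough usable throughput; the 90% efficiency factor and the per-device figures are assumptions, and real numbers depend on protocol and workload:

# Convert common NAS link speeds to rough usable MB/s and compare with
# ballpark single-device figures. All numbers are approximate assumptions.

OVERHEAD = 0.90  # assume ~90% of line rate is achievable in practice

for name, gbps in {"10GbE": 10, "25GbE": 25, "40GbE": 40}.items():
    print(f"{name}: ~{gbps * 1000 / 8 * OVERHEAD:.0f} MB/s usable")

print("single HDD:           ~250 MB/s sustained (assumed)")
print("SATA SSD:             ~550 MB/s")
print("PCIe 3.0 x4 NVMe SSD: ~3500 MB/s")

So 10GbE already outruns any single spinner, and 25/40GbE puts the NAS route into NVMe territory.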
 
I didn't spend a lot of money, maybe $50 at most, on a used LSI SATA card that supports 8 drives.
 
As I understand it, dual-actuator HDDs are already on the market, and with currently existing SATA 6 Gbps/SAS 12 Gbps interfaces at that, though the SATA ones are especially janky since they can't just present two LUNs the way the SAS ones do. (Each actuator is, effectively, a separate logical hard drive.)

The main appeal is presumably getting double the IOPS per drive. HDDs have historically been quite limited by their seeking head actuators, and spreading the load across the many actuators of a massive array was how you got around that limitation before pure solid-state storage became a remotely affordable option for anyone.
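
To put rough numbers on that, a toy IOPS estimate in Python; the seek time is an assumed ballpark, and a dual-actuator drive only gets the full doubling when the workload is spread across both halves:

# Toy random-read IOPS estimate for a 7200 RPM drive, and the best case
# for doubling the actuators. Seek time is an assumed ballpark figure.

AVG_SEEK_MS = 8.5                      # assumed average seek time
ROT_LATENCY_MS = 0.5 * 60_000 / 7200   # half a revolution at 7200 RPM

iops = 1000 / (AVG_SEEK_MS + ROT_LATENCY_MS)
print(f"single actuator: ~{iops:.0f} IOPS")
print(f"dual actuator:   ~{2 * iops:.0f} IOPS (two independent halves, best case)")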

Actually using an NVMe interface for just one hard drive would be an utter waste; there's a reason that protocol was designed for much faster solid-state storage originally. Spinning rust just isn't fast enough on its own to warrant tying up that many PCIe lanes; note how your typical modern SAS HBA can drive 8 or even 16 drives off of just PCIe 3.0 x8.
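
A quick sanity check on that HBA point (the per-drive throughput is an assumed ballpark):

# Aggregate sequential throughput of a fully loaded SAS HBA vs. its
# PCIe 3.0 x8 uplink. Per-drive figure is an assumed ballpark.

PCIE3_PER_LANE_MB_S = 985   # ~985 MB/s per lane after 128b/130b encoding
HDD_MB_S = 250              # assumed sustained rate per drive

uplink = 8 * PCIE3_PER_LANE_MB_S
for drives in (8, 16):
    print(f"{drives} HDDs: ~{drives * HDD_MB_S} MB/s vs. x8 uplink ~{uplink} MB/s")

Even 16 spinners flat out don't saturate the x8 uplink, so dedicating x4 per drive buys nothing.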

The modern approach for storage, for anyone who actually needs datahoarder levels of capacity that only hard drives can offer, is to have cache SSDs within the same pool. ZFS makes this easy with L2ARC, SLOG and special (metadata) vdevs, on top of the main ARC being in RAM for frequently used data. Light amounts of data hit the SSDs first, then get gradually written through to the HDDs.
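
If anyone wants to see what that looks like in practice, here's a minimal sketch: Python shelling out to the standard zpool commands. The pool name "tank" and the NVMe device paths are placeholders, and the special vdev needs OpenZFS 0.8 or newer plus root privileges.

import subprocess

POOL = "tank"  # placeholder pool name

commands = [
    # L2ARC: secondary read cache on a fast SSD
    ["zpool", "add", POOL, "cache", "/dev/nvme0n1"],
    # SLOG: separate intent log device, helps synchronous writes
    ["zpool", "add", POOL, "log", "/dev/nvme1n1"],
    # special vdev: metadata (and optionally small blocks) on mirrored SSDs
    ["zpool", "add", POOL, "special", "mirror", "/dev/nvme2n1", "/dev/nvme3n1"],
]

for cmd in commands:
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

The special vdev is mirrored because losing it loses the pool; a dead L2ARC device, by contrast, is harmless.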

3D XPoint/Optane is even more suitable for such cache devices than typical NAND flash due to the much greater endurance and lower latency, but all of that's been discontinued for the time being ever since the Intel/Micron venture fell through. Get any remaining stock while you can; it'll probably dry up as time passes.
 