SCSI vs IDE (was: Re: [vox-tech] Xen + LVM usage)
Luke Crawford
lsc at prgmr.com
Tue Aug 8 10:15:52 PDT 2006
SCSI, painful? These massive fileservers you use, what is the
file-access concurrency on them?
I've got around 20 virtual servers (some under moderate load) on one
server with 10 18G 10K fibre drives. I've got another 40 on 6 10K U160 SCSI
drives. Before I had customers, I had put 10 virtual servers on a SATA
disk system. Even though the SATA system was running trivial
loads (HTTP, DNS, spam filtering and email for my personal stuff), the SATA
system would have periods of unresponsiveness, when you'd have to wait 10
seconds or more to open a 5K file with PINE. On the SCSI system, I can
count on opening 1GB mail folders in the same 10 seconds. That's what I
need... predictable. It doesn't have to be blazingly fast, but it does
need to degrade gracefully when overloaded. IDE does not seem to do this.
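If you want to put numbers on that kind of stall instead of eyeballing it,
a rough Python sketch like the one below will do: hammer the disk with
random reads from a pile of background threads, then time how long the
small-file read takes. The paths, sizes and thread count are made-up
placeholders, and on Linux you'd want to drop the page cache between runs
(echo 3 > /proc/sys/vm/drop_caches) so the timed file isn't just served
from RAM:

    #!/usr/bin/env python
    # Rough sketch: time a small-file read while background threads
    # generate random-read load on the same disk. All paths and counts
    # below are placeholders -- point them at the disk under test.
    import os, random, threading, time

    DATA = "/srv/test/big.bin"    # hypothetical large file on the disk under test
    SMALL = "/srv/test/mail.txt"  # hypothetical ~5K file, standing in for the mailbox
    READERS = 20                  # simulated concurrent users

    def background_reader():
        size = os.path.getsize(DATA)
        f = open(DATA, "rb")
        while True:
            f.seek(random.randrange(size))  # force a seek...
            f.read(4096)                    # ...and a small read

    for _ in range(READERS):
        t = threading.Thread(target=background_reader)
        t.daemon = True  # let the process exit when the timing loop is done
        t.start()

    for _ in range(10):
        start = time.time()
        open(SMALL, "rb").read()  # the "open a 5K file with PINE" operation
        print("read took %.2fs" % (time.time() - start))
        time.sleep(1)

Run it once with READERS = 0 to get a baseline; the interesting part is how
far the timed read drifts from that baseline as you add readers.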
Modern metadata-ordering/caching filesystems (logging or softupdates or
whatever) all but eliminate the IDE penalty on write, but on read, you
still hit concurrency issues if you have too many users hitting the same
IDE disk.
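The arithmetic behind the read problem is blunt: a single spindle only
completes so many random I/Os per second, and each waiting user queues more
of them, so per-request latency grows with the user count. (A big part of
why SCSI degrades more gracefully is tagged command queueing, which lets
the drive reorder pending seeks; NCQ on SATA was only just showing up.) A
toy model, where the per-I/O service times are assumed rules of thumb
(average seek plus rotational latency), not measurements:

    # Toy model of one spindle under concurrent random reads.
    # The service times below are assumptions, not measured numbers.
    drives = {"7200rpm IDE/SATA": 0.013, "10K SCSI": 0.008}  # sec per random I/O

    for name, svc in drives.items():
        print("%s: ~%d random IOPS" % (name, 1.0 / svc))
        for users in (1, 10, 40):
            # naive FIFO: your I/O waits behind one from every other user
            print("  %2d users -> ~%dms per request" % (users, svc * users * 1000))

Even under that optimistic model, 40 users on one IDE spindle means half a
second per I/O, and a 5K mail file that needs a handful of metadata and
data I/Os stretches into seconds.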
Now, if you only have a small number of concurrent accesses to the
filesystem, I agree, IDE is the better choice, simply because it is so
cheap and so big. Right now, I'm looking at used SATA -> fibre channel
enclosures on eBay, with the intent of renting IDE disks to customers one
at a time. One business model I am considering is to use customer-owned
disks: perhaps requiring that they buy the disk from me plus pay an
up-front deposit that is enough to cover return shipping and handling, then
just charge $15/month or so for an IDE slot in my SAN. I can then connect
the IDE disk to the virtual server the customer requests. If they buy the
disk from me, I could even include an "I'll replace it within X hours of
when it dies, then handle the RMA myself at no charge" service, as that
would be easy for me to do. I just keep a couple of extra disks around and
swap them as needed, then RMA all the dead disks once a month. When a
customer quits, I mail the disk back to them, or buy it back for some
pre-arranged (time-based) fee. This lowers my initial capital costs, and
makes the monthly cost of managed remote disk much more competitive with
the cost of throwing your own server full of IDE disks up somewhere else.
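As a sanity check on those numbers, here's a toy break-even comparison.
Only the $15/month slot fee comes from the paragraph above; the disk price
and colo cost are made-up placeholders, not quotes:

    # Toy break-even math for the disk-slot idea. Only slot_fee comes
    # from the text; disk_cost and own_colo are made-up placeholders.
    slot_fee = 15.00    # $/month for an IDE slot in the SAN
    disk_cost = 80.00   # hypothetical one-time cost of the customer-owned disk
    own_colo = 100.00   # hypothetical $/month to colo your own box full of disks

    for months in (6, 12, 24):
        slot = disk_cost + slot_fee * months
        colo = own_colo * months
        print("%2d months: disk slot $%.0f vs own colo $%.0f" % (months, slot, colo))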
Of course, I'm low on both capital and time, so who knows when or if I
will implement it... but my point is that I'm not a total IDE bigot.
Then we have the next big thing, Serial Attached SCSI. From what I
understand, it looks an awful lot like those Raptor 10K drives. The
interesting part here is that there is (or will be) SAN-style disk
aggregation technology, and SAS is plug-compatible with SATA, so if I get a
SAS SAN up, I will be able to swap in SATA disks using the same attachment
technology.
So far the only SAS/SATA aggregation tech I've seen in the field is the
3ware multi-lane cable, which aggregates 4 SATA cables into one
semi-proprietary cable. (Adaptec has something very similar, based on the
same standard, but the cables are subtly incompatible; one of my customers
got the wrong one, and I tried to make it work.) Really, it's nothing to
get excited about. Fibre channel is a mature, stable and inexpensive
technology in comparison (that is, if you buy used 1G fibre).
On Mon, 7 Aug 2006, Bill Broadley wrote:
> Ugh, SCSI seems expensive and painful these days. I'm perfectly happy
> with, er, 8 or so large (4-6TB) fileservers I run. I definitely recommend
> the enterprise/RAID level SATA drives (usually an extra $10 each). In any case
> that is probably best saved for another thread.
Yes, well, I thought it was an interesting thread.