Discussion:
How is hard drive formed? (Or: The myth of defragging and EXTn file systems)
IMBJR
2009-09-07 16:03:00 UTC
I recently learnt about P & G lists on modern hard drives:

http://www.dataclinic.co.uk/hard-drive-defects-table.htm

The P List is an odd one. If I bought a 320 GB hard drive, that's
exactly what I'd expect to see on it. So it sounds like hard drives
have extra capacity to account for manufacturing defects. The P List
points to the physical sector on the drive that stands in for a
logical sector whose original physical counterpart is b0rked.

OK, that's all well and fine - just about. But it gets odder when you
now consider the G List.

The G List is like the P List, but this one grows during the lifetime
of the drive. Again, I suspect spare capacity comes into play, but
once the list is full, you best be replacing your drive. Note:
apparently, only once that list is full do you realise your sectors
have been pegging out, as the hard drive can no longer shield you
from the truth.

Now, surely this makes a mockery of defragging your drive.

If a sector goes bad and is transparently replaced by another from the
spare pool, one hopes that the new physical sector is near where the
old one was, otherwise defragging will not be effective. You could
end up with all sorts of oddly mapped files on your hard drive, but
your OS would report them as contiguous.
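
To make the idea concrete, here's a toy sketch (Python, purely
illustrative - the real mapping lives in vendor firmware and is
invisible to the OS, and the spare-pool address is made up):

SPARE_POOL_START = 1_000_000  # hypothetical physical home of the spares

class Drive:
    def __init__(self):
        self.g_list = {}                  # logical LBA -> spare physical sector
        self.next_spare = SPARE_POOL_START

    def remap(self, lba):
        # Grow the G List: point a bad LBA at the next free spare.
        self.g_list[lba] = self.next_spare
        self.next_spare += 1

    def physical(self, lba):
        # Where a logical sector really lives on the platter.
        return self.g_list.get(lba, lba)  # identity unless remapped

d = Drive()
d.remap(102)  # sector 102 pegs out mid-life
print([d.physical(lba) for lba in range(100, 105)])
# -> [100, 101, 1000000, 103, 104]: "contiguous" to the OS, scattered on disk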

This also affects EXTn file systems so beloved of us open-sores types.
These systems pride themselves on minimal fragmentation because they
always seek to put a file down in as big a run of contiguous sectors
as possible. Well, that's fucked right out of the water by all of this
P and G List business.

So, why do we bother defragging, or using EXTn if our hard drives do
this list business?
========================================
http://www.cafepress.com/slackboyiDRMRSR
http://www.imbjr.com
Tater Gumfries
2009-09-07 16:30:22 UTC
Post by IMBJR
[snip]
So, why do we bother defragging,
Tater does it to watch them little colors wiggle around. Which he can
do because he uses Winders.

Tater
IMBJR
2009-09-07 16:42:45 UTC
On Mon, 7 Sep 2009 09:30:22 -0700 (PDT), Tater Gumfries
Post by Tater Gumfries
Tater does it to watch them little colors wiggle around. Which he can
do because he uses Winders.
At werk, we would set it off so we didn't have to work, but after
reading about them lists it makes me wonder why bother?

I think perhaps one reason we do is that the FAT and NTFS file systems
do actually benefit from a re-ordering, as they just shit their files'
sectors down randomly - so even if the hard drive is transparently
mapping stuff about, the vast majority of the sectors would still be
placed where you think they are.
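
Back-of-envelope: a 320 GB drive has roughly 625 million 512-byte
sectors, so even a G List with a thousand entries has remapped well
under 0.001% of them. The defragger's picture of the disk is wrong in
only a handful of places.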

========================================
http://www.cafepress.com/slackboyiDRMRSR
http://www.imbjr.com
Steve Thompson
2009-09-08 02:23:52 UTC
Post by IMBJR
On Mon, 7 Sep 2009 09:30:22 -0700 (PDT), Tater Gumfries
Post by Tater Gumfries
Tater does it to watch them little colors wiggle around. Which he can
do because he uses Winders.
At werk, we would set it off so we didn't have to work, but after
reading about them lists it makes me wonder why bother?
I think perhaps one reason we do is that the FAT and NTFS file systems
do actually benefit from a re-ordering, as they just shit their files'
sectors down randomly - so even if the hard drive is transparently
mapping stuff about, the vast majority of the sectors would still be
placed where you think they are.
Defect list management is a job for the physical device; it is in no
way a driver issue. Filesystem and volume management don't need to
know about underlying defects, as their rate of occurrence while
reading contiguous data is low enough to be lost in the noise, and
should be covered by a double-buffering or similar scheme at the
application layer, or possibly by the operating system if it offers
QoS guarantees to individual applications. This is not an issue for
Windows simply because there is so much going wrong during normal
operations that a few remapped sectors aren't going to get noticed by
anyone sane.
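
A minimal sketch of that double-buffering idea, for the curious
(Python; the chunk size, queue depth and file path are arbitrary, and
process() is a stand-in for whatever the application actually does):

import threading, queue

def buffered_reader(path, chunk_size=1 << 20, depth=2):
    q = queue.Queue(maxsize=depth)      # depth=2 is the classic double buffer

    def fill():
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                q.put(chunk)            # blocks if the consumer falls behind
        q.put(None)                     # end-of-file sentinel

    threading.Thread(target=fill, daemon=True).start()
    while (chunk := q.get()) is not None:
        yield chunk                     # the odd slow (remapped) read is absorbed

# for chunk in buffered_reader("/some/big/file"):
#     process(chunk)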

And speaking of sanity, anyone who is storing shit on a FAT filesystem
or NTFS is more or less asking for an appointment for the next
available timeslot where a Microsoft employee can be made available to
sodomise them with a vacuum cleaner.


Regards,

Steve
--
Stupid SF Idea #5873: Intergalactic war.
Rev. 11D Meow!
2009-09-08 01:49:22 UTC
On Tue, 08 Sep 2009 02:23:52 +0000, "Steve Thompson"
Post by Steve Thompson
[snip]
Regards,
Steve
Steve says funny things when he's guessing what to say here in
alt.slack.

ha ha ha, Steve-O
Steve Thompson
2009-09-08 02:52:35 UTC
Post by Rev. 11D Meow!
Steve says funny things when he's guessing what to say here in
alt.slack.
I think it's called corporate espionage, and would be illegal if it
ever happened to you.
Post by Rev. 11D Meow!
ha ha ha, Steve-O
We'll see.


Regards,

Steve
--
Stupid SF Idea #5873: Intergalactic war.
iDRMRSR the Frosted Anamal Cookie Lovar
2009-09-07 17:08:41 UTC
Post by IMBJR
So, why do we bother defragging, or using EXTn if our hard drives do
this list business?

You're just learning this? Why we knew of the dangars of disk corruptian
evan back in COBOL days, when the thing was called a ATLAS tabal (alternate
track leval assignmant). Made a huge diffarance back then when a seek to a
distant track could take five secands.

AND...while I'm at it, don't say COBAL, you are profaning the mothar tongue.
COBOL was the language of choice, unless you couldn't do something you had
to do, in which case you inserted BAL (Basic Assembly Language) code. We
wizards kept THAT a big secrat from our bosses and each othar, so that they
would continue to give us big raises saying "I still wondar how you did
THAT!" as they passed us our fat checks.

[*]
-----
IMBJR
2009-09-07 17:22:04 UTC
On Mon, 7 Sep 2009 13:08:41 -0400, "iDRMRSR the Frosted Anamal Cookie Lovar"
Post by iDRMRSR the Frosted Anamal Cookie Lovar
Post by IMBJR
So, why do we bother defragging, or using EXTn if our hard drives do
this list business?
You're just learning this?
Yes, but ...

It's moar a case of the effect this re-mapping has on disk geometry
(right phrase?) and therefore the seeming pointlessness of defrag and
EXTn.
Post by iDRMRSR the Frosted Anamal Cookie Lovar
Why we knew of the dangars of disk corruptian
evan back in COBOL days, when the thing was called a ATLAS tabal (alternate
track leval assignmant). Made a huge diffarance back then when a seek to a
distant track could take five secands.
I thought the OS handled this, but it seems that it's the hard drive
that does. The OS seems to only be worried about the integrity of the
file system it poos down onto the disk.
Post by iDRMRSR the Frosted Anamal Cookie Lovar
AND...while I'm at it, don't say COBAL, you are profaning the mothar tongue.
I did? Don't recall doing so, but if I did, I blame you for Aing
everything.
Post by iDRMRSR the Frosted Anamal Cookie Lovar
COBOL was the language of choice, unless you couldn't do something you had
to do, in which case you inserted BAL (Basic Assembly Language) code. We
wizards kept THAT a big secrat from our bosses and each othar, so that they
would continue to give us big raises saying "I still wondar how you did
THAT!" as they passed us our fat checks.
Are you serious? This suggests one of your own could nevar rise above
because then the bosses would automatically know of your secret.

========================================
http://www.cafepress.com/slackboyiDRMRSR
http://www.imbjr.com
Rev. 11D Meow!
2009-09-07 18:54:43 UTC
Post by IMBJR
[snip]
So, why do we bother defragging, or using EXTn if our hard drives do
this list business?
This is further complicated when one has their hard drives tucked
behind RAID controller(s) which don't allow the OS to view the
S.M.A.R.T. data for each drive installed on it.

An even more fun question to ask...

When is it a good idea to defrag your Solid-State Disk Drive file
system?
Never.

I hear defragging can hurt performance on systems that are primarily
used for audio/video editing and production, by the way.
IMBJR
2009-09-07 19:41:32 UTC
Post by Rev. 11D Meow!
which don't allow the OS to view the
S.M.A.R.T. data for each drive installed on it.
I got the impression the OS cannot see that anyways - or at least not
the P and G lists.

========================================
http://www.cafepress.com/slackboyiDRMRSR
http://www.imbjr.com
Rev. 11D Meow!
2009-09-07 20:23:31 UTC
Post by IMBJR
Post by Rev. 11D Meow!
which don't allow the OS to view the
S.M.A.R.T. data for each drive installed on it.
I got the impression the OS cannot see that anyways - or at least not
the P and G lists.
Check the wikipedia article on S.M.A.R.T.
http://en.wikipedia.org/wiki/S.M.A.R.T.
There are several utilities that can read this data.
http://en.wikipedia.org/wiki/Comparison_of_S.M.A.R.T._tools
see also:
http://en.wikibooks.org/wiki/Minimizing_hard_disk_drive_failure_and_data_loss
and:
http://smartlinux.sourceforge.net/smart/faq.php
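
For what it's worth, a minimal sketch of pulling the number that
matters here - the reallocated-sector count, SMART attribute 5, which
roughly tracks G List growth - out of smartmontools' smartctl
(Python; assumes smartctl is installed, /dev/sda is your drive, and
you have root; some RAID controllers need a -d option such as
"-d megaraid,N", and some hide the drives entirely):

import subprocess

def reallocated_sectors(device="/dev/sda"):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0] == "5":   # attribute ID 5
            return int(fields[-1])        # RAW_VALUE is the last column
    return None                           # attribute table not found

print(reallocated_sectors())  # 0 on a healthy drive; climbing = bad news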
~~~
You can't see it through a RAID controller, at least the AMD one
that's on this system here. <pout> Not even if I boot to Western
Digital's CD with their diagnostics application. <pout> <pout>

This BIOS is S.M.A.R.T. aware, so it'll probably tell me something
soon enough to bother with it. With MTBF upwards of a million
hours, what me worry?
~~~
Considering the speed and size of hard drives today, I'm pretty sure
defrag is still a valid thing to do, given the many other levels
of error-correction involved that are fixing things way before the
need to replace a bad sector comes into play. Personally, I'm
surprised any of today's PC hardware can actually work at all any
more.
~~~
With Native Command Queuing available now, I don't see defrag as being
much more than a way to wear the hard drive(s) out faster!
~~~
IMBJR
2009-09-07 22:29:18 UTC
Post by Rev. 11D Meow!
http://smartlinux.sourceforge.net/smart/faq.php
http://smartmontools.sourceforge.net/

Ah, inneresting. That's in the 'buntu repos. Might play with it one
day.

========================================
http://www.cafepress.com/slackboyiDRMRSR
http://www.imbjr.com
Rev. 11D Meow!
2009-09-07 23:55:45 UTC
Post by IMBJR
Post by Rev. 11D Meow!
http://smartlinux.sourceforge.net/smart/faq.php
http://smartmontools.sourceforge.net/
Ah, inneresting. That's in the 'buntu repos. Might play with it one
day.
The (A) GUI is here, apparently:
http://gsmartcontrol.berlios.de/home/index.php/en/Home

Viewing those tables you point to is meaningless on the surface.
It's what the S.M.A.R.T. sub-system on the drive's controller does
with that data (and some other stuff) that is interesting to keep
tabs on. Some S.M.A.R.T. monitoring utilities give you an xx% Good
rating and not much more; I imagine others let you view all kinds of
historical data in there. The former is good enough for most users.

and drat!
http://smartmontools.sourceforge.net/docs/raid-controller_support.html
doesn't help me a bit for this.
and I sure as fuck aint undoing my stripe-sets just to see whether
these drives will last way past 2012, when all bets are off for
everything everywhere.

yay

I think I like this toolkit here:
http://partedmagic.com
probably safer than running these tools off a live work system -
booting it from CD, that is...
Sacre Bleu
2009-09-09 00:04:36 UTC
Post by Rev. 11D Meow!
[snip]
I hear defragging can hurt performance on systems that are primarily
used for audio/video editing and production, by the way.
Defragging's kinda pointless in video/film editing/compositing systems,
as most use high-end RAIDs, particularly RAID 5, which distributes
block-level parity information across all of the drives in the
array, enabling complete data reconstruction in the event of a single
disk failure. Drive dies, replace and heal. All you end up losing is time.

And believe you me, this happens a lot in a busy visual effects shop.
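
The parity trick is simple enough to show in a few lines (Python; a
toy of the arithmetic only - real arrays rotate parity across drives
and work on whole stripes):

def xor_blocks(*blocks):
    # XOR corresponding bytes of equal-length blocks.
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"   # blocks from three data drives
parity = xor_blocks(d1, d2, d3)          # what gets written as parity

rebuilt = xor_blocks(d1, d3, parity)     # drive 2 dies; XOR the survivors
assert rebuilt == d2                     # complete reconstruction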
Rev. 11D Meow!
2009-09-09 00:36:09 UTC
On Tue, 08 Sep 2009 17:04:36 -0700, Sacre Bleu
Post by Sacre Bleu
[snip]
Defragging's kinda pointless in video/film editing/compositing systems,
as most use high-end RAIDs, particularly RAID 5, which distributes
block-level parity information across all of the drives in the
array, enabling complete data reconstruction in the event of a single
disk failure. Drive dies, replace and heal. All you end up losing is time.
And believe you me, this happens a lot in a busy visual effects shop.
In three months at one job doing perf testing on monster SQL servers
with rows of racks of drives, I managed to swap out at least three
dozen drives. After hearing their history in a 100-degree lab with a
total lack of proper A/C, I could see why the failure rate was so high.

If you're still seeing such a high drive failure rate, you're buying
the wrong kinds of drives.

They just don't fail that often any more. But when they do,
redundancy never hurts. YAY FOR RAID!
Sacre Bleu
2009-09-09 19:14:00 UTC
Post by Rev. 11D Meow!
[snip]
In three months at one job doing perf testing on monster SQL servers
with rows of racks of drives, I managed to swap out at least three
dozen drives. After hearing their history in a 100-degree lab with a
total lack of proper A/C, I could see why the failure rate was so high.
If you're still seeing such a high drive failure rate, you're buying
the wrong kinds of drives.
They just don't fail that often any more. But when they do,
redundancy never hurts. YAY FOR RAID!
The RAIDs I mentioned were Autodesk Stones with Seagate Cheetah drives.
The volume of HD & 2K files that get manipulated and thrown around on
these things is staggering. The drives are in a constant state of
reading/writing. The frequency of failure was normal per Autodesk.

I recall excessive dust bunny buildup being the root of a few problems.
Rev. 11D Meow!
2009-09-09 21:24:33 UTC
On Wed, 09 Sep 2009 12:14:00 -0700, Sacre Bleu
Post by Sacre Bleu
[snip]
The RAIDs I mentioned were Autodesk Stones with Seagate Cheetah drives.
The volume of HD & 2K files that get manipulated and thrown around on
these things is staggering. The drives are in a constant state of
reading/writing. The frequency of failure was normal per Autodesk.
I recall excessive dust bunny buildup being the root of a few problems.
yeah, Seagate Cheetahs suck, at least their first few generations. I
paid about $900 for the 9GB half-height SCSI model about 6 months
after they came out, and it remains the only drive I've ever used
that totally failed *blammo! no warning* in under 14 months.

Same as those drives I described on that job. All Cheetahs.

Seagate Cheetah drives are the SUXORZ!
Steve Thompson
2009-09-10 01:13:36 UTC
Post by Rev. 11D Meow!
[snip]
yeah, Seagate Cheetahs suck, at least their first few generations. I
paid about $900 for the 9GB half-height SCSI model about 6 months
after they came out, and it remains the only drive I've ever used
that totally failed *blammo! no warning* in under 14 months.
Same as those drives I described on that job. All Cheetahs.
Seagate Cheetah drives are the SUXORZ!
You're all full of shit and certainly aren't qualified to
differentiate between manufacturing defects and involuntary warranty
violations. Talk about wanking or something -- anything but another
display of your computer-systems incompetence.


Regards,

Steve
--
Stupid SF Idea #5873: Intergalactic war.
Rev. 11D Meow!
2009-09-10 00:55:51 UTC
On Thu, 10 Sep 2009 01:13:36 +0000, "Steve Thompson"
Post by Steve Thompson
[snip]
You're all full of shit and certainly aren't qualified to
differentiate between manufacturing defects and involuntary warranty
violations. Talk about wanking or something -- anything but another
display of your computer-systems incompetence.
Regards,
Steve
Well, Steve, you made it a day (yesterday or the day before) and made
sense in two posts, for the first time ever.

What happened today, Steve-O?

Not enough shiny things in your diet this morning, eh?

Rev. Richard Skull
2009-09-07 18:55:51 UTC
Post by IMBJR
[snip]
So, why do we bother defragging, or using EXTn if our hard drives do
this list business?
Well, when a Mommy Floppy Disk and a Daddy Floppy Disk love each
other very much....

Oh, and Mommy Floppy likes those BIG 5.25" Floppys! Not them 3.5" ones!