the one time reddit was useful to me

 
First off: I don't like to "shoot at reddit" - or any other platform for that matter - I'm sure many questions and other kinds of topics get "solved" there every day - but I at least have had very bad experiences on said "other platforms". So I'd like to apologize upfront if anyone feels offended. Please keep in mind: the following is just a sum-up of personal experience from roughly the better part of the past two or so years. Let's all just stay calm. If you don't "like" the topic, it's probably better to just read it (or not) without replying. Sure, you may have your own thoughts - but please respect this forum's guidelines.
I'd like to hear stories of yours - maybe closely related, maybe only remotely related to mine - but "ranting at each other" isn't what I'm used to here. Thank you in advance.

So, my "journey" began about two years ago - maybe a bit more, maybe a bit less, maybe even way before what I consider the "actual" start of my "questions and issues with responses".

To get you up to speed with this topic: it's all about using multiple physical HDDs as one big logical volume, for several reasons. One of them: I consider myself part of the "data hoarder" community. I "gave up" on any other type of physical media - pretty much any kind of optical or "analog" media - and decided to go the "only files on some HDDs" route. Yes, it has its fair share of disadvantages, I can't deny that, and I have to admit: my way is neither the only nor the best one. It's still a "possible", sometimes even viable, option for some cases, while for others there are alternatives that are either (far) easier than mine or what I'd call "industry standard" / "best or most common practice". So yes, while one of my really close friends still uses optical media (and has already filled about half his living room and started expanding into his bedroom), I'm the kind of person who would rather go for a more or less big storage server and stream over my network.
This "attitude" of mine got me into trouble the moment I decided: "Hey, my motherboard (ASUS Crosshair V Formula-Z) offers a 'hardware' RAID - let's use it with my couple of HDDs to get one big volume." Yeah, anyone with some experience in that area is already thinking "WHY? Why did you waste YEARS on that CRAP?". With today's knowledge I can only agree: why, just why did I waste so much time relying on such a "waiting for disaster to happen" failure of a design? In my defense: I wasn't aware of what a "real hardware" RAID is, what the difference is between a "FakeRAID", a proper software RAID, and a resilient file system able to span multiple partitions or even multiple physical HDDs - and, most importantly: what the hell is "silent data corruption / bit rot"?
Looking back at my foolish way of thinking - "yeah, cool, I can use my multiple physical disks as one big logical volume, and if a disk fails I can just replace it with another one" - with my current knowledge I have to ask: why did so many motherboard manufacturers agree on today's standard of (badly) implementing what's called FakeRAID on their consumer-grade boards and advertising that nonsense as a selling point? It's worth even less than actual, literal snake oil.
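
Since "silent data corruption / bit rot" was the term that finally made things click for me, here's a tiny, purely illustrative Python sketch of the idea the resilient filesystems (ZFS/Btrfs) are built on: keep a checksum for every block and verify it on every read or scrub, so corrupted data can't come back silently. The file name and block size are just assumptions for the demo, nothing real:

import hashlib

# Purely illustrative: ZFS/Btrfs keep a checksum for every block and verify it
# on every read (and during a "scrub"), which is how they catch silent bit rot
# that a FakeRAID or a plain filesystem will happily pass through unnoticed.
BLOCK_SIZE = 128 * 1024  # 128 KiB, roughly ZFS's default record size

def checksum_blocks(path):
    """Return one SHA-256 digest per block of the file."""
    digests = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digests.append(hashlib.sha256(block).hexdigest())
    return digests

def verify(path, stored_digests):
    """Compare the file's current block checksums against the stored ones."""
    for i, (old, new) in enumerate(zip(stored_digests, checksum_blocks(path))):
        if old != new:
            print(f"silent corruption detected in block {i} of {path}")

# usage sketch (hypothetical file name):
# digests = checksum_blocks("movie.mkv")   # at write time
# verify("movie.mkv", digests)             # later, e.g. during a periodic scrub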

But, back to the topic: so I made the mistake of buying five 3 TB HDDs, using the FakeRAID proprietary to my specific hardware and OS (Win7), and setting up a "pseudo" RAID 5 - without even being aware of the issues of RAID level 5 or the various options the "interface" provided me with. Long story cut short: I have already suffered two (or three? can't remember exactly) drive failures - and, seen with today's knowledge, played that risky 50/50 game of "will the critical array survive the rebuild, or will it fail completely due to a double failure while rebuilding?" at least twice (or three times - as mentioned, I lost count) - and I'm sick of it.
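
To put a rough number on that "50/50" feeling, here's a quick back-of-the-envelope in Python. It assumes the often-quoted consumer-drive spec of one unrecoverable read error per 10^14 bits read - that's a spec-sheet assumption, real drives vary quite a bit:

import math

# Why a degraded 5 x 3 TB RAID 5 rebuild is roughly a coin flip.
drives_to_read = 4                 # surviving drives in a degraded 5-disk RAID 5
bytes_per_drive = 3e12             # 3 TB
bits_to_read = drives_to_read * bytes_per_drive * 8

ure_per_bit = 1e-14                # assumed consumer-drive URE spec
p_clean_rebuild = math.exp(-ure_per_bit * bits_to_read)

print(f"chance of reading every bit cleanly during the rebuild: {p_clean_rebuild:.0%}")
# -> roughly 38% under these assumptions, i.e. worse than a coin flip
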
So I started to look for other options to get out of my (still current) situation. Back around 2018/2019 I first got the idea of "using Linux bare metal and virtualizing Windows on top of it". Back then, due to driver incompatibilities (so, basically, software issues), I (wrongly, as it turns out) concluded: my current hardware may not be able to give me what I was asking of it. As it turned out just recently, my hardware is perfectly able to provide everything necessary for KVM, PCI passthrough and all that - the software just wasn't "evolved" or "matured" enough yet. Fast forward to today: my hardware can handle what I wanted 2-3 years ago, but even with today's software it still suffers from some issues (I'll come back to that later).

It was around that time I experienced my first "disk failure". Luckily I already knew what that meant and did the only right thing: shut down my system and not turn it back on until I had a new disk to replace the failed one. "Luckily" I had just inherited a notebook from my then recently deceased grandfather (RIP grandpa), so I had at least one other fully working system (in addition to the 2nd system I use as my off-main-system live stream encoder) to order new HDDs with. About a week later the new HDDs arrived (I ordered three of them), I replaced the drive the control software reported as failed - and stumbled upon my first "WTF?": the low-level BIOS/UEFI text GUI did not offer any way to "replace" the failed disk, nor to remove the failed one and add the new disk. At that point I realized: "Wait a second - shouldn't I be able to manage a disk replacement in the low-level management interface?" Turns out: NOPE! It's just one of the many rather big middle fingers from AMD/ASUS regarding my specific setup: if you want to do anything more than view, create or delete the array, it's all done in the Windows-only software. So I booted my system back up after the disk replacement, tried to stop anything from accessing the array (to this day I'm not sure if I got everything - but most of it), and rebuilt the array. I'm not sure if it's down to some "evolution" of the driver, but the first rebuild took about 16 hours or so - the last one just 6 hours (with even more data stored on the array). With that comes the second big middle finger from AMD/ASUS: neither AMD nor ASUS ever put any effort into releasing a driver for the 990FX/SB950 chipset for anything other than Windows 7 64-bit - it's the main reason I'm still stuck on it. From what I was able to dig up, AMD at least made a Linux driver available for some time - but even with sites like the Wayback Machine I was unable to get hold of any version of it. I also tried what's available for Win 8.x and Win 10 - but neither contains the RAID driver necessary to re-assemble the "mess". So - "just upgrade to Win 10" - yeah, easier said than done, at least for now: I tried several different strategies; it's not possible without losing the array.
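
Just as a quick sanity check on those two rebuild times - assuming (and that's purely my assumption, the driver never tells you) that a rebuild rewrites the entire 3 TB replacement disk:

# Rebuild throughput implied by the 16 h vs. 6 h rebuilds of one 3 TB disk.
disk_bytes = 3e12

for hours in (16, 6):
    mb_per_s = disk_bytes / (hours * 3600) / 1e6
    print(f"{hours} h rebuild ~= {mb_per_s:.0f} MB/s sustained")
# -> ~52 MB/s vs. ~139 MB/s; the later rebuilds ran much closer to what a
#    single 3 TB HDD can actually sustain sequentially.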

Anyway - after my first rebuild and "throwing out" the failed drive (I actually stored it in the package the new drive arrived in), I got my array back up and fully working again as if nothing had happened. Although I was happy to have my system back after a week of downtime, this raised the awareness of "well - if something bad happens - like my board going bad - I'm in trouble!". As I experimented with SAN and PXE boot and a couple of other things that today are pretty much part of my daily-used home network and serve several systems at once, one day I tried to boot some Linux distro (some openSUSE Leap version) via PXE and noticed: my array only showed up as the five drives it's built from - but not as the array itself. That was the day I learned about the difference between software RAID, hardware RAID and, as the Linux community calls it, FakeRAID. If I had known THAT one specific thing back when I first got the idea of using it, I would have avoided it altogether and maybe not gotten stuck where I am today. It was also the time I learned about mdadm, LVM, ZFS, Btrfs and ReFS, and how "multi-disk arrays" work today (avoiding the term RAID, as at least I don't think it fits that well anymore).

Sure, what's stored on the array isn't "mission critical" - it's mostly games from a couple of different services (namely Steam - I've been on Steam since the pre-Community beta, around 2006 or so, and I'm proud to have experienced the days when the "friends" section just displayed that "coming soon" line ... ah, those days ... memories ... *flashback* - plus a couple of other launchers like Uplay, Epic, Battle.net and Rockstar's Social Club), and tons of movies from one of my close friends, which he just "dumps" on me with the ever-repeating phrase "you've got more HDD storage than my entire optical media collection". The really important "mission critical" data is stored on at least two cold storages, one of which is finally going to be deposited off-site this or next weekend - so a proper 3-2-1 backup: 3 copies, at least 2 different media, at least 1 off-site. So I don't really care about "losing" or "destroying" the array ... BUT ... it turned out I was trying to "solve" it the completely wrong way - and my eyes were only opened within the past few hours:

So, after searching for ANY solution for about the past two years - pretty much everything from KVM, to using a 2nd system as a local storage server, to some way-over-complicated way of direct-attaching the disks - I finally got a reply from a reddit user just within the last couple of hours.
That user suggested this to me:

1) Windows Storage Spaces is NOT meant to be used as direct-attached storage for my daily driver - especially not single-/dual-parity spaces with ReFS: its purpose is large archive storage without much changing data. The "most common" use case as recommended by Microsoft: a local media-center storage backend which only gets some new files saved to it every once in a while. Even M$ itself recommends against using it as local direct-attached storage for a workstation - both Storage Spaces and ReFS. BTW: that's one of the reasons support for creating new ReFS volumes was removed from "client" editions of Windows and is only available on "Pro for Workstations" / "business" and the server editions - although client editions can still read and write already existing ReFS volumes.

2) The user (I don't know how they identify) recommended: the best thing I could do with my drives would be a ZFS array, either raidz1 or raidz2 depending on my personal preference, but only on its own storage system accessed exclusively over the network - which may make it worth upgrading to 10 Gbit networking. For my main rig: a cheap, fast SSD as a "fast cache" (in the sense of copying over the games/applications I use the most right now) and let the big array sit on its own. The user recommended against using this network storage directly for gaming or any other application with high read/write demands; such data should either be copied over to a local fast SSD cache, or some caching should be set up to ease the load on the array. Quite honestly: it's an approach I hadn't spent any thought on yet. It really opened my eyes, and it's pretty much the way I built the (back then new) PC for my dad's 50th birthday: a 250-500 GB SSD for the OS and the most-used programs, and a 3 TB HDD for long-term storage.
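
For my own overview, here's roughly what the suggested layouts would give me with the five 3 TB drives (ignoring ZFS metadata and padding overhead, so the real usable space will be somewhat lower), plus the network side of the argument:

# Rough usable capacity for the suggested ZFS layouts with five 3 TB drives.
drives, size_tb = 5, 3

for name, parity in (("raidz1", 1), ("raidz2", 2)):
    usable_tb = (drives - parity) * size_tb
    print(f"{name}: ~{usable_tb} TB usable, survives {parity} disk failure(s)")

# Network side of the advice: plain gigabit tops out around 110-115 MB/s after
# overhead, while 10 GbE is roughly ten times that - which is why "access it
# over the network, but cache hot data on a local SSD" makes sense.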

3) With the above in mind: unfortunately, it's the kind of answer I've been looking for for the past two years. If only I had gotten it back when my first drive failed - or maybe even back when I got that stupid idea of using a FakeRAID - it could have saved me so much time and so many headaches ... So, to anyone considering the FakeRAID offered by your board: STAY AWAY FROM IT! At any cost! There are so many better solutions. Don't even try to repeat what I (and many others) have already suffered through over the past years. Do yourself a favor: either keep using your drives on their own as single drives and PROPERLY (!) back up your data often (try to follow the 3-2-1 rule: at least 3 copies, on at least 2 different types of storage media, with at least 1 off-site) - and only use an array if, and only if, you can't get away without one. BUT if you do: invest in a 2nd system as a proper storage server and upgrade your network. Oh, and don't use it directly - set up some local cache.

So, yeah - long post, not so complicated story, short summary: AVOID FAKERAID! And only use hardware RAID if you can't get away otherwise. Today's hardware is so powerful that there's just no point in not running a proper software RAID.
Why? Hardware RAID has its share of issues - and FakeRAID is pretty much garbage: to put it simply, it combines some of the worst of both hardware and software RAID without any real benefits ... it's just marketing BS / snake oil ... proper software RAID / resilient filesystems like Btrfs and ZFS (and maybe ReFS) are pretty much the only way to go these days.
Don't throw yourself into what I and many others have already suffered through - go and build a proper software-/filesystem-level storage system. If you can't afford it: DON'T EVEN TRY! Then your only proper way to go is regular, proper backups onto cold storage. Only go for software-/FS-level RAID on a 2nd system if you can afford it. AND keep in mind: only ever do so if you can actually afford to lose the data completely. If you have something "mission critical", the ONLY way to go is proper backups. I've spent a couple of years and more than a 4-figure sum on this - don't repeat my mistake.
 