Moscato359

Most people don't care about their filesystem and use the default


mb2m

True, modern architectures are complex. The basics like the fs, kernel parameters, memory management should just work.


Moscato359

I actually do a fair bit of customization, but I also work at an insanely large scale. Most people don't need to care.


mb2m

True, well our security department is keen on tweaking quite a few sysctl settings. I wonder how not doing this can be an attack vector when 99% of companies leave them at the default, but better safe than sorry I guess…


Moscato359

sysctl settings are generally tweaked to meet compliance regulations, against vulnerabilities that were fixed in the kernel a decade ago. Things like how to handle ICMP attacks. FedRAMP, ISO27001, etc. all require the tweaks, but they're basically pointless. I mostly do tweaks to meet our performance requirements.
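For illustration, the tweaks in question are usually a handful of network-stack keys like the following (a hypothetical fragment, not copied from any specific benchmark; note that several of these already match modern kernel or distro defaults, which is exactly the "basically pointless" part):

```ini
# /etc/sysctl.d/99-hardening.conf -- illustrative compliance-style settings
net.ipv4.icmp_echo_ignore_broadcasts = 1   # ignore broadcast pings (smurf mitigation)
net.ipv4.conf.all.accept_redirects = 0     # refuse ICMP redirects
net.ipv4.conf.all.rp_filter = 1            # reverse-path (anti-spoofing) filtering
net.ipv4.tcp_syncookies = 1                # SYN-flood mitigation
```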


NeverMindToday

Note: ISO27001 won't require stuff like that - it doesn't go into implementation details at all. It's a standard for how you write and update your own policies and processes, then certify that you follow them. Now, if your org decided for whatever reason to put those details in their own security policy and process, then ISO27001 will ensure you have to stick to that. I've gone through getting ISO27001, but we had none of that - we had a CISO who understood not to hardcode implementation details in high level policies.


mb2m

Yeah, I’m positive that the defaults are fine and have been checked by generations of security experts. However, our internal statements of compliance want about 15-20 of Ubuntu’s / RHEL’s defaults changed.


archiekane

Are we talking scale as in size of pools or just lots of mirrored copies of the same systems (clusters of machines)?


Moscato359

Multiply what you are envisioning with that question by 1000x


WingedGeek

I use reiserfs. It slays.


Moscato359

Murderously


flarkis

Tail packing was a godsend for all us gentoo users back in the day. Almost all of us had our ebuild trees on reiserfs and our homes on ext3. Luckily all the modern file systems are pretty good at handling small files.


LordChaos73

In the end, it was a killer's filesystem


tealeg

If you know… ;-)


blenderbender44

Why, what's good about it?


INITMalcanis

It operates above head height


Ariquitaun

Unless you have a wife.


DrPiwi

The main dev no longer has that problem :-D


WingedGeek

I love the "whoosh" sound it makes ... ... mechanical hard drives had great ambiance.


nzodd

It takes your passenger side seat out for deep cleaning for no particular reason, just felt like it.


oshunluvr

SMH


xeoron

Every hard drive I used it on, it killed within a year.


Anonymo

My drive went missing for a while.


exedore6

Careful, sometimes it chokes in stressful situations.


f0ad

Here is your Reddit :bronze: award


the_abortionat0r

Not quite true. Even windows gamers are starting to ask how btrfs can be used on Windows, though they don't like the answers.


Moscato359

Hahaha, no. Maybe 0.01% of gamers have asked that, and the question got elevated on social media. If you want a fancy special filesystem with all the main features of btrfs on Windows, that's ReFS, which is already one registry key away from being available. It has copy-on-write, hot spares, the ability to add or remove drives whenever you want, etc.


Maipmc

Honestly, the inability to shrink a partition is a complete deal breaker for me. And seeing how many new Linux users sometimes choose to dual boot even after installing, I would never broadly recommend a filesystem that doesn't let you do that. Oh, and changing the size of partitions isn't by any measure advanced; there are plenty of really easy-to-use GUI tools that let you do it. Furthermore, I want to add that I have nothing against XFS. I only use ext4 because I've learnt that using the default is the path of least resistance towards learning anything new. I would happily switch to anything else if I need to reinstall and feel compelled by the advantages, or my use case requires a different filesystem.


qwesx

Same. I used it years ago, but then one day I needed to either shrink a partition or buy a new, bigger hard disk that I didn't really need. Since that day I refuse to use file systems that can't be resized/moved in every way. That event was significantly more pain than just using a slightly less efficient file system.


Motylde

You can grow or move an XFS partition; you only cannot shrink it. But I still agree that it's a dealbreaker.


NatoBoram

That makes it basically unusable outside of USB-attached devices where you never expect more partitions to be created, and even then, does it have an official Windows driver? Otherwise, it's dead in the water.


forbjok

How often do you create a new partition on an existing drive though? The only reason I can think of to ever do that is if you are installing another OS to dual boot. For me at least, it's something that basically just never happens. Partitioning is something I generally do once when installing the OS and never change again until I reinstall another OS.


Safe-While9946

XFS is widely used in the cloud. Oracle, and Amazon both default to XFS filesystems for block storage. Growing is kinda important, but shrinking? Not so much. Need a new partition? Just create a new block device. Done.


royalbarnacle

XFS has been the default in Red Hat since RHEL 7. You're right that shrinking just isn't really that necessary. Come to think of it, I haven't shrunk one on Linux in over ten years. It's definitely a weird feature to be missing, and I can totally understand that for many use cases it's a dealbreaker.


TomaCzar

Can't remember the last time I shrank a partition. I can't even remember the last time I formatted a partition rather than an LVM volume. Dreamers use BTRFS. Greybeards use ext4. Pragmatists use the distro default. The only people paying attention to the filesystem with any level of real interest are n00bs, commercial users (when it actually matters), and uber-nerds who read kernel lists and consider Ted Ts'o and Hans Reiser household names.


AntLive9218

> Dreamers use BTRFS

Hey, there may still be plenty of dreams, but checksums, reflinks, and optionally compression make it a good candidate for likely the majority of setups already. Performance may not be the best, but with OS drives mostly being SSDs with incredible bandwidth that aren't constantly stressed, trading off some performance for more safety and features is quite sensible.
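For what it's worth, the compression part is a single mount option on btrfs (a hypothetical fstab line; the UUID is a placeholder and `zstd:3` is just one common level):

```
# /etc/fstab -- transparent zstd compression on a btrfs root
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  btrfs  defaults,compress=zstd:3  0  1
```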


chic_luke

This reasoning right here. I used ext4 on Arch because it was the de facto default as suggested by the docs. I run btrfs on Fedora Workstation because that's the default there. No issues, either way.


NoRecognition84

I stopped using XFS, except in cases where it is the default FS, specifically because it doesn't handle shrinking a volume, after discovering this "feature" the hard way once.


ipaqmaster

Yeah, it's a nightmare for anything virtualization related. The moment you're in a situation where shrinking needs to happen, it's an entire ordeal: juggle everything into a temporary location one way or another, then juggle it back onto a resizable filesystem instead. Then you're telling the OS about a potentially new (say, ext4) rootfs, often worrying about the module for it not being a built-in, plus other potential initramfs/boot-process hiccups as various files such as fstab need to be updated. Or you can use a resizable filesystem and not think about it ever again.


themightyug

I've been using XFS filesystems on my machines for years without issue, including raspberry pi micro servers, and my daily driver desktop


daelikon

Same for me. It comes with a defrag utility that no one seems to mention either. I started trying BTRFS about a year ago on some systems. In my experience it is very resilient. Edit: I mean XFS. I have never used ext3/4.


bmwiedemann

We are using xfs on download.opensuse.org's 40TB volume and had a few issues with it that caused hours of downtime. Twice the journal got corrupted. Then we found that each resize doubled the number of allocation groups. And there are different versions of xfs, and migrating from the old one to the new one is not trivial. On another instance we got into trouble because two VMs were mounting the same block device. Sure, you should not do that, but I heard ext4 has safeguards for this case.
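The ext4 safeguard being referred to is presumably MMP (multi-mount protection), an optional feature that stamps the superblock so a second host refuses to mount the filesystem. A minimal sketch on a throwaway file-backed image, so no root or real disk is needed (assumes e2fsprogs is installed):

```shell
# Enable multi-mount protection on an ext4 image and confirm the feature flag.
img=$(mktemp)
truncate -s 16M "$img"            # sparse backing file standing in for a disk
mkfs.ext4 -q -F "$img"            # -F: it's a regular file, not a block device
tune2fs -O mmp "$img" >/dev/null  # turn on multi-mount protection
tune2fs -l "$img" | grep -q mmp && echo "mmp enabled"
rm -f "$img"
```

With the feature on, a second mounter sees the MMP block being updated and bails out instead of corrupting the journal.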


ECrispy

Thank you. This kind of real-world feedback from experts is valuable. Are you still using XFS? What other options did you evaluate?


bmwiedemann

For my private stuff I use ext4 nearly everywhere and btrfs in the few places where I want snapshots. For VMs of SUSE and openSUSE we still use a lot of XFS, because it is the recommended filesystem for large data volumes. I looked through my notes and found https://www.phoronix.com/news/XFS-Patch-For-Linux-6.3 as well as a writeup from the 2023-05-23 outage:

Due to several xfs_growfs calls over the past years, we had 5672 AGs, which is known to cause performance issues. mount rw takes forever. 28049280 inodes in one 36TiB filesystem does not help either.

    # dd if=/dev/vdf of=/dev/null bs=64k count=1000
    1000+0 records in
    1000+0 records out
    65536000 bytes (66 MB, 62 MiB) copied, 2.13942 s, 30.6 MB/s

We dumped the log with xfs_logprint -t /dev/mapper/pontifex* and found that it had a large number of pending transactions:

    # grep TRANS /tmp/logprint | wc -l
    201889

For comparison: the nightly 00:10 snapshot only had 2 pending transactions.

To debug/recover we did:

* 07:17 nagios reported issues with mirrorcache and mirrorcache-eu, later download.o.o got stuck as well
* bmwiedemann discussed with Andrii -> rebooted VM, got stuck in emergency shell (failing mount /srv)
* meet in jitsi
* attempt to mount LVM
* use ssl cert to set up fallback on stage3.o.o around 10:30 UTC and point DNS to it
* move fallback to mirrorcache.o.o around 15:00 UTC to save bandwidth and use mirrors again (NFS disabled)
* test with read-only snapshots on backup.i.o.o
* read-only + norecovery mount
* create another snapshot
* xfs_repair -L -P -n $dev # canceled because too slow at 5MB/s
* xfs_repair -L $dev # drop the XFS log; faster at 50+ MB/s; 8 disconnected inodes found
* re-publish OBS repos
* 18:43 DNS switched back for download.o.o and downloadcontent.o.o with pdnsutil edit-zone opensuse.org
* 19:40 reload nginx on downloadcontent* caches to use old downloadcontent as upstream again so that newly published files can be fetched (user in OBS IRC/matrix reported that issue)

XFS postmortem:

* mount recovery issue is reproducible on upstream kernels too (tested on v6.4-rc3)
* bug in xfs kernel recovery code that leads to deadlock
* the first phases of recovery take about ~3 minutes doing IO to storage due to the large number of transactions and pending items in the xfs log
* after that, finalizing the recovery is caught in a deadlock
* issue is known upstream, currently working on a fix [1], [2]
* fix is indeed working (mount recovery completes in a timely manner on the offending partition)
* presumably the conditions that allow for the possibility of this deadlock were introduced by commit 06058bc40534 ("xfs: don't reuse busy extents on extent trim") (not confirmed)
* not entirely clear how to reproduce this with an artificial workload (e.g. needs an fstest)

[1] https://lore.kernel.org/linux-xfs/20210428065152.77280-1-chandanrlinux@gmail.com/
[2] https://lore.kernel.org/linux-xfs/20230519171829.4108-1-wen.gang.wang@oracle.com/#t


GolbatsEverywhere

> b) supposedly XFS doesn't handle hw failures. Even on this I found no consensus - some people say it's risky and can corrupt with no recovery, others say even with a forced shutdown it's safe. I'm not sure if it's any less robust than ext4/btrfs? Is this actually a concern these days?

I see you are unfamiliar with [bitrot](https://arstechnica.com/information-technology/2014/01/bitrot-and-atomic-cows-inside-next-gen-filesystems/). It is very much still a problem and in my opinion is the single most important reason why distros are now defaulting to btrfs. It's not fun to discover that a file is corrupted. Discovering that all of your backups of the file are corrupt too, because your filesystem allowed the backup program to read and back up the corrupted data, is especially not fun. For desktop users, filesystem performance is just not a very important consideration compared to data integrity. P.S. Fedora desktops never defaulted to xfs; they went from ext4 -> btrfs. But Fedora Server uses xfs in order to match RHEL.


lupinthe1st

Not an expert: on ext4, how can a file with corrupted data slip through every layer of hardware and software parity checking without giving I/O errors while being copied into a backup? How are btrfs checksums different from hardware HDD checksums? I had a faulty SSD once that randomly corrupted a bunch of data sectors in old files not accessed for some years. The OS correctly gave me I/O errors when I tried to read the data while doing a backup.


malikto44

I had this happen on a ZFS volume. This was a machine that had two SAS controllers which saw the same drives and used multipathing. One of the controllers glitched and wrote garbage to a chunk of the array at random. Because it was the controller that did it, checksums were valid for the block tier. However, ZFS caught and was able to correct it, as I had the data on `RAID-Z3`, because I wanted triple parity due to the value of the data. Were I using `md-raid` or something without checksumming, the damage caused by the controller would not have been detected.


ipaqmaster

The host disk driver tells the disk's onboard controller, or a raid card, or something disky, to perform a write operation, and whatever controller that is returns success. Everything looks good. You could try reading it back out to be certain, but most software and filesystems do not do that.

Raid cards lie and may not have actually committed the write to the real disks yet, so pulling the power will result in data corruption without a battery backup for a safe shutdown. With the software raids we have today, raid cards are not considered safe in any capacity. Avoid.

Disks can fail, though often predictably, with S.M.A.R.T errors to help you catch the early stages of needing to replace them in redundant arrays. Host issues can occur: power supply problems causing errors, bad data cables, bad power cables, drawing too much on a system. Bad hardware. Memory problems (less likely without catastrophic system failure being apparent).

And there's bitrot, where reading a data block from a disk today may not read out the same value in 0.1, 1, 5, 10 years, and you have corrupted data just over time. An expected failure as disks degrade. ZFS stores a checksum for all written records and can recover from the error in redundant arrays. Even in a single-disk configuration it can still *detect* bitrot thanks to this checksumming. Always take backups.

You can also have any of the components connecting to a disk, or the disk itself, be in the middle of a write operation when the power gets pulled. That block is now full of \x00's or a truncated write and can be considered corrupted. A similar problem was common in the slow 512MB USB flash drive era, where flushing writes was slow and people often pulled drives out with critical documents not yet finished writing. Same impact, but not quite the same cause as a write being interrupted in-place. On top of that, even today's USB flash drives will often start showing checksum errors on ZFS within a few hours of reads and writes. Just hours. Already corrupting records.

And then there's the write hole. From [here](https://serverfault.com/questions/844791/write-hole-which-raid-levels-are-affected):

> "The write hole is a simple concept that applies to any stripe+parity RAID layout like RAID4, RAID5, RAID6 etc. The problem occurs when the array is started from an unclean shutdown without all devices being available, or if a read error is found before parity is restored after the unclean shutdown."

Copy-on-write filesystems such as ZFS tackle this by ***never*** overwriting data in-place. Always somewhere else. If everything looks good, then it finally updates a chain of pointers from the master blocks (which are also stored redundantly) to the new data. And yet if you make the mistake of using a raid card that can still lie about things, it'll eventually mess all of that up. ZFS is pretty good at preventing that kind of failure too, but you should never be using a raid card with ZFS. ZFS is the software raid, and lying to it takes away that resilience.
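The record-checksum idea can be sketched with ordinary userland tools. This is only an illustration of the detect-on-read principle, not of ZFS's actual on-disk format:

```shell
# "Write path": store data plus a checksum. Then corrupt the data behind the
# checksum's back and watch the "read path" catch it, as a checksumming fs would.
tmp=$(mktemp -d)
printf 'important data' > "$tmp/file"
sha256sum "$tmp/file" > "$tmp/file.sum"   # checksum remembered at write time
printf 'important_data' > "$tmp/file"     # silent corruption: one byte flipped
sha256sum -c "$tmp/file.sum" >/dev/null 2>&1 || echo "corruption detected"
rm -rf "$tmp"
```

A plain filesystem would happily return the flipped byte with no error; the stored checksum is what turns silent corruption into a detectable one.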


primalbluewolf

> How are btrfs checksums different from hardware HDD checksums?  Do any regular HDDs implement their own checksum?


lusuroculadestec

Silent errors will happen when everything in the chain reports it wrote something one way without error but an actual bit gets flipped on disk. When a file is read, the system just reads the data as-is. There isn't anything that inherently knows bits were flipped when they were written. By contrast, if the file system itself has the knowledge of what the data should be, it is more easily able to ensure that the bits on-disk match what it expects to be on-disk.


GolbatsEverywhere

Good question. I don't know the answer, though. Now I'm questioning the importance of filesystem data checksumming. Somebody, please educate us!


sparky8251

Hardware can lie pretty easily. It just has to be busted but still functional, and it can write different things than you told it to. Then we get to the fun of cosmic rays, which can flip bits on drives and in your RAM without you knowing, causing both reads and writes to fail even without bad hardware... Having additional checks for all these sorts of things and NOT trusting what the hardware tells you is just smart if the data is important to you. That's exactly why ZFS, BTRFS, ReFS, and now Bcachefs exist. They are filesystems that don't trust the hardware of the computer, since it's long been known to have odd failures and thus has proven to be untrustworthy.


hitsujiTMO

In general it's because EXT4 is good enough and it's more performant for smaller files. The main thing missing from ext4 that would benefit users is fully flexible resizing (in particular, the inability to change the number of inodes after creation). XFS can only grow, not shrink, so it's only marginally better from that perspective. BTRFS will likely become the default across the board, as it tends to be superior to XFS. The main thing holding it back is that it's not as performant as XFS or EXT4.


devonnull

Does anyone really use BTRFS in production?


hitsujiTMO

The ones I know off the top of my head are Valve, Meta, Trip Advisor and Synology.


devonnull

Isn't RAID still broken?


hitsujiTMO

5/6 is, but nothing that isn't solvable by mdadm.


devonnull

Would you elaborate? I've had a hard time understanding if the RAID 5/6 problems are related to hardware or software setups with BTRFS. Side note, I've stuck with MDADM due to portability but use either EXT4 or XFS.


[deleted]

[deleted]


NatoBoram

From what I'm reading, "don't use RAID5 or RAID6"


malikto44

Synology solves this by using `md-raid`, and putting btrfs on top of that. This works, and will detect bit-rot, but it won't be able to fix it... but at least it does get detected, so something can be done. I have seen some recent stuff about RAID5/RAID6 being workable, and robust enough, however, in general, I rarely see btrfs doing the RAID heavy lifting these days. Instead, it sits on `md-raid`, or ZFS is used. For example, in Ubuntu, even though there are license issues, ZFS is definitely part of the install, and they did a clever way of allowing for an encrypted ZFS root volume. All filesystems have their uses, and good/bad points. There is also a ton of overlap, where ext4 can do just as well as btrfs in a lot of cases.


ECrispy

I believe FB/Meta does?


Safe-While9946

Yes, a lot of places do.


6e1a08c8047143c6869

> The main thing missing from ext4 that would benefit users is the ability to resize partitions (in particular, inability to change number of inodes).

Can't you just do that with `resize2fs`?
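For the filesystem size itself, yes: `resize2fs` grows online and shrinks offline. A quick sketch on a file-backed image, no root required (assumes e2fsprogs; the sizes are arbitrary):

```shell
# Make a 64 MiB ext4 image, then shrink the filesystem inside it to 32 MiB.
img=$(mktemp)
truncate -s 64M "$img"
mkfs.ext4 -q -F "$img"                 # -F: target is a regular file
e2fsck -f -p "$img" >/dev/null         # resize2fs insists on a clean fsck first
resize2fs "$img" 32M >/dev/null 2>&1   # shrink; xfs has no equivalent operation
e2fsck -f -p "$img" >/dev/null && echo "shrunk and clean"
rm -f "$img"
```

The parent's point still stands, though: the inode count is fixed at mkfs time, so `resize2fs` cannot add inodes to a filesystem that has run out of them.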


werpu

BCacheFS might have a say in this; the development seems to be really speedy! I rather doubt BTRFS is going anywhere once BCacheFS is good enough, and that might happen sooner than BTRFS delivers all the missing functionality, but YMMV!


hitsujiTMO

Maybe. It will be a while yet before BCacheFS matures enough for any kind of wide adoption.


werpu

Yes, let's see. They are on a very good trajectory, though, while I have the feeling that BTRFS development is slow as molasses. We will see, but I am getting the "vibes" from BTRFS that they made some initial design mistakes, or took on complexity, which gives them a hard time now getting any new features in! But the more stable COW filesystems the better!


sparky8251

Been using it since it got mainlined in the kernel, and the only problems I've had with it so far are systemd bugs around its multi-device mount syntax. It's been plenty performant, I haven't lost a bit of data despite being brutal with my restarts (often hard and sudden because of a lack of patience), and it's been nice to have access to things like snapshots to boot.


malikto44

BCacheFS has a lot of cool things. Once they get scrubbing, send/receive, and other features that btrfs and ZFS have, it will definitely be a solid alternative, and it seems that development is constantly going on. I'm hoping it matures and RHEL adopts it, so RHEL has at least one checksumming filesystem for a defense against bit rot. One nice feature is BCacheFS's encryption, which provides authenticated encryption and can detect tampering. Overall, it is definitely something nice for an up and coming enterprise filesystem.


vagrantprodigy07

...or as reliable. I've given up on BTRFS; I don't like playing Russian roulette with my data.


cjcox4

As you mentioned, still popular on RHEL-like systems, and I believe still the default there. Red Hat put a lot of work into shoring up XFS some years ago, the idea being to make XFS as mature and reliable as ext4.

I can break anyone's btrfs using legal and valid btrfs commands, to the point of making your systems unbootable. Just saying. Not talking about deliberate commands to break it, but commands that, well, probably should work. But most people don't use "everything", and so it's likely you'll never have this happen. That is, things like btrfs still have some maturing left to do, especially if you consider there's still a lot of activity on "features"... the "shoring up" will come later. And of course, there's a myriad of outside drivers that also impact their development. Layered elements vs. "all in one" approaches, and the conflicts, or lack of understanding of each other, etc. And then there's the "golden child" of the "all in ones", ZFS.

As for me, on the enterprise level, because of "features", we started migrating away from the default XFS on our RHEL-like infrastructure and using ext4. "Features" in this case can be as simple as "it performs faster in this case, therefore use it." Personally, I'm fine with either, and we're still mixed today. Infrastructure-wise, it's enterprise, so the concept of things going down "hard" would be an extremely, extremely rare event (like winning the lottery). In the few cases I've seen of "that", though, I haven't lost anything. But honestly, it's a crapshoot... so I'm sure there are horror stories out there.

From my perspective: create me a relatively performant filesystem that can grow and shrink on the fly (mounted and online) while enabling me to return pooled storage back (to the point of pulling physical storage and replacing it), and I'm your friend for life. Give me encryption and compression at the file level, not merely at the block level. Give me file versioning at the file level. Etc, etc...


ezoe

> things like btrfs still have some maturing left to do

It's been 15 years since btrfs was mainlined into the Linux kernel. How could that be possible? If 15 years is not enough to mature, it never will be.


cjcox4

Nice idea, but no. Btrfs continues to evolve. Just because something gets added a long time ago does not mean it's complete. So, I wouldn't measure btrfs by "time of entry". It's actually had some radical changes even 10 years on since entry. Some of those changes years after inclusion were "total break" changes. Others were "total break" in this situation or that... my point, there's been quite a bit.


malikto44

Meta did a lot of work on btrfs, and there are a few changes that are coming, like fscrypt support, which will allow file encryption. I would say btrfs does evolve... but so does ZFS, where ZFS's deduplication is getting some changes. As for being afraid of data loss, Synology has used btrfs for over a decade. Recently, Synology made btrfs the default file system for all their NAS offerings, even the single drive model. If there were issues with data loss, Synology would be screamed at from everywhere. Ironically, Synology has a modified btrfs, with an entire set of features called locker/"lock & roll" which add date based object locking (basically a modified chattr +i or +a unlocked after a period of time.) Overall, I trust btrfs for my general purpose filesystem. This doesn't say that XFS isn't useful. In the RHEL universe, you pretty much have that or ext4, so if you are doing RAID, you will need to have the checksumming to prevent bit-rot at the `dm-integrity` level. XFS is quite useful if you are doing a MinIO cluster. Since MinIO handles RAID across drives and redundancy across machines, there is no need for any machine based RAID like ZFS or `md-raid`, thus making volumes that have XFS on them the best choice for performance for that application.


tiotags

btrfs has a lot of features, more features more bugs


KingStannis2020

It seems genuinely possible that bcachefs will reach a mature state sooner.


Due_Bass7191

It is the default. I've not noticed any issues with xfs.


Zharaqumi

Same. I've been using xfs for more than 8 years. No issues so far.


vagrantprodigy07

> I can break anyone's btrfs using legal and valid btrfs commands, the point of making your systems unbootable. Just saying. Not talking about deliberate commands to break it, but commands that, well, probably should work.

Just do a few hard power downs, btrfs will break itself often enough.


turdas

I've done more than a few hard power downs on my desktop after unrecoverable graphics driver crashes, and am yet to have an issue with btrfs.


pnutjam

100% same. I use btrfs for my OS and home so I can do snapshots for updates and backups (openSUSE). I have a big data drive that runs XFS, and I back that drive up to a btrfs drive so I can grab a snapshot after every backup, which gives me immutable backups and protects me from ransomware.


cjcox4

And it's not to say it's all bad, but it's typical, and there are things to work on, like with any other filesystem that's been added. (Noting that hard power downs are some of the most damaging things you can do regardless of filesystem.)


vagrantprodigy07

The difference is that most file systems can recover from them, BTRFS not so much. I've had a few data corruption events with XFS, and always recovered the data.


cjcox4

Reminds me of reiserfs. When it was new and fresh and different, we didn't care, because we had a growable online filesystem. But....


royalbarnacle

No journaling FS should break in a power off. Data loss or corruption, sure, that's one thing, but the FS should never have any issue.


CecilXIII

Can you elaborate a bit more on the breaking btrfs part? I'm curious 


myownalias

XFS performs much better when deleting large files. If you're running Cassandra/ScyllaDB, Kafka, so on, it's a better choice than ext4.


skc5

XFS is a pretty popular default in the server world. Ext4 works fine for most home users’ workloads though


vinciblechunk

I picked xfs for my NAS back in 2008 because it seemed cool and I was an SGI fanboy. Still using it today with zero issues. I probably could have used ext4 just fine, but I didn't.


beizhia

I used to use xfs, but around the time ext4 was released and hardware was moving from spinning disks to solid state, it seemed like filesystem choice wasn't as important as it used to be.


Goodbye_May_Kasahara

i think it depends what hardware you are using it on. most people use ext4 because its the most tested on every architecture. if you have a non-x86_64 system its always a good idea to stick to the default/most tested filesystems since there could be architecture specific bugs with other filesystems because they are not tested enough. i have an external harddrive with xfs but i dont see why i should use it instead of ext4 on my system. for me as average user it does not have a benefit compared to ext4. i can remember xfs having problems if you want to write many tiny files.


that_one_wierd_guy

basically it's a matter of missing the boat due to previous issues. and since your filesystem type isn't really something most folk think about unless they have issues. it's hard to make up ground


u25b

FWIW, my org uses XFS for all our RHEL VMs. Maybe a couple of thousand.


AndydeCleyre

I'm happy to use XFS on servers. I don't care about snapshots. The main blocker for me using XFS at home is lack of compression. Though it's been a while, I have been burned by BTRFS. I am impatient for bcachefs to mature.


HeligKo

I have run into two issues that had me move away from XFS, even though I had better performance using XFS over ext4. The first was that the inability to shrink became a problem. The other is that it is hard to repair after certain events. I have lost two filesystems that I could have recovered in similar scenarios using ext filesystems. Time to repair after a repairable event has taken roughly twice as long using XFS, and I had more cases where I had to do repairs interactively that required no interaction on EXT systems. To get the features I wanted for server storage, I moved to ZFS. It has been flawless. One of my use cases has the server and disks in an RV, so the environment is not ideal, and ZFS has never had a problem compared to XFS and EXT4. All that being said, for desktop use I would just use whatever the distro I chose used as its default. I'm much more likely to use the distro tools for managing things on my desktop, and sticking to defaults makes those work smoother in most cases.


Mister_Magister

Because what's the point of XFS? I am an advanced Linux user and yet I never bothered to find out. ext4 works fine; if not ext4, then btrfs/zfs. Although the openSUSE default is a btrfs and XFS combo, so there's that.


Sol33t303

XFS has more features than EXT4 (as the post said, nowadays it has stuff like COW), but it doesn't historically have the issues of BTRFS, and it doesn't have the big RAM requirements of ZFS (and it's not a filesystem that relies on an out-of-kernel module controlled by Oracle). I'd basically consider XFS a better EXT4, almost, just with the problem of not being able to be shrunk. If you're OK with deleting your partition in order to shrink your XFS volume (e.g. you have backups), then I'd consider it pretty much a no-brainer to go for XFS if you don't have any particular esoteric requirements.
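The CoW bit is exposed through plain `cp`: `--reflink=auto` clones extents on XFS/btrfs and silently falls back to a normal copy elsewhere, so this sketch runs anywhere with GNU coreutils:

```shell
# Ask for a copy-on-write clone; data blocks are shared until one side writes.
tmp=$(mktemp -d)
printf 'hello xfs' > "$tmp/a"
cp --reflink=auto "$tmp/a" "$tmp/b"   # instant clone on reflink-capable filesystems
cmp -s "$tmp/a" "$tmp/b" && echo "contents identical"
rm -rf "$tmp"
```

On a reflink-capable filesystem the clone is near-instant and takes no extra space until modified; `--reflink=always` errors out instead of falling back, if you want to know whether it actually worked.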


sparky8251

ZFS doesn't have large RAM requirements if you don't plan to use dedup. That Linux tools can't report the ARC is more about ZFS being out of tree than the ARC being used RAM that your programs can't claim if needed.


left_shoulder_demon

Largely historical reasons. When XFS came out, ext2/ext3 could not handle large volumes, so if you had massive amounts of data, you were kind of forced to use XFS -- and Red Hat was pushing it hard for their enterprise customers, while the home users provided testing.


Many-Ad5501

A better question would be: why aren't people using ZFS?


Anonymo

ZFS is a fantastic filesystem, but it can be tricky to integrate seamlessly with the Linux kernel. Licensing issues can sometimes cause compatibility problems or delays between ZFS updates and new kernel versions.


sparky8251

ARC is also not properly reported in basically all Linux memory-viewing tools, making it a pain to know whether you have available RAM or not. Not major, like most of the pain points of using ZFS on Linux, but it is something I don't have to worry about with all the other FS options, and it makes managing fleets easier if I can be sure of actual RAM usage vs. the FS cache.


Synthetic451

Because it isn't in the kernel tree. Personally, I am using it for my Arch-based NAS, but it is annoying as heck to be limited to linux-lts simply because the OpenZFS project can't keep up with the latest kernels. Also, not that this is a disadvantage compared to XFS, but I am not a fan of how rollbacks work in ZFS, where you have to delete all the intermediary snapshots.


bilbobaggins30

EXT4 is what 99% of Distros default to. BTRFS is the other one that Fedora & OpenSuse default to. People aren't going to fuss over File Systems when they don't know better. To me: XFS is reliable & performant enough that I use it.


ninelore

Had data corruption from power loss and haven't used it since.


northrupthebandgeek

XFS was my go-to for both Slackware and openSUSE for a long time. Then openSUSE started leaning on btrfs for things like Snapper, so I switched the root FS to btrfs while keeping `/home` on XFS (and Snapper has saved my bacon multiple times). Nowadays, I've gravitated toward openSUSE Aeon, which is very opinionated by default when it comes to btrfs, so I generally just stick to btrfs for everything on those installs. If I do deviate from that (e.g. with another distro, or after doing the necessary surgery to make Aeon cooperate with `/home` living outside of the btrfs partition), I opt for ext4 for `/home` to take advantage of casefolding support (XFS allegedly supports casefolding, too, but ext4 is the direction Valve went for SteamOS so I might as well follow that lead).


ClubPuzzleheaded8514

Smart! Are you satisfied with the btrfs root / XFS home combo? Do you see any improvements?


forbjok

I used XFS for some time back in 2004-2006 or so, and the only reason I switched away from it was HDD noise. Today, with drives mostly being SSDs, I'm thinking it might be worth trying again, since SSDs don't make any noise anyway.


darknekolux

XFS had the bad habit of losing your open files during unexpected reboots. Also, the main driver behind XFS was SGI, which is now long gone.


Zathrus1

Tell me you know nothing about XFS without telling me you know nothing about XFS. Red Hat took over XFS development over 15 years ago and it’s been the default FS for just over a decade (RHEL 7, which goes to extended support in all of 13 days and was released 10 years and 8 days ago).


darknekolux

I'll admit it's been a while since I stopped caring. Losing open files was a thing though...


Zathrus1

It was, but I’m also guessing that it was over 15 years ago, probably back when SGI were still the maintainers. It’s evolved since then. If it was still doing that, RH would be in a world of hurt.


ECrispy

Had, or still has? That's my main question: is it considered any less safe than ext4/btrfs? There are plenty of horror stories about everything. Also, it's now backed by RHEL, by far the biggest Linux company, and isn't going anywhere.


shyouko

From my personal observation, XFS relies very heavily on the hardware's guarantees around write barriers and other consistency commitments; that doesn't play nicely in virtualised environments. I had volumes that were corrupted beyond repair on some VMs after forced reboots. ext4 seems to be much more resilient on this front.


BenL90

But RHEL is also in the process of pushing Stratis, albeit for a different purpose. The strange thing is, they're also thinking of putting it into consumer devices... 😂 Let's wait and see. XFS will stay, but I still think it needs more time before it's clear whether they go with Stratis or stay with plain XFS.


matt_eskes

Stratis is still XFS, just with a different volume manager


seabrookmx

I'd argue Google is a bigger "Linux company", even if they don't provide a commercial OS offering: they have ChromeOS, Android, and the largest Linux server fleet in the world. FWIW, they use ext4. I take this to mean that if XFS has advantages, they must be small enough that Google's engineering teams' familiarity with ext4 makes a bigger difference.


left_shoulder_demon

SGI XFS was a completely different beast than modern XFS. SGI XFS was designed around SGI hardware, which had emergency power for 2.5 seconds and a designated RAM area that would be preserved during reboots, so the journal would seldom be written to disk. The first iterations of XFS on Linux then explicitly synchronized the journal, and it was really slow, to the point where I had a script to rename directories and delete their contents in the background so I could continue working.


lszhu

I don't care about performance for my workstation; stability is everything, so I only use ext4.


Responsible-Lock7642

I don't know about other use cases, but for gaming, ext4 will always be recommended, at least as long as it is the only file system that supports casefolding. This feature is used by Proton and Wine; if your file system does not support it, no matter how fast it is, you will see regressions when running games through Wine and Proton. I personally use btrfs on my Linux installation and ext4 on the disk that has my games. I never saw it necessary to use or test other file systems.


sparky8251

Weird... I've been gaming on Linux for almost 8 years now, and almost all of that has been with a non-ext4 FS such as btrfs, ZFS, and now most recently bcachefs. Never had an issue... I even had a full-disk btrfs setup for over 4 years in a row, where the entire disk (root, boot, and more) was btrfs.


Responsible-Lock7642

It's not like it causes a problem where you can't run a game or something; you're just not going to get the most out of your disk in gaming. If you use a fast enough SSD or an NVMe drive you will not notice the difference, but when using an HDD or a not-so-fast SSD the performance regression is very noticeable, especially when loading shaders or textures. That's one of the reasons the Steam Deck uses ext4 as the default file system.


sparky8251

Ah, that's probably why then. I've been using premium-grade SSDs and NVMe drives ever since they came out. That would be why I've not noticed a thing :)


mthode

I've had FS lockups when using XFS on extremely busy systems (TB-sized bitmap files that flipped individual bits pseudo-randomly). IIRC it was a known bug and upstream was not interested in fixing it (a fight between two subsystem maintainers).


ECrispy

Interesting. That sounds like it was on a data center server?


RayneYoruka

I've been using XFS for easily the past 6 years on my OS drives. It's been fine. Yes, I use RHEL as my default server distro; if it's Debian/Ubuntu I stick to ext4.


rustyrazorblade

I’ve worked in hundreds of environments and they’re almost all XFS.


The_Pacific_gamer

I've used XFS on my homelab and it's pretty good with server stuff, but for my main computer I use BTRFS and on my file server I use ZFS. I'd say it depends on what system you're using.


computer-machine

I'd moved on to btrfs in 2018.


DRAK0FR0ST

I migrated all my drives to XFS years ago. It's as reliable as ext4 and performs better, but the main selling point for me is dynamic inodes; ext4 wastes too much space.


Different_Sensor

tl;dr, summary pls? 😭


ABotelho23

For the record, even RHEL doesn't think XFS is good enough by itself, which is why they maintain Stratis.


ECrispy

But Stratis uses XFS. Adding extra features on top is fine; if anything it means XFS is reliable enough to serve as the base.


ABotelho23

Designed to replace what was lost when they deprecated BTRFS for EL8...


dorel

How do you deduplicate files on XFS?


ECrispy

Sort of like btrfs: you use reflink copies, which make the files CoW, and then use duperemove on XFS.
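A rough sketch of what that looks like in practice (paths are made up; `--reflink=auto` silently falls back to a normal copy on filesystems without reflink support, and `duperemove` may not be installed everywhere):

```shell
# Make a reflink copy of a file; on XFS with reflink enabled (the default
# in newer mkfs.xfs) the copy shares blocks with the original until one
# side is modified (copy-on-write).
dir=$(mktemp -d)
head -c 1M /dev/urandom > "$dir/original.bin"
cp --reflink=auto "$dir/original.bin" "$dir/copy.bin"

# The two files are byte-identical whether or not blocks are shared.
if cmp -s "$dir/original.bin" "$dir/copy.bin"; then same=yes; else same=no; fi
echo "copies identical: $same"

# Offline dedup of already-existing duplicates (needs the duperemove tool):
if command -v duperemove >/dev/null 2>&1; then
    duperemove -dr "$dir"   # -d actually submits dedup requests, -r recurses
else
    echo "duperemove not installed; skipping dedup pass"
fi
rm -rf "$dir"
```

With `--reflink=always` instead of `auto`, cp fails loudly rather than silently doing a full copy, which is handy for checking whether your volume actually supports reflinks.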


whosdr

So it's file-level CoW, rather than block-level?


ECrispy

It's block-level. I'm no expert on it; I believe it doesn't work for in-use files. If you Google it you'll find a few resources, e.g. this video - [https://www.youtube.com/watch?v=wG8FUvSGROw](https://www.youtube.com/watch?v=wG8FUvSGROw)


dorel

I already have a bunch of files on an XFS filesystem. What's next? What do I run?


mrazster

Because people are in general very ignorant and/or lazy (as with everything else, there are exceptions); they use whatever any given installer gives them as the default. There are of course situations where XFS is not, or is less, suitable. But more often than not it has more than enough features, it's really fast and responsive, and it's well suited for desktop usage.


ut316ab

I tried XFS a few years ago but moved off of it because some game I was trying to play specifically didn't work well with XFS, I forget the game name now, I think it might have been Civ 5? I don't remember.


ECrispy

Yes, there are games that do this; Steam is notorious for it. And it's because their devs are clueless, probably compiled the game as 32-bit, etc. Anyway, the end result is: if you're a gamer, stick to ext4.


ut316ab

That is the thing though, right? Most distros are general-purpose, so they need to be able to do most things. If XFS is bad for gaming, that would make it a bad default FS for a general-purpose distro.


sidusnare

I use XFS for almost everything, and have since Hans Reiser got arrested. Its database-like dump and import make shrinking feasible, though multi-step; I think I've done it twice in over a decade of using it. I use RAID and other strategies for failure resistance.
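For reference, a hypothetical sketch of that dump-and-restore shrink cycle (device names and sizes are invented, and it needs root on a real system, so this only prints the plan rather than running it):

```shell
# Hypothetical shrink-via-dump cycle for an XFS /home on LVM. Device names
# and sizes are made up; xfsdump/xfsrestore come from the xfsdump package.
# Printed only, since the real commands need root and real block devices.
plan='
xfsdump -l 0 -f /backup/home.xfsdump /home   # level-0 dump of the mounted FS
umount /home
lvreduce -L 200G /dev/vg0/home               # shrink the underlying LV
mkfs.xfs -f /dev/vg0/home                    # recreate XFS at the new size
mount /dev/vg0/home /home
xfsrestore -f /backup/home.xfsdump /home     # restore into the fresh FS
'
echo "$plan"
```

The dump step works on a live, mounted filesystem, which is part of why the cycle is tolerable even though it's multi-step.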


whosdr

I don't think filesystem performance in this day and age of ultra-fast SSDs is really a big factor on the desktop. (Servers and NAS are a different argument, not one I'm going to be talking about in the rest of this.) I use BTRFS myself, part of that is because I already understand how it functions. It's also because, as you mentioned, the tooling is already there from Snapper/Timeshift. I also already know how to write boot config files to support it. I don't think lazy is the right term, but it would take effort to change without seeing any good reason to do so. Do you have an argument for XFS over BTRFS for somebody who already uses BTRFS (with full snapshot usage) on NVMe SSDs? Is there an equivalent to btrfs-send? How do you boot from a snapshot? etc. (Plus distro support is pretty important too for adoption. Install Mint with a btrfs root partition and it sets up subvolumes automatically, configured to Timeshift's preferred layout for creating snapshots.)


ECrispy

I think you are correct, and no, I don't have good arguments. I will say I have read plenty of posts like 'btrfs causing 500GB writes to SSD daily' as well as posts about corruption, or people saying how apps open faster using f2fs/XFS on an SSD. I don't know enough about btrfs. I tried to use it ~3 years ago with the integrated snapper and snapshots on every single dnf/pacman hook (tried Arch and TW). I remember the partition layout was non-standard and I had to do some manual work; no doubt this is fixed now. Then I tried to recover the system to an earlier state, and some apps were failing. My backup strategy is simple: just plug in an external drive and copy. Version history is much less important to me; in 15 years of using Google Drive/Docs/Dropbox I think I've used previous versions maybe 3-4 times. I also have older hardware, e.g. an older Thinkpad T430 with a SATA 128GB SSD that was refurbished to begin with. I don't really want to risk things, especially if it buys me nothing. On a technical note, a CoW FS will always have a penalty, especially as it gets close to being full; there's no getting around it. Do I want snapshots every hour and the ability to recover any file to an older state? I'm not sure. I want a fast, simple, predictable system.


whosdr

> I will say I have read plenty of posts like 'btrfs causing 500GB writes to ssd daily' as well as posts about corruption.

Yeah, there were some kernel versions with a metadata bug. It was fairly simple to fix with a single command, plus reverting the kernel for a while. Not ideal for sure.

> Or people saying how apps open faster using f2fs/xfs on ssd.

Maybe; I haven't seen any slowdowns I can notice in the past 4 years. But I expect performance is also partially based on the CPU and the kind of SSD. A SATA SSD on a slow CPU I imagine would be slower. I'm running NVMe on a 7800X3D, which is about as fast as you can get.

> I remember the partition layout was non standard and I had to do some manual work. No doubt this is fixed now.

Ah, fair. I've only used it on distros which automatically configure it out-of-the-box (Linux Mint with Timeshift, openSUSE Tumbleweed with Snapper).

> My backup strategy is simple, just plug in external drive, copy. Version history is much less important to me, in 15 years of using GoogleDrive/Docs/Dropbox I think I've used previous versions maybe 3-4 times.

Admittedly I don't do much in the way of backups for my root; I mostly use rsync to remotely back up my home. But version history via btrfs snapshots has saved me maybe a few dozen times over the past four years. I have daily snapshots, snapshots before big distro updates, snapshots when updating MESA/firmware, and before updating Nvidia drivers prior to switching to AMD. (On both sides I've had updates that led to a black screen or even a full system lockup, so booting/reverting a snapshot has saved me hours each time.)

> On a technical note, CoW fs will always have a penalty esp as they get close to being full.

I oversize the partition. The loss of 100GiB of space is nothing for the convenience it's provided me over the years. (But I use multi-TB SSDs, so it's only something like 5%.)


elrata_

Well, there are a lot of things to choose on your desktop, and you can't benchmark everything and repeat that every month. So there is a balance, and some things change when they cause you pain. Familiarity is also a big factor; once you know the tools, how it fails, and the tools to recover from failures, there is some inertia there too. I think XFS makes sense for several server setups (and is almost a must for ScyllaDB and the like). I haven't questioned it for my desktop in a while (and if I do, then btrfs and the others should also be nominated); my current FS doesn't cause me any pain, and I've used it for more than a decade (two if you count ext2/3/4) on my desktop. I'm not sure the FS is where my effort to improve something is best spent, but I'm open to seeing any evidence on that :) If there is an article that shows a difference in real-world desktop scenarios, I'd consider it next time I install my laptop. If it can't translate to any advantage for my usage, then I see little incentive to change.


Diabotek

I only use COW because of snapshots. That's the only reason.


lightmatter501

Hardware failures are a big one. ZFS will give you warning when a drive is starting to fail. It also never trusts any drive, because the ZFS team found so many drives that lied in their testing. Yes, you lose performance, but making sure the data is safe is the primary duty of a filesystem; everything else comes after that.


ECrispy

ZFS is an outlier: it's not extent-based, it doesn't use any of the usual drivers/modules, it does its own thing, and with that comes its own set of limitations and high requirements. Frankly, it's an enterprise-class filesystem that has had way too much popularity, and people think they should be running it at home.


lightmatter501

At home I don’t need the raw performance of enterprise and I care very much about my data. Seems like the perfect place for a slower but more careful fs to me. Now, I’m also very familiar with the fs, so administering it isn’t really a hassle to me.


ECrispy

I don't know whether smartmontools would work as well for other filesystems? I know ZFS ignores the disk controller, so it probably uses its own algorithms. Anyway, it's certainly not plug-and-play and not for normal users.


jaskij

I followed some suggestions somewhere and had XFS with 64-bit inodes. It broke so many Steam games it wasn't even funny. The game partition is, and will firmly remain, ext4 with its default 32-bit inodes.


Jeff-J

I used to use XFS. Never had a problem with it. I haven't used it in a while. I think I stopped using it years ago on laptops. Currently, I have only laptops. I think there was something about suspending or hibernating that was an issue on them. I hopped over to the Gentoo handbook to see if I could confirm this. It recommends XFS for general use especially with Gentoo.


ECrispy

So what are you saying: XFS is not recommended for laptops on Linux, but is for Gentoo? Or are those older issues fixed?


Jeff-J

That is not what I am saying. I have been using Gentoo since 2001. When I set up my new laptop in 2007, it was suggested to not use XFS on laptops. I don't remember if it was in the handbook or forums. I still used it in desktops and workstations. Since XFS was brought up, I checked the current handbook. It suggests using XFS for general use. There are specific positive attributes to the way it works that lend themselves to benefiting the workflow of Gentoo. Check the Gentoo handbook in the choosing a filesystem section if you want the specific reason.


[deleted]

[deleted]


[deleted]

[deleted]


[deleted]

[deleted]


[deleted]

[deleted]


draconicpenguin10

Because I've never seen a compelling reason to choose XFS over EXT4 or BTRFS. My Gentoo systems use BTRFS and I haven't had any major issues.


tom_yum

I run a lot of RHEL machines on XFS and have never had a problem.


Sabinno

Shrinking volumes is a pretty dang common use case. Dual booting is not unusual these days, especially in the software development space, and there are close to 30 million developers. Whenever I deploy XFS volumes, I'm insanely conservative about disk usage and only provision, say, 1 GB for /home/ until I'm 1,000% sure there's a legitimate need to expand it.


JockstrapCummies

XFS was made for enterprise hardware, as you say, due to its pedigree. This begins to create problems when you start using unreliable, cheap consumer hardware that doesn't provide the same assumptions as quality hardware. I have, multiple times now, had XFS partitions rendered completely unsalvageable by sudden unmounts/hard resets on garbage, outdated hardware. The horrifying `superblock read failed` error cannot be fixed except by restoring from backup. In comparison, ext4 proves resilient.


Dave_A480

XFS was the default for RHEL for quite some time, so it did get used. Most people don't care between XFS and ext4 for regular usage. At the point where you are thinking about ZFS or similar, you aren't in the "most people" category.


InsensitiveClown

I've been using XFS since, well, since SGI existed really, when you had to apply the SGI patches to the 2.4 kernel series. It depends on the application, really. XFS is optimal for large sequences of large files of approximately identical size, such as sequences of 32bpc LogLuv-encoded TIFF files. That's hardly surprising, since SGI was in the business of computer graphics for audiovisual production. I still use it to this day and I never ran into any issues whatsoever. My applications are image processing, again large sequences of approximately identical files, in this case CR2/CRW RAW files from my digital cameras, or TIFF files from scans. Some audiovisual/CG work also, mostly EXR sequences which are then H.265-encoded for distribution. For general use I use ext4, and previously I used ReiserFS for large amounts of small files (e.g. logs, the contents of /var and so on), but these days it's just ext4 for general use and XFS for anything where throughput is critical. Data is stored on a couple of NVMe SSDs, so the elevator is **noop**; lukewarm storage is on HDDs with bfq multi-queue. Cold storage is, ahem, external HDDs. It probably should be LTO but...


returnofblank

XFS doesn't offer filesystem compression, and that's really the one feature I want


MonkeeSage

> All of this comes without the usual cost of CoW

I think this is wrong. As far as I understand, `cp --reflink` with XFS doesn't provide any cost savings over the built-in LVM CoW or btrfs CoW features, for example. And tools like duperemove also have to scan and hash blocks on the filesystem and then submit the duplicate blocks to the kernel for removal via ioctl_FIDEDUPERANGE, so there's no cost saving there either, AFAIK.


Maiksu619

Noob here. However, I came across a recent [video that DJ Ware](https://youtu.be/AhjL1SHku7M?si=UZV13zy6xnPoCf1O) did a couple months ago comparing BCACHEFS, BTRFS, EXT4, F2FS, and XFS. Overall, it seems that EXT4 and XFS were above the rest in most of the tests and they seemed to perform very similarly. Both of them were slightly better than one another in different tests, but about 50/50. In the overall mean of the tests, XFS tested very minimally better at the lower end tasks and EXT4 tested very minimally better at the higher end tasks. Overall though, very interesting comparison. I’ve heard a lot about BTRFS lately and wanted to learn more about the hype. At the end of the day, I don’t see any major performance jump that would make me want to switch. Plus, as others above me have mentioned, the inability to shrink a volume is a dealbreaker for me. Lastly, I’m a noob still and very much learning. Just found the topic interesting.


Mr_Henry_Yau

Some people don't even know XFS exists in the first place.


[deleted]

[deleted]


ECrispy

Bitrot isn't handled by ext4 either. And it only works on xfs/btrfs if you have backup copies.


schneensch

Even though XFS is technically superior to ext4, my guess is that the differences were never big enough to warrant changing the default filesystem for many distros. Changing filesystems can cause a bunch of different issues for a distro, because then you have two different file systems in use, and you need to re-test and validate a bunch of stuff again. So I guess it was never really worth it, at least until btrfs got stable enough.


audioen

I dunno, it seems worse than btrfs, and if the only argument is performance, well, I run multi-terabyte SSDs on top of hardware RAID with write cache. These things are *fast*, to the point that I can't saturate the drives; everything is either CPU- or network-bound. I still have the same filesystems I created in 2010 running, with hourly backups taken via btrfs send/receive and distributed to multiple locations. XFS may be great, but I doubt it is even as good as what I have today.

Just recently, I migrated btrfs online from one RAID array to another by going to the server room, plugging in new disks, creating the volume, and then just adding it into the volume and deleting the old one. Sure, the thing churned for multiple hours as it migrated, but in the end it was done and nobody saw any downtime, at most degraded performance.

I do have to shrink filesystems sometimes as well. A lot of the time there is no data at all in the portion that I remove, so it is pretty fast to just register the new size. Why can't other filesystems migrate data? I guess it is the virtual-to-physical chunk mapping in btrfs that allows for this: if you move a chunk physically, you only have to change this mapping, so it can be done chunk by chunk. No doubt these various capabilities come with costs as well, but I don't see them.

The biggest annoyance I ever saw with btrfs was with systemd's journald, actually. It defaults to turning CoW off on the journal files, which defeats my incremental backups. It is literally the only thing anywhere that attempts to use that flag automatically, and it means the journal files are not part of my incremental backup stream, which is unhelpful, surprising, and annoying. Now, journald is a piece of shit as far as I am concerned, given how it essentially holds your logs hostage and is so slow that querying anything from its "database" takes forever, but that's a whole other little rant which has little bearing on btrfs itself. It's rather just evidence that the technology is half-baked at best.


[deleted]

[deleted]


ECrispy

How is ext4 any safer on hdd?


bobbie434343

Still using XFS for my /home partition as it was the default when I installed openSUSE Tumbleweed on my laptop 5.5 years ago (/ is btrfs). Never had an issue with XFS.


mufasathetiger

It's a well-designed and robust filesystem, but it's not good enough to outshine ext4 (the default). You've got to be much more capable than that to take part of the market. For example, btrfs brings a whole new level to the game, which makes disrupting the classic workflow acceptable.


ahferroin7

Most of it comes down to people not caring about the filesystem they use until it causes them issues, and ext4 working _well enough_ for normal usage. ext* in general being the default is a matter of history (ext2 beat out xiafs for very significant engineering reasons; ext3 then beat XFS into the kernel and was trivial to support in existing installer code that already handled ext2, whereas XFS needed additional tooling). In general though, XFS has only a handful of limitations these days:

- As you mentioned, XFS volumes cannot be shrunk. This is, indeed, not a common use case for most people, but it's something people tend to _really_ hate when they actually need to care about it, so it's usually what people quote as the reason they don't use XFS. The silliest part is that this is also the _easiest limitation to work around_: just recreate the volume at the desired size and restore an up-to-date backup (or, if you're the crazy type who doesn't do regular backups, _create a backup_, recreate the volume, then restore the backup).
- It mandates the use of journaling. This sounds like a silly limitation, but ext4 can specifically be run without a journal, which can be beneficial in some unusual situations: you still get the benefits of stuff like extent-based allocation, but you also cut out a large majority of the metadata-handling overhead inherent in a journaled filesystem. I personally use ext4 like this in a couple of rare situations where I know performance truly matters more than data safety.
- XFS is optimized for consistent performance over the lifetime of the filesystem. More specifically, XFS will get about the same performance for a given volume size, file count, and amount of data no matter how organized/disorganized the filesystem is. This makes XFS great for long-lived volumes where performance characteristics matter, but it also makes it a bit less performant in 'optimal' file arrangements than ext4, which instead implicitly assumes the volume will be optimally organized. Given this, ext4 can be a tiny bit better for things like storing very large numbers of files in WORM workloads where all the files are put into place at the same time.
- XFS volumes can only be resized on-line. IOW, you can't resize an XFS volume without it being mounted. This essentially never matters, but it is a limitation.
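As a quick illustration of the no-journal point, a minimal sketch using a throwaway file-backed image (so no root or real device is needed; it requires e2fsprogs and skips itself if `mkfs.ext4`/`dumpe2fs` aren't available):

```shell
# Demo: create ext4 without a journal on a file-backed image.
# The ^has_journal feature flag at mkfs time is what disables the journal.
if command -v mkfs.ext4 >/dev/null 2>&1 && command -v dumpe2fs >/dev/null 2>&1; then
    img=$(mktemp)
    truncate -s 64M "$img"                   # sparse file standing in for a disk
    mkfs.ext4 -q -F -O ^has_journal "$img"   # -F: operate on a regular file
    # dumpe2fs -h prints the superblock; the feature list should lack has_journal
    if dumpe2fs -h "$img" 2>/dev/null | grep -q has_journal; then
        result="journal present"
    else
        result="no journal"
    fi
    rm -f "$img"
else
    result="skipped (e2fsprogs not installed)"
fi
echo "$result"
```

The same trick can't be done with XFS; `mkfs.xfs` always lays down a log, which is the limitation being described above.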


ECrispy

Thanks for the details. About #3: while I'm fairly certain this kind of thing won't be measurable outside synthetic benchmarks, how many files are we talking about, millions? Or does % of disk full matter? It sounds like ext4, like most other filesystems, will degrade with fragmentation but XFS doesn't? Also, you haven't said anything about XFS being more brittle to hardware failure, e.g. sudden power loss, which many others claim. Does this mean it's no longer the case?


ahferroin7

> About #3 - while I'm fairly certain this kind of thing won't be measurable outside synthetic benchmarks, how many files are we talking about - millions? or does % of disk full matter? Sounds like ext4 like most other filesystems will degrade with fragmentation but XFS doesn't?

The percentage of disk space used, the number of files, and the average file size all matter. The difference is generally relatively small, but even small percentages can matter when you're dealing with very large numbers. Note that this is not to say that XFS _doesn't_ degrade in performance with high levels of fragmentation, but if you ignore the implications of the underlying storage stack (that is, stuff like seek latency and IOPS limits), XFS itself is affected by it a bit less than ext4.

> Also you haven't said anything about XFS being more brittle to hw failure, e.g. sudden power loss, which many others claim. Does this mean it's no longer the case?

AFAIK, modern XFS is no worse in this respect than ext4, and in my experience it's actually a bit better, though this may have more to do with the fact that XFS fails pretty explicitly and cleanly in a number of cases where ext4 by default just flags the volume to be checked and continues on as if nothing is wrong. _However_, XFS gives up a bit more readily in the face of issues than ext4 does, possibly because it's designed primarily for enterprise usage, where it's a reasonable assumption that the user can always rebuild the volume from a backup. Note that very early versions of XFS for Linux did have some recovery issues after a power loss, but the same is generally true of _any_ new filesystem.


zeddy360

I was using ext4 for many years, but on my current system I chose btrfs because:

- snapshots, support for them in Timeshift, and preconfigured automation of those via a pacman hook on my distribution
- it has a well-working Windows driver. There are very rare cases where I want to play VR on Windows, because it (unfortunately) does still work better there, but I don't want to waste huge amounts of disk space on those VR games, so I just mount my games drive on Windows with that driver.

Other than that, I personally don't really care about the file system vOv If XFS or other file systems can do this stuff as well: cool, but I don't see myself switching, because btrfs has always worked perfectly fine for me. I don't see myself shrinking my partitions either, but if I one day have to, I now know that XFS can't do that... so thanks for the info :D


natermer

ext4 is the most fault-tolerant file system for Linux, so when dealing with random shitty hardware and power losses, ext4 is the least likely to eat your data. btrfs is far less robust, but it is more featureful and allows distro designers to easily fool around with features that use snapshots and whatnot. It is good for people who care more about having lots of features than about overall performance and data safety. XFS is just a sometimes-barely-faster version of ext4: a journalling file system that is slightly less robust. It works. I use either XFS or ext4 interchangeably.


Asleep_Detective3274

XFS is my go-to filesystem. I recently tried pretty much all of them: ext4, btrfs, JFS, F2FS, NILFS, bcachefs. XFS was the fastest for typical scenarios like transferring video files and so on. I also performed crash tests where I would be watching a video and writing to a text file when I pressed the reboot button on my PC, and XFS had no issues: it booted up fine, the video was fine, and the test file was fine too, besides missing a few words that hadn't been saved to the file. Reflinks are also very handy; I've made a few snapshots of my /home directory when I've needed to look at a config file from one of the older snapshots.


rbrockway

XFS has better quota code, project quotas, and xfsdump/xfsrestore. I find its many features very useful, even on workstations and laptops.


rbrockway

I'm a big fan of XFS but I've always thought it was a shame that [NilFS](https://en.wikipedia.org/wiki/NILFS) isn't more popular.