I heard that the filesystem code lives in the fs directory. So I took a look. Hey! We can see ext4 in there! And xfs! And btrfs! If I understand right, the ZFS code is not present; it needs to be added via a kernel module.
I hope everyone gets the servers they want!
I hear from Google Gemini that the contents of the fs directory include:
The Virtual File System (VFS) layer -- abstracts common operations across the various file systems so that a unified interface can be presented to user applications,
Specific file system implementations -- btrfs, ext4, xfs, etc. -- each file system implementation contains a "file_operations" structure with pointers to the filesystem-specific functions that implement user-level system call operations such as read, write, open, and ioctl, and
Additional, related "helper" implementations -- e.g., read_write.c, which handles certain generalized parts of read/write operations such as buffering and page caching. For example, the functions in read_write.c might be used by multiple specific file system implementations.
I asked Google Gemini to organize the files within fs by listing the top-level functions together with the files that implement each of them. Below is Gemini's list. I commented in a few places where Gemini's list didn't seem in accordance with Linus' HEAD.
Key Subsystems and Representative Files (fs/ directory):
Virtual Filesystem (VFS) Core: (Provides the common interface and infrastructure for all filesystems)
fs/dcache.c: Directory entry cache (dentry).
fs/inode.c: Inode management (core inode operations).
fs/namei.c: Pathname lookup (resolving paths to inodes).
fs/file.c: File object management (representing open files).
fs/stat.c: File status information (stat() system call).
fs/open.c: File opening (open() system call).
fs/read_write.c: Core read/write operations and helpers (often used by specific filesystems). Not the primary entry point, but provides important utilities.
fs/super.c: Superblock management (representing mounted filesystems).
fs/vfs_kern.c: Miscellaneous VFS kernel functions. # No vfs_kern.c
fs/libfs.c: Helper functions for various filesystem operations.
fs/mount.c: Mounting and unmounting filesystems. # No mount.c. There is mount.h.

Specific Filesystem Implementations: (Provide the concrete implementation for each filesystem type)
fs/ext4/: (Entire directory) Ext4 filesystem.
fs/xfs/: (Entire directory) XFS filesystem.
fs/btrfs/: (Entire directory) Btrfs filesystem.
fs/fat/: (Entire directory) FAT filesystem.
fs/ntfs/: (Entire directory) NTFS filesystem support.

Buffer and Page Caching: (Manage caching of disk blocks and file data in memory)
fs/buffer.c: Buffer cache management (for disk blocks).
fs/pagemap.c: Page cache management (for file data). # No pagemap.c
fs/bio.c: Block I/O layer. Handles submission and completion of block I/O requests. # No bio.c

Quota Management: (Limit disk usage by users and groups)
fs/quota/: (Entire directory) Disk quota management.

File Locking: (Mechanisms for coordinating access to files)
fs/locks.c: File locking (e.g., flock, fcntl locks).

Virtual Filesystems: (Provide filesystem-like interfaces to kernel resources)
fs/proc/: (Entire directory) procfs (process information).
fs/sysfs/: (Entire directory) sysfs (kernel parameters and system information).
fs/debugfs.c: debugfs (kernel debugging).
fs/tmpfs/: (Entire directory) tmpfs (RAM-based filesystem). # No tmpfs subdirectory

Namespaces: (Isolate filesystems and other resources)
fs/nsfs.c: Namespace support (mount namespaces).

Other Utilities and Helpers: (Various supporting functions and modules)
fs/ioctl.c: ioctl system calls (device-specific operations).
fs/exec.c: Program execution (related to the execve() system call).
fs/fs-writeback.c: Writeback mechanism (flushing dirty data to disk).
fs/char_dev.c: Character device management (though some character device drivers might be elsewhere).
fs/fs_struct.c: Per-process filesystem information.

I'm curious why the fs directory doesn't have subdirectories organized according to function. For example, there could be fs/vfs-core, fs/implementations, fs/caching, etc.
Apparently BSD kernels also have a Virtual Filesystem Layer ("VFS"). How early in Unix did the VFS layer appear? Was a VFS common in pre-Unix operating systems?
It's easy to imagine that ZFS might have its own implementation of some of these helper systems, or even its own version of the VFS, and that such within-the-filesystem implementations might have contributed to the "difficulty" of including ZFS in the Linux kernel proper. I'm curious why ZFS wasn't included. Do we know?
Is anybody aware of a well-recognized, introductory Linux kernel or BSD source-code discussion that follows the "overview / top-down" style of analysis I am trying to use here?
Is anybody up on the mistakes Google Gemini apparently made? Could these be as simple as, "Oh, that's version x.xx?"
Thanks for any help on any of these questions! Thanks for additional observations and comments!
Does anybody else want to join the server? If yes, please check How to Apply in the OP. Thanks @Hosteroid!
I hope everyone gets the servers they want!
Hello!
I'd like a slice for experimenting with NixOS and hosting some open-source projects (Vaultwarden, Whoogle search, Openwebui). If you need any additional info about me, I'll be happy to provide it through PM.
My SSH public key is:
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHZ7KokkDS4XU9M15R3htHbt4ZJ9NQeYxVbKWinbE3n5
Thank you!
@Not_Oles said: I'm curious why ZFS wasn't included. Do we know?

Not being that familiar with the history of the relevant discussions, I don't know whether there are also other (technical, political, social, whatever) reasons, but the reason usually given is the legal uncertainty over compatibility between GPLv2 (the Linux license) and the CDDL (the ZFS license). Here's one summary, with links to other discussion (including opposing opinions):
https://sfconservancy.org/blog/2016/feb/25/zfs-and-linux/
Hi @quangthang!
Thanks for your request, which makes me happy! Of course, I definitely do remember you from previous servers, and so I am very delighted to have you joining hlcs!
I sent you login info. You are sharing root with me and with @cmeerw.
Please feel free to do whatever you want as long as it's White Hat. Please post here about what you are doing and about any questions you have.
Always best wishes!
Tom
Thanks to @Hosteroid for our very nice server!
I hope everyone gets the servers they want!
Hello everyone.
I just got around to doing some basic setup (set up a user account, changed to the fish shell).
BashVM is installed, though it seems it's not installed globally. I cloned the repo into my home dir and ran the install script again, but it mostly skipped steps because the required packages and configs are already installed:
...
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
dnsmasq is already installed.
Default storage pool is already active.
Default network is already active.
...
I'm gonna go ahead and see if I can make a NixOS VM.
VM created; now on to the installation. Not sure if it's on my end, but I'm getting constant VNC timeouts.
The installation process was successful, but it's stuck on this screen. I must have done something wrong. I think I'll grab the graphical ISO and use the graphical installer.
It's up!
One of the dependencies in my config keeps failing to build, which is weird because it builds just fine on other instances and on my local laptop. Anyway, I'm going to call it a day.
Isn't that just a timing thing, i.e. it took 6 ms too long?