How do I implement a file system driver in Linux? [on hold]
Assume that I have invented a new file system, and now I want to create a file system driver for it.
How would I implement this file system driver? Is this done using a kernel module?
And how does the file system driver access the hard disk? Should the file system driver contain its own code to access the hard disk, or does Linux contain a device driver for the hard disk that is used by all the file system drivers?
Tags: linux, filesystems, drivers
put on hold as too broad by Gilles, muru, Michael Homer, jimmij, msp9011 15 hours ago
Please edit the question to limit it to a specific problem with enough detail to identify an adequate answer. Avoid asking multiple distinct questions at once. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.
4 Answers
Yes, filesystems in Linux can be implemented as kernel modules. But there is also the FUSE (Filesystem in USErspace) interface, which can allow a regular user-space process to act as a filesystem driver. If you're prototyping a new filesystem, implementing it first using the FUSE interface could make the testing and development easier. Once you have the internals of the filesystem worked out in FUSE form, you might then start implementing a performance-optimized kernel module version of it.
Here's some basic information on implementing a filesystem within kernel space. It's rather old (from 1996!), but that should at least give you a basic idea for the kind of things you'll need to do.
If you choose to go the FUSE route, here's libfuse, the reference implementation of the userspace side of the FUSE interface.
Filesystem driver as a kernel module
Basically, the initialization function of your filesystem driver module just needs to call register_filesystem(), giving it as a parameter a structure that includes a function pointer to the function in your filesystem driver that will be used as the first step in identifying your filesystem type and mounting it. Nothing more happens at that stage.
When a filesystem is being mounted, and either the filesystem type is specified to match your driver, or filesystem type auto-detection is being performed, the kernel's Virtual FileSystem (VFS for short) layer will call that function. It basically says "Here's a pointer to a kernel-level representation of a standard Linux block device. Take a look at it, see if it's something you can handle, and then tell me what you can do with it."
At that point, your driver is supposed to read whatever it needs to verify it's the right driver for the filesystem, and then return a structure that includes pointers to further functions your driver can do with that particular filesystem. Or if the filesystem driver does not recognize the data on the disk, it is supposed to return an appropriate error result, and then VFS will either report a failure to userspace or - if filesystem type auto-detection is being performed - will ask another filesystem driver to try.
The other drivers in the kernel will provide the standard block device interface, so the filesystem driver won't have to implement hardware support. Basically, the filesystem driver can read and write disk blocks using standard kernel-level functions with the device pointer given to it.
The VFS layer expects the filesystem driver to make a number of standard functions available to the VFS layer; a few of these are mandatory in order for the VFS layer to do anything meaningful with the filesystem, others are optional and you can just return a NULL in place of a pointer to such an optional function.
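To make the registration step concrete, here is a minimal, hedged sketch of what such a module's skeleton can look like. It uses the classic file_system_type / mount_bdev() interface (newer kernels also offer the fs_context API); the "myfs" name and all myfs_* functions are placeholders, and the fill_super stub only marks where the real superblock validation would go. Details vary by kernel version.

```c
#include <linux/module.h>
#include <linux/fs.h>

/* Called by mount_bdev(): read and validate the on-disk superblock here,
 * set up sb->s_op and the root inode, or return an error so that VFS can
 * report failure (or try another driver during auto-detection). */
static int myfs_fill_super(struct super_block *sb, void *data, int silent)
{
	return -EINVAL;   /* placeholder: "this is not a valid myfs" */
}

static struct dentry *myfs_mount(struct file_system_type *fs_type, int flags,
				 const char *dev_name, void *data)
{
	/* mount_bdev() opens the block device named by dev_name and hands
	 * a kernel-level representation of it to myfs_fill_super(). */
	return mount_bdev(fs_type, flags, dev_name, data, myfs_fill_super);
}

static struct file_system_type myfs_type = {
	.owner    = THIS_MODULE,
	.name     = "myfs",
	.mount    = myfs_mount,
	.kill_sb  = kill_block_super,
	.fs_flags = FS_REQUIRES_DEV,
};

static int __init myfs_init(void)
{
	/* This is the "nothing more happens at that stage" step. */
	return register_filesystem(&myfs_type);
}

static void __exit myfs_exit(void)
{
	unregister_filesystem(&myfs_type);
}

module_init(myfs_init);
module_exit(myfs_exit);
MODULE_LICENSE("GPL");
```

Once such a module is loaded, the new filesystem type appears in /proc/filesystems, and a command like `mount -t myfs /dev/sdXN /mnt` is what triggers the VFS call into the driver's mount function described above.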
This is a pretty good answer, though to fully answer the question as stated you'd also need to say a bit about the functionality the block device layer provides for the file system layer to build upon.
– kasperd
yesterday
I sort of alluded to that with the "here's a pointer to a standard block device" bit, but good point; I expanded on that.
– telcoM
yesterday
This answer, specifically the description of what happens in what order, is divine. Is there some sort of book/website I could read that has descriptions like that for all of "how linux works"?
– Adam Barnes
yesterday
You might be interested in Linux Kernel Internals or Linux Device Drivers, 3rd Edition. And of course, there's the option of reading the actual source code.
– telcoM
yesterday
Yes, a kernel driver can manage a file system.
The best way to mock up and prototype a file system is to use FUSE. Afterwards you can think about transforming it into a kernel driver.
Wikipedia: https://en.wikipedia.org/wiki/Filesystem_in_Userspace
Source: https://github.com/libfuse/libfuse
A tutorial: https://developer.ibm.com/articles/l-fuse/
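To give a feel for how small a FUSE prototype can be, here is a hedged sketch of a read-only filesystem exposing a single file, written against the libfuse 3 high-level API; the hello_* names, the "hello" file and its contents are made up for illustration.

```c
/* Build (assuming the libfuse 3 development files are installed):
 *   gcc hellofs.c -o hellofs `pkg-config fuse3 --cflags --libs`
 * Run: ./hellofs -f /some/empty/dir   then: cat /some/empty/dir/hello
 */
#define FUSE_USE_VERSION 31
#include <fuse.h>
#include <string.h>
#include <errno.h>
#include <sys/stat.h>

static int hello_getattr(const char *path, struct stat *st,
                         struct fuse_file_info *fi)
{
    (void) fi;
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {        /* the mount point itself */
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
        return 0;
    }
    if (strcmp(path, "/hello") == 0) {   /* one read-only file */
        st->st_mode = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size = 6;
        return 0;
    }
    return -ENOENT;
}

static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                         off_t offset, struct fuse_file_info *fi,
                         enum fuse_readdir_flags flags)
{
    (void) offset; (void) fi; (void) flags;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    filler(buf, ".", NULL, 0, 0);
    filler(buf, "..", NULL, 0, 0);
    filler(buf, "hello", NULL, 0, 0);
    return 0;
}

static int hello_read(const char *path, char *buf, size_t size, off_t off,
                      struct fuse_file_info *fi)
{
    const char *data = "hello\n";
    (void) fi;
    if (strcmp(path, "/hello") != 0)
        return -ENOENT;
    if (off >= 6)
        return 0;
    if (off + size > 6)
        size = 6 - off;
    memcpy(buf, data + off, size);
    return size;
}

static const struct fuse_operations hello_ops = {
    .getattr = hello_getattr,
    .readdir = hello_readdir,
    .read    = hello_read,
};

int main(int argc, char *argv[])
{
    /* fuse_main() parses the mount point from argv and runs the event loop. */
    return fuse_main(argc, argv, &hello_ops, NULL);
}
```

A real prototype of a disk-based filesystem would implement more operations (open, write, mkdir, ...) and read its data from a backing file or device instead of a string constant, but the overall shape stays the same.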
Yes, this would typically be done using a kernel driver that can either be loaded as a kernel module or compiled into the kernel.
You can check out similar filesystem drivers and how they work in the fs/ directory of the kernel source tree.
These drivers generally use internal kernel functions to access storage devices as blocks of bytes, but you could also use block devices as exposed by the drivers under drivers/block and drivers/char.
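As a hedged illustration of that last point, here is roughly how a filesystem driver reads one block of its backing device through the kernel's buffer cache instead of touching the disk hardware itself; the myfs_* name and the block number are placeholders.

```c
#include <linux/buffer_head.h>
#include <linux/fs.h>

/* Sketch: read logical block 0 of the device backing this superblock. */
static int myfs_read_super_block(struct super_block *sb)
{
	struct buffer_head *bh;

	/* Ask the generic block layer for the block; the actual I/O is
	 * done by whatever device driver owns the underlying disk. */
	bh = sb_bread(sb, 0);
	if (!bh)
		return -EIO;

	/* bh->b_data now points at the block's bytes; a real driver would
	 * interpret them as its on-disk superblock layout, e.g.:
	 *   struct myfs_super *raw = (struct myfs_super *)bh->b_data;      */

	brelse(bh);   /* release the buffer once we are done with it */
	return 0;
}
```

Because sb_bread() goes through the generic block layer, the same filesystem code works unchanged on SATA disks, NVMe drives, loop devices, and so on.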
You can use FUSE to make a user-land file system, or write a kernel module.
It is easier to do with FUSE, as you have a choice of languages, and a bug won't crash the kernel (and therefore the whole system).
Kernel modules can be faster, but the first rule of optimisation is: don't do it until you have tested working code. The second is like it: don't do it until you have evidence that it is too slow. And the third: don't keep it unless you have evidence that it makes things faster/smaller.
And yes, the kernel already has drivers for the hardware; you don't re-implement them.
There are major downsides to FUSE other than performance: it's hard to use it for your root filesystem. (Maybe possible with an initrd, but the FUSE binary couldn't be freed after booting because it would still be executing from the ramdisk.)
– Peter Cordes
yesterday
@PeterCordes It couldn't be freed, but that doesn't mean it can't be unlinked. If there's still a reference to it, it'll be kept in memory regardless of whether or not you left the initramfs and deleted the underlying binary.
– forest
18 hours ago
@forest: right, therefore you can't unmount the initrd after pivot_root, because there are still busy inodes in the initramfs.
– Peter Cordes
17 hours ago
A normal /init started from an initramfs will (I think) execve /init after pivot_root, to transfer control to the real root FS's /init. But a FUSE binary couldn't replace itself with execve if access to the root FS depended on the FUSE process responding to the kernel. Well, maybe by priming the pagecache first, but that doesn't sound reliable.
– Peter Cordes
17 hours ago