
How to mount a VirtualBox VDI image

Don’t believe the hype! It is entirely possible to mount a VirtualBox VDI image, just like a loopback filesystem… all you need are the right tools and know-how. Onward!

Before we start, it should be noted that you don’t want to do this while your disk image is already in use. That is to say, if you’re running a virtualised host using this image, GTFO.

First, install the QEMU tools. In Ubuntu, you’ll find them in the qemu-kvm package (on newer releases, qemu-utils; on Fedora, qemu-img). Whatever package your distribution ships which contains the qemu-nbd binary should be fine.

Load the nbd kernel module. Yes, I’m serious, the network block device module! (Note: All of the following commands require superuser permissions, so escalate your privileges in whatever way makes you most comfortable.)

modprobe nbd max_part=16

(The max_part option sets how many partition subdevices each nbd device may have; on many systems the partitions won’t appear without it.)

Then run qemu-nbd, which is a user space loopback block device server for QEMU-supported disk images. Basically, it knows all about weird disk image formats, and presents them to the kernel via nbd, and ultimately to the rest of the system as if they were a normal disk.

qemu-nbd -c /dev/nbd0 <vdi-file>

That command will expose the entire image as a block device named /dev/nbd0, and the partitions within it as subdevices. For example, the first partition in the image will appear as /dev/nbd0p1.

Now you could, for instance, run cfdisk on the block device, but you will most likely want to mount an individual partition.

mount /dev/nbd0p1 /mnt

Gadzooks! Now you can muck around inside the filesystem to your heart’s content. Go ahead and copy stuff in or out, or if you’re feeling fruity, have some chrooty: chroot /mnt.

When you’re done, unmount the filesystem and shut down the qemu-nbd service.

umount /mnt
qemu-nbd -d /dev/nbd0
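
Putting it all together, the whole cycle can be sketched as below. This is a dry run: the run helper only echoes each command so the sequence can be previewed without root; swap the echo for real execution (as root) when you mean it. The image name disk.vdi and mountpoint /mnt are placeholders.

```shell
# Dry-run sketch of the full mount/unmount cycle described above.
# Each command is only echoed; run them for real as root, substituting
# your own image file and mountpoint.
run() { echo "+ $*"; }

run modprobe nbd max_part=16        # load nbd; max_part exposes partition subdevices
run qemu-nbd -c /dev/nbd0 disk.vdi  # present the VDI as /dev/nbd0 (partitions as nbd0p1, ...)
run mount /dev/nbd0p1 /mnt          # mount the first partition
run umount /mnt                     # when finished, unmount...
run qemu-nbd -d /dev/nbd0           # ...and disconnect the image
```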

Comments



  1. I also have to run partprobe (from the parted package) before I’m able to access the partitions (tested under Debian Squeeze RC). After that the partitions will be accessible through /dev/disk/by-id/*nbd*part[0-9]*.

    Sadly, the nbd module seems to be very unstable for this kind of operation. After copying ~1.5GB of data from a VDI file, the module dies and the copy process (cp and rsync both tested) produces I/O errors.

    • The versions of qemu and nbd worked well for me on Ubuntu 10.10.

      In fact, I was meant to be fetching packaging stuff off the disk image (because the new VirtualBox was wedging the machine) when I found myself doing the packaging and building while chrooted in it! Not a terrible stress test. ;-)

  2. Hi,
    Many thanks for the tip.

    I use Debian Squeeze. To get this to work there, you have to insert the nbd module with the max_part parameter set to an appropriate number of partitions:

    rmmod nbd
    modprobe nbd max_part=16

    Without this parameter you won’t get the partitions, and partprobe gives a weird error:
    “Error: Error informing the kernel about modifications to partition /dev/nbd0p1 — Invalid argument. This means Linux won’t know about any changes you made to /dev/nbd0p1 until you reboot — so you shouldn’t mount it or use it in any way before rebooting.”

    Cheers Ulli

  3. So, this is if you are running VirtualBox on an Ubuntu system; how would I go about it if I’m running VirtualBox on a WinXP machine?

  4. Sadly the nbd module is very unstable for this kind of operation. It only mostly works in Ubuntu 9.10, 10.04, 10.10 and now 11.04. Unfortunately, ‘mostly’ is not good enough for production work. Sooner or later the copy process (cp and rsync both tested) will produce I/O errors. This has caused me no end of headaches, and I have investigated many fixes and workarounds. The I/O error problem is easy to duplicate:

    root@somehost:~# qemu-nbd --connect=/dev/nbd0 ./gmos/tmp4UatFh.qcow2
    root@somehost:~# dd if=/dev/nbd0p1 of=/dev/null bs=1M
    dd: reading `/dev/nbd0p1': Input/output error
    1175+0 records in
    1175+0 records out
    1232076800 bytes (1.2 GB) copied, 16.2789 s, 75.7 MB/s

  5. I had to use the max_part=16 trick, but got it working on Linux Mint 9. Very cool info, thanks for sharing!

  6. Thank you for this. :)

    But it doesn’t work for me. :(

    qemu-nbd dies without doing anything; neither passing max_part=16 to the module nor running partprobe afterwards helps.

    If I run qemu-nbd with the -v (verbose) option, I always get an error: “Failed setting NBD block size”. I’ve googled that, but didn’t find anything understandable. Any suggestion?

  7. (Y)(Y)(Y)(Y)(Y)

    Thank you man,
    thank you many times
    there should be more people and blogs like yours;
    it makes Linux better than it already is

  8. In case you get
    qemu-nbd: Could not access '/dev/nbd0': No such file or directory

    Do this first:
    apt-get install nbd-server

  9. Nice way to access a VDI from the host.
    How about the other way around: the guest accessing a passive partition on the host?
    The host system has two partitions, Windows 7 and Ubuntu 11.10, with Windows hosting.
    A normal Ubuntu guest running in VBox needs access to the host’s Ubuntu partition.
    Any suggestions?

  10. Mine worked one time, but the next time it says:

    fdisk /dev/nbd0

    Unable to read /dev/nbd0

    Wtf? nbd0 is unreadable? why?

    I just tried with nbd1 and it worked, but again it freezes at the end when I try to -d it.
    Now I have two frozen qemu-nbd processes in my terminals. So, what must I wait for?
    I killalled them all (it hangs with a message).

    SOoo!!! I cannot use this method guys!!

    I need something to read my VDI image, and for now I cannot get anything to work.

    Thanks but no thanks… :S

  11. Sorry guys, my qemu-nbd uses 100% CPU when I give it the -d parameter. It does not disconnect anything, it just uses 100% CPU.

    What happened there? How do I update to the latest version? I’m using Ubuntu 10.10 and don’t want to upgrade to 11.

    I searched Google but there is no PPA with qemu backports for Ubuntu 10.10. I’m using a custom, manually built kernel (2.6.38-rc3).

    I see a possible solution: there are some bug reports about qemu-nbd hanging, related to a kernel 2.6.38 update. I guess what I’m using (rc3) has the bug. I will recompile, but with kernel 2.6.39.


  12. On my Slackware box (Q6600, kernel 3.2), partprobe -s /dev/nbd0 could not inform the kernel of the new device /dev/nbd0p1 (error informing the kernel about modifications), although fdisk /dev/nbd0 shows partition 1 correctly.

    • I just hadn’t read Ulli Hochholdinger’s message… SORRY

      In my Slackware 13.37, the max_part parameter is also mandatory!

  13. Works flawlessly on Mint 11 (Ubuntu 11.04) as well, when the corrections (max_part=16 on modprobe, and qemu-nbd -c as stated above) are applied. Thank you for a great tip.

  14. This is the only solution for dynamic virtual disks! You should emphasize this: all the other solutions use the loopback approach, which is not aware of the partition table or of dynamically-sized drives.

  15. Thanks a lot for the useful post. I tested with two VDI images, ext4 and ntfs, in both read and write mode. My VDI disks are dynamically-sized images.
    It works ^_^


  16. Woohoo!!! I searched Google all night to figure out how to mount a dynamic VDI image, and this worked like a charm. Most other methods required finding the data offset number, or didn’t work right with dynamic images (for example, after I finally figured out the correct offset number, I kept getting a “Failed to read last sector” error for an ntfs filesystem when trying to mount it with /dev/loop0).
    I did have to use the “max_part” option as listed in the above post.

    For clarity and to help others, the following worked for me on openSuse 11.4 – 64bit host, mounting an ntfs filesystem from a dynamic vdi created from a virtualbox windows7 guest:

    1) unload the network block device kernel module, in case it is loaded without the max_part option (if you have already linked any nbd devices to vdi files, remove them with “qemu-nbd -d /dev/nbd0” or similar)

    rmmod nbd

    2) reload nbd with max_part option

    modprobe nbd max_part=16

    3) link your nbd to your vdi file

    qemu-nbd -c /dev/nbd0 VDIFILE.vdi

    4) scan for partitions; this should create the new device files in /dev

    partprobe /dev/nbd0

    5) list your device files to make sure they were created

    ls -l /dev/nbd0*

    6) to view the sizes of any partitions found in the vdi, you can run fdisk and print the partition table

    fdisk /dev/nbd0

    7) finally, mount the desired partition (I used ‘-r’ for read-only, to be safe)

    mount -t ntfs -r /dev/nbd0p2 /MNTPOINT

  17. Hi,

    I tried your technique and it works for me. But I wanted to save myself the pain of having to go to the terminal and eventually create mountpoints for my partitions, so I was hoping ‘partprobe’ would allow me to handle the nbdXpY volumes through Nautilus…
    Should it work?
    When I run partprobe, I get this error:
    miguel@cdrsp-laptop-miguel:~$ sudo rmmod nbd
    miguel@cdrsp-laptop-miguel:~$ sudo modprobe nbd max_part=16
    miguel@cdrsp-laptop-miguel:~$ sudo qemu-nbd -c /dev/nbd0 '/home/miguel/VirtualBox VMs/Tests/NewHardDisk1.vdi'
    miguel@cdrsp-laptop-miguel:~$ sudo partprobe -s
    /dev/sda: msdos partitions 1 2 3
    Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
    Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
    Error: Can't have a partition outside the disk!
    /dev/nbd0: msdos partitions 1

    Any hint?

    • Firstly, I don’t understand why you’re using -s or specifying /dev/sda… but I’m not entirely sure that partprobe is going to do what you want anyway. :-)

  18. My Ubuntu didn’t want to create the /dev/nbd0p1 device, and partprobe failed with “Error: Error informing the kernel about modifications to partition /dev/nbd0p1 — Invalid argument. This means Linux won’t know about any changes you made to /dev/nbd0p1 until you reboot — so you shouldn’t mount it or use it in any way before rebooting.”

    So I used another way:
    $ fdisk -lu /dev/nbd0
    Device Boot Start End Blocks Id System
    /dev/nbd0p1 * 2048 16775167 8386560 83 Linux

    $ mount -o loop,offset=1048576 /dev/nbd0 /mnt # sector * size (2048*512)

    That was about it. Thanks!
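
    The offset trick above generalises: the byte offset for mount is just the partition’s start sector (as reported by fdisk -lu) multiplied by the sector size, 512 bytes here. A quick sketch, using the numbers from this comment:

```shell
# Compute the byte offset for `mount -o loop,offset=...` from a
# partition's start sector. 2048 and 512 match the fdisk output above.
start_sector=2048
sector_size=512
offset=$((start_sector * sector_size))
echo "offset=$offset"   # offset=1048576
```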

  19. Thanks a lot Jeff,

    Your excellent blog has allowed the recovery of a full VDI disk containing critical working documents of one colleague of mine.

    All the best from the Pyrenees,


  20. Thanks for a great tutorial. Just something to add for those that use LVM within their virtual machines: once you connect the virtual drive, you can issue a ‘pvscan’ to check for any LVM physical volumes; then you may need to ‘vgchange -ay vgname’ to activate any LVM volumes. Once that has been done, you can do a normal mount of the logical volume.
    The only potential problem may come if the volume group name on the virtual disk matches your host machine’s group name. I resolved this by mounting a live-cd with my virtual machine and renaming the group with ‘vgrename oldvgname newvgname’. You will also have to change the /etc/fstab and /boot/grub/grub.cfg to reflect the newvgname.
    Hope this helps someone! Great info!
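
    Sketched out, that LVM workflow might look like the following dry run; the volume group name vg_guest and logical volume root are placeholder assumptions, and the real commands all need root. The preview helper just echoes each command.

```shell
# Dry-run sketch of mounting an LVM logical volume from the connected
# image. vg_guest/root are placeholders; run the real commands as root.
preview() { echo "# would run: $*"; }

preview pvscan                          # look for LVM physical volumes on /dev/nbd0*
preview vgchange -ay vg_guest           # activate the guest's volume group
preview mount /dev/vg_guest/root /mnt   # then mount the logical volume as usual
```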

  21. Couldn’t get this working at first: the partitions didn’t show up as devices. Before I read all of the comments and saw the ‘partprobe’ method, I’d discovered the partition (-P) option to qemu-nbd, so I simply mapped the one partition that I wanted as nbd0. Thanks for the pointers!

  22. Thank you very much for this post – it took only a few minutes to resize the root partition of the virtual machine!
    First, I increased the VDI image size via
    $ VBoxManage modifyhd archlinux.vdi --resize 2500
    There’s no VirtualBox GUI for that.
    After that I used the nbd trick from this post to mount the VDI in the host.
    I only needed to install qemu-utils; the kernel module was already there.
    Then I could resize the partition via gparted GUI:
    $ sudo gparted /dev/nbd0

  23. I had some problems with the partition (“/dev/nbd0p1”) not showing up.

    using qemu-nbd -P 1 -c /dev/nbd0 <vdi-file>

    worked. Partition #1 was then mapped directly to the existing node /dev/nbd0.


  24. Damn, why didn’t I come up with this solution myself! It even mounts a Windows-7 partition.

    I was afraid for a moment that, after the untimely demise of vdfuse, I’d have to use libguestfs (which is less of a ‘lib’ and more of a ‘massive dependency takeover of your PC’).

    But I still have to figure out how to make it work with an LVM volume.

  25. OK, edge case here.
    It’s possibly a bad habit.
    I sometimes create virtual HDs with no partitions:
    the entire HD is the filesystem.
    So far I’ve not found a way to mount such a VDI.
    fdisk sees the disk once it’s connected as /dev/nbd0,
    though mount cannot detect a valid filesystem type.
    Any clues?

    Thanks for a well simplified page.


  26. > First, install the QEMU tools. In Ubuntu, you’ll find them in the qemu-kvm package. Whatever package your distribution ships which contains the qemu-nbd binary should be fine.

    For Fedora systems, the package is qemu-img.

    • On Ubuntu 12.04, I found it in qemu-utils. You get that if you install qemu-kvm. If you don’t need the other stuff, then the qemu-utils has a smaller footprint.

  27. This guide works like a charm except for one specific issue that I ran into.

    In my particular scenario, I want to mount a VDI disk used by my Win Server 2008 R2 guest to access some program files. After I mount the VDI and access the filesystem according to this guide, I can see *most* files on the disk. However, I cannot see the files that belong to some programs that I installed on the guest myself. I cannot even see the directories used by these programs, and yet they are not hidden or anything of that sort.

    Am I missing something obvious? I will be very grateful for your help.

  28. Followup: I had to use kpartx -a /dev/nbd0 before mounting partitions. kpartx makes the partitions visible via the /dev/mapper directory.


  29. It seems that to see the partitions in the image, one must load the nbd module with the max_part option, i.e., modprobe nbd max_part=16

  30. Hi,
    thanks a lot, this works perfectly for me even with a dynamically allocated VDI disk (thin provisioning).

    Just a note:
    on my Debian Squeeze x86 I had to install the qemu-utils package, which contains qemu-nbd; it is not in the qemu-kvm package.

    Thanks again

  31. I am an XP refugee running Mint 17 with Cinnamon. I’d been trying to find out how to get into .vdi files for ages, and this worked a dream for me. Thank you. To get qemu-nbd I had to install qemu-utils using the software manager.