Unprocessed orphan inode list in VirtualBox VM
Rise to the top 3% as a developer or hire one of them at Toptal: https://topt.al/25cXVn
--------------------------------------------------
Music by Eric Matyas
https://www.soundimage.org
Track title: The Builders
--
Chapters
00:00 Unprocessed Orphan Inode List In Virtualbox Vm
00:46 Accepted Answer Score 38
02:38 Answer 2 Score 0
03:54 Thank you
--
Full question
https://superuser.com/questions/947942/u...
--
Content licensed under CC BY-SA
https://meta.stackexchange.com/help/lice...
--
Tags
#linux #virtualbox #filesystems #vagrant #filesystemcorruption
#avk47
ACCEPTED ANSWER
Score 38
I managed to solve this problem this morning. Here are the steps I took, in case anyone else runs into it:
Download a bootable Linux .iso.
I am running Ubuntu 14.04 x64 in my VM, so I downloaded the 64-bit Ubuntu 14.04 installation .iso from the official Ubuntu site. It shouldn't really matter which release you download, as long as it supports your file system and you are familiar with it.
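If you prefer to fetch the image from the command line, here is a minimal sketch; the URL and filename are assumptions based on the final 14.04 point release, so substitute whichever image you actually want:
wget http://releases.ubuntu.com/14.04/ubuntu-14.04.6-desktop-amd64.iso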
Mount the .iso file in the VM as a virtual CD-ROM.
From the console, run virtualbox to open the VirtualBox configuration GUI. From there, go to Settings -> Storage -> Add CD/DVD Device -> Choose disk, and browse to the .iso file you just downloaded.
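If you would rather attach the image without the GUI, VBoxManage can do the same thing. This is a sketch only: it assumes the VM has an IDE controller named "IDE Controller", and the VM name below is a placeholder (Vagrant usually generates something like <folder>_default_<timestamp>; check VBoxManage list vms for yours):
VBoxManage list vms
VBoxManage storageattach "myproject_default_1234567890" --storagectl "IDE Controller" --port 1 --device 0 --type dvddrive --medium ~/Downloads/ubuntu-14.04.6-desktop-amd64.iso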
Boot from the .iso file.
Now start the boot process for your VM with the command vagrant up. During boot, you should be prompted to press a key to select a boot device (for me it was F12). Select the CD-ROM as the boot device, and the .iso you downloaded should boot. If you are using Ubuntu, select Try Ubuntu, then open a Terminal window.
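If the VM boots headless and you never see the boot-device prompt, a hedged alternative is to put the optical drive first in the boot order before running vagrant up (again, the VM name below is a placeholder):
VBoxManage modifyvm "myproject_default_1234567890" --boot1 dvd --boot2 disk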
Unmount and fsck the disk.
In the terminal, you may first have to unmount the virtual HDD. If your disk is /dev/sda1, use the following command:
sudo umount /dev/sda1
You can then run fsck on the disk:
sudo fsck /dev/sda1
After confirming the fixes, reboot the VM. All should be back to normal.
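If you are not sure which partition holds the affected filesystem, or you want fsck to apply its fixes without prompting for each one, something like the following sketch may help (the device name here is an assumption; check the lsblk output before running fsck):
lsblk -f                  # list partitions and the filesystems on them
sudo umount /dev/sda1     # only needed if the partition is currently mounted
sudo fsck -y /dev/sda1    # -y answers "yes" to every repair prompt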
ANSWER 2
Score 0
I was facing the same issue on an AWS EC2 machine. To complicate matters, the affected volume was the root volume of the EC2 instance, so the instance was failing to boot and SSH access to it was not possible.
The following steps helped me resolve the issue:
- Detach the volume from the EC2 instance.
- Configure a new EC2 instance from the same AMI, in the same AZ as the old one.
- Attach the volume (detached in Step 1) to the new instance.
- Execute the following commands:
# Switch to Root user:
sudo -i
# Identify the affected device with lsblk and save its name in a variable:
lsblk
rescuedev=/dev/xvdf1 # Use the right device name for the attached volume.
# Use /mnt as the mount point:
rescuemnt=/mnt
mkdir -p $rescuemnt
mount $rescuedev $rescuemnt
# Mount special file systems and change the root directory (chroot) to the newly mounted file system:
for i in proc sys dev run; do mount --bind /$i $rescuemnt/$i ; done
chroot $rescuemnt
# Download, install and execute EC2Rescue tool for Linux to fix the issues:
curl -O https://s3.amazonaws.com/ec2rescuelinux/ec2rl.tgz
tar -xf ec2rl.tgz
cd ec2rl-<version_number>
./ec2rl run
cat /var/tmp/ec2rl/*/Main.log | more
./ec2rl run --remediate
# Exit the chroot, then unmount the bind mounts and the volume:
exit
umount $rescuemnt/{proc,sys,dev,run,}
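# (Optional, not part of the original steps) A read-only fsck pass after
# unmounting can confirm the orphan inode list is gone; -n makes no changes:
fsck -n $rescuedev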
- Shut down the new (rescue) EC2 instance and detach the volume.
- Attach the volume back to the original instance as its root device, then start the original instance.
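The volume shuffling in the steps above can also be scripted with the AWS CLI. This is a hedged sketch only: every ID below is a placeholder, and you should check which root device name your AMI expects (often /dev/sda1 or /dev/xvda) before re-attaching.
# Step 1: detach the broken root volume from the original (stopped) instance
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
# Step 2: launch a rescue instance from the same AMI, in the same AZ
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t2.micro --placement AvailabilityZone=us-east-1a
# Step 3: attach the broken volume to the rescue instance as a secondary disk
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0fedcba9876543210 --device /dev/sdf
# Final steps: stop the rescue instance, detach the volume, re-attach it to
# the original instance as its root device, and start that instance
aws ec2 stop-instances --instance-ids i-0fedcba9876543210
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0aaaabbbbcccc1111 --device /dev/sda1
aws ec2 start-instances --instance-ids i-0aaaabbbbcccc1111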