I thought this would be easier than it turned out to be. Documenting my steps here in case I need to do this again someday.
I had a running Amazon EC2 instance and I wanted to copy it for possible migration to a different Amazon region.
I created a snapshot of the boot drive and copied it over to the desired destination region. Then I created an image from the snapshot and tried to launch that image with the default kernel option. The system started running but failed 1/2 status checks and was not reachable. I looked around and found some similar cases:
https://forums.aws.amazon.com/thread.jspa?messageID=452648
https://forums.aws.amazon.com/thread.jspa?messageID=292162
https://forums.aws.amazon.com/thread.jspa?messageID=388700
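For reference, the first steps (copy the snapshot across regions, then register an image from it) can also be sketched with the AWS CLI; I did everything through the console, and all of the IDs and regions below are placeholders:

```shell
# Copy the boot-drive snapshot from the source region to the destination region.
# (snap-xxxxxxxx, us-east-1 and eu-west-1 are placeholder values.)
aws ec2 copy-snapshot \
    --source-region us-east-1 \
    --source-snapshot-id snap-xxxxxxxx \
    --region eu-west-1 \
    --description "boot drive copy for region migration"

# Register an image from the copied snapshot in the destination region.
# register-image also accepts --kernel-id, which matters later in this post.
aws ec2 register-image \
    --region eu-west-1 \
    --name "migrated-lucid-server" \
    --architecture i386 \
    --root-device-name /dev/sda1 \
    --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=snap-yyyyyyyy}"
```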
Those forum posts clued me in to the kernel. I found which Amazon kernel image the source instance is running, but that kernel can’t be found in the destination region. At first I thought it might be obsolete or region-specific, but then I figured out how to find the corresponding kernel in the new region.
Look at the source instance in the source region, and find the AMI ID in the instance description. In my case it is
ubuntu-lucid-10.04-i386-server-20101020 (ami-8c0c5cc9)
Switch to the destination region and view the AMIs, switch the filter to public AMIs, and search for the description (ubuntu-lucid-10.04-i386-server-20101020).
This brings up two results; I’m looking for the ‘ebs’ one.
Click on it and look for the Kernel ID in the Details. In my case it is aki-6603f70f
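This console lookup can also be sketched with the AWS CLI, assuming the destination region is eu-west-1 (a placeholder); 099720109477 is Canonical’s AWS account ID, which narrows the search to official Ubuntu images:

```shell
# Search public Ubuntu AMIs in the destination region by name and show
# the kernel each one uses; I want the EBS-backed result's KernelId.
aws ec2 describe-images \
    --region eu-west-1 \
    --owners 099720109477 \
    --filters "Name=name,Values=*ubuntu-lucid-10.04-i386-server-20101020*" \
              "Name=root-device-type,Values=ebs" \
    --query 'Images[].[Name,ImageId,KernelId]' \
    --output table
```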
Now I can launch my snapshot-sourced AMI and choose this kernel ID under Advanced. The server comes up and passes 2/2 checks, but I still can’t connect: I get “Connection refused” even though the same security group lets me connect to a neighboring instance.
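The equivalent launch from the CLI, selecting the kernel explicitly (the AMI ID is the image registered from my copied snapshot; instance type, key, and security group names are placeholders):

```shell
# Launch the image registered from the copied snapshot, explicitly
# choosing the destination region's matching kernel.
aws ec2 run-instances \
    --region eu-west-1 \
    --image-id ami-xxxxxxxx \
    --kernel-id aki-6603f70f \
    --instance-type t1.micro \
    --key-name my-keypair \
    --security-groups my-security-group
```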
This explained it for me: http://stackoverflow.com/questions/14026148/running-ec2-instance-suddenly-refuses-ssh-connection
I had to temporarily attach the volume to a different running instance in the same region and availability zone so that I could edit /etc/fstab and remove the defunct entries. After that, I re-attached the volume to the new instance as /dev/sda1; it booted with 2/2 checks passing, and I was able to connect over SSH.
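The fstab edit itself was just commenting out the entries that no longer apply (the attaching and detaching can be done in the console or with `aws ec2 attach-volume` / `aws ec2 detach-volume`). A sketch on a scratch copy of the file; the stale /mnt ephemeral-storage line is an assumed example, and on the rescue instance the real file lives under wherever you mounted the attached volume:

```shell
# Scratch copy standing in for the attached volume's /etc/fstab.
cat > /tmp/fstab.demo <<'EOF'
LABEL=uec-rootfs   /      ext4   defaults                       0 0
/dev/sdb           /mnt   auto   defaults,comment=cloudconfig   0 2
EOF

# Comment out the defunct entry (the exact pattern depends on which
# entries are broken in your fstab).
sed -i.bak '/\/mnt/s/^/#/' /tmp/fstab.demo

cat /tmp/fstab.demo
```

The `-i.bak` form works with both GNU and BSD sed and leaves a backup next to the edited file, which is reassuring when the file in question is a boot volume’s fstab.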