Failed: A general system error occurred: vim.fault.FileNotFound

vMotion fails with vim.faultNotFound - A general system error occurred. It turns out you might want to have a look at the ports backing your dvSwitches!

vMotion is pretty awesome, am I right?  Ever since I first saw a VM migrate from one host to another without missing a beat I was pretty blown away – you always remember your first.  In my opinion it’s the vMotion feature that truly brought VMware to where they are today and laid the groundwork for all of the amazing features you see in the current release.  It’s something I’ve taken for granted as of late – which is why I was a little perplexed when, all of a sudden and for only a few VMs, it just stopped working…

[Screenshot: vMotion failing with “A general system error occurred: vim.faultNotFound”]

You can see above one of my VMs that just didn’t seem to want to budge!  Thankfully we get the very descriptive and helpful error message “A general system error occurred: vim.faultNotFound” – you know, because that really helps a lot!  With my Google-Fu turning up no results and my forum scouring coming up empty handed, I decided to take a step back to the VCP days and look at what the actual requirements of vMotion are – surely this VM is not meeting one of them!  So with that, a simplified version of the requirements for vMotion…

  • Proper vSphere licensing
  • Compatible CPUs
  • Shared Storage (for normal vMotion)
  • vMotion port groups on the hosts (minimum 1 GbE)
  • Sufficient Resources on target hosts
  • Same names for port groups

Licensing – check!  vCloud Suite

CPU Compatibility – check! Cluster of blades all identical

Shared Storage – check!  LUNs available on all hosts

vMotion interface – check!  Other VMs moved no problem

Sufficient Resources – check!  Lots of resources free!

Same names for port groups – check!  Using a distributed switch.

So, yeah, huh?

Since I’d already moved a couple dozen other VMs, and since this single VM was failing no matter what host I tried to move it to, I ruled out anything host-related and focused my attention on the single VM.  First I thought maybe the VM was tied to the host somehow, using local resources of some sort – but the VM had no local storage attached, no CD-ROMs mounted, nothing – it was the perfect candidate for vMotion, yet no matter what I tried I couldn’t get it to move!  I then turned my attention to networking – maybe there was an issue with the ports on the distributed switch, possibly having none available.

After a quick glance there were lots of ports available, but another abnormality reared its ugly head!  The VM was listed as being connected to the switch on the ‘VMs’ tab – however, on the ‘Ports’ tab it was nowhere to be found!  So what port was this VM connected to?  Well, let’s SSH directly to the host to figure this one out…

To figure this out we need to run the “esxcli network vm port list” command and pass it the VM’s world ID – to get that, we can simply execute the following:

esxcli network vm list

From there, we can grab the world ID of the VM in question and run the following:

esxcli network vm port list -w world_id

In my case, I came up with the following…

[Screenshot: esxcli output showing the VM connected to Port ID 317]

Port 317!  Sounds normal right?  Not in my case.  In fact, I knew for certain from my documentation that the ports on this port group only went up to 309!  So, I had a VM, connected to the port group, on a port that essentially didn’t exist!
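
If you’d rather not SSH to the host at all, the same dvPort key can be read from vCenter with PowerCLI.  This is a minimal sketch, assuming a DVS-backed adapter and using a hypothetical VM name:

# Minimal sketch (hypothetical VM name): read the dvPort key straight from
# the adapter's backing instead of going through esxcli on the host.
$nic = Get-NetworkAdapter -VM (Get-VM -Name "stuck-vm")
$nic.ExtensionData.Backing.Port | Select-Object PortgroupKey, PortKey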

How about a TL;DR version?

The problem stemmed from the VM being connected to an essentially non-existent port!  Since I couldn’t have any downtime for this VM, my fix was to simply create another port group on the dvSwitch, mimicking the settings of the first.  After attaching the VM to the newly built port group, then re-attaching it back to the existing one, I was finally attached to what I saw as a valid port, Port #271.
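
For the curious, here is roughly what that workaround looks like in PowerCLI.  A sketch only – the port group and VM names are placeholders, and I’m leaning on the -ReferencePortgroup parameter to mimic the original settings:

# Sketch with placeholder names: clone the port group, bounce the adapter
# through the copy, then move it back - no hard disconnect, so no downtime.
$pg   = Get-VDPortgroup -Name "dvPG-Production"
$temp = New-VDPortgroup -VDSwitch $pg.VDSwitch -Name "dvPG-Production-temp" -ReferencePortgroup $pg
$nic  = Get-NetworkAdapter -VM (Get-VM -Name "stuck-vm")
Set-NetworkAdapter -NetworkAdapter $nic -Portgroup $temp -Confirm:$false
Set-NetworkAdapter -NetworkAdapter $nic -Portgroup $pg -Confirm:$false
Remove-VDPortgroup -VDPortgroup $temp -Confirm:$false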

[Screenshot: the VM reconnected on valid Port ID 271]

After doing this, guess what finally started working again – that’s right, the wonderful and amazing vMotion.  I’m sure you could achieve the same result by simply disconnecting and reconnecting the adapter, however you would experience downtime with that method – so I went the duplicate port group route.

Where there’s one, there are many

All of this got me thinking – this can’t be the only VM that’s experiencing this issue, can it?  I started looking around trying to find some PowerCLI scripts I could piece together, and as it turns out, knowing what the specific problem is certainly helps with the Google-Fu: I found a blog by Jason Coleman dealing with this exact same issue!  Wish I could’ve found that earlier.  Anyways, Jason has a great PowerCLI script attached to his post that peels through and detects which VMs in your environment are experiencing this exact problem – he’s even automated the creation of the temporary port groups as well!  Good work Jason!  After running it my conclusions were correct – there were about a dozen VMs that needed fixing in my environment.
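
A rough sketch of that detection logic (not Jason’s actual script – cmdlet usage assumes the PowerCLI distributed switch module is available):

# Flag any VM NIC whose dvPort key doesn't exist in its distributed port group.
foreach ($vm in Get-VM) {
    foreach ($nic in Get-NetworkAdapter -VM $vm) {
        $backing = $nic.ExtensionData.Backing
        # Only DVS-backed adapters carry a dvPort key
        if ($backing -isnot [VMware.Vim.VirtualEthernetCardDistributedVirtualPortBackingInfo]) { continue }
        $pg = Get-VDPortgroup | Where-Object { $_.Key -eq $backing.Port.PortgroupKey }
        if ($pg -and $backing.Port.PortKey -notin (Get-VDPort -VDPortgroup $pg).Key) {
            "$($vm.Name) / $($nic.Name): port $($backing.Port.PortKey) missing from $($pg.Name)"
        }
    }
}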

How or why this occurred I have no idea – I’m just glad I found a way around it and, as always, thought I’d share with the intention of maybe helping others!  Also – it gave me a chance to throw in some Seinfeld action on the blog!  Thanks for reading!

6 Replies

  • forgot to mention:

    VMware Essentials 5.5 on 3 hosts. vCenter SA on one of them. About 12 guests total.

    One of the 3 hosts is down/missing, could that have something to do with it?

    Here is the article I forgot to link above.



  • Author Darren Schoen

    Brand Representative for VMware

    Hello!

    So you are upgrading your Windows install of vCenter? That is what I am assuming as I read vCenter SA to mean Stand Alone and not the appliance.

    Let me know and thanks!

    Darren



  • vCenter SA = vCenter Server Appliance



  • Author Jonathan Holmgren

    I’m having the same issue and have had a support ticket open with VMware since 2015-05-01. They are still trying to figure it out.



  • I just shut it down, installed the new vCenter in parallel, and fixed my Veeam settings/jobs. Worked out great.



  • Author Jonathan Holmgren

    I had the same issue with the vim.fault.FileNotFound error. Went round and round with support for a couple of months with no success. Was googling around and found this article. Changed my administrator@vsphere.local password to one that didn’t have a ! or ‘ character and it worked!

    https://kb.vmware.com/selfservice/microsites/search.do%3Flanguage%3Den_US%26cmd%3DdisplayKC%26extern…



Introduction

Since vSphere 5.1, VMware has offered an easy migration path for VMs running on hosts managed by a vCenter. Using Enhanced vMotion, available in the Web Client, VMs can be migrated between hosts even if they don’t have shared datastores. vSphere 6.0 introduced cross-vCenter vMotion (xVC-vMotion), which no longer requires the old and new hosts to be managed by the same vCenter.

But what if you don’t have a vCenter and you need to move VMs between standalone ESXi hosts? There are many tools that can do that. You can use V2V conversion in VMware Converter or the replication feature of the free version of Veeam Backup and Replication. But probably the easiest tool to use is OVF Tool.

Tool Overview

OVF Tool has been around since the Open Virtualization Format (OVF) was originally published in 2008. It’s constantly being updated, and the latest version, 4.2.0, supports vSphere up to version 6.5. The only downside of the tool is that it can export only powered-off VMs. This may cause problems for big VMs that take a long time to export, but for small VMs the tool is priceless.

Installation

OVF Tool is a CLI tool that is distributed as an MSI installer and can be downloaded from the VMware website. One important thing to remember is that when you’re migrating VMs, OVF Tool is in the data path, so make sure you install the tool as close to the workload as possible to guarantee the best possible throughput.

Usage Examples

After the tool is installed, open a Windows command prompt and change into the tool’s installation directory. Below are three examples of the most common use cases: export, import and migration.

Exporting VM as an OVF image:

> ovftool "vi://username:password@source_host/vm_name" "vm_name.ovf"

Importing VM from an OVF image:

> ovftool -ds="destination_datastore" "vm_name.ovf" "vi://username:password@destination_host"

Migrating VM between ESXi hosts:

> ovftool -ds="destination_datastore" "vi://username:password@source_host/vm_name" "vi://username:password@destination_host"

When you are migrating, the machine the tool is running on is still used as a proxy between the two hosts; the only difference is that you are not saving the OVF image to disk and don’t need disk space available on the proxy.

This is what it looks like in vSphere and HTML5 clients’ task lists:

Observations

When planning migrations using OVF Tool, throughput is an important consideration, because migration requires downtime.

OVF Tool is quite efficient in how it does export/import. Even for thick-provisioned disks it reads only the consumed portion of the .vmdk. On top of that, the generated OVF package is compressed.

Due to compression, OVF Tool is typically bound by the speed of the ESXi host’s CPU. In the screenshot below you can see how the export process takes 1 of 2 CPU cores (compression is single-threaded).

While testing on a 2-core Intel i5, I was getting a 25MB/s read rate from disk and an average export throughput of 15MB/s, which works out to roughly a 1.6:1 compression ratio.

For a VM with a 100GB disk that has 20GB of space consumed, the export will take 20*1024/25 = 819 seconds, or about 14 minutes, which is not bad if you ask me. On a Xeon CPU I’d expect throughput to be even higher.
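
If you want to plug in your own numbers, the back-of-the-envelope math is easy to script. The consumed size and read rate below are just the figures from my test:

# Export-time estimate: consumed space divided by measured read rate.
$consumedGB = 20    # consumed space on the disk, not provisioned size
$readMBps   = 25    # measured read rate from the datastore
$seconds    = $consumedGB * 1024 / $readMBps
"{0:N0} seconds (~{1:N0} minutes)" -f $seconds, ($seconds / 60)    # 819 seconds (~14 minutes)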

Caveats

There are a few issues that you can potentially run into that are well-known, but I think are still worth mentioning here.

Special characters in URIs (strings starting with vi://) must be escaped. Use % followed by the character’s hex code. You can find character hex codes here: http://www.techdictionary.com/ascii.html.

For example, use "vi://root:P%40ssword@10.0.1.10" instead of "vi://root:P@ssword@10.0.1.10", or you can get confusing errors similar to this:

Error: Could not lookup host: root
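
If you’re driving ovftool from PowerShell, you can also let .NET do the escaping instead of looking up hex codes by hand. A small sketch – the install path and credentials are placeholders:

# Percent-encode credentials before building the vi:// URI.
# [uri]::EscapeDataString turns "P@ssword" into "P%40ssword".
$user    = "root"
$pass    = "P@ssword"
$esxHost = "10.0.1.10"
$source  = "vi://{0}:{1}@{2}/vm_name" -f [uri]::EscapeDataString($user), [uri]::EscapeDataString($pass), $esxHost
& "C:\Program Files\VMware\VMware OVF Tool\ovftool.exe" $source "vm_name.ovf"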

Disconnect ISO images from VMs before migrating them or you will get the following error:

Error: A general system error occurred: vim.fault.FileNotFound
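
Detaching ISOs in bulk is a one-liner in PowerCLI, if you happen to have a vCenter or host connection handy (a hedged sketch):

# Detach any mounted ISO from every VM before migrating.
Get-VM | Get-CDDrive | Where-Object { $_.IsoPath } | Set-CDDrive -NoMedia -Confirm:$false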

Conclusion

OVF Tool requires downtime when exporting, importing or migrating VMs, which can be a deal-breaker for large-scale migrations. When downtime is not a concern, or for VMs small enough for the outage to be minimal, OVF Tool will be my migration tool of choice from now on.

Tags: CLI, ESX, ESXi, export, import, migration, OVF, OVF Tool, performance, speed, throughput, VM, vMotion, vmware, xVC-vMotion



Also seen in test 3-02-voting-app: https://ci.vcna.io/vmware/vic/11246

Persona log:

Jun  5 2017 21:27:36.878Z DEBUG [BEGIN] [github.com/vmware/vic/lib/apiservers/engine/backends.(*Volume).volumeCreate:227]
Jun  5 2017 21:27:36.878Z INFO  Finalized model for volume create request to portlayer: &models.VolumeRequest{Capacity:-1, Driver:"vsphere", DriverArgs:map[string]string{"flags":"rw", "container":"redis", "image":"redis:alpine"}, Metadata:map[string]string{"DockerMetaData":"{"Driver":"vsphere","DriverOpts":{"container":"redis","flags":"rw","image":"redis:alpine"},"Name":"cbaaddc3-4a35-11e7-9dd0-000c297828ef","Labels":null,"AttachHistory":["redis"],"Image":"redis:alpine"}"}, Name:"cbaaddc3-4a35-11e7-9dd0-000c297828ef", Store:"default"}
Jun  5 2017 21:27:36.937Z DEBUG [ END ] [github.com/vmware/vic/lib/apiservers/engine/backends.(*Volume).volumeCreate:227] [59.196079ms] 
Jun  5 2017 21:27:36.937Z DEBUG [ END ] [github.com/vmware/vic/lib/apiservers/engine/backends.(*ContainerProxy).AddVolumesToContainer:359] [59.259797ms] afe5f033616fdbd40ce5b2c8e85df84a
Jun  5 2017 21:27:36.937Z DEBUG [ END ] [github.com/vmware/vic/lib/apiservers/engine/backends.(*Container).containerCreate:619] [360.064312ms] Container.containerCreate
Jun  5 2017 21:27:36.937Z DEBUG [ END ] [github.com/vmware/vic/lib/apiservers/engine/backends.(*Container).ContainerCreate:557] [360.207392ms] 
Jun  5 2017 21:27:36.937Z ERROR Handler for POST /v1.25/containers/create returned error: Server error from portlayer: [POST /storage/volumes][500] createVolumeInternalServerError  &{Code:500 Message:ServerFaultCode: File [datastore1] test/volumes/cbaaddc3-4a35-11e7-9dd0-000c297828ef was not found}

PL log

Jun  5 2017 21:27:36.878Z DEBUG op=283.103 (delta:3.175µs): [NewOperation] op=283.103 (delta:1.336µs) [github.com/vmware/vic/lib/apiservers/portlayer/restapi/handlers.(*StorageHandlersImpl).CreateVolume:354]
Jun  5 2017 21:27:36.878Z INFO  Creating directory [datastore1] test/volumes/cbaaddc3-4a35-11e7-9dd0-000c297828ef
Jun  5 2017 21:27:36.918Z DEBUG vSphere Event Creating result-4175f89d7103 on host ns539601.ip-144-217-72.net in ha-datacenter for eventID(308309) ignored by the event collector
Jun  5 2017 21:27:36.918Z DEBUG vSphere Event Creating worker-94df235c186f on host ns539601.ip-144-217-72.net in ha-datacenter for eventID(308310) ignored by the event collector
Jun  5 2017 21:27:36.937Z DEBUG Creating [datastore1] test/volumes/cbaaddc3-4a35-11e7-9dd0-000c297828ef error: ServerFaultCode: File [datastore1] test/volumes/cbaaddc3-4a35-11e7-9dd0-000c297828ef was not found
Jun  5 2017 21:27:36.937Z ERROR storagehandler: VolumeCreate error: soap.soapFaultError{fault:(*soap.Fault)(0xc42085d7c0)}
Jun  5 2017 21:27:36.937Z DEBUG [ END ] [github.com/vmware/vic/lib/apiservers/portlayer/restapi/handlers.(*StorageHandlersImpl).CreateVolume:330] [58.841193ms] storage_handlers.CreateVolume

hostd.log

2017-06-05T21:27:37.430Z info hostd[2C3CDB70] [Originator@6876 sub=Hostsvc.DatastoreSystem] RefreshVdiskDatastores: Done refreshing datastores.
2017-06-05T21:27:37.430Z info hostd[2F180B70] [Originator@6876 sub=Nfcsvc opID=93aa8a63 user=droneci] Create requested for /vmfs/volumes/589f9716-34cbc097-9753-0cc47a9adeac/test/volumes/cbaaddc3-4a35-11e7-9dd0-000c297828ef
2017-06-05T21:27:37.437Z info hostd[2F180B70] [Originator@6876 sub=Default opID=93aa8a63 user=droneci] AdapterServer caught exception: vim.fault.FileNotFound
2017-06-05T21:27:37.437Z info hostd[2F180B70] [Originator@6876 sub=Vimsvc.TaskManager opID=93aa8a63 user=droneci] Task Completed : haTask--vim.FileManager.makeDirectory-6208666 Status error
2017-06-05T21:27:37.437Z info hostd[2F180B70] [Originator@6876 sub=Solo.Vmomi opID=93aa8a63 user=droneci] Activation [N5Vmomi10ActivationE:0x319e62a8] : Invoke done [makeDirectory] on [vim.FileManager:ha-nfc-file-manager]
2017-06-05T21:27:37.437Z verbose hostd[2F180B70] [Originator@6876 sub=Solo.Vmomi opID=93aa8a63 user=droneci] Arg name:
--> "[datastore1] test/volumes/cbaaddc3-4a35-11e7-9dd0-000c297828ef"
2017-06-05T21:27:37.437Z verbose hostd[2F180B70] [Originator@6876 sub=Solo.Vmomi opID=93aa8a63 user=droneci] Arg datacenter:
--> 'vim.Datacenter:ha-datacenter'
2017-06-05T21:27:37.437Z verbose hostd[2F180B70] [Originator@6876 sub=Solo.Vmomi opID=93aa8a63 user=droneci] Arg createParentDirectories:
--> false
2017-06-05T21:27:37.437Z info hostd[2F180B70] [Originator@6876 sub=Solo.Vmomi opID=93aa8a63 user=droneci] Throw vim.fault.FileNotFound
2017-06-05T21:27:37.437Z info hostd[2F180B70] [Originator@6876 sub=Solo.Vmomi opID=93aa8a63 user=droneci] Result:
--> (vim.fault.FileNotFound) {
-->    faultCause = (vmodl.MethodFault) null,
-->    file = "[datastore1] test/volumes/cbaaddc3-4a35-11e7-9dd0-000c297828ef",
-->    msg = ""
--> }
