VMware to Proxmox: An Honest and Comprehensive Guide to Migrating to Proxmox VE in 2025 (Step-by-Step, No Fluff)

For many organizations, the shift toward Proxmox VE is no longer just a trend: it is a project already under way. By 2025, the combination of cost, operational simplicity, and technical maturity is pushing IT teams down the VMware → Proxmox VE path with clear objectives: reduce complexity and TCO, maintain (or improve) resilience, and minimize downtime.

This technical report provides a structured overview of what you really need to know and do: the core architecture of Proxmox, storage and networking options, real migration methods (automatic and manual), pre-migration checklists, optimized VM configurations, HA, backups, and the fine points that can break a project if not anticipated.


Proxmox VE in 5 Key Ideas (the essentials)

  • Platform: based on Debian GNU/Linux, Linux kernel, QEMU/KVM for VMs, and LXC for containers. Management via web GUI, CLI, and REST API.
  • Cluster: multi-master with quorum (Corosync). Recommended ≥ 3 nodes; for a 2-node cluster, add QDevice.
  • Storage: native plugins (Ceph RBD, ZFS, LVM/thin, directory, NFS/SMB, iSCSI/FC). Content (disks, ISOs, backups) is declared at the Datacenter level.
  • Proxmox configuration: dedicated files under /etc/pve, backed by the pmxcfs cluster filesystem and replicated to all nodes (see the quick checks after this list).
  • License: FLOSS software (AGPLv3); subscriptions provide access to the Enterprise repository and technical support, but all features are available at no extra cost.
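A quick way to verify the cluster basics mentioned above from any node (standard Proxmox CLI; the paths shown are the real pmxcfs mount points, everything else is generic):

# Cluster membership, votes, and quorum state (Corosync)
pvecm status

# pmxcfs mounts the replicated configuration under /etc/pve on every node
ls /etc/pve/qemu-server      # VM configs of the local node
cat /etc/pve/storage.cfg     # storage definitions shared cluster-wide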

Networking and Storage: Decisions Impacting Migration

Networking

  • vmbrX = Linux bridges (virtual switches on the host).
  • VLAN: on guest NICs or at any layer (e.g., bond0.20 as bridge-port).
  • Bonds for LAG/LACP (see the example interfaces config after this list).
  • Corosync: dedicated and redundant link (low latency with no congestion).
  • SDN: VLAN zones, VXLAN, etc., if the design requires.
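As a sketch of how these pieces fit together (interface names, VLAN IDs, and addresses below are illustrative, not taken from any real setup), a typical /etc/network/interfaces fragment with an LACP bond feeding a VLAN-aware bridge:

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad            # LACP; must match the switch side
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.168.10.11/24
    gateway 192.168.10.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes        # lets guest NICs tag their own VLANs

# apply without reboot (ifupdown2): ifreload -a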

Storage (practical summary)

  • Ceph RBD (the recommended shared storage): first choice for HA and live migration.
  • NFS/SMB: simple and flexible; qcow2 provides VM snapshots (not for containers); see the CLI example after this list.
  • ZFS local: excellent in small clusters + ZFS replication (asynchronous; RPO > 0).
  • SAN (FC/iSCSI):
    • Shared thick LVM: simple; no native snapshots.
    • Proxmox VE 9.0 (August 2025) introduces “Snapshots as Volume-Chain” (technology preview), enabling qcow2-based snapshots on thick LVM (except for TPM state).
    • One LUN per disk: snapshots at the array level, but operationally very costly (avoid at scale).
    • ZFS over iSCSI: viable with compatible arrays and supported tooling.
  • Multipath: mandatory if redundant paths to the SAN exist.
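As an illustration for the NFS case above (storage ID, server, and export path are placeholders), shared storage can be registered once at the Datacenter level from the CLI:

# Register an NFS share for VM disk images and ISOs, visible cluster-wide
pvesm add nfs nfs-vmstore --server nas01.example.lan --export /export/pve --content images,iso
pvesm status                 # confirm the storage is active on every node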

Pre-flight Checklist — Don’t Migrate Without It

  1. Version: Proxmox VE 8 or higher, updated.
  2. Inventory: BIOS/UEFI, disks and controllers, vTPM (if encrypting), IPs and DHCP reservations, vSAN (if applicable), snapshot chain.
  3. Guest tools & drivers: uninstall VMware Tools; prepare VirtIO (Windows) and confirm virtio-scsi in initramfs (Linux).
  4. Security: if full-disk encryption with keys in vTPM, disable vTPM and keep the keys (vTPM does not migrate).
  5. Networking/Corosync: dedicated links, VLANs, bonds; plan for host vmbr bridges.
  6. Storage: define target per VM (Ceph/NFS/ZFS/SAN).
  7. Backups: choose Proxmox Backup Server (dedup + incremental + live-restore) or vzdump.
  8. Downtime and rollback plan: test with 1-2 VMs, maintenance window, documented rollback procedures.
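For step 3 on Debian/Ubuntu-style Linux guests (other distributions use dracut instead of initramfs-tools), a minimal pre-cutover check that the VirtIO drivers will be available at boot:

# Are the virtio drivers already in the current initramfs?
lsinitramfs /boot/initrd.img-$(uname -r) | grep -i virtio

# If not, add them and rebuild (initramfs-tools)
printf 'virtio_pci\nvirtio_blk\nvirtio_scsi\nvirtio_net\n' >> /etc/initramfs-tools/modules
update-initramfs -u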

Best practices for VMs in Proxmox VE (avoid surprises)

  • CPU
    • Same CPU model across all nodes → use type host.
    • Mixed CPUs or future expansion → a generic x86-64-v* model such as x86-64-v2-AES (preserves live migration); see the qm set example after this list.
  • NIC
    • VirtIO by default (minimal overhead). e1000/rtl8139 only for legacy OS without VirtIO.
  • Memory
    • Ballooning Device enabled (useful telemetry even without ballooning).
  • Disks
    • SCSI bus + virtio-scsi-single; discard/trim on thin; IO thread for I/O-intensive workloads.
  • Agent
    • Install qemu-guest-agent for clean shutdowns, IP detection, and hooks.
  • Startup
    • SeaBIOS (legacy BIOS) or OVMF (UEFI), matching the source VM. If a UEFI guest does not boot, add an EFI Disk and recreate the boot entry in the OVMF firmware menu.
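Tying the list above together, a hedged qm set example (VMID 120, bridge vmbr0, VLAN 20, and the volume name are placeholders; the same settings are available in the GUI):

# CPU type, SCSI controller, and guest agent
qm set 120 --cpu x86-64-v2-AES --scsihw virtio-scsi-single --agent enabled=1
# VirtIO NIC on bridge vmbr0, tagged into VLAN 20
qm set 120 --net0 virtio,bridge=vmbr0,tag=20
# Re-attach the existing disk with discard/TRIM and a dedicated IO thread
qm set 120 --scsi0 local-zfs:vm-120-disk-0,discard=on,iothread=1,ssd=1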

Migration methods (from minimal to more involved)

1) Automatic import from ESXi (fast and supported)

Proxmox includes an ESXi importer (GUI/API):

  1. Datacenter → Storage → Add → ESXi. Use credentials for the ESXi host (vCenter also works, but is slower); a CLI equivalent follows these steps.
  2. View available VMs in the import panel.
  3. Select destination storage, network bridge, and adjust hardware settings (NIC model, ISO, disk storage, etc.).
  4. Shut down the VM on ESXi for consistency.
  5. Start import in Proxmox and boot; install VirtIO/QEMU agent; verify MAC/DHCP/IP.
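The CLI equivalent of step 1 uses the esxi storage plugin (Proxmox VE 8.2+); treat the exact option names below as an assumption and confirm them with pvesm help add before scripting anything:

# Register an ESXi host as an import source (ID, hostname, and credentials are examples)
pvesm add esxi esxi-src --server esxi01.example.lan --username root --password '<password>' --skip-cert-verification 1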

Watch out for:

  • vSAN (disks on vSAN do not import; move to another datastore first).
  • Encrypted disks by policy (remove encryption).
  • Datastores with special characters (e.g., ‘+’) → rename.
  • Large snapshots at source → slow import.

Live import: reduces downtime by starting the VM while blocks are still being imported. Initially, I/O is slower; if the process fails, any data written since the live import started is lost → test in a lab first.

Bulk import: limit concurrency (general rule: ≤ 4 disks at once). The ESXi API rate-limits requests; the esxi-folder-fuse service helps, but do not run dozens of imports simultaneously. Monitor host RAM (each disk uses a read-ahead cache).


2) Export OVF/OVA + qm importovf (portable and reliable)

When the importer is not suitable:

# 1) Export with ovftool (ESXi)
./ovftool vi://root@ESXI-FQDN/VM-NAME /exports/VM-NAME/

# 1b) or from vCenter
./ovftool vi://user:pass@VCENTER/DC/vm/VM-NAME /exports/VM-NAME/

# 2) Import in Proxmox
qm importovf <VMID> VM-NAME.ovf <target-storage>

# 3) Recommended adjustments
qm set <VMID> --cpu x86-64-v2-AES --scsihw virtio-scsi-single

For Windows guests, boot temporarily with IDE/SATA disks, install the VirtIO drivers, then switch to VirtIO SCSI.
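A hedged CLI version of that sequence (VMID, storage, and volume name are placeholders; shut the VM down before swapping the bus):

# First boot: attach the imported disk on SATA so Windows starts without VirtIO
qm set <VMID> --sata0 <storage>:vm-<VMID>-disk-0 --boot order=sata0
# After installing the VirtIO drivers inside Windows, move the disk to SCSI
qm set <VMID> --delete sata0
qm set <VMID> --scsi0 <storage>:vm-<VMID>-disk-0 --boot order=scsi0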


3) qm disk import (direct import/conversion from VMDK)

If Proxmox can see the *.vmdk (e.g., on an NFS/SMB share):

# Create target VM (no default disk)
# Import and convert on-the-fly
qm disk import <VMID> Server.vmdk <target-storage> --format qcow2
# It appears as an "Unused" disk; attach it to SCSI/VirtIO and set the boot order (full example below)
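A fuller sketch of that flow (VMID 120, the VMDK path, and the nfs-vmstore storage are placeholders; on block storage such as Ceph/ZFS, drop --format and let Proxmox use raw):

# Create the target VM without a disk
qm create 120 --name server01 --memory 8192 --cores 4 --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-single --ostype l26
# Import and convert the VMDK (descriptor + flat file must both be readable)
qm disk import 120 /mnt/esxi-datastore/Server/Server.vmdk nfs-vmstore --format qcow2
# Attach the imported volume and make it bootable (check qm config 120 for the exact unused volume name)
qm set 120 --scsi0 nfs-vmstore:120/vm-120-disk-0.qcow2 --boot order=scsi0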

4) Attach and Migrate (almost zero downtime)

If ESXi and Proxmox share a datastore:

  1. Add the share as storage (type Disk Image) in Proxmox.
  2. Create the VM in Proxmox with OS disk on that storage and vmdk format (Proxmox creates the descriptor).
  3. Replace the descriptor with the original VMDK and point the Extent to the *-flat.vmdk of the source (relative path).
  4. Power on the VM in Proxmox (starts from the source flat).
  5. Hot-migrate the disk to the destination storage (Disk Action → Move Storage; CLI example after these steps).
  6. Uninstall VMware Tools, install QEMU Agent/VirtIO, and switch to VirtIO SCSI.
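The CLI counterpart of step 5 (VMID and target storage are placeholders; drop --format qcow2 when the destination is block-based storage such as Ceph RBD, ZFS, or LVM):

# Move the running VM's disk off the shared VMware datastore to its final storage
qm disk move <VMID> scsi0 <target-storage> --format qcow2 --delete 1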

High Availability (HA) Without Surprises

  • Corosync: low-latency, dedicated link; if it gets saturated (e.g., by backup or storage traffic), timeouts increase and nodes can self-fence.
  • Fencing: if a node loses quorum, it auto-reboots to ensure consistency before recovering VMs on another node.
  • Disks: for effective failover, VMs should reside on shared storage (or use asynchronous ZFS replication, with its small RPO); a CLI example for adding an HA resource follows this list.
  • No COLO: as of 2024/25, there is no lockstep execution of two VMs (the QEMU COLO feature is still under upstream development).
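Once a VM's disks live on shared storage, putting it under HA management is a one-liner (VMID 120 is a placeholder; groups/affinity are configured separately under Datacenter → HA):

# Manage VM 120 as an HA resource and keep it started
ha-manager add vm:120 --state started --max_restart 1 --max_relocate 1
ha-manager status            # verify resource and node states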

Backup and Recovery: Integrate Proxmox Backup Server

  • Proxmox Backup Server (PBS): filesystem-independent deduplication, incremental backups (only changed data), hot backups, and live-restore (boot the VM while it is still being restored); see the CLI sketch after this list.
  • vzdump: classic file-based option.
  • Strategy: schedule daily incremental and weekly full backups, verify restores, add SLA tags, and conduct periodic DR tests.
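A hedged sketch of wiring PBS in from the CLI (server, datastore, user, password, and fingerprint are placeholders; recurring jobs are normally scheduled under Datacenter → Backup):

# Register a Proxmox Backup Server datastore as a storage target
pvesm add pbs pbs01 --server pbs.example.lan --datastore backups --username backup@pbs --password '<password>' --fingerprint '<server-fingerprint>'
# Ad-hoc hot backup of VM 120 to PBS
vzdump 120 --storage pbs01 --mode snapshot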

Schedules and Operations: Realistic Timeframes

  • Average VM (200 GB) in same data center (10 GbE)
    • Preparation/testing: 30–60 min
    • Automatic import: 15–45 min
    • Post-adjustments (drivers/network): 10–20 min
    • Typical downtime: 10–30 min (or less with live-import)
  • Batch of 20 VMs (mixed sizes)
    • Stagger the work (≤ 4 disks simultaneously), preferably during a night window
    • Work in parallel: one person monitors the ESXi API and rate limits, one adjusts drivers/agent, another verifies applications

Quick Post-Cutover Hardening (10 minutes)

  1. Reassign DHCP reservations or static IPs (the MAC address changes).
  2. Activate discard and IO thread where needed.
  3. Install qemu-guest-agent and VirtIO on all guests.
  4. Schedule backups to PBS (and test live-restore).
  5. Perform a live migration between nodes as a smoke test (commands after this list).
  6. Review firewall (Datacenter/Node/VM) and monitoring (CPU steal, balloon, latency).
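The agent and live-migration checks from the list map to two commands (VMID 120 and the node name are placeholders):

# Confirm the guest agent responds inside the VM
qm agent 120 ping
# Online migration between cluster nodes as a smoke test
qm migrate 120 pve-node2 --online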

Quick Technical Reference Table (Decision & SEO)

Topic        | VMware (ESXi/vCenter) | Proxmox VE
Licensing    | Proprietary           | FLOSS (AGPLv3) + optional subscription
Disk format  | VMDK                  | QCOW2/RAW (VMDK import supported)
ESXi import  | —                     | Built-in (GUI/API), live-import
Cluster/HA   | vSphere HA/DRS        | Corosync + HA, live migration

Common Mistakes (and How to Avoid Them)

  • UEFI guest without a boot entry: add an EFI Disk and recreate the boot entry in the firmware.
  • Boot failure after switching to VirtIO: boot from IDE/SATA, install the VirtIO drivers, then switch to virtio-scsi.
  • vSAN: move disks to another datastore before importing.
  • vTPM: state does not migrate; disable temporarily and keep keys.
  • Stuck imports: too many tasks at once → limit concurrency, consolidate snapshots, and import from the ESXi host rather than vCenter for best performance.

Final Takeaway: “VMware to Proxmox” Without Surprises Is Possible (and Easier in 2025)

With the integrated ESXi importer, live-import, extensive storage options, and top-tier backup capabilities, Proxmox VE offers a practical pathway to migrate critical workloads with controlled downtime. The secret isn’t a magic button: it’s in the design (network/storage), the pre-migration prep, trial runs with test VMs, and a clear runbook for each migration method.


Frequently Asked Questions (SEO-focused)

What is the fastest way to migrate from VMware to Proxmox VE with minimal downtime?
Use Proxmox’s ESXi importer with live-import (or the Attach and Migrate pattern over shared datastore). Prepare VirtIO drivers and qemu-guest-agent before cutover.

How do I convert a VMDK to QCOW2 in Proxmox?
With qm disk import:

qm disk import <VMID> disk.vmdk <target-storage> --format qcow2

Then attach the disk (SCSI/VirtIO) and set the boot order.

Can I import directly from vCenter?
Yes, but generally better performance is achieved by importing from the ESXi host. Alternatively, export an OVF/OVA using ovftool and import with qm importovf in Proxmox.

Do I need three nodes for a production Proxmox cluster?
For a stable quorum: yes, at least three votes. With two nodes, add a QDevice as the third vote. Isolate Corosync on a dedicated network.
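For the two-node case, a hedged sketch of adding the external vote (the QDevice host's address is a placeholder; it only needs to run corosync-qnetd, e.g. a small VM or board outside the cluster):

# On the external QDevice host (Debian-based)
apt install corosync-qnetd
# On every cluster node
apt install corosync-qdevice
# From one cluster node: register the QDevice
pvecm qdevice setup 192.0.2.10
pvecm status                 # should now show the extra expected vote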

What about vTPM and encrypted disks when migrating to Proxmox?
vTPM state does not migrate. Disable vTPM temporarily and keep keys. Encrypted disks in VMware should be decrypted before import.
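If the guest needs a TPM again after the move (e.g., Windows 11 or re-enabling BitLocker), Proxmox can provide its own vTPM; a minimal sketch, with VMID and storage as placeholders and the VM powered off:

# Add a TPM 2.0 state volume to the migrated VM
qm set <VMID> --tpmstate0 <storage>:1,version=v2.0

Re-enable the disk encryption afterwards so the keys are sealed against the new TPM.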

If your goal is to migrate to Proxmox VE without shutting down half the business along the way, this guide is your roadmap: prepare, import, validate, and harden. From there, Proxmox gives you room to grow: Ceph or NFS depending on the case, properly wired HA, deduplicated backups with live-restore… and an open stack that reduces friction (and cost) in daily operations.
