SSD Emulation in Proxmox: What Actually Changes and How to Leverage TRIM and I/O in Your VMs

In many virtualized infrastructures, the feeling of “slowness” in a virtual machine doesn’t always come from a lack of CPU or RAM but from how disk access is managed. In Proxmox VE, one of the most misinterpreted — and yet most useful when applied thoughtfully — settings is SSD emulation: an option that makes the guest operating system see the virtual disk as if it were a solid-state drive.

The key is to avoid the easy headline: enabling “SSD emulation” does not turn an HDD into NVMe. What it does is expose a media-type hint (flash vs. rotational) to the guest and, with that, allow the OS to make better planning and maintenance decisions (for example, around TRIM/discard, I/O queues, or certain housekeeping routines). In mixed environments — where SSD and HDD storage coexist, or where robust backends with thin provisioning layers are used — this “little flag” can make the difference between a sluggish VM and one that responds more consistently.

What is SSD emulation and why does it matter (without magic promises)

Practically speaking, Proxmox can present a virtual disk as “non-rotational” to the guest OS. That signal influences decisions made by the OS (for example, how it groups small writes or manages maintenance operations). This is why it’s often described as a “perceived” performance improvement: it optimizes the behavior of the software stack, not the physical media itself.
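
As a quick illustration, you can see how the device is being advertised from inside a Linux guest. The commands below are a minimal sketch; the device name sda is a placeholder for whatever your VM actually uses:

    # Show whether each block device is reported as rotational (1) or non-rotational (0)
    lsblk -d -o NAME,ROTA

    # Same signal, read directly from sysfs (replace sda with your device)
    cat /sys/block/sda/queue/rotational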

This setting is important for two reasons:

  1. Modern operating systems change their behavior when they detect SSDs (e.g., maintenance policies, write optimizations, or block reclamation).
  2. TRIM/discard (when supported) allows reclaiming space in thin provisioning scenarios and helps maintain SSD performance by reducing unnecessary internal work.

The triangle you should understand: SSD emulation, TRIM, and discard

This is where most guides fall short: mixing up TRIM and discard as if they’re the same thing.

  • TRIM is the mechanism whereby the guest (the VM’s OS) marks blocks as “no longer used” in its filesystem.
  • Discard is how the hypervisor/backend receives and processes those requests so the underlying storage can reuse those blocks (or, if thin provisioning is used, release actual space).

On Linux, for example, a typical check involves verifying discard support and running reclamation with tools like lsblk (to see capabilities) and fstrim (to manually emit TRIM or schedule it). This is important because enabling discard without proper backend support may not provide benefits (or could even add I/O load in some cases).
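
As a reference, a minimal Linux-guest check might look like the sketch below; the mount point is a placeholder and the exact output varies by distribution and filesystem:

    # Show discard capabilities: non-zero DISC-GRAN/DISC-MAX means discard can be passed down
    lsblk --discard

    # Manually trim a mounted filesystem and report how much space was released
    sudo fstrim -v /

    # Alternatively, schedule periodic trimming (on distributions that ship the timer)
    sudo systemctl enable --now fstrim.timer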

What Proxmox offers (and what it limits): controller and “grey” options

Not all bus/controller combinations expose the same capabilities. In Proxmox, it’s common to find that the SSD emulation option is greyed out depending on the virtual device chosen. Community discussions note, for instance, that with VirtIO Block it may not be available, and that for certain functions it’s better to use VirtIO SCSI, especially in virtio-scsi-single mode, which gives each disk its own controller and queue and unlocks options tied to the I/O path (such as per-disk IOThreads).

Operationally speaking: if your goal is “doing it right” for latency-sensitive workloads (databases, queues, microservices with many small writes), it often makes sense to review the bus/controller before toggling flags.
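
Before touching anything, it helps to capture the current controller and disk options so the change stays reversible. A simple host-side snapshot of the configuration, assuming a placeholder VM ID of 100:

    # On the Proxmox host: record the current SCSI controller type and disk flags for VM 100
    qm config 100 | grep -E 'scsihw|scsi[0-9]'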

Recommended configuration: a cautious and reversible approach

In a production environment, the sensible approach is incremental: change one thing, measure, verify, and document.

1) Choose the right bus/controller

  • For performance and compatibility, VirtIO SCSI is a common choice in Proxmox.
  • If you want to enable advanced options (like IOThreads, depending on setup), virtio-scsi-single is usually recommended to avoid shared queues between disks.
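
On the host, switching the controller is a single command; 100 is a placeholder VM ID, and the change takes effect after a full stop/start of the VM:

    # Use the virtio-scsi-single controller so each disk gets its own controller instance
    qm set 100 --scsihw virtio-scsi-single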

2) Enable “SSD emulation” (ssd=1)

This makes the guest treat the device as flash (at a logical level). It’s especially useful when:

  • The backend is a real SSD/NVMe.
  • You want the guest to apply policies better suited to flash storage.
  • You plan to use TRIM/discard where supported.
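
A minimal sketch of enabling the flag from the CLI, assuming a placeholder VM 100 whose existing volume local-lvm:vm-100-disk-0 is attached as scsi0; note that qm set replaces the whole drive string, so repeat any options you want to keep:

    # Re-attach the existing scsi0 volume with SSD emulation enabled
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,ssd=1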

3) Enable “Discard” (discard=on) only if the backend supports it properly

This is particularly relevant in:

  • Thin provisioning, where freeing blocks can reclaim actual space.
  • SSDs, to help sustain performance over time.
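
Building on the previous sketch (same placeholder VM ID and volume name), discard is just another flag on the same drive string, assuming the backend actually honors it (thin LVM, ZFS, qcow2, and similar):

    # Enable SSD emulation and pass guest TRIM requests down to the backend
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,ssd=1,discard=on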

4) IOThreads: separate I/O to reduce contention

Practically, IOThreads aim to prevent I/O processing from bottlenecking in a single thread/queue when access patterns are parallel. Technical analyses and vendor documentation show improvements in specific scenarios and queue depths, but actual impact depends on the workload (it’s not a universal magic wand).
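
As an illustration only, once virtio-scsi-single is selected the per-disk IOThread flag rides on the same drive string. An excerpt of /etc/pve/qemu-server/100.conf (placeholder VM ID and volume name) would end up looking roughly like this, and the VM typically needs a full stop/start for the dedicated I/O thread to take effect:

    scsihw: virtio-scsi-single
    scsi0: local-lvm:vm-100-disk-0,ssd=1,discard=on,iothread=1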

5) Cache modes: performance vs. risk

This point requires careful discipline because there’s a clear “trade-off.”

  • Writeback generally improves perceived latency but can increase risk during power failures if not properly protected (UPS, protected storage, etc.). Technical debates emphasize that “faster” options aren’t always safer, especially if hardware doesn’t guarantee write durability in flight.
  • Writethrough / no cache / direct sync prioritize consistency at the expense of performance.

Additionally, with backends like ZFS, warnings exist about potential double caching (host + guest) if certain options are combined without a clear write path design.
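
For completeness, the cache mode is set on the same drive string. The sketch below (placeholder VM ID and volume name again) pins the conservative default explicitly; writeback would only be swapped in after weighing the power-loss considerations above:

    # Keep the conservative default cache mode on scsi0 (equivalent to "No cache" in the GUI)
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,ssd=1,discard=on,iothread=1,cache=none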

Quick reference table: what to enable based on use case

Typical scenario | SSD emulation | Discard/TRIM | IOThreads | Recommended cache (roughly)
VM on SSD/NVMe, general workloads | Yes | Yes (if supported) | Optional | Default, conservative
Databases with many small writes | Yes | Yes (if backend handles it well) | Yes | Evaluate carefully with testing
HDD backend with focus on “integrity” | Optional | Usually not critical | Optional | Conservative
Thin provisioning and space recovery needs | Yes | Yes | Optional | Backend dependent
ZFS focused on data integrity | Yes (if it benefits the guest) | Depends on version/policy | Optional | Avoid unjustified double caches

Editorial note: this table is a decision guide. In production, the final criterion is the environment: backend, UPS support, recovery policies, and real workload testing.

Verification: how to confirm TRIM/discard is working

After applying changes, a professional approach is to verify end-to-end:

  • On the guest (Linux): check discard capabilities and run fstrim to see if blocks are reclaimed.
  • In general: confirm that the guest detects the device as SSD (using system tools) and that the settings persist post-reboot or migration.
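
A compact end-to-end check might look like this sketch; device names, mount points, and the VM ID 100 are placeholders:

    # Inside the guest: device reported as non-rotational, with discard capabilities exposed
    lsblk -d -o NAME,ROTA,DISC-GRAN,DISC-MAX

    # Inside the guest: trim every supported mounted filesystem and report what was released
    sudo fstrim -av

    # On the Proxmox host: confirm the flags persisted in the VM configuration
    qm config 100 | grep scsi0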

On Windows, validation typically involves checking that the system recognizes the drive as an SSD and that the associated optimizations are active (the “optimize drives” retrim for SSDs, not to be confused with defragmentation).

Risks and nuances worth noting in writing

  1. Not all backends implement discard equally: enabling it “by default” may not always be beneficial.
  2. Changing disk parameters on existing VMs should be done during maintenance windows, with backups and rollback plans.
  3. Passthrough (direct disk assignment) can be appropriate when native exposure and minimal latency are needed, but reduces flexibility (e.g., migration) and requires operational discipline.
  4. Perceived performance improves through signaling and management, not physical hardware changes.

Frequently Asked Questions

What exactly does “SSD emulation” in Proxmox do, and why can it speed up a VM?
It makes the guest OS treat the virtual disk as a non-rotational device (SSD). This can improve how the guest plans I/O and maintenance routines, and facilitates the use of TRIM/discard when the backend supports it.

When is it advisable to enable discard/TRIM in Proxmox to recover actual space?
It’s especially useful with thin-provisioned storage or when you want the backend to reuse blocks freed by the filesystem. Always verify real support on the backend and check functionality within the guest beforehand.

Which controller is better for SSD emulation and IOThreads in Proxmox?
For performance and advanced options, VirtIO SCSI (and, in particular, virtio-scsi-single) is the preferred choice for compatibility and I/O path control, especially compared with configurations that expose fewer options.

Is enabling write-back cache “safe” if maximum performance is the goal in Proxmox?
It depends on the system design: power protection (UPS), storage guarantees, and risk tolerance. It can improve performance but involves debates about data safety during outages if hardware or configurations don’t ensure write durability.
