
Best Practices for IBM Lotus Domino


Introduction

This page provides best practices for virtualizing IBM Lotus Domino on VMware Infrastructure. The list is based on my VMworld 2008 session EA2348.

 

General Recommendations

Use newer hardware

  • Newer hardware supports the latest hardware-assist technologies and has larger on-processor caches

  • 64-bit guests may not perform better on older hardware

 

Use VMware ESX, which uses a bare-metal (hypervisor) architecture

  • You can start with VMware ESXi, the free version

  • Do not use VMware Workstation or VMware Server, which use a hosted architecture

 

VMware ESX lets you choose the virtualization technique best suited to your workload

  • Hardware assist from AMD and Intel (both CPU and MMU virtualization) if your hardware supports it

  • Paravirtualization (if you use SLES for your Domino deployment)

  • Binary Translation

 

Migrate to the latest version of ESX

  • E.g. ESX 3.5 defaults to second-generation hardware assist when available and includes several I/O performance improvements

 

Lotus Domino: Plan to migrate to version 8.0

  • Significant performance improvements, especially in disk I/O

 

Provide redundancy for the ESX host

  • Power supplies, HBAs, NICs, Network and SAN switches

  • E.g. NIC teaming, HBA multi-pathing

 

Leverage VMotion, Storage VMotion, DRS and HA for higher Domino availability

 

VM Configuration

64-bit OS recommended

  • VI3 supports all x86 OSs that Domino supports: Windows, SLES, RHEL

  • The larger memory limits of a 64-bit OS allow more data to be cached, avoiding disk I/O; this reduces response times and lets the server support more users

  • Increase the VM's memory when running a 64-bit guest OS

 

64-bit may not perform better with older hardware

  • E.g. 64-bit Windows is more sensitive to the size of on-chip L2/L3 caches

  • Microsoft reports a 10-15% degradation on older hardware

 

Guest Operating System:

  • Windows: use Windows Server 2003 SP2

    • Microsoft eliminated most APIC TPR accesses in SP2, which improves virtualized performance

  • Linux: use the 2.6.18-53.1.4 kernel or later so you can use the divider patch

    • Some older Linux kernels run with a 1 kHz timer rate

    • Add divider=10 to the end of the kernel line in grub.conf and reboot, as in the example below
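
For example, on a RHEL 5-style guest the relevant stanza in /boot/grub/grub.conf would look roughly like the following; the kernel version, volume group, and device names are placeholders for illustration:

    title Red Hat Enterprise Linux Server (2.6.18-53.1.4.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-53.1.4.el5 ro root=/dev/VolGroup00/LogVol00 divider=10
        initrd /initrd-2.6.18-53.1.4.el5.img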

 

VM Time Synchronization

  • Use VMware Tools time synchronization within the virtual machine

  • Enable the ESX Server NTP daemon to sync with an external NTP time source (VMware Knowledge Base article 1339)

  • Disable the guest OS time service, as sketched below

    • Windows: the w32time service

    • Linux: the NTP daemon (ntpd)
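
A minimal sketch of the guest-side commands, assuming a Windows 2003 guest and a RHEL-style Linux guest (service names may differ on other distributions):

    rem Windows guest: stop and disable the w32time service
    net stop w32time
    sc config w32time start= disabled

    # Linux guest: stop and disable the NTP daemon
    service ntpd stop
    chkconfig ntpd off

VMware Tools time synchronization itself can be turned on from the VMware Tools control panel inside the guest or with the tools.syncTime = "TRUE" entry in the VM's .vmx file.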

 

Storage

Storage configuration is absolutely critical; most performance problems can be traced to it

  • The number of spindles, RAID configuration, drive speed, controller cache settings, and queue depths all make a big difference

 

Align partitions
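
For a Windows 2003 guest, a new data partition can be aligned with diskpart (SP1 or later); the disk number and the 64 KB offset below are illustrative and should follow your storage vendor's recommendation:

    diskpart
    DISKPART> select disk 1
    DISKPART> create partition primary align=64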

 

Use separate, dedicated LUNs for OS/Domino, data and transaction logs

  • Separate the I/O at the physical disk level, not just across logical LUNs (see the layout sketch below)

  • Make sure these LUNs have enough spindles to support the IO demands

  • Too few spindles, or too many VMDK files on a single VMFS LUN, can substantially increase disk I/O latencies

  • See the Scalable Storage Performance paper for details
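
As a sketch, the Domino VM's .vmx file might spread its virtual disks across three dedicated VMFS datastores; the datastore and file names here are purely illustrative:

    scsi0:0.fileName = "/vmfs/volumes/os_vmfs/domino1/domino1-os.vmdk"
    scsi0:1.fileName = "/vmfs/volumes/data_vmfs/domino1/domino1-data.vmdk"
    scsi0:2.fileName = "/vmfs/volumes/txlog_vmfs/domino1/domino1-txlog.vmdk"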

 

RAID configuration

  • RAID 1+0 for Data, RAID 0 for Log

 

Cache settings

  • Write policy to "write back", read policy to "read ahead"

 

Queue Depths

  • Increase to 255
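
As an illustration for a QLogic Fibre Channel HBA on classic ESX 3.x (the module name and option vary by HBA model, driver, and ESX version, so treat this as a sketch and confirm against your HBA vendor's documentation):

    esxcfg-module -s ql2xmaxqdepth=255 qla2300_707
    esxcfg-boot -b

A host reboot is required before the new queue depth takes effect.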

 

Storage Protocol: Fibre Channel or iSCSI

 

Storage Partition: VMFS or RDM

 

VI3 supports the latest storage technologies; leverage them if you have already invested or plan to invest

  • Fibre Channel – 8 Gbps connectivity

  • iSCSI – 10 GigE network connectivity, jumbo frames

  • InfiniBand support

 

Virtual CPUs

The number of vCPUs per VM depends on the number of users to be supported

  • Start with a uniprocessor VM; it may be enough

  • Try not to over-provision vCPUs in the guest

 

Verify CPU compatibility for VMotion

 

Memory

Increasing memory to avoid disk I/O is the most effective technique for improving performance

More available memory = more Lotus Domino Cache

 

Increase the NSF_DbCache_Maxentries value in notes.ini
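
The setting goes in the Domino server's notes.ini; the value below is purely illustrative and should be sized against your user count and the memory available to the Domino server:

    NSF_DbCache_Maxentries=10000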

 

Leverage the higher 64 GB per-VM memory limit in VI 3.5 when using a 64-bit guest OS for Domino

  • 64-bit OSs can take advantage of larger memory limits for file caching

 

Leverage NUMA optimizations in VI3

  • On NUMA hardware, try to fit the VM within a single NUMA node to avoid the latency of accessing memory on remote nodes (see the sizing example below)
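
As a sizing example, consider a hypothetical four-socket host with 64 GB of RAM, where each NUMA node is one socket plus its local memory:

    64 GB total / 4 NUMA nodes = 16 GB of local memory per node
    Domino VM sized at <= cores-per-socket vCPUs and <= 16 GB of memory
      -> the ESX NUMA scheduler can keep the VM on a single node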

 

Networking

Use dedicated NICs for different types of network traffic

  • E.g. separate NICs for mail and replication traffic

 

Use NIC Teaming & VLAN Trunking

 

Use the Enhanced VMXNET driver, which supports TSO and jumbo frames
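
A sketch of enabling jumbo frames on ESX 3.5, assuming a vSwitch named vSwitch1 and a Linux guest interface eth0 (both names are illustrative); the physical switches and any targets in the path must also support an MTU of 9000:

    # on the ESX service console: raise the vSwitch MTU
    esxcfg-vswitch -m 9000 vSwitch1

    # inside the Linux guest, on the Enhanced VMXNET interface
    ifconfig eth0 mtu 9000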

 

Enable TCP transmit coalescing

 

Network traffic between VMs co-located on the same host can exceed physical 1 Gbps network speeds

 

Resource Management

Use proportional and absolute mechanisms to control VM priorities

  • Shares, reservations, and limits for CPU and memory (example settings are sketched below)

  • Shares for virtual disks

  • Traffic shaping for network
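
These controls are normally set through the VI Client, but they map to per-VM configuration entries; a minimal sketch with illustrative values (CPU reservation and limit in MHz, memory reservation and limit in MB):

    sched.cpu.shares = "2000"
    sched.cpu.min = "1000"
    sched.cpu.max = "4000"
    sched.mem.shares = "normal"
    sched.mem.min = "2048"
    sched.mem.max = "4096"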

 

Migration is faster, and load balancing works better, when you use

  • Smaller VMs

  • Smaller memory reservations for VMs

 

Affinity rules for VM placement

  • E.g. place the Directory and Mail Server VMs on the same ESX host

 

Deployment

Virtualization Assessment

  • Capacity Planner

  • Benchmark against Information Warehouse

 

Easy migration

  • VMware Converter – both hot and cold cloning

  • Start with RDMs pointing to the existing data and transaction-log LUNs, then move to VMFS later (a sketch follows)
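
A sketch of creating a virtual-compatibility RDM mapping file for an existing data LUN on classic ESX 3.x; the raw device path and the datastore path are placeholders that depend entirely on your SAN layout:

    vmkfstools -r /vmfs/devices/disks/vmhba1:0:1:0 \
        /vmfs/volumes/os_vmfs/domino1/domino1-data-rdm.vmdk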

 

Easier change management and quicker provisioning

  • Templates and clones for easy provisioning

 

