Install on top of Debian Lenny like this:
wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -
echo "# PVE packages provided by proxmox.com" >> /etc/apt/sources.list
echo "deb http://download.proxmox.com/debian lenny pve" >> /etc/apt/sources.list
aptitude update
aptitude dist-upgrade
aptitude install pve-kernel
#Reboot and make sure to select the Proxmox VE kernel in the boot loader (GRUB).
#To boot the Proxmox VE kernel by default, edit the following file:
# vi /boot/grub/menu.lst
#AFTER reboot only!!
#Install Proxmox VE packages
#Make sure you are running the Proxmox VE Kernel, otherwise the installation will fail.
aptitude install proxmox-ve ntpdate
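Since the proxmox-ve install fails under a non-PVE kernel, a quick pre-flight check can save a wasted run. A minimal sketch (the `is_pve_kernel` helper is mine, not part of any Proxmox tooling):

```shell
# is_pve_kernel: succeeds if the given kernel release string looks like a PVE kernel
is_pve_kernel() {
    case "$1" in
        *pve*) return 0 ;;
        *)     return 1 ;;
    esac
}

# run this before "aptitude install proxmox-ve ntpdate"
if is_pve_kernel "$(uname -r)"; then
    echo "OK: running a PVE kernel"
else
    echo "WARNING: reboot into the PVE kernel first" >&2
fi
```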
==============================
update kernel on software RAID (regenerate the initramfs):
cd /lib/modules
update-initramfs -k 2.6.24-8-pve -u -t
==============================
#update the OpenVZ template index
pveam update
==============================
#cluster management
#list status
pveca -l
#Create the master:
pveca -c
#add a node to the cluster (run on the node, pointing at the master)
pveca -a -h IP-ADDRESS-MASTER
===============================
P2V (physical machine to OpenVZ container)
rsync -arvpz --numeric-ids --exclude=/dev --exclude=/proc --exclude=/tmp -e ssh root@a.b.c.d:/ /vz/private/123/
http://wiki.openvz.org/Physical_to_VE#rsync
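The exclude list can be kept in one place and the rsync command assembled from it. A sketch (adds /sys to the excludes above, which the OpenVZ wiki also recommends; host a.b.c.d and CTID 123 are the placeholders from the example):

```shell
#!/bin/sh
# Build the P2V rsync command from a list of pseudo-filesystems that must not
# be copied into the container (/sys added on top of the excludes shown above).
SRC="root@a.b.c.d:/"
DST="/vz/private/123/"
EXCLUDES="/dev /proc /sys /tmp"

set -- rsync -arvpz --numeric-ids -e ssh
for p in $EXCLUDES; do
    set -- "$@" --exclude="$p"
done
set -- "$@" "$SRC" "$DST"
CMD="$*"

echo "would run: $CMD"   # review it first, then run with: "$@"
```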
===================================
Speedup of LVM
Add "cache=none" to the disk definition:
#e.g. cat /etc/qemu-server/101.conf
name: http
bootdisk: virtio0
ostype: l26
memory: 1024
onboot: 0
sockets: 1
boot: c
freeze: 0
cpuunits: 1000
acpi: 1
kvm: 1
cores: 4
description: dns,wiki,blog 10.0.1.3
vlan3: virtio=12:12:12:12:12:12
virtio0: vm-os:vm-101-disk-1,cache=none
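To see whether the cache setting actually changed anything, a crude in-guest write test works. A sketch (the file path is arbitrary; conv=fdatasync flushes the data to disk so the reported rate reflects real writes, not the guest page cache):

```shell
#!/bin/sh
# Crude sequential-write benchmark inside the VM; compare the MB/s figure
# before and after switching the disk to cache=none.
TESTFILE=/var/tmp/ddtest   # arbitrary scratch path
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>&1 | tail -n1
rm -f "$TESTFILE"
```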
==============================================
iSCSI performance tests
Don't forget to tune your network card to MTU 9000
and your /etc/sysctl.conf (load changes with "sysctl -p").
Here is mine (I can saturate my two gigabit links with multipath, around 220 MB/s):
# turns TCP timestamp support off, default 1, reduces CPU use
net.ipv4.tcp_timestamps = 0
# turn SACK support off, default on
net.ipv4.tcp_sack = 0
### window size tuning
# maximum receive socket buffer size, default 131071
net.core.rmem_max = 16777216
# maximum send socket buffer size, default 131071
net.core.wmem_max = 16777216
# default receive socket buffer size, default 65535
net.core.rmem_default = 524287
# default send socket buffer size, default 65535
net.core.wmem_default = 524287
# maximum amount of option memory buffers, default 10240
net.core.optmem_max = 524287
# number of unprocessed input packets before kernel starts dropping them, default 300
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_rmem = 4096 524287 16777216
net.ipv4.tcp_wmem = 4096 524287 16777216
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
==================================
Migration from local to NFS
Proxmox KVM storage migration from local to NFS
To use live migration in Proxmox, the KVM disk image needs to be on shared storage (NFS or iSCSI).
Here is the process for NFS storage:
- shut down the VM
- rsync -av /var/lib/vz/images/ /mnt/pve/<storage>/images/
- edit /etc/qemu-server/<vmid>.conf
- change ide0: local:<vmid>/vm-<vmid>-disk-1.raw to ide0: <storage>:<vmid>/vm-<vmid>-disk-1.raw
- start up the VM
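The config edit in the steps above can be scripted. A sketch (the `move_disk_line` helper and the storage name `nfsstore` are hypothetical; VMID 101 is just an example):

```shell
#!/bin/sh
# move_disk_line: rewrite the ide0 line of a VM config from one storage to
# another; reads the config on stdin, writes the edited config on stdout.
move_disk_line() {
    old=$1 new=$2
    sed "s/^ide0: $old:/ide0: $new:/"
}

# example: what the edit does to a single config line
echo "ide0: local:101/vm-101-disk-1.raw" | move_disk_line local nfsstore
```

Run it against a copy of the config first and diff the result before overwriting the real file.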
==================
Pin kernel (newer Proxmox, via proxmox-boot-tool):
proxmox-boot-tool kernel pin 6.5.13-3-pve
#undo with: proxmox-boot-tool kernel unpin