<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>proxmox &#8211; Giovanni F. Mazzeo De Santolo</title>
	<atom:link href="https://desantolo.com/tag/proxmox-2/feed/" rel="self" type="application/rss+xml" />
	<link>https://desantolo.com</link>
	<description>That Italian IT guy</description>
	<lastBuildDate>Sun, 27 Dec 2020 05:38:52 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.2</generator>
<site xmlns="com-wordpress:feed-additions:1">123042357</site>	<item>
		<title>Allowing OpenVPN to create tun device on LXC / Proxmox</title>
		<link>https://desantolo.com/2018/11/allowing-openvpn-to-create-tun-device-on-lxc-proxmox/</link>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Mon, 19 Nov 2018 01:56:57 +0000</pubDate>
				<category><![CDATA[Proxmox]]></category>
		<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[containers]]></category>
		<category><![CDATA[lxc]]></category>
		<category><![CDATA[openvpn]]></category>
		<category><![CDATA[proxmox]]></category>
		<guid isPermaLink="false">https://desantolo.com/?p=569</guid>

					<description><![CDATA[Due to the built-in security of LXC, trying to set up a tunnel interface inside a container is blocked by default. ERROR: Cannot open TUN/TAP dev /dev/net/tun To allow this for a specific container in Proxmox, we need to make a &#8230; <a href="https://desantolo.com/2018/11/allowing-openvpn-to-create-tun-device-on-lxc-proxmox/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>Due to the built-in security of LXC, trying to set up a tunnel interface inside a container is blocked by default.</p>
<p><code>ERROR: Cannot open TUN/TAP dev /dev/net/tun</code></p>
<p>To allow this in Proxmox, we need to make a few tweaks so the interface works in one specific container (we don&#8217;t want every container to be able to set up a tunnel &#8211; attackers can hide their tracks behind one).</p>
<p>How to do this: add these lines to <code>/etc/pve/lxc/&lt;container-id&gt;.conf</code>:</p>
<pre>lxc.cgroup.devices.allow = c 10:200 rwm
lxc.hook.autodev = sh -c "modprobe tun; cd ${LXC_ROOTFS_MOUNT}/dev; mkdir net; mknod net/tun c 10 200; chmod 0666 net/tun"</pre>
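<p>After saving the config, restart the container so the hook runs, then check for the device inside it. A quick sketch (using the standard <code>pct</code> tool on the Proxmox host; replace <code>&lt;container-id&gt;</code> as above):</p>
<pre># on the Proxmox host
pct stop &lt;container-id&gt;
pct start &lt;container-id&gt;
# inside the container, the node should now exist as char device 10,200
ls -l /dev/net/tun</pre>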
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">569</post-id>	</item>
		<item>
		<title>Fix zfs-mount.service failing after reboot on Proxmox</title>
		<link>https://desantolo.com/2017/07/fix-zfs-mount-service-failing-after-reboot-on-proxmox/</link>
					<comments>https://desantolo.com/2017/07/fix-zfs-mount-service-failing-after-reboot-on-proxmox/#comments</comments>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Sat, 01 Jul 2017 01:33:33 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Proxmox]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Troubleshooting]]></category>
		<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[containers]]></category>
		<category><![CDATA[proxmox]]></category>
		<category><![CDATA[zfs]]></category>
		<guid isPermaLink="false">https://desantolo.com/?p=545</guid>

					<description><![CDATA[In my homelab migration to Proxmox I came across a bug that prevents some of your ZFS mount points from mounting and is even more of a pain in the ass if you host containers in &#8230; <a href="https://desantolo.com/2017/07/fix-zfs-mount-service-failing-after-reboot-on-proxmox/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>In my homelab migration to Proxmox I came across a bug that prevents some of your ZFS mount points from mounting, and it&#8217;s even more of a pain in the ass if you host containers in that folder.<br />
<span id="more-545"></span><br />
<strong>Cause of the problem:</strong> When you use a zpool other than the default rpool and set up a directory mount for PVE (for an ISO datastore, VZ dumps, etc.), the ZFS mount points may not have finished mounting at boot time. Proxmox will then attempt to create the directory path structure itself.</p>
<p>The problem with creating a directory for something before it is mounted is that when zfs-mount.service runs and attempts to mount the ZFS mount points, you get these kinds of errors:</p>
<p><code>root@pve:~# <strong>systemctl status zfs-mount.service</strong></code></p>
<pre>● zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Fri 2017-06-30 18:10:21 PDT; 21s ago
  Process: 6590 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)
 Main PID: 6590 (code=exited, status=1/FAILURE)

Jun 30 18:10:19 pve systemd[1]: Starting Mount ZFS filesystems...
Jun 30 18:10:20 pve zfs[6590]: cannot mount '/gdata/pve/subvol-102-disk-1': directory is not empty
Jun 30 18:10:20 pve zfs[6590]: cannot mount '/gdata/pve/subvol-106-disk-1': directory is not empty
Jun 30 18:10:20 pve zfs[6590]: cannot mount '/gdata/pve/subvol-109-disk-1': directory is not empty
Jun 30 18:10:21 pve systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
Jun 30 18:10:21 pve systemd[1]: Failed to start Mount ZFS filesystems.
Jun 30 18:10:21 pve systemd[1]: zfs-mount.service: Unit entered failed state.
Jun 30 18:10:21 pve systemd[1]: zfs-mount.service: Failed with result 'exit-code'.</pre>
<p><strong>Fixing the root of the problem:</strong> change how Proxmox deals with mounts by editing <code>/etc/pve/storage.cfg</code> &#8211; you need to add <code>mkdir 0</code> and <code>is_mountpoint 1</code> to the directory mount. Example:</p>
<pre>dir: gdata-dump
        path /gdata/vz
        content iso,vztmpl,backup
        maxfiles 0
        shared 0
        mkdir 0
        is_mountpoint 1</pre>
<p>Now we need to do some system cleanup before we reboot and confirm the problem is fixed.</p>
<p>Let&#8217;s check which mount points have failed:<br />
<code>root@pve:~# <strong>zfs list -r -o name,mountpoint,mounted</strong></code></p>
<p>Now let&#8217;s unmount all ZFS mount points (except rpool of course &#8211; assuming the rootfs is ZFS):</p>
<p><code># zfs umount -a</code></p>
<p>After making sure the ZFS mount points are unmounted, we can delete the leftover empty folders. Recall the failed mount points that <code>zfs list</code> gave you and delete them one by one like so:</p>
<p><code># rm -rf /gdata/pve/subvol-102-disk-1</code></p>
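<p>If several subvolumes failed, a small loop can handle the cleanup. This is just a sketch; it uses <code>rmdir</code> instead of <code>rm -rf</code> as a safety net, since <code>rmdir</code> refuses to delete a directory that actually contains data (adjust the <code>/gdata/pve</code> path to your own pool):</p>
<pre>for d in /gdata/pve/subvol-*-disk-1; do
  rmdir "$d" &amp;&amp; echo "removed $d"
done</pre>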
<p>Do this for each folder that had issues mounting. You can remount everything with <code>zfs mount -O -a</code> &#8212; or better&#8230; reboot the system and confirm it&#8217;s fixed. I like the latter. So reboot.</p>
<p>After it boots back up, check that the service was able to mount ZFS without issues:</p>
<p><code># systemctl status zfs-mount.service</code><br />
<code># zfs list -r -o name,mountpoint,mounted</code></p>
<p>That&#8217;s all folks&#8230; if you made the edit to storage.cfg and added the two options, this should not occur again. This was an annoying bug to deal with, but it&#8217;s good to have found a better solution than a startup script doing dirty tricks!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://desantolo.com/2017/07/fix-zfs-mount-service-failing-after-reboot-on-proxmox/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">545</post-id>	</item>
		<item>
		<title>Allow non-root processes to bind to privileged ports</title>
		<link>https://desantolo.com/2017/06/allow-non-root-processes-to-bind-to-privileged-ports/</link>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Wed, 28 Jun 2017 07:53:49 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Troubleshooting]]></category>
		<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[containers]]></category>
		<category><![CDATA[debian]]></category>
		<category><![CDATA[networking]]></category>
		<category><![CDATA[proxmox]]></category>
		<guid isPermaLink="false">https://desantolo.com/?p=538</guid>

					<description><![CDATA[As I work on my homelab migration from FreeNAS into Linux containers, I need to move my FreeBSD jails to LXC. In *nix, binding to well-known ports (below 1024) requires special privileges or a kernel setting. In &#8230; <a href="https://desantolo.com/2017/06/allow-non-root-processes-to-bind-to-privileged-ports/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>As I work on my homelab migration from FreeNAS into Linux containers, I need to move my FreeBSD jails to LXC.</p>
<p>In *nix, binding to well-known ports (below 1024) requires special privileges or a kernel setting. In FreeBSD, a simple <code>sysctl net.inet.ip.portrange.reservedhigh=1</code> was enough to allow the jail to use any port it needed.</p>
<p>On LXC I had to figure out how to do the same thing, and it&#8217;s quite different. My environment is a Debian stretch LXC container, but this should work on other Linux distributions.</p>
<p><code><strong># apt-get install libcap2-bin</strong></code><br />
<code><strong># setcap 'cap_net_bind_service=+ep' /usr/bin/transmission-daemon</strong></code></p>
<p>In the example above, the binary <code>/usr/bin/transmission-daemon</code> can now bind privileged ports (port 80, HTTP, in my case) while running as a service under a non-root user.</p>
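<p>To double-check that the capability stuck, <code>getcap</code> (shipped in the same <code>libcap2-bin</code> package) will print it &#8211; the exact output format varies slightly between libcap versions:</p>
<pre># getcap /usr/bin/transmission-daemon
/usr/bin/transmission-daemon = cap_net_bind_service+ep</pre>
<p>Note the capability is stored on the binary file itself, so it needs to be re-applied if a package upgrade replaces the file.</p>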
<p>Hopefully this helps folks out there; the answer took some digging, but I already had an idea of what was needed thanks to my FreeBSD experience with jails <img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">538</post-id>	</item>
		<item>
		<title>Install proxmox on a partition instead of a full-disk</title>
		<link>https://desantolo.com/2017/06/zfs-proxmox-on-a-partition-instead-of-a-full-disk/</link>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Sun, 11 Jun 2017 11:24:31 +0000</pubDate>
				<category><![CDATA[Guides]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Proxmox]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[freenas]]></category>
		<category><![CDATA[proxmox]]></category>
		<category><![CDATA[ssd]]></category>
		<category><![CDATA[zfs]]></category>
		<guid isPermaLink="false">https://desantolo.com/?p=532</guid>

					<description><![CDATA[By default, installing Proxmox with ZFS during the installation process will force you to use the entire disk for the root zpool. For most installs this is good enough. However, I like to do things differently sometimes. I have a &#8230; <a href="https://desantolo.com/2017/06/zfs-proxmox-on-a-partition-instead-of-a-full-disk/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>By default, installing Proxmox with ZFS during the installation process will force you to use the entire disk for the root zpool. For most installs this is good enough. However, I like to do things differently sometimes.</p>
<p>I have a pair of Samsung 840 Pro 256GB SSDs that I wanted to use for my new homelab that I am currently building (moving from vmware to proxmox). You may be wondering why I want to install the operating system on a partition instead of an entire disk. Several reasons:<br />
<span id="more-532"></span><br />
1. Proxmox (ZFS on Linux) does not yet support SSD TRIM; FreeBSD does support it, so migrating from FreeNAS to Proxmox I need to keep that in mind.<br />
2. Data redundancy for the root filesystem does not need to be large. Even if I do RAID1 with my two SSDs, I won&#8217;t be storing my critical data or VMs in the rpool &#8211; I want a smaller root pool that has fault-tolerance (RAID1). A 60GB partition mirrored across the two SSDs should fit the bill here.<br />
3. ZFS Intent Log (ZIL) experimentation: I also want to experiment with using the same two SSDs to speed up my ZFS writes. I want a small partition in a stripe (RAID0) for performance; 45GB total (22.5GB per SSD) is plenty for this.<br />
4. The leftover unused space will be left untouched so the SSD has more spare blocks for the controller&#8217;s built-in garbage collection (not the same as TRIM).</p>
<p>I don&#8217;t have enough time to go into a lot of details (it&#8217;s past 4am), so I will get to how to do it. If you are trying to follow my same steps, you will need at least 3 hard drives.</p>
<p>1. On a hard drive or device you don&#8217;t care to use in the final outcome, install Proxmox as you normally would. Wipe the entire partition table and let it install RAID0 on the whole disk.<br />
2. Boot into your new installation with the two new disks you want to keep attached to the system, and ensure Linux sees them; <code>fdisk -l</code> should help with this.<br />
3. You will now need to create the partitions on the new disks (not rpool):</p>
<p>You will need to know how to turn a size in bytes into hard disk sectors by dividing by your sector size. I don&#8217;t have time to go over it in depth, but here is a quick TL;DR example to give you an idea:</p>
<p>We want a 25GB slice, so that is around 25000000000 bytes / 512 (sector size) = 48828125 total sectors to allocate for this amount of storage.</p>
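<p>You can sanity-check that arithmetic straight from the shell:</p>
<pre># echo $((25 * 1000 * 1000 * 1000 / 512))
48828125</pre>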
<p>Take a look at the existing partition table to make sure you create something similar: <code>fdisk -l /dev/sd$</code> (your rpool disk). We will leave 8MB at the end of the disk; Proxmox by default creates 3 partitions: GRUB boot, ZFS data, and an 8MB Solaris reserved partition.</p>
<p>These commands create the partitions for my new array; the <code>-c</code> flags set the partition labels, which should be self-explanatory.</p>
<pre># sgdisk -z /dev/sdb
# sgdisk -a1 -n1:34:2047 -t1:EF02 -c1:"BIOS boot" -n2:2048:156252048 -t2:BF01 -c2:"mirror" -n3:156252049:205080174 -t3:BF01 -c3:"stripe" -n4:205080175:205096559 -t4:BF0 /dev/sda
# sgdisk -a1 -n1:34:2047 -t1:EF02 -c1:"BIOS boot" -n2:2048:156252048 -t2:BF01 -c2:"mirror" -n3:156252049:205080174 -t3:BF01 -c3:"stripe" -n4:205080175:205096559 -t4:BF0 /dev/sdc
# zpool create -f stripe -o ashift=13 /dev/sda3 /dev/sdc3
# zpool create -f newroot -o ashift=13 mirror /dev/sda2 /dev/sdc2
# grub-install /dev/disk/by-id/ata-Samsung_SSD_840_PRO_Series_S1ATNSADB46090M
# grub-install /dev/disk/by-id/ata-Samsung_SSD_840_PRO_Series_S12RNEACC59063B</pre>
<p>Backup &amp; moving stuff:</p>
<pre># zfs snapshot -r rpool@fullbackup
# zfs list -t snapshot
# zfs send -R rpool@fullbackup | zfs recv -vFd newroot
root@pve:/# zpool get bootfs
NAME     PROPERTY  VALUE             SOURCE
newroot  bootfs    -                 default
rpool    bootfs    rpool/ROOT/pve-1  local
stripe   bootfs    -                 default
root@pve:/# zpool set bootfs=newroot/ROOT/pve-1 newroot
# zpool export newroot
# zpool import -o altroot=/mnt newroot
# -- rebooted with a FreeNAS live CD, entered the shell, imported newroot under the new name rpool, rebooted
# -- booted into Proxmox recovery; once it boots, do the recovery:
# grub-install /dev/sdb
# grub-install /dev/sda
# update-grub2
# update-initramfs -u</pre>
<p><code># zpool set bootfs=newroot rpool</code> could also have worked without the rename via FreeNAS, but I didn&#8217;t try it.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">532</post-id>	</item>
		<item>
		<title>Homelab 2017 refresh</title>
		<link>https://desantolo.com/2017/06/homelab-2017-refresh/</link>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Sat, 10 Jun 2017 04:14:43 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[homelab]]></category>
		<category><![CDATA[hyperconverged]]></category>
		<category><![CDATA[openvswitch]]></category>
		<category><![CDATA[proxmox]]></category>
		<category><![CDATA[supermicro]]></category>
		<guid isPermaLink="false">https://desantolo.com/?p=527</guid>

					<description><![CDATA[My faithful Lenovo TS440 home server has reached its peak potential as I have maxed out the 32GB memory limit of the Intel E3 v3 architecture. My need for more CPU power and memory is driven by the idea of &#8230; <a href="https://desantolo.com/2017/06/homelab-2017-refresh/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>My faithful Lenovo TS440 home server has reached its peak potential, as I have maxed out the 32GB memory limit of the Intel E3 v3 architecture.</p>
<p>My need for more CPU power and memory is driven by the idea of hyperconvergence, which means I use a single machine as my router/firewall, VPN gateway, network storage, and virtual machine host.</p>
<p>Those themes have been part of my home network design since 2010 or so. Today&#8217;s hot technologies focus on containers (LXC), Docker, etc., so I need a more powerful server to expand my playground into them. The 32GB maximum on my old server is simply not enough when 5 different VMs consume almost all your memory resources (a Windows 10 VM, an OSX one, and my FreeNAS one being the top users at 75%+).<span id="more-527"></span></p>
<p><a href="https://i0.wp.com/desantolo.com/wp-content/uploads/2017/06/img_8487.jpg?ssl=1"><img data-recalc-dims="1" fetchpriority="high" decoding="async" data-attachment-id="526" data-permalink="https://desantolo.com/2017/06/homelab-2017-refresh/img_8487-jpg/" data-orig-file="https://i0.wp.com/desantolo.com/wp-content/uploads/2017/06/img_8487.jpg?fit=3024%2C4032&amp;ssl=1" data-orig-size="3024,4032" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;2.2&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;iPhone 6s&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1497041974&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;4.15&quot;,&quot;iso&quot;:&quot;40&quot;,&quot;shutter_speed&quot;:&quot;0.0625&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;1&quot;}" data-image-title="img_8487.jpg" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/desantolo.com/wp-content/uploads/2017/06/img_8487.jpg?fit=225%2C300&amp;ssl=1" data-large-file="https://i0.wp.com/desantolo.com/wp-content/uploads/2017/06/img_8487.jpg?fit=500%2C667&amp;ssl=1" class="alignnone size-full wp-image-526" src="https://i0.wp.com/desantolo.com/wp-content/uploads/2017/06/img_8487.jpg?resize=500%2C667&#038;ssl=1" alt="" width="500" height="667" srcset="https://i0.wp.com/desantolo.com/wp-content/uploads/2017/06/img_8487.jpg?w=3024&amp;ssl=1 3024w, https://i0.wp.com/desantolo.com/wp-content/uploads/2017/06/img_8487.jpg?resize=225%2C300&amp;ssl=1 225w, https://i0.wp.com/desantolo.com/wp-content/uploads/2017/06/img_8487.jpg?resize=768%2C1024&amp;ssl=1 768w, https://i0.wp.com/desantolo.com/wp-content/uploads/2017/06/img_8487.jpg?resize=600%2C800&amp;ssl=1 600w, https://i0.wp.com/desantolo.com/wp-content/uploads/2017/06/img_8487.jpg?w=1000&amp;ssl=1 1000w, https://i0.wp.com/desantolo.com/wp-content/uploads/2017/06/img_8487.jpg?w=1500&amp;ssl=1 1500w" sizes="(max-width: 500px) 100vw, 
500px" /></a></p>
<p>On my new machine I have decided to move towards the Xeon E5 v4 CPU series and DDR4, which has lower power consumption than my current DDR3L (1.2V vs 1.35V per RAM stick).</p>
<p>The components of choice are a <strong>Supermicro X10SRL-F</strong> with remote management (IPKVM), and <strong>64GB DDR4</strong> to start.</p>
<p>For the server chassis I&#8217;ll be reusing my Lenovo TS440, but first I&#8217;ll assemble and test my new server in a different chassis so as not to impact my home router/network design.</p>
<p>Since I will most likely be moving away from VMware ESXi to Proxmox or another open source alternative, there will be a steep learning curve as I do the initial configuration of the network to run on a single node (hyperconverged). I will have to learn Open vSwitch, which is a virtual switch that runs on Unix-like systems.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">527</post-id>	</item>
		<item>
		<title>Troubleshooting networking issues after fresh install of proxmox VE 4.4</title>
		<link>https://desantolo.com/2017/02/troubleshooting-networking-issues-after-fresh-install-of-proxmox-ve-4-4/</link>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Fri, 10 Feb 2017 06:04:02 +0000</pubDate>
				<category><![CDATA[Guides]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Proxmox]]></category>
		<category><![CDATA[Troubleshooting]]></category>
		<category><![CDATA[debian]]></category>
		<category><![CDATA[networking]]></category>
		<category><![CDATA[proxmox]]></category>
		<category><![CDATA[troubleshooting]]></category>
		<guid isPermaLink="false">https://desantolo.com/?p=504</guid>

					<description><![CDATA[Writing a quick troubleshooting guide and informative post to address an issue I came across when installing Proxmox VE 4.4 on two of my machines. On servers with more than two network interfaces Debian/Proxmox renames all interfaces and does not &#8230; <a href="https://desantolo.com/2017/02/troubleshooting-networking-issues-after-fresh-install-of-proxmox-ve-4-4/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>Writing a quick troubleshooting guide and informative post to address an issue I came across when installing Proxmox VE 4.4 on two of my machines.</p>
<p>On servers with more than two network interfaces, Debian/Proxmox renames all interfaces and does not properly detect eth0 as the on-board ethernet the way many other Linux flavors do. This may cause a mild headache if you just installed Proxmox with static IP addresses using the installer and upon reboot you can&#8217;t access any network resources.<span id="more-504"></span></p>
<p>I already explained the cause, and you could argue that the Proxmox installer could add a built-in network detection check to properly label the on-board interface eth0, as the device is named in many other Linux distros. That currently does not exist, so I will walk you through the troubleshooting.</p>
<p>Upon reboot or first boot after the installation is complete:<br />
<strong># ip link</strong></p>
<p>The bridge interface (<strong>vmbr0</strong>) should read &#8220;<strong>NO-CARRIER</strong>, MULTICAST, UP&#8221; as well as &#8220;<strong>state DOWN</strong>&#8221; a little further along the same line of the results.</p>
<p><strong># dmesg | grep eth</strong></p>
<p>Read the entries in the dmesg log; they tell you the names of the network interfaces on your system.</p>
<p>&#8220;<strong>NO-CARRIER</strong>&#8221; indicates no uplink is detected: the interface is configured, but none of its bridge members has a network cable or link detected.</p>
<p>To fix this you will want to run the following commands:<br />
<strong># ifdown -a</strong><br />
<strong># vi /etc/network/interfaces</strong></p>
<p>By default the installer sets up &#8220;eth0&#8221; as your only bridge member, but since the network card numbering was set up differently, the interface the installer configured as eth0 is actually eth2 (in my case).</p>
<p><strong>Edit the single instance of eth0 with eth2</strong> &#8211; save the file and exit the editor.</p>
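<p>For reference, a minimal <code>/etc/network/interfaces</code> after the fix might look like this &#8211; the addresses below are purely illustrative, keep your own:</p>
<pre>auto lo
iface lo inet loopback

iface eth2 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth2
        bridge_stp off
        bridge_fd 0</pre>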
<p><strong># ifup -a</strong><br />
This should bring your interfaces back up. Try pinging your network gateway; it should be working now. Cheers.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">504</post-id>	</item>
		<item>
		<title>A comprehensive list of hypervisors and cloud platforms</title>
		<link>https://desantolo.com/2017/01/comprehensive-list-of-hypervisors-and-cloud-platforms-opensource-free/</link>
					<comments>https://desantolo.com/2017/01/comprehensive-list-of-hypervisors-and-cloud-platforms-opensource-free/#respond</comments>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Sun, 08 Jan 2017 23:21:11 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[cloudslang]]></category>
		<category><![CDATA[cockpit]]></category>
		<category><![CDATA[esxi]]></category>
		<category><![CDATA[kontena]]></category>
		<category><![CDATA[kubernetes]]></category>
		<category><![CDATA[mirantis]]></category>
		<category><![CDATA[panamax]]></category>
		<category><![CDATA[portainer]]></category>
		<category><![CDATA[proxmox]]></category>
		<category><![CDATA[rancher]]></category>
		<category><![CDATA[shipyard]]></category>
		<category><![CDATA[smartos]]></category>
		<guid isPermaLink="false">https://desantolo.com/?p=477</guid>

					<description><![CDATA[In my last post I discussed Proxmox and SmartOS as possible alternatives to ditching vmware ESXi for my homelab. Given the amount of information that is out there on the internet and that I spent quite a few hours trying &#8230; <a href="https://desantolo.com/2017/01/comprehensive-list-of-hypervisors-and-cloud-platforms-opensource-free/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>In my last post I discussed <a href="https://desantolo.com/2017/01/virtualization-hypervisor-docker-containers-all-in-one/">Proxmox and SmartOS as possible alternatives to ditching vmware ESXi</a> for my homelab.</p>
<p>Given the amount of information out there on the internet, and that I spent quite a few hours trying to find other open source projects and cloud platforms that could be alternatives, I thought: why not make a post linking to all the platforms I came across during my search? This way it will help someone else simply click through, opening new tabs.<br />
<span id="more-477"></span></p>
<div class="wpe-box wpe-box-note">
<p style="text-align: right;">List last updated 01/08/2017</p>
</div>
<p>The hypervisors (they only support VMs):</p>
<ul>
<li><a href="http://www.vmware.com/products/esxi-and-esx.html">vmware ESXi</a></li>
<li><a href="http://xenserver.org/">XenServer</a></li>
<li><a href="https://www.oracle.com/virtualization/vm-server-for-x86/index.html">Oracle VM</a> (based off Xen server project above)</li>
<li><a href="https://technet.microsoft.com/en-us/library/mt169373%28v=ws.11%29.aspx?f=255&amp;MSPPError=-2147217396">Microsoft Hyper-V</a></li>
</ul>
<p>The hybrids (these allow VMs and containers at the same time under the same host &#8211; no need to spin up VMs just to run containers):</p>
<ul>
<li><a href="https://www.joyent.com/smartos">SmartOS</a></li>
<li><a href="https://pve.proxmox.com/">Proxmox VE</a></li>
<li><a href="http://mesos.apache.org/">Apache Mesos</a></li>
<li><a href="https://dcos.io/">Mesosphere DC/OS Open source</a></li>
</ul>
<p>Obviously you can also run containers on Linux without using a bare-metal hypervisor like the options above. All you need to do is install Docker. But how are you going to manage/monitor/deploy your containers? The command line is an option, but there are tools out there.</p>
<p>Container orchestration tools:</p>
<ul>
<li><a href="http://cloudslang.io/">Cloudslang</a></li>
<li><a href="http://kubernetes.io/">Kubernetes </a>(the 500lb gorilla of orchestration tools)</li>
<li><a href="https://www.kontena.io/">Kontena</a></li>
<li><a href="https://cloudstack.apache.org/">Apache CloudStack</a> (this seems to manage only hypervisors and not containers)</li>
<li><a href="https://mesosphere.github.io/marathon/">Marathon </a>(for Mesos and DC/OS)</li>
<li><a href="http://portainer.io/">Portainer</a></li>
<li><a href="https://shipyard-project.com/">Shipyard</a></li>
<li><a href="http://panamax.io/">Panamax</a></li>
<li><a href="http://rancher.com/">Rancher</a> (a complete platform for running containers &#8211; highly complex)</li>
</ul>
<p>Kubernetes addons:</p>
<ul>
<li><a href="http://cockpit-project.org/">Cockpit </a>(multi-server web management)</li>
</ul>
<p>An interesting platform seems to be <a href="https://www.mirantis.com/software/openstack/">Mirantis OpenStack</a> &#8211; if you are willing to put in the effort and deploy several of its plugins it looks like you would be able to host VMs, containers and have a web front-end to manage them all. Since this is not a single solution and it requires you to deploy several plugins I am leaving this uncategorized for now.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://desantolo.com/2017/01/comprehensive-list-of-hypervisors-and-cloud-platforms-opensource-free/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">477</post-id>	</item>
		<item>
		<title>Virtualization hypervisor and containers all in one</title>
		<link>https://desantolo.com/2017/01/virtualization-hypervisor-docker-containers-all-in-one/</link>
					<comments>https://desantolo.com/2017/01/virtualization-hypervisor-docker-containers-all-in-one/#comments</comments>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Sun, 08 Jan 2017 10:01:26 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Proxmox]]></category>
		<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[containers]]></category>
		<category><![CDATA[docker]]></category>
		<category><![CDATA[openindiana]]></category>
		<category><![CDATA[opensolaris]]></category>
		<category><![CDATA[proxmox]]></category>
		<category><![CDATA[smartos]]></category>
		<category><![CDATA[zfs]]></category>
		<guid isPermaLink="false">https://desantolo.com/?p=475</guid>

					<description><![CDATA[I&#8217;m a big fan of virtualization, the ability to run multiple platforms and operating systems (called guests) in a single server (called host) is probably one of the best computing technologies of the past 10 years. Personally, I have been &#8230; <a href="https://desantolo.com/2017/01/virtualization-hypervisor-docker-containers-all-in-one/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>I&#8217;m a big fan of virtualization, the ability to run multiple platforms and operating systems (called guests) in a single server (called host) is probably one of the best computing technologies of the past 10 years.</p>
<p>Personally, I have been using virtualization since circa 2004. It all took off after 2006, when chip manufacturers started bundling virtualization technologies in their processors (Intel VT-x or AMD-V). The reason &#8220;cloud&#8221; computing is so popular can also be attributed to virtualization.</p>
<h3>In a container world&#8230;</h3>
<p>However, in the past couple of years a new technology has been making the rounds everywhere; the words &#8220;containers&#8221;, &#8220;Docker&#8221;, and &#8220;orchestration&#8221; have been picking up steam over the past year. They say that containers are changing the landscape for system administrators and application developers.</p>
<p>Claims that containers can be built and deployed in seconds, share a common storage layer and allow you to resize the container in real-time when you need more performance or capacity are really exciting concepts and I think the time is now for me to jump in and learn a thing of two about this new technology when its hot a new.<span id="more-475"></span></p>
<h3>Time to ditch VMware ESXi for a hybrid hypervisor?</h3>
<p>You may remember my blog entry <a href="https://desantolo.com/2011/05/building-a-low-power-sandy-bridge-esxi-zfs-storage-array/">building a low-power Sandy Bridge ESXi server with ZFS</a> &#8211; now, five years later, it is time to find a new platform that will let me keep my legacy virtual machines (VMs) while also hosting containers using Docker.</p>
<p>The process of finding a suitable replacement for ESXi may take a while and more than a single entry on my blog; this is the first on my journey.</p>
<p>Before replacing something that works with a new platform, I think it is worth pointing out the strengths and weaknesses of VMware ESXi, which has been my platform of choice for six years.</p>
<h4>Strengths of ESXi</h4>
<ul>
<li>an excellent Windows GUI (the vSphere client) that lets you manage your hypervisor without needing console or SSH access</li>
<li>a web interface as well, if you <a href="https://labs.vmware.com/flings/esxi-embedded-host-client">install a plugin</a></li>
<li>a virtual switch with VLAN support</li>
<li>support for PCI passthrough (Intel VT-d), allowing you to assign PCI devices to virtual guests</li>
</ul>
<h4>Weaknesses</h4>
<ul>
<li>does not support Docker containers (unless you create a virtual machine and run Docker from there &#8211; but I prefer a central platform if possible)</li>
<li>VMware <a href="http://pubs.vmware.com/Release_Notes/en/vsphere/65/vsphere-client-65-html5-functionality-support.html">continues to remove features from the free version</a> of ESXi &#8211; the vSphere client interface is no longer available in their latest release</li>
<li>nothing exciting has been released by VMware in the past two years (in terms of ESXi), and they keep pushing ESXi users toward paid licenses</li>
</ul>
<h3>The alternatives?</h3>
<p><strong>SmartOS</strong> is a fork of OpenIndiana/OpenSolaris. It has a lot of great security features, plus features from Solaris that I enjoy (you may have read of my love for the ZFS filesystem, which is native to SmartOS). Joyent has recently open-sourced their SmartDataCenter (&#8220;SDC&#8221;), which they are now calling Triton Enterprise.</p>
<p>What I like about it, other than the fact that it uses native Solaris technology and the ZFS filesystem for storage, is that it is a <strong>hybrid hypervisor</strong>: it can host both containers and VMs (using technology similar to VirtualBox, which also came out of the Solaris world).</p>
<p>The downside of this platform seems to be the complexity of deploying containers with it. You need a &#8220;head node&#8221; to act as the brains of the platform, and the head node does a lot of critical things: it monitors the network and the other compute nodes (where your VMs/containers run), and it also hosts the database for all the nodes. In dev mode you can force the head node to host VMs as well, but this is not recommended or good practice.</p>
<p>The web interface (SmartDataCenter) for managing your containers and VMs is also very rudimentary; there is no built-in console to your guests. You need to run a lot of commands in the head node&#8217;s shell, making JSON queries to grab the data you want, such as the VNC server address and port for your guests.</p>
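<p>For example, pulling a guest&#8217;s VNC endpoint out of the JSON those head-node tools emit looks roughly like this. This is a hypothetical sketch: the exact output shape of tools like vmadm is an assumption here, so the snippet parses a canned sample rather than making a live call.</p>

```shell
# Hypothetical sketch: on a SmartOS head node you would pipe the JSON from
# something like "vmadm get <uuid>" into a query. The JSON shape below is
# an assumption, so we parse a canned sample instead of a live call.
sample='{"uuid":"3f5a","vnc":{"host":"10.0.0.5","port":5901}}'

# Extract the quoted "host" value and the numeric "port" value with sed.
host=$(printf '%s' "$sample" | sed -n 's/.*"host":"\([^"]*\)".*/\1/p')
port=$(printf '%s' "$sample" | sed -n 's/.*"port":\([0-9]*\).*/\1/p')

echo "VNC console at ${host}:${port}"
```

<p>In practice you would substitute the canned sample with the real command output, but the pattern &#8211; shell plus ad-hoc JSON extraction &#8211; is the workflow the interface pushes you into.</p>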
<p>Honestly, I have not dug much deeper into SmartOS, but I probably should &#8211; it looks like an awesome project. I am sure it makes sense for people who want a platform for scalable container/hypervisor deployments, but given the complexity, it does not look like a good fit for replacing my single virtualization server at home.</p>
<p><strong>Proxmox</strong> is another platform I am looking at. You may recall that seven years ago I discovered Proxmox Virtual Environment and started using it in my lab. That was Proxmox VE 1.5, I think, and I recently discovered they have made a lot of strides in the right direction.</p>
<p>Just a few weeks ago they released their latest PVE 4.4, which supports ZFS data pools (via zfsonlinux), not to mention that they have replaced OpenVZ with LXC (Linux containers). It may be worth downloading their latest release and checking out the platform again.</p>
<p>Other than Proxmox and SmartOS, I have not come across any other &#8216;hybrid&#8217; hypervisors. Please share in the comments if there is something else I should check out.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://desantolo.com/2017/01/virtualization-hypervisor-docker-containers-all-in-one/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">475</post-id>	</item>
		<item>
		<title>Add additional IP&#8217;s on different subnets using same Ethernet card on PVE</title>
		<link>https://desantolo.com/2010/05/add-additional-ips-on-different-subnets-using-same-ethernet-card-on-pve/</link>
					<comments>https://desantolo.com/2010/05/add-additional-ips-on-different-subnets-using-same-ethernet-card-on-pve/#respond</comments>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Fri, 21 May 2010 03:42:23 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Proxmox]]></category>
		<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[proxmox]]></category>
		<category><![CDATA[route]]></category>
		<guid isPermaLink="false">http://gioflux.wordpress.com/2010/05/21/add-additional-ips-on-different-subnets-using-same-ethernet-card-on-pve/</guid>

					<description><![CDATA[To do this, we need to add a custom route to the server, we need to add the network and netmask addresses, to test and see if it works: route add -net 10.5.0.0 netmask 255.255.255.0 dev vmbr0 if it works, &#8230; <a href="https://desantolo.com/2010/05/add-additional-ips-on-different-subnets-using-same-ethernet-card-on-pve/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>To do this, we need to add a custom route on the server, specifying the network and netmask addresses. To test whether it works:</p>
<blockquote><p>route add -net 10.5.0.0 netmask 255.255.255.0 dev vmbr0</p></blockquote>
<p>If it works, make it persistent by adding the following to your /etc/network/interfaces file:</p>
<blockquote><p>iface vmbr0 inet static<br />
&#8230;<br />
bridge_fd 0<br />
up route add -net 10.5.0.0 netmask 255.255.255.0 dev vmbr0<br />
down route del -net 10.5.0.0 netmask 255.255.255.0 dev vmbr0<br />
&#8230;</p></blockquote>
<p>Did it not work? Remove the route with:</p>
<blockquote><p>route del -net 10.5.0.0 netmask 255.255.255.0 dev vmbr0</p></blockquote>
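<p>On newer Debian-based PVE hosts, the same persistent route can also be expressed with iproute2 (the ip command) instead of the legacy route command. A sketch of the equivalent /etc/network/interfaces stanza, reusing the 10.5.0.0/24 example network from above:</p>

```shell
# Sketch of an equivalent stanza using iproute2 ("ip route") instead of
# the legacy "route" command; 10.5.0.0/24 is the example network above.
iface vmbr0 inet static
    ...
    bridge_fd 0
    post-up  ip route add 10.5.0.0/24 dev vmbr0
    pre-down ip route del 10.5.0.0/24 dev vmbr0
    ...
```

<p>The CIDR form 10.5.0.0/24 is the same network/netmask pair as 10.5.0.0 with 255.255.255.0.</p>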
]]></content:encoded>
					
					<wfw:commentRss>https://desantolo.com/2010/05/add-additional-ips-on-different-subnets-using-same-ethernet-card-on-pve/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">21</post-id>	</item>
	</channel>
</rss>
