<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>containers &#8211; Giovanni F. Mazzeo De Santolo</title>
	<atom:link href="https://desantolo.com/tag/containers/feed/" rel="self" type="application/rss+xml" />
	<link>https://desantolo.com</link>
	<description>That Italian IT guy</description>
	<lastBuildDate>Sun, 27 Dec 2020 05:38:52 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.2</generator>
<site xmlns="com-wordpress:feed-additions:1">123042357</site>	<item>
		<title>Allowing OpenVPN to create tun device on LXC / Proxmox</title>
		<link>https://desantolo.com/2018/11/allowing-openvpn-to-create-tun-device-on-lxc-proxmox/</link>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Mon, 19 Nov 2018 01:56:57 +0000</pubDate>
				<category><![CDATA[Proxmox]]></category>
		<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[containers]]></category>
		<category><![CDATA[lxc]]></category>
		<category><![CDATA[openvpn]]></category>
		<category><![CDATA[proxmox]]></category>
		<guid isPermaLink="false">https://desantolo.com/?p=569</guid>

					<description><![CDATA[Due to the built-in security of LXC, trying to set up a tunnel interface inside a container is blocked by default. ERROR: Cannot open TUN/TAP dev /dev/net/tun To allow this for a specific container in Proxmox, we need to make a &#8230; <a href="https://desantolo.com/2018/11/allowing-openvpn-to-create-tun-device-on-lxc-proxmox/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>Due to the built-in security of LXC, trying to set up a tunnel interface inside a container is blocked by default.</p>
<p><code>ERROR: Cannot open TUN/TAP dev /dev/net/tun</code></p>
<p>To allow this for a specific container in Proxmox, we need to make a few tweaks. We only want to grant the device to that one container &#8211; allowing every container to set up tunnels is a bad idea, since an attacker could use one to hide their tracks.</p>
<p>How to do this: add these lines to <code>/etc/pve/lxc/&lt;container-id&gt;.conf</code></p>
<pre># allow the container access to the tun character device (major 10, minor 200)
lxc.cgroup.devices.allow = c 10:200 rwm
# load the tun module on the host and create /dev/net/tun inside the container at start
lxc.hook.autodev = sh -c "modprobe tun; cd ${LXC_ROOTFS_MOUNT}/dev; mkdir net; mknod net/tun c 10 200; chmod 0666 net/tun"</pre>
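<p>After saving the config, restart the container and check that the device node shows up. A quick check &#8211; assuming container ID 100 here, adjust to yours:</p>
<pre># on the Proxmox host
pct stop 100 && pct start 100

# inside the container
ls -l /dev/net/tun</pre>
<p>If you see a character device with major/minor 10, 200 and mode 0666, OpenVPN should now be able to open it.</p>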
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">569</post-id>	</item>
		<item>
		<title>Fix zfs-mount.service failing after reboot on Proxmox</title>
		<link>https://desantolo.com/2017/07/fix-zfs-mount-service-failing-after-reboot-on-proxmox/</link>
					<comments>https://desantolo.com/2017/07/fix-zfs-mount-service-failing-after-reboot-on-proxmox/#comments</comments>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Sat, 01 Jul 2017 01:33:33 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Proxmox]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Troubleshooting]]></category>
		<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[containers]]></category>
		<category><![CDATA[proxmox]]></category>
		<category><![CDATA[zfs]]></category>
		<guid isPermaLink="false">https://desantolo.com/?p=545</guid>

					<description><![CDATA[During my homelab migration to Proxmox I came across a bug that prevents all your ZFS mount points from mounting &#8211; an even bigger pain in the ass if you host containers in &#8230; <a href="https://desantolo.com/2017/07/fix-zfs-mount-service-failing-after-reboot-on-proxmox/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>During my homelab migration to Proxmox I came across a bug that prevents all your ZFS mount points from mounting &#8211; an even bigger pain in the ass if you host containers in that folder.<br />
<span id="more-545"></span><br />
<strong>Cause of the problem:</strong> when you use a zpool other than the default rpool and set up a directory mount for PVE to use for an ISO datastore, VZ dumps, etc., and the ZFS mount points have not finished mounting at boot time, Proxmox will attempt to create the directory path structure itself.</p>
<p>The problem with creating a directory before the filesystem is mounted is that when zfs-mount.service runs and attempts to mount the ZFS mount points, you get errors like these:</p>
<p><code>root@pve:~# <strong>systemctl status zfs-mount.service</strong></code><br />
<code>● zfs-mount.service - Mount ZFS filesystems</code><br />
<code> Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)</code><br />
<code> Active: failed (Result: exit-code) since Fri 2017-06-30 18:10:21 PDT; 21s ago</code><br />
<code> Process: 6590 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)</code><br />
<code> Main PID: 6590 (code=exited, status=1/FAILURE)</code></p>
<p><code>Jun 30 18:10:19 pve systemd[1]: Starting Mount ZFS filesystems...</code><br />
<code>Jun 30 18:10:20 pve zfs[6590]: cannot mount '/gdata/pve/subvol-102-disk-1': directory is not empty</code><br />
<code>Jun 30 18:10:20 pve zfs[6590]: cannot mount '/gdata/pve/subvol-106-disk-1': directory is not empty</code><br />
<code>Jun 30 18:10:20 pve zfs[6590]: cannot mount '/gdata/pve/subvol-109-disk-1': directory is not empty</code><br />
<code>Jun 30 18:10:21 pve systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE</code><br />
<code>Jun 30 18:10:21 pve systemd[1]: Failed to start Mount ZFS filesystems.</code><br />
<code>Jun 30 18:10:21 pve systemd[1]: zfs-mount.service: Unit entered failed state.</code><br />
<code>Jun 30 18:10:21 pve systemd[1]: zfs-mount.service: Failed with result 'exit-code'.</code></p>
<p><strong>Fixing the root of the problem:</strong> change how Proxmox deals with mounts by editing /etc/pve/storage.cfg &#8211; you need to add &#8220;mkdir 0&#8221; and &#8220;is_mountpoint 1&#8221; to the directory mount. The first stops Proxmox from creating the directory itself; the second tells PVE the path is a mount point and to only use the storage once something is actually mounted there. Example:</p>
<p><code>dir: gdata-dump</code><br />
<code> path /gdata/vz</code><br />
<code> content iso,vztmpl,backup</code><br />
<code> maxfiles 0</code><br />
<code> shared 0</code><br />
<code> mkdir 0</code><br />
<code> is_mountpoint 1</code></p>
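<p>After updating storage.cfg, a quick way to confirm PVE still recognizes the storage (pvesm is the Proxmox storage manager CLI):</p>
<p><code># pvesm status</code></p>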
<p>Now we need to do some system cleanup before we reboot and confirm the problem is fixed.</p>
<p>Let&#8217;s check which mount points have failed:<br />
<code>root@pve:~# <strong>zfs list -r -o name,mountpoint,mounted</strong></code></p>
<p>Now let&#8217;s unmount all ZFS mount points (except rpool of course &#8211; assuming the rootfs is ZFS):</p>
<p><code># zfs umount -a</code></p>
<p>After making sure the ZFS mount points are unmounted, we can delete the leftover folders. Recall the failed mount points that the zfs list command gave you and delete them one by one like so:</p>
<p><code># rm -rf /gdata/pve/subvol-102-disk-1</code></p>
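<p>If several mount points failed, you can script the cleanup instead of deleting folders one by one. A sketch &#8211; it assumes the affected subvolumes all live under the gdata/pve dataset, so review the list before running anything with rm -rf:</p>
<pre># print the mountpoints of datasets that failed to mount, then remove the stale dirs
zfs list -H -r -o mountpoint,mounted gdata/pve | awk '$2 == "no" { print $1 }' |
    while read -r dir; do rm -rf "$dir"; done</pre>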
<p>Do this for each folder that had issues mounting (or use the loop above). You then have a choice: remount everything with zfs mount -O -a &#8212; or better&#8230; reboot the system and confirm it&#8217;s fixed. I like the latter better. So reboot.</p>
<p>After it boots back up, check that the service was able to mount ZFS without issues:</p>
<p><code># systemctl status zfs-mount.service</code><br />
<code># zfs list -r -o name,mountpoint,mounted</code></p>
<p>That&#8217;s all folks&#8230; if you made the edit to storage.cfg and added the two options, this should not occur again. This was an annoying bug to deal with, but it&#8217;s good to have found a proper fix rather than a startup script doing dirty tricks!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://desantolo.com/2017/07/fix-zfs-mount-service-failing-after-reboot-on-proxmox/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">545</post-id>	</item>
		<item>
		<title>LXC allow non-root users to bind to port 80 (couchpotato example)</title>
		<link>https://desantolo.com/2017/06/lxc-allow-non-root-users-to-bind-to-port-80-couchpotato-example/</link>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Thu, 29 Jun 2017 08:37:42 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Proxmox]]></category>
		<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[authbind]]></category>
		<category><![CDATA[containers]]></category>
		<category><![CDATA[couchpotato]]></category>
		<category><![CDATA[debian]]></category>
		<category><![CDATA[linux]]></category>
		<guid isPermaLink="false">https://desantolo.com/?p=541</guid>

					<description><![CDATA[A follow-up to my last post on unprivileged port access in Linux containers. This time I have a CouchPotato container whose default port I want to change from 5050 to 80, so that accessing it from the local network is as simple &#8230; <a href="https://desantolo.com/2017/06/lxc-allow-non-root-users-to-bind-to-port-80-couchpotato-example/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>A follow-up to my last post on unprivileged port access in Linux containers.</p>
<p>This time I have a CouchPotato container whose default port I want to change from 5050 to 80, so that accessing it from the local network is as simple as http://mycouch/.<br />
<span id="more-541"></span><br />
Since CouchPotato is a Python script, my other method of whitelisting the binary won&#8217;t work. An alternative is <strong>authbind</strong>, which grants a user/group the privilege to bind to a restricted port (non-root processes can&#8217;t bind to ports below 1024).</p>
<p><strong>Environment:</strong> an LXC container (Debian 9.0 Stretch) image, with CouchPotato running on its default port 5050 and a systemd unit already set up (the CouchPotato service user is named gmedia).</p>
<p><code># groupadd -g 3200 gmedia</code><br />
<code># useradd -u 3200 -g gmedia -M gmedia</code><br />
<code># apt-get install authbind</code><br />
<code># touch /etc/authbind/byport/80</code><br />
<code># chown gmedia /etc/authbind/byport/80</code><br />
<code># chmod 500 /etc/authbind/byport/80</code></p>
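<p>Before touching the service, you can sanity-check the authbind setup as the gmedia user. A quick test &#8211; assuming python3 is installed in the container:</p>
<p><code># su -s /bin/sh gmedia -c 'authbind python3 -m http.server 80'</code></p>
<p>If it starts serving on port 80 instead of failing with &#8220;Permission denied&#8221;, authbind is doing its job &#8211; Ctrl+C to stop it.</p>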
<p>Now edit the startup settings (ExecStart/User/Group):<br />
<code># nano /etc/systemd/system/couchpotato.service</code></p>
<p>Should look something like this:</p>
<p><code>[Unit]</code><br />
<code>Description=CouchPotato application instance</code><br />
<code>After=network.target</code></p>
<p><code>[Service]</code><br />
<code>ExecStart=/usr/bin/authbind --deep /opt/CouchPotatoServer/CouchPotato.py</code><br />
<code>Type=simple</code><br />
<code>User=gmedia</code><br />
<code>Group=gmedia</code></p>
<p><code>[Install]</code><br />
<code>WantedBy=multi-user.target</code></p>
<p>Now it&#8217;s time to test:</p>
<p><code># systemctl daemon-reload</code><br />
<code># systemctl start couchpotato.service</code><br />
<code># systemctl status couchpotato.service</code></p>
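<p>If the unit isn&#8217;t enabled yet (mine already was, as the status output below shows), enable it so it starts at boot:</p>
<p><code># systemctl enable couchpotato.service</code></p>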
<p>Confirm all is hunky dory.</p>
<p><code>root@couchpotato:~# systemctl status couchpotato.service</code><br />
<code>● couchpotato.service - CouchPotato application instance</code><br />
<code> Loaded: loaded (/etc/systemd/system/couchpotato.service; enabled; vendor preset: enabled)</code><br />
<code> Active: active (running) since Thu 2017-06-29 08:35:32 UTC; 2s ago</code><br />
<code> Main PID: 1203 (python)</code><br />
<code> Tasks: 9 (limit: 4915)</code><br />
<code> CGroup: /system.slice/couchpotato.service</code><br />
<code> └─1203 python /opt/CouchPotatoServer/CouchPotato.py</code></p>
<p><code>Jun 29 08:35:32 couchpotato systemd[1]: Started CouchPotato application instance.</code><br />
<code>root@couchpotato:~# lsof -i :80</code><br />
<code>COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME</code><br />
<code>python 1203 gmedia 49u IPv4 6008724 0t0 TCP *:http (LISTEN)</code><br />
<code>python 1203 gmedia 52u IPv4 6024843 0t0 TCP 192.168.200.140:http-&gt;192.168.200.5:56928 (ESTABLISHED)</code><br />
<code>root@couchpotato:~#</code></p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">541</post-id>	</item>
		<item>
		<title>Allow non-root processes to bind to privileged ports</title>
		<link>https://desantolo.com/2017/06/allow-non-root-processes-to-bind-to-privileged-ports/</link>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Wed, 28 Jun 2017 07:53:49 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Troubleshooting]]></category>
		<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[containers]]></category>
		<category><![CDATA[debian]]></category>
		<category><![CDATA[networking]]></category>
		<category><![CDATA[proxmox]]></category>
		<guid isPermaLink="false">https://desantolo.com/?p=538</guid>

					<description><![CDATA[As I work on my homelab migration from FreeNAS into Linux containers, I need to move my FreeBSD jails to LXC. In *nix, binding to well-known ports (those below 1024) requires special privileges or a kernel setting. In &#8230; <a href="https://desantolo.com/2017/06/allow-non-root-processes-to-bind-to-privileged-ports/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>As I work on my homelab migration from FreeNAS into Linux containers, I need to move my FreeBSD jails to LXC.</p>
<p>In *nix, binding to well-known ports (those below 1024) requires special privileges or a kernel setting. In FreeBSD, a simple <code>sysctl net.inet.ip.portrange.reservedhigh=1</code> was enough to allow the jail to use any port.</p>
<p>On LXC I had to figure out how to do the same thing, and it&#8217;s quite different. My environment is a Debian Stretch LXC container, but this should work on other Linux distributions.</p>
<p><code><strong># apt-get install libcap2-bin</strong></code><br />
<code><strong># setcap 'cap_net_bind_service=+ep' /usr/bin/transmission-daemon</strong></code></p>
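<p>You can read the capability back with getcap (also part of libcap2-bin) to confirm it took &#8211; it should print something like:</p>
<p><code># getcap /usr/bin/transmission-daemon</code><br />
<code>/usr/bin/transmission-daemon = cap_net_bind_service+ep</code></p>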
<p>In the example above, the binary /usr/bin/transmission-daemon can now bind to privileged ports (port 80, http, in my case) while the service runs as a non-root user.</p>
<p>Hopefully this helps folks out there. The answer took some digging, but I already had an idea of what was needed thanks to my FreeBSD experience with jails 🙂</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">538</post-id>	</item>
		<item>
		<title>Running Windows containers in Docker</title>
		<link>https://desantolo.com/2017/01/running-windows-containers-in-docker/</link>
					<comments>https://desantolo.com/2017/01/running-windows-containers-in-docker/#respond</comments>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Mon, 09 Jan 2017 00:15:51 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[containers]]></category>
		<category><![CDATA[docker]]></category>
		<category><![CDATA[windows server]]></category>
		<guid isPermaLink="false">https://desantolo.com/?p=479</guid>

					<description><![CDATA[Microsoft Windows Server 2016 now supports containers. This means we can isolate Windows applications while sharing the underlying Windows kernel, much like we have been doing on Linux for years with OpenVZ or, more recently, LXC (Linux containers). &#8230; <a href="https://desantolo.com/2017/01/running-windows-containers-in-docker/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>Microsoft Windows Server 2016 now supports containers. This means we can isolate Windows applications while sharing the underlying Windows kernel, much like we have been doing on Linux for years with OpenVZ or, more recently, LXC (Linux containers).</p>
<p>On January 4, 2017 Rancher announced experimental support for Windows containers (link below).</p>
<p>Official <a href="https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/index">Microsoft documentation on containers</a>.<br />
<a href="http://rancher.com/rancher-1-3-experimental-windows-support/">Rancher v.1.3</a> has implemented experimental windows container support.</p>
<p>This is a good reason to spin up a Windows Server 2016 node and experiment in a lab. I&#8217;ll be looking forward to trying this when I get some time.</p>
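<p>For reference, the quick start on a fresh Server 2016 node looked roughly like this at the time &#8211; a sketch based on the Microsoft docs linked above, so check those for the current steps:</p>
<pre># install the Docker engine from the Microsoft provider, reboot, then run a test container
PS C:\> Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
PS C:\> Install-Package -Name docker -ProviderName DockerMsftProvider
PS C:\> Restart-Computer -Force
PS C:\> docker run microsoft/windowsservercore cmd /c echo Hello from a Windows container</pre>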
]]></content:encoded>
					
					<wfw:commentRss>https://desantolo.com/2017/01/running-windows-containers-in-docker/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">479</post-id>	</item>
		<item>
		<title>Virtualization hypervisor and containers all in one</title>
		<link>https://desantolo.com/2017/01/virtualization-hypervisor-docker-containers-all-in-one/</link>
					<comments>https://desantolo.com/2017/01/virtualization-hypervisor-docker-containers-all-in-one/#comments</comments>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Sun, 08 Jan 2017 10:01:26 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Proxmox]]></category>
		<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[containers]]></category>
		<category><![CDATA[docker]]></category>
		<category><![CDATA[openindiana]]></category>
		<category><![CDATA[opensolaris]]></category>
		<category><![CDATA[proxmox]]></category>
		<category><![CDATA[smartos]]></category>
		<category><![CDATA[zfs]]></category>
		<guid isPermaLink="false">https://desantolo.com/?p=475</guid>

					<description><![CDATA[I&#8217;m a big fan of virtualization: the ability to run multiple platforms and operating systems (called guests) on a single server (called the host) is probably one of the best computing technologies of the past 10 years. Personally, I have been &#8230; <a href="https://desantolo.com/2017/01/virtualization-hypervisor-docker-containers-all-in-one/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>I&#8217;m a big fan of virtualization: the ability to run multiple platforms and operating systems (called guests) on a single server (called the host) is probably one of the best computing technologies of the past 10 years.</p>
<p>Personally, I have been using virtualization since circa 2004. It all took off after 2006, when chip manufacturers started bundling virtualization technologies into their processors (Intel VT-x or AMD-V). The popularity of &#8220;cloud&#8221; computing can also largely be attributed to virtualization.</p>
<h3>In a container world&#8230;</h3>
<p>However, in the past couple of years a new technology has been making the rounds everywhere: &#8220;containers&#8221;, &#8220;docker&#8221; and &#8220;orchestration&#8221; have really picked up steam this past year. They say that containers are changing the landscape for system administrators and application developers.</p>
<p>Claims that containers can be built and deployed in seconds, share a common storage layer and can be resized in real-time when you need more performance or capacity are really exciting, and I think now is the time for me to jump in and learn a thing or two about this technology while it&#8217;s hot and new.<span id="more-475"></span></p>
<h3>Time to ditch vmware ESXi for a hybrid hypervisor?</h3>
<p>You may remember my blog entry <a href="https://desantolo.com/2011/05/building-a-low-power-sandy-bridge-esxi-zfs-storage-array/">building a low-power sandy bridge ESXi server with ZFS</a> &#8211; now, 5 years later, it is time to find a new platform that lets me keep my legacy virtual machines (VMs) while also hosting containers using Docker.</p>
<p>The process of finding a suitable replacement for ESXi may take a while and more than a single entry on my blog. This is the first entry of my journey.</p>
<p>Before replacing something that works, I think it is good to point out the strengths and weaknesses of VMware ESXi (which has been my platform of choice for 6 years).</p>
<h4>Strengths of ESXi</h4>
<ul>
<li>awesome Windows GUI (the vSphere client) that lets you manage your hypervisor without needing the console or ssh</li>
<li>a web interface to manage it too, if you <a href="https://labs.vmware.com/flings/esxi-embedded-host-client">install a plugin</a></li>
<li>virtual switch with VLAN support</li>
<li>support for PCI passthrough (Intel VT-d), allowing you to assign PCI devices to virtual guests</li>
</ul>
<h4>Weaknesses</h4>
<ul>
<li>does not support Docker containers (unless you create a virtual machine and run Docker from there &#8211; but I prefer a central platform if possible)</li>
<li>VMware <a href="http://pubs.vmware.com/Release_Notes/en/vsphere/65/vsphere-client-65-html5-functionality-support.html">continues to remove features from the free version</a> of ESXi &#8211; the vSphere client interface is no longer available in their latest release</li>
<li>nothing exciting has been released by VMware in the past 2 years (in terms of ESXi), and they push ESXi users towards paid licenses</li>
</ul>
<h3>The alternatives?</h3>
<p><strong>SmartOS</strong> is a fork of OpenIndiana/OpenSolaris. It seems to have a lot of great security features, plus features from Solaris that I enjoy (you may have read of my love for the ZFS filesystem, which is native to SmartOS). Joyent has recently open-sourced their SmartDataCenter &#8220;SDC&#8221;, which they now call Triton Enterprise.</p>
<p>What I like about it, other than the fact that it is native Solaris and uses the ZFS filesystem for storage, is that it is a <strong>hybrid hypervisor</strong>: it can host both containers and VMs (using technology similar to VirtualBox, since VirtualBox is also from the Solaris world).</p>
<p>The downside of this platform seems to be the complexity needed to deploy containers with it. You need a &#8220;head node&#8221; to be the brains of the platform, and the head node does a lot of critical things: it monitors the network and the other compute nodes (where you host your VMs/containers), and it hosts the database for all the nodes. In dev mode you can force the head node to host VMs as well, but this is not recommended or good practice.</p>
<p>The web interface (SmartDataCenter) for managing your containers and VMs is also very rudimentary; there is no built-in console to your guests. You need to run a lot of commands in the head node&#8217;s shell, making JSON queries to grab data such as the VNC server and port address for your guests.</p>
<p>Honestly, I have not dug much deeper into SmartOS, but I probably should &#8211; it looks like an awesome project. I am sure it makes sense for people who want a scalable container/hypervisor platform, but as a replacement for my single virtualization server at home it does not look like a good choice, given the complexity.</p>
<p><strong>Proxmox</strong> is another platform I am looking at. You may recall that 7 years ago I discovered the Proxmox Virtual Environment and started using it in my lab. That was Proxmox VE 1.5, I think, and I recently discovered they have made a lot of strides in the right direction.</p>
<p>Just a few weeks ago they released their latest PVE 4.4, and they now support ZFS data pools (via zfsonlinux), not to mention that they have replaced OpenVZ with LXC (Linux containers). It may be worth it for me to download their latest release and check out the platform again.</p>
<p>Other than Proxmox or SmartOS, I have not come across any other &#8216;hybrid&#8217; hypervisors. Please share in the comments if there is something else I should check out.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://desantolo.com/2017/01/virtualization-hypervisor-docker-containers-all-in-one/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">475</post-id>	</item>
	</channel>
</rss>
