Eve-NG resource usage with Juniper labs

Some lab images and topologies are very resource-intensive. You can’t just run any lab you want on any host machine. For this reason, I thought it would be worthwhile to share some experience with running Juniper-based labs on Eve-NG.

The findings presented here are based on a lab with 14 nodes. The lab topology is taken from the Juniper book Day One: Routing the Internet Protocol. This lab consists of the following nodes:

  • 10x vMX 14.1R4.8, 1 vCPU, 2 GB RAM
  • 2x vSRX 12.1X47-D20.7, 2 vCPUs, 2 GB RAM
  • 2x Ubuntu server 16.04.3 LTS, 1 vCPU, 1 GB RAM

The lab topology looks like this:

I run this lab on a Windows 10 laptop with a quad-core i7 and 32 GB of RAM. Using VMware Player, I give the Eve-NG appliance 24 GB of RAM and 4 vCPUs. A cool thing about Eve-NG is that it enables UKSM by default. UKSM provides kernel memory deduplication, and in a setup like this, with mostly identical machines, it can be quite a benefit. There is some more information about Eve-NG and UKSM at this blog.
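To see what UKSM is actually doing, you can inspect its sysfs interface on the appliance. This is a sketch: the path below assumes a UKSM-patched kernel such as the one Eve-NG ships, and exact counter names can vary between UKSM versions.

```shell
# Inspect UKSM on the Eve-NG appliance (run on the appliance itself).
# /sys/kernel/mm/uksm is only present on UKSM-patched kernels.
UKSM=/sys/kernel/mm/uksm
if [ -d "$UKSM" ]; then
    echo "UKSM run state:    $(cat "$UKSM/run")"          # 1 = scanning enabled
    echo "UKSM pages shared: $(cat "$UKSM/pages_shared")" # deduplicated pages
else
    echo "UKSM not available on this kernel"
fi
```

On a stock (non-UKSM) kernel, `/sys/kernel/mm/ksm/` exposes similar counters for plain KSM.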

I was only able to boot two VMs at a time. This might be because UKSM is heavy on the CPU until its deduplication tasks settle down. After starting the whole lab two machines at a time, I ended up with surprisingly low resource usage for such a large lab. Here is the output of htop on the Eve-NG appliance with every node booted:

At the same time, the Eve-NG status window looks like this:

I gave the Eve-NG appliance 4 vCPUs and 24 GB of RAM, and only about a third of each is in use once the lab has booted and settled in. The nodes are allocated 26 GB of RAM in total, yet the Eve-NG appliance consumes under 8 GB, and that includes all of the nodes plus the appliance itself. It’s apparently realistic to do some serious labbing on a laptop nowadays. The CPU bottleneck while booting nodes is a drawback, though: with a lot of nodes, starting them two at a time gets tedious, so you don’t want to do it often. Resource usage like this is the main challenge of running Juniper devices in labs.
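As a sanity check, the nominal guest RAM allocation from the node list above adds up as follows:

```shell
# Nominal RAM allocated to the guests (GB), per the node list:
# 10x vMX @ 2 GB, 2x vSRX @ 2 GB, 2x Ubuntu @ 1 GB.
nominal=$(( 10*2 + 2*2 + 2*1 ))
echo "nominal guest RAM: ${nominal} GB"   # prints 26
```

With observed usage under 8 GB for the appliance plus all guests, deduplication and general memory overcommit are saving roughly 18 GB here.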

For more serious work, you will probably get the best results from a real server with more than 4 physical cores. If RAM is not a constraint, you could consider turning off UKSM (it can be done from the status window in your lab) and saving the CPU hit.
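If you prefer the command line over the status window, UKSM can usually also be toggled through sysfs. This is a sketch assuming the UKSM interface at /sys/kernel/mm/uksm; run it as root on the appliance.

```shell
# Toggle UKSM scanning at runtime (root required; UKSM-patched kernel only).
echo 0 > /sys/kernel/mm/uksm/run   # 0 = stop scanning (saves CPU)
echo 1 > /sys/kernel/mm/uksm/run   # 1 = resume scanning (saves RAM)
```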

5 comments

  1. Hi there,

    I am using vMX 14 in Eve-NG. I am trying to configure a logical system with an lt interface; I have checked the config twice and it’s okay. I have configured tunnel services, and the lt interface shows up in the vMX.

    But when I assign the lt interface to the logical system, it no longer works. The lt sub-interface doesn’t show up under the respective LS.

    Please suggest any settings needed when adding a vMX node in Eve-NG.

    1. Hi Swapnil,

      Can you name specific interface numbers? A vMX in Eve-NG has some interfaces that can’t be used because they serve as the internal link between the RE and the PFE.

      1. Hi Jaap,

        Thanks for the reply.

        The interface name and number is lt-0/0/10, which is enabled after configuring tunnel services on vMX 14. I can see the same lt interface on the vMX master, but when I assign the lt interface to the respective logical system, it doesn’t show up under the LS.

        Currently I am reading a blog post, “http://sk1f3r.ru/jlab-2”, where the author uses the same vMX version that I have, and he was able to configure the lt interface and his labs work fine. (I tried to contact him but didn’t get a response.)

        So I want to know: before starting the vMX box in Eve-NG, do I need to change the interface settings on the vMX node under the edit option (QEMU-Arch / QEMU-Nic)?

    1. Hi Retno,

      Thanks for your compliments. A spoiler for my next subject: it will be about using Vagrant and on-demand cloud resources to temporarily deploy big labs in a cost-effective way. I’m working on a new version of my site right now, hoping to release it with one new post before the end of 2018.

Comments are closed.