Characterize the relationship between sustained requests / sec to the pulp-content app and the number of pulp-content apps without increases in latency.
Identify rate limiting components in various test scenarios.
1 Load generator - m4.2xlarge instance:
1 Pulp test machine - m4.2xlarge instance:
Minio, providing artifact storage, is also running on the system under test.
Pulp will be deployed as containers on the "Pulp Test Machine" hardware above:
Pulp will be configured to use Minio, which is also deployed on the same machine as Pulp. These artifacts aren't actually going to be served during testing, so Minio is expected to use a minimal amount of hardware resources on the system under test.
The versions under test are: pulpcore 3.22.0 and pulp_rpm 3.18.9
Components:
Locust --HTTP Requests--> pulpcore-content
Locust <--302 Redirect-- pulpcore-content
Locust will not follow the redirect. Minio is not the software under test; only pulpcore-content is.
The reverse proxy is entirely bypassed as we do not want to test nginx, only Pulp.
Locust is deployed with 16 workers on the load generator machine.
The URLs from the RHEL9 BaseOS repository are gathered into a text file using this script:
import argparse
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup


def process_page(url):
    """Recursively print the full URL of every .rpm file reachable from url."""
    page = requests.get(url)
    soup = BeautifulSoup(page.content, "html.parser")
    for link in soup.find_all("a"):
        if link.text == "../":
            # Skip the parent-directory link so we never walk back up the tree.
            continue
        link_url = urljoin(url, link["href"])
        if link_url.endswith(".rpm"):
            print(link_url)
        elif link_url.startswith(url):
            # Only recurse into pages under the current URL.
            process_page(link_url)


parser = argparse.ArgumentParser(description="Get the full URLs of .rpm files at a specific URL.")
parser.add_argument("base_url", help="The base URL to fetch links from")
args = parser.parse_args()
process_page(args.base_url)
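As a sanity check on how the script builds links: urljoin resolves a relative href against the page URL but returns an absolute href unchanged, which is why recursion is restricted to URLs under the current page. The package path below is illustrative, not a real listing entry:

```python
from urllib.parse import urljoin

base = "https://cdn.redhat.com/content/dist/rhel9/9/x86_64/baseos/os/"

# A relative href resolves against the current page URL.
relative = urljoin(base, "Packages/b/bash-5.1.8-6.el9.x86_64.rpm")
# An absolute href is returned unchanged, so unfiltered recursion could
# wander off to other hosts entirely.
absolute = urljoin(base, "https://example.com/elsewhere/")

print(relative)
print(absolute)
```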
Here is the locustfile.py:
import random
import re

from locust import FastHttpUser, task

# Ports of the pulpcore-content apps that requests are spread across.
NUM_CONTENT_APPS = 2
START_PORT = 24737
END_PORT = START_PORT + NUM_CONTENT_APPS


class QuickstartUser(FastHttpUser):
    def on_start(self):
        with open("/mnt/locust/urls.txt") as file:
            # Strip trailing newlines so the URLs are usable as-is.
            self.urls = [line.strip() for line in file]

    @task
    def random_rpm(self):
        original_url = random.choice(self.urls)
        # Point the request at a randomly chosen content app.
        new_port = str(random.randint(START_PORT, END_PORT - 1))
        new_url = re.sub(r":\d+", ":" + new_port, original_url)
        # Do not follow the 302 to Minio; only pulpcore-content is under test.
        self.client.get(new_url, allow_redirects=False)
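The port rewrite in random_rpm can be checked in isolation. re.sub replaces every `:<digits>` match, but in these URLs only the port qualifies, since the scheme's colon is followed by slashes rather than digits. The host and path below are placeholders:

```python
import re

START_PORT = 24737

# Placeholder URL in the shape produced by the gathering script.
original_url = "http://pulp.example.com:24816/pulp/content/rhel9_baseos/Packages/b/bash.rpm"
new_port = str(START_PORT + 1)
new_url = re.sub(r":\d+", ":" + new_port, original_url)

print(new_url)  # → http://pulp.example.com:24738/pulp/content/rhel9_baseos/Packages/b/bash.rpm
```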
For each scenario, increase the number of Locust users until the pulp-content app no longer shows an increase in req/sec with additional Locust users.
Sync a RHEL9 BaseOS repository with policy="immediate". To do this:
Create a debug CDN cert with these instructions.
Create and sync the rhel9_baseos repository with these commands:
pulp rpm remote create --name rhel9_baseos --url https://cdn.redhat.com/content/dist/rhel9/9/x86_64/baseos/os/ --policy immediate --client-cert @cdn_1056.pem --client-key @cdn_1056.pem --ca-cert @redhat-uep.pem
pulp rpm repository create --name rhel9_baseos --autopublish --remote rhel9_baseos
pulp rpm distribution create --base-path rhel9_baseos --name rhel9_baseos --repository rhel9_baseos
pulp rpm repository sync --name rhel9_baseos
Install docker https://docs.docker.com/engine/install/fedora/#install-using-the-repository
sudo dnf install docker-compose git
Set up Portainer via docker-compose
Set up Pulp via docker-compose
Install Minio and configure Pulp to talk to it
Also manually create the bucket for Pulp to use
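A sketch of the storage-related settings.py fragment for pointing Pulp at Minio through django-storages; every value below (endpoint, credentials, bucket name) is a placeholder for this environment, and the bucket must already exist in Minio:

```python
# settings.py fragment -- all values are placeholders for this environment.
MEDIA_ROOT = ""  # left empty when using S3-style storage
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_ACCESS_KEY_ID = "minio-access-key"
AWS_SECRET_ACCESS_KEY = "minio-secret-key"
AWS_STORAGE_BUCKET_NAME = "pulp3"  # create this bucket in Minio by hand first
AWS_S3_ENDPOINT_URL = "http://localhost:9000"
AWS_DEFAULT_ACL = None  # no per-object ACLs needed for Minio
```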
Install docker https://docs.docker.com/engine/install/fedora/#install-using-the-repository
sudo dnf install docker-compose
Set up Portainer via docker-compose
Set up Locust via docker-compose https://docs.locust.io/en/stable/running-in-docker.html
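The Locust deployment can be sketched as a docker-compose.yml along the lines of the example in the linked Locust docs; the mounted directory holding locustfile.py and urls.txt is an assumption about this setup:

```yaml
version: "3"
services:
  master:
    image: locustio/locust
    ports:
      - "8089:8089"
    volumes:
      - ./:/mnt/locust   # assumed location of locustfile.py and urls.txt
    command: -f /mnt/locust/locustfile.py --master
  worker:
    image: locustio/locust
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --worker --master-host master
```

Scaling out to the 16 workers used here would then be `docker-compose up --scale worker=16`.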