Thursday, January 21, 2021

Testing a Large Number of Apache Virtual Hosts That Use SNI

I am trying to figure out a good way to set up an identical "test" deployment tier for a somewhat complex Apache/PHP/Tomcat/Python setup. Creating identical servers is easy enough. DNS and SNI are my main hang-ups.

How do you allow web developers to test on a separate set of development systems when you have 200+ different FQDNs involved? Individual sites have been handled with things like /etc/hosts overrides. That doesn't scale well. I have a work-around in place (described below), but I would be interested in better ideas.
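For a single site, the /etc/hosts override looks roughly like this (the IP and hostname are placeholders, not real values from my setup):

```
# /etc/hosts on a developer workstation:
# point one production FQDN at the test load balancer instead.
# 198.51.100.10 stands in for the test load balancer's IP.
198.51.100.10   www.example.org
```

Fine for one or two sites; unmanageable when 200+ FQDNs all need to follow each other onto the same tier.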

Details:

I have a set of cloud VMs set up through automation. They consist of a cluster of identical Apache nodes that serve both static content and dynamic content (proxied from separate clusters of PHP/Tomcat/Python/etc. servers).

There are a large number of Apache virtual hosts (200+) configured for this cluster, each with a separate FQDN and a separate SSL certificate. All the FQDNs ultimately resolve to the same network load balancer -- for now at least.
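Each virtual host is roughly of this shape (names and paths here are illustrative, not from the real config). With SNI, Apache picks the matching certificate based on the server name the client sends in the TLS handshake, which is why all 200+ vhosts can share one IP:

```apache
<VirtualHost *:443>
    ServerName www.example.org
    DocumentRoot /var/www/www.example.org

    SSLEngine on
    # One cert/key pair per vhost; selected via SNI at handshake time.
    SSLCertificateFile    /etc/pki/tls/certs/www.example.org.crt
    SSLCertificateKeyFile /etc/pki/tls/private/www.example.org.key
</VirtualHost>
```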

It is desired that a group of web content developers be able to publish and test content on a test tier of servers that is separate from production. The kicker is that, though the sites are separate, they reference content from each other -- and those references should also stay confined to the same tier. That part makes it harder because of things like embedded links.

Creating an identical set of servers is trivial with automation. Accessing, testing, and demoing it is less so.

My current work-around/hack:

I set up tinyproxy on a single Linux VM in the same subnet as the test tier.
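A minimal tinyproxy.conf for this sort of setup might look like the following (the port and subnet are placeholders for whatever your environment uses):

```
# /etc/tinyproxy/tinyproxy.conf (excerpt)
Port 8888
# Only the developers' subnet may use the proxy.
Allow 10.0.0.0/24
# Permit HTTPS CONNECT tunnels to port 443.
ConnectPort 443
```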

The devs configure tinyproxy as their Firefox network proxy. (Firefox was chosen because it seemed the easiest browser for which to document the proxy setup for users.)

The box running tinyproxy has iptables configured with DNAT rules that take any IP traffic bound for the production load balancer and DNAT it to the test load balancer.

The DNAT rules look like this:

iptables -t nat -I OUTPUT -d <prod.nlb.ip> -p tcp -j DNAT --to-destination <test.nlb.ip>

So basically the proxy hijacks traffic going to the production site and sends it to the test tier instead, with the Host headers, SNI, and so on preserved intact -- which is exactly what the SNI-based virtual hosts on the test Apache nodes need to serve the right site.
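With more than one production VIP, the rules can be generated from a small map. This sketch (all IPs are placeholders) just prints the iptables commands so they can be reviewed before being run as root on the proxy box:

```shell
#!/bin/sh
# Map of production-LB IP -> test-LB IP (placeholder addresses).
# Each pair becomes one DNAT rule on the nat OUTPUT chain, so locally
# generated traffic (i.e. tinyproxy's upstream connections) bound for
# production gets silently redirected to the test tier.
PAIRS="
203.0.113.10:198.51.100.10
203.0.113.11:198.51.100.11
"

for pair in $PAIRS; do
    prod=${pair%%:*}      # part before the colon: production VIP
    test_ip=${pair##*:}   # part after the colon: test VIP
    echo "iptables -t nat -I OUTPUT -d $prod -p tcp -j DNAT --to-destination $test_ip"
done
```

Because the rules sit on the OUTPUT chain, only traffic originating on the proxy VM itself is rewritten; nothing else on the subnet is affected.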

This is the only idea I could come up with, but it wasn't popular because it was seen as too "hacky".
