Integrating DNS between Oracle Cloud and Oracle Cloud VMware Solution
If in doubt, blame the firewall. But if you know your stuff, and particularly if you manage a firewall, you’ll know that the real culprit is always DNS… Yes, DNS is often the villain of the piece, seeming to be innocently minding its own business until it does that thing it does when it doesn’t do what you think it should! Something you typically discover after hours of exasperated hair-pulling and teeth-gnashing. In the interests of restoring some balance, here’s a short post exploring how we can get the best out of DNS within an OCI/OCVS environment.
In the last exciting instalment, we saw our heroes bravely reaching out to explore the Internet! But what if the Internet wants to come and visit us? Or, in slightly less sci-fi movie language, what if we want to publish services from OCVS out to the Internet? Well, we have a bunch of the parts already assembled, but we’ll need to change some, scale some, and do even more NAT! Read on for the next instalment of our gripping story…
Okay, so we have our Oracle Cloud VMware Solution (OCVS) up and running; we can connect to vCenter, NSX Manager, etc., and have started to deploy workloads inside the SDDC. That’s awesome! But now we want to be able to connect to the Internet from those workloads and, just maybe, connect to them from the Internet. In this post, we’ll look at the steps needed to enable our workload VMs to access the Internet. Then, in the second post in this series, we’ll err… break that, hurriedly fix it (while hoping nobody noticed it was us), and enhance the Internet access to allow us to publish services to the Internet from the SDDC.
As a first-class citizen of Oracle Cloud Infrastructure, the network layers of the Oracle Cloud VMware Solution are interlinked with those of the parent OCI Virtual Cloud Network (VCN). Understanding how the networks of each layer relate to each other is important but difficult to visualize, so to help with that, here’s a Reference Architecture poster.
Because Oracle Cloud VMware Solution (OCVS) is a fully customer-managed implementation of the VMware Cloud Foundation stack, hosted on Oracle’s global Cloud Infrastructure platform, it’s arguably the hyperscaler VMware offering most like your existing on-premises deployment. However, unless your on-prem deployment is on an “OCI Dedicated Region Cloud @ Customer”, it’s unlikely that your network underlay is quite the same as the one underpinning OCVS. In this post, we’ll see one of the ways this shows up, and how you can work around it.
The VMware Cloud Provider Pod Designer produces a bunch of documentation which is really useful (even if you don’t use the Deployer to actually, err… deploy). One of the options in the Designer is to forgo having NSX deployed in the management stack. Instead, it leaves a load of dangling VLANs and lets you connect them together with your own external network/security stack. The icing on the cake would be if you could not only find the network topology in the Designer downloads (which you can), but also the firewall policies necessary to make the whole shebang work (which you can’t).
Fortunately, all the information we need is inside the configData.cfg file in the Designer bundle, but not in a very friendly format. What we need is a slick tool to extract the firewall policy data and present it in a neat way. Sadly, we don’t have that, but what we do have is a PowerShell script/cmdlet that I wrote to do the job instead…
In the first two posts of this series, we looked at what could possibly go wrong when we tried to connect a service designed to face the Internet to not one, not many, but lots of customers’ wide area networks (WANs). We learned that NAT was a great tool, but that it was only part of the solution. We also learned that to connect two networks which use the same addresses, we need an intermediate set of addresses to hide the two networks from each other. In this post, we’ll find some suitable addresses for that intermediate stage, and look at how this model scales for multiple customers connecting to multiple services.
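The intermediate-address idea can be sketched in a few lines. This is a toy illustration, not code from the series: the customer names are hypothetical, and the 100.64.x.0/24 prefixes are examples drawn from the RFC 6598 shared address space, one plausible home for such intermediate addresses.

```python
import ipaddress

# Two customers both use 10.0.0.0/24 internally, so before their traffic
# reaches the shared service we map each customer onto its own distinct
# "intermediate" prefix. Prefixes and names here are hypothetical examples.
INTERMEDIATE = {
    "customer-a": ipaddress.ip_network("100.64.1.0/24"),
    "customer-b": ipaddress.ip_network("100.64.2.0/24"),
}

OVERLAPPING_NET = ipaddress.ip_network("10.0.0.0/24")

def translate(customer: str, addr: str) -> str:
    """1:1 NAT: keep the host bits, swap in the customer's intermediate prefix."""
    real = ipaddress.ip_address(addr)
    inter = INTERMEDIATE[customer]
    host_bits = int(real) & int(OVERLAPPING_NET.hostmask)
    return str(ipaddress.ip_address(int(inter.network_address) | host_bits))

# The same "real" address now looks different per customer at the service side:
print(translate("customer-a", "10.0.0.25"))  # 100.64.1.25
print(translate("customer-b", "10.0.0.25"))  # 100.64.2.25
```

Because each customer gets a unique intermediate prefix, the service sees unambiguous source addresses even though both customers use identical internal ranges.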
In Part 1 of this series we looked at the problem Cloud Providers and their customers face when accessing provider services over their wide area networks (WANs). In this second post in the series, we’ll explore a number of possible solutions. In some instances, services were designed from the beginning to face multiple tenant networks, each with possibly overlapping address space. Where this is the case, we don’t really need to “design” a solution, just ensure that our service, and the connectivity model which connects our tenant WAN environments to it, follow the way the service’s designer envisioned it.
Of course, not every service was designed that way, and where it wasn’t, we need to hide the complexity of those overlapping customer networks from our service, so that we don’t confuse or scare it. We’ll look at both scenarios in this post.
A number of Cloud Services require the transfer of large volumes of data. In the Cloud Provider world, that could be uploading VMs in the form of OVA files or ISO disk images, or sending backup data to a DRaaS service. Customers can connect their on-prem SDDCs to the Cloud Provider with only an Internet connection and make use of these great services. How easy is that! In locations where the Internet is readily available, fast, and reliable, this is great. But what about locations where that’s not the case? Well, that’s where the trusty WAN steps in. Using a Communication Provider’s services, customers can get direct network links to their Cloud Provider’s datacenters which, while more costly, may offer the speed and reliability that local Internet services lack.
Excellent, problem solved! We’ll just connect our services to our customers’ WANs and go back to watching Netflix, right? Hang on, surely it can’t be that simple? Of course not. For one thing, if the Internet is poor, how will we watch Netflix?!
Using Global Load Balancing to access Multisite vCloud Director (vCD) has been possible since vCD 9.0, but it only worked if the tenants using it had services in every site. If a Provider had, say, five sites, but a tenant only had a presence in two of them, connecting to a site in which the tenant did not have service would result in a failed login. All this has changed in vCloud Director 10.0.
A couple of years ago, in the Architecting Multisite vCloud Director white paper, I wrote about the way we allowed vCloud Director sites to be federated (we call it “associated”) with each other. Back in vCD 9.0 this didn’t do much, but we had big plans for it. How things have moved on across two years and four major releases.