Assessing monitoring and alerting policies – Walk-Through – Assessing Change Management, Logging, and Monitoring Policies

Assessing monitoring and alerting policies

As we covered in Chapter 7, Tools for Monitoring and Assessing, cloud monitoring is a method of reviewing, observing, and managing the health and security of a cloud. Using monitoring tools, organizations can proactively monitor their cloud environments to identify issues before they become security risks. AWS, Azure, and GCP offer native solutions that an IT auditor can leverage to monitor and assess cloud environments. Let us start by looking at AWS.

AWS

The first monitoring tool an IT auditor can leverage in AWS is Amazon CloudWatch.

Amazon CloudWatch

Amazon CloudWatch is an AWS native monitoring and management service designed to monitor the services and resources in use. Amazon CloudWatch can be used to collect and track metrics, monitor log files, and set alarms, among many other functions. To review these findings, we will need to perform the following steps to launch Amazon CloudWatch, as seen in Figure 10.24:

  1. Navigate to the AWS Management Console.
  2. Select CloudWatch | Dashboards.

Figure 10.24 – Amazon CloudWatch

Under Dashboards, an IT auditor can create custom dashboards. Under Automatic dashboards, you can pick further options. In this scenario, we have picked Billing and CloudWatch Logs to add to our custom packtestdashboard dashboard, as seen in Figure 10.25:

Figure 10.25 – CloudWatch | Dashboards

You can see the dashboard displays billing information for different services as well as CloudWatch logs, as seen in Figure 10.26:

Figure 10.26 – CloudWatch | packtestdashboard

Amazon CloudWatch also has a feature named CloudWatch Alarms that an IT auditor can leverage. A CloudWatch alarm watches a defined metric and triggers when it crosses a specified threshold. To launch Alarms within Amazon CloudWatch, as seen in Figure 10.27, perform the following steps:

  1. Navigate to the AWS Management Console.
  2. Select CloudWatch | Alarms.

Figure 10.27 – CloudWatch | Alarms

An IT auditor can create an alarm that triggers when a certain metric changes. I will provide examples of two rules an IT auditor can create.

Note

For detailed instructions on creating CloudWatch alarms, go to:

In our first example, we select a metric that triggers an alarm when an Amazon Simple Storage Service (S3) bucket permission changes, as seen in Figure 10.28. An IT auditor could use this rule to monitor changes in S3 buckets. They could also use this rule to look for misconfigured S3 buckets that allow public access, which is one of the most common security misconfiguration risks within AWS.

Figure 10.28 – Amazon S3 Bucket Permissions metric
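The same alarm could also be scripted rather than clicked through the console. The following is a hedged sketch of the parameters one might pass to boto3's `put_metric_alarm`; the metric name, namespace, and SNS topic ARN are illustrative assumptions, since an alarm like this typically sits on top of a CloudTrail-based metric filter that must already exist in your account:

```python
# Hypothetical sketch: an alarm on a CloudTrail-based metric that counts
# S3 bucket permission changes. The namespace and metric name below are
# assumptions for illustration; they must match a metric filter you have
# already created.

def s3_permission_alarm_params(sns_topic_arn):
    """Build the keyword arguments for boto3's put_metric_alarm call."""
    return {
        "AlarmName": "s3-bucket-permission-change",
        "Namespace": "CloudTrailMetrics",           # assumed custom namespace
        "MetricName": "S3BucketPermissionChanges",  # assumed metric filter name
        "Statistic": "Sum",
        "Period": 300,                # evaluate in 5-minute windows
        "EvaluationPeriods": 1,
        "Threshold": 1,               # alert on any permission change
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "AlarmActions": [sns_topic_arn],
    }

# With boto3 installed and credentials configured, the alarm could then be
# created with:
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(
#       **s3_permission_alarm_params("arn:aws:sns:us-east-1:123456789012:audit-alerts"))

params = s3_permission_alarm_params(
    "arn:aws:sns:us-east-1:123456789012:audit-alerts")
print(params["AlarmName"])
```

Scripting the alarm definition this way also lets an auditor diff the intended alarm configuration against what is actually deployed.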

In our second example, we can select the Large Number of EC2 Security Group Rules Applied to an Instance metric to trigger an alarm, as seen in Figure 10.29:

Figure 10.29 – The Large Number of EC2 Security Group Rules Applied to an Instance metric

An IT auditor could use this rule to monitor for malicious or insider threat activity in which a user adds security groups to an EC2 instance, bypassing the regular change process.
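The logic behind this metric can be sketched in a few lines. The following is an illustrative stand-in, not the real `describe_instances` response shape: it flags instances whose attached security groups carry more rules than an assumed threshold.

```python
# Hypothetical sketch of the check behind this metric: flag EC2 instances
# whose attached security groups carry an unusually large total number of
# rules. The instance dicts are simplified stand-ins for real API output.

def flag_instances(instances, max_rules=50):
    """Return IDs of instances whose total security group rule count
    exceeds max_rules (the threshold is an assumption for illustration)."""
    flagged = []
    for inst in instances:
        total_rules = sum(sg["rule_count"] for sg in inst["security_groups"])
        if total_rules > max_rules:
            flagged.append(inst["instance_id"])
    return flagged

instances = [
    {"instance_id": "i-0aaa", "security_groups": [{"rule_count": 10}]},
    {"instance_id": "i-0bbb", "security_groups": [{"rule_count": 40},
                                                  {"rule_count": 30}]},
]
print(flag_instances(instances))  # only i-0bbb exceeds the threshold
```

An auditor could tune `max_rules` to whatever the organization's own change management policy considers normal.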

Now that we have looked at monitoring tools in AWS, let us look at tools we can leverage in Azure.

Azure

One of the tools an IT auditor can leverage for monitoring in Azure is Azure Monitor.

Azure Monitor

As we mentioned earlier, Azure Monitor aggregates and correlates data across Azure cloud resources. Within Azure Monitor, there is a useful feature named Change Analysis. Change Analysis detects and helps monitor various types of changes, from the infrastructure layer through application deployment, as seen in Figure 10.30:

To launch Change Analysis within Azure Monitor, perform the following steps:

  1. Navigate to the Microsoft Azure portal.
  2. Select Monitor | Change Analysis.

Figure 10.30 – Azure Monitor | Change Analysis

Azure Monitor also has the ability to trigger alerts. This can be done through the configuration of alert rules. Perform the following steps to launch Alerts within Azure Monitor, as seen in Figure 10.31:

  1. Navigate to the Microsoft Azure portal.
  2. Select Monitor | Alerts.

An IT auditor can set up alerts for various conditions. In this example, we are setting up alerts for All Administrative Operations over the last week, as seen in Figure 10.31:

Figure 10.31 – Azure Monitor | Alert rules

This type of rule can be useful to an IT auditor to monitor administrative operations and ensure they are authorized.

For illustration, we went ahead and performed some administrative operations. The alerts were triggered, as seen in Figure 10.32. The IT auditor can perform further investigations on the alerts:

Figure 10.32 – Azure Monitor | Alerts

In addition, an IT auditor can create an activity log alert rule from the Activity log plane. The Activity log plane contains information about Azure resource changes. Use the following steps to launch Activity log within Azure Monitor, as seen in Figure 10.33:

  1. Navigate to the Microsoft Azure portal.
  2. Select Monitor | Activity log.

Figure 10.33 – Azure Monitor | Activity log

To create an alert, select any activity within Activity log. In this example, I selected one of the events, Create or Update Network Security Group, as seen in Figure 10.34:

Figure 10.34 – Alert Rule: Create or Update Network Security Group
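An auditor reviewing exported Activity log data could script the same check. The sketch below filters simplified stand-in entries for the operation that backs Create or Update Network Security Group; the entry dicts (and their field names) are illustrative, not the full Activity log record schema.

```python
# Hypothetical sketch: find who changed network security groups by filtering
# Activity log entries on the NSG write operation. The entry dicts below are
# simplified stand-ins for real Azure Activity log records.

NSG_WRITE = "Microsoft.Network/networkSecurityGroups/write"

def nsg_changes(entries):
    """Return (caller, resource) pairs for NSG create/update events."""
    return [(e["caller"], e["resourceId"])
            for e in entries if e["operationName"] == NSG_WRITE]

entries = [
    {"operationName": NSG_WRITE,
     "caller": "admin@example.com",
     "resourceId": "/subscriptions/demo/networkSecurityGroups/web-nsg"},
    {"operationName": "Microsoft.Compute/virtualMachines/start",
     "caller": "ops@example.com",
     "resourceId": "/subscriptions/demo/virtualMachines/vm1"},
]
print(nsg_changes(entries))  # only the NSG write event is returned
```

The caller field is what lets the auditor tie each change back to a person or service principal for authorization testing.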

Now that we have looked at monitoring tools in Azure, let us look at tools we can leverage in GCP.

GCP

One of the tools an IT auditor can leverage to perform monitoring in GCP is Google Cloud Monitoring.

Google Cloud Monitoring

Google Cloud Monitoring collects metrics of Google Cloud resources. IT auditors can leverage Google Cloud Monitoring to gain real-time visibility into GCP, as seen in Figure 10.35. To launch the Monitoring explorer, take the following steps:

  1. Navigate to GCP.
  2. Select Monitoring | Dashboards.

Figure 10.35 – Google Cloud Monitoring

The Dashboards feature within Google Cloud Monitoring provides dashboards of various resources, such as Disks, Firewalls, Infrastructure Summary, and VM instances. As an example of an assessment, let us review the FIREWALLS dashboard, as seen in Figure 10.36:

Figure 10.36 – Google Cloud Monitoring | Dashboards

If we dig deeper, we can note that there is an ingress/inbound rule that allows traffic from any IP address on the internet (0.0.0.0/0) to port TCP 22 (SSH), as seen in Figure 10.37:

Figure 10.37 – Security Rules

This particular rule should pique an IT auditor’s interest, as port 22 is used by SSH, a protocol that provides remote administrative access to a system. Attackers can use various brute-force techniques against remote server administration ports, such as 22, to gain access to GCP resources; therefore, the IT auditor should inquire about the business need to have port 22 open to anyone on the internet.
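A check like this is easy to automate across many firewall rules. The sketch below flags ingress rules that expose an assumed set of remote-administration ports to the whole internet; the rule dicts loosely mirror GCP firewall resources but are simplified for illustration.

```python
# Hypothetical sketch of an auditor's firewall review: flag any ingress rule
# that opens a remote-administration port (22/SSH, 3389/RDP) to 0.0.0.0/0.
# The rule dicts loosely follow the GCP firewall resource shape but are
# simplified stand-ins.

ADMIN_PORTS = {"22", "3389"}  # assumed list of sensitive ports

def risky_rules(rules):
    flagged = []
    for rule in rules:
        if rule["direction"] != "INGRESS":
            continue
        if "0.0.0.0/0" not in rule.get("sourceRanges", []):
            continue
        for allowed in rule.get("allowed", []):
            if ADMIN_PORTS & set(allowed.get("ports", [])):
                flagged.append(rule["name"])
    return flagged

rules = [
    {"name": "allow-ssh-anywhere", "direction": "INGRESS",
     "sourceRanges": ["0.0.0.0/0"],
     "allowed": [{"IPProtocol": "tcp", "ports": ["22"]}]},
    {"name": "allow-internal", "direction": "INGRESS",
     "sourceRanges": ["10.0.0.0/8"],
     "allowed": [{"IPProtocol": "tcp", "ports": ["22"]}]},
]
print(risky_rules(rules))  # only the rule open to the internet is flagged
```

Note that the internal-only rule passes: exposing SSH to a private range is a very different risk than exposing it to the internet.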

Another useful feature of Google Cloud Monitoring is Alerting. The Alerting feature allows you to trigger an alert based on a predefined metric, as seen in Figure 10.38. An IT auditor can create an alerting policy so that they are notified when the performance of a resource doesn’t meet the criteria defined. To launch the Cloud Monitoring explorer, take the following steps:

  1. Navigate to the Google Cloud portal.
  2. Select Monitoring | Alerting.

Figure 10.38 – Google Cloud Monitoring | Alerting

As an example, we can add a metric, such as Audited Resource, as seen in Figure 10.39:

Figure 10.39 – Create alerting policy | Audited Resource metric
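For comparison with the console flow, an alerting policy body can also be expressed as data. The following is a hedged sketch of such a policy built around an audited-resource metric; the filter string, threshold, and names are illustrative assumptions, and a real policy would be created via the console, gcloud, or the google-cloud-monitoring client library.

```python
# Hypothetical sketch of a Cloud Monitoring alerting policy body. All
# display names, the filter, and the threshold are illustrative only.

def alert_policy(notification_channel):
    return {
        "displayName": "audited-resource-activity",
        "combiner": "OR",
        "conditions": [{
            "displayName": "High audited-resource activity",
            "conditionThreshold": {
                # assumed filter; adjust to the metric chosen in the console
                "filter": 'resource.type = "audited_resource"',
                "comparison": "COMPARISON_GT",
                "thresholdValue": 100,
                "duration": "300s",
            },
        }],
        "notificationChannels": [notification_channel],
    }

policy = alert_policy("projects/my-project/notificationChannels/123")
print(policy["displayName"])
```

Keeping alerting policies as data like this also makes them reviewable artifacts in their own right, which is useful audit evidence.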

We’ve now completed our walk-through of monitoring and alerting policies within AWS, Azure, and GCP. IT auditors should now have a repertoire of toolsets they can use to effectively perform their audits in the cloud.

Summary

In this chapter, we performed a walk-through of change management, logging, and monitoring policies for the AWS, Azure, and GCP platforms. We specifically covered how to assess change management controls, audit and logging configurations, and change management and configuration policies. Finally, we reviewed how an IT auditor can leverage monitoring and alerting policies.

We have reached the end of the book. Well done! I want to thank you for sharing this journey with us. The book has provided a roadmap for how to build and execute effective cloud auditing plans for AWS, Azure, and GCP. We hope this will be a valuable resource that you can utilize, and that it enables you to secure and add real value to the organizations that you audit.

The Classic Software Model – The SaaS Mindset

The Classic Software Model

Before we can dig into defining SaaS, we need to first understand where this journey started and the factors that have driven the momentum of the SaaS delivery model. Let’s start by looking at how software was traditionally built, operated, and managed. These pre-SaaS systems were typically delivered in an “installed software” model where companies were hyper-focused on the features and functions of their offerings.

In this model, software was sold to a customer and that customer often assumed responsibility for installing the system. They might install it in some vendor-provided environment or they might install it on self-hosted infrastructure. The acquisition of these offerings would, in some cases, be packaged with professional services teams that could oversee the installation, customization, and configuration of the customer’s environment.

Figure 1-1 provides a conceptual view of the footprint of the traditional software delivery model.

Figure 1-1. The installed software model

Here, you’ll see a representation of multiple customer environments. We have Customers 1 and 2 that have installed and are running specific versions of the software provider’s product. As part of their onboarding, they also required one-off customizations to the product that were addressed by the provider’s professional services team. We also have other customers that may be running different versions of our product that may or may not have any customizations.

As each new customer is onboarded here, the provider’s operations organization may need to create focused teams that can support the day-to-day needs of these customer installed and customized environments. These teams might be dedicated to an individual customer or support a cross-section of customers.

This classic mode of software delivery is a much more sales-driven model, where the business focuses on acquiring customers and hands them off to technology teams to address the specific needs of each incoming customer. Here, landing the deal often takes precedence over the need for agility, scale, and operational efficiency. These solutions are also frequently sold with long-term contracts that limit a customer’s ability to easily move to any other vendor’s offering.

The distributed and varying nature of these customer environments often slowed the release and adoption of new features. Customers tend to have control in these settings, often dictating how and when they might upgrade to a new version. The complexity of testing and deploying these environments could also become unwieldy, pushing vendors toward quarterly or semi-annual releases.

The Natural Challenges of the Classic Model

To be completely fair, building and delivering software in the model described above is and will continue to be a perfectly valid approach for some businesses. The legacy, compliance, and business realities of any given domain might align well to this model.

However, for many, this mode of software delivery introduced a number of challenges. At its core, this approach focused more on being able to sell customers whatever they needed, in exchange for trade-offs around scale, agility, and cost/operational efficiency.

On the surface, these tradeoffs may not seem all that significant. If you have a limited number of customers and you’re only landing a few a year, this model could be adequate. You would still have inefficiencies, but they would be far less prominent. Consider, however, a scenario where you have a significant installed base and are looking to grow your business rapidly. In that mode, the pain points of this approach begin to represent a real problem for many software vendors.

Operational and cost efficiencies are often amongst the first areas where companies using this model start to feel the pain. The incremental overhead of supporting each new customer here begins to have real impacts on the business, eroding margins and continually adding complexity to the operational profile of the business. Each new customer could require more support teams, more infrastructure, and more effort to manage the one-off variations that accompany each customer installation. In some cases, companies actually reach a point where they’ll intentionally slow their growth because of the operational burdens of this model.

The bigger issue here, though, is how this model impacts agility, competition, growth, and innovation. By its very nature, this model is anything but nimble. Allowing customers to manage their own environments, supporting separate versions for each customer, enabling one-off customization–these are all areas that undermine speed and agility. Imagine what it would mean to roll out a new feature in these environments. The time between having the idea for a feature, iterating on its development, and getting it in front of all your customers is often a slow and deliberate process. By the time a new feature arrives, the customer and market needs may have already shifted. This also can impact the competitive footprint of these companies, limiting their ability to rapidly react to emerging solutions that are built around a lower friction model.

While the operational and development footprint were becoming harder to scale, the needs and expectations of customers were also shifting. Customers were less worried about their ability to manage/control the environment where their software was running and more interested in maximizing the value they were extracting from these solutions. They demanded lower friction experiences that would be continually innovating to meet their needs, giving them more freedom to move between solutions based on the evolving needs of their business.

Customers were also more drawn to pricing models that better aligned with their value and consumption profile. In some cases, they were looking for the flexibility of subscription and/or pay-as-you-go pricing models.

You can see the natural tension that’s at play here. For many, the classic delivery model simply didn’t align well with the shifting market and customer demands. The emergence of the cloud also played a key role here. The cloud model fundamentally altered the way companies looked at hosting, managing, and operating their software. The pay-as-you-go nature and operational model of the cloud had companies looking for ways to take advantage of the economies of scale that were baked into the cloud experience. Together, these forces were motivating software providers to consider new business and technology models.

The Move to Shared Infrastructure

By now, the basic challenges of the traditional model should be clear. While some organizations were struggling with this model, others already understood this approach would simply not scale economically or operationally. Larger business-to-consumer (B2C) software companies, for example, knew that supporting thousands or even millions of customers in a one-off model simply wouldn’t work.

These B2C organizations really represented the early days of SaaS, laying the groundwork for future SaaS evolution. Achieving scale for these organizations was, from the outset, about building systems from the ground up that could support the massive scale of the B2C universe. They thrived based on their ability to operate in a model where all customers were presented with a single, unified experience.

This shift to sharing infrastructure amongst customers opened all new opportunities for software providers. To better understand this, Figure 1-2 provides a conceptual view of how applications can share infrastructure in a SaaS model.

Figure 1-2. A shared infrastructure model

In Figure 1-2 you’ll see a simplified view of the traditional notion of SaaS. You’ll notice that we’ve completely moved away from the distributed, one-off, custom nature of the classic model we saw in Figure 1-1. In this approach, you’ll see that we have a single SaaS environment with a collection of infrastructure. In this example, I happened to show microservices and their corresponding storage. A more complete example would show all the elements of your system’s application architecture.

If we were to take a peek inside one of these microservices at run-time, we could potentially see any number of tenants (aka customers) consuming the system’s shared infrastructure. For example, if we took three snapshots of the Product microservice at three different time intervals, we might see something resembling the image in Figure 1-3.

Figure 1-3. Shared microservices

In snapshot 1, our product microservice has two tenants consuming our service (tenant1 and tenant2). Another snapshot could have another collection of tenants that are consuming our service. The point here is that the resource no longer belongs to any one consumer, it is a shared resource that is consumed by any tenant of our system.
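To make the idea concrete, here is a minimal sketch of how a shared Product microservice might scope every request to the calling tenant. The in-memory "table" and field names are assumptions for illustration; in a real SaaS system the tenant ID would come from the request's authentication context and the data from a real datastore.

```python
# Hypothetical sketch: one shared service, many tenants. Every query is
# scoped by tenant ID so tenants never see each other's rows, even though
# they share the same compute and storage.

PRODUCTS = [
    {"tenant_id": "tenant1", "sku": "A100", "name": "Widget"},
    {"tenant_id": "tenant2", "sku": "B200", "name": "Gadget"},
]

def get_products(tenant_id):
    """Return only the rows belonging to the requesting tenant."""
    return [p for p in PRODUCTS if p["tenant_id"] == tenant_id]

print(get_products("tenant1"))
```

The key design point is that tenancy is enforced inside the shared service, not by giving each consumer its own copy of the infrastructure.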

This shift to using shared infrastructure also meant we needed a new way to describe the consumers of our software. Before, when every consumer had its own dedicated infrastructure, it was easy to continue to use the term “customer”. However, in the shared infrastructure model of SaaS, you’ll see that we describe the consumers of our environment as “tenants”.

It’s essential that you have a solid understanding of this concept since it will span almost every topic we cover in this book. The notion of tenancy maps very well to the idea of an apartment complex where you own a building and you rent it out to different tenants. In this model, the building correlates to the shared infrastructure of your solution and the tenants represent the different occupants of your apartments. These tenants of your building consume shared building resources (power, water, and so on). As the building owner, you manage and operate the overall building and different tenants will come and go.

You can see how this term better fits the SaaS model where we are building a service that runs on shared infrastructure that can accommodate any number of tenants. Yes, tenants are still customers, but the term “tenant” lets us better characterize how they land in a SaaS environment.

The Advantage of Shared Infrastructure

So, we can see how SaaS moves us more toward a unified infrastructure footprint. If we contrast this with the one-off, installed software model we covered above, you can see how this approach enables us to overcome a number of challenges.

Now that we have a single environment for all customers, we can manage, operate, and deploy all of our customers through a single pane of glass. Imagine, for example, what it would look like to deploy an update in the shared infrastructure model. We would simply deploy our new version to our unified SaaS environment and all of our tenants would immediately have access to our new features. Gone is the idea of separately managed and operated versions. With SaaS, every customer is running the same version of your application. Yes, there may be customizations within that experience that are enabled/disabled for different personas, but they are all part of a single application experience.

Note

This notion of having all tenants running the same version of your offering represents a common litmus test for SaaS environments. It is foundational to enabling many of the business benefits that are at the core of adopting a SaaS delivery model.

You can imagine the operational benefits that come with this model as well. With all tenants in one environment, we can manage, operate, and support our tenants through a common experience. Our tools can give us insights into how tenants are consuming our system and we can create policies and strategies to manage them collectively. This brings all new levels of efficiency to our operational model, reducing the complexity and overall footprint of the operational team. SaaS organizations take great pride in their ability to manage and operate a large collection of tenants with modestly sized operational teams.

This focus on operational efficiency also directly feeds the broader agility story. Freed from the burden of one-off, custom versions, SaaS teams will often embrace their agility and use it as the engine of constant innovation. These teams are continually releasing new features, gathering more immediate customer feedback, and evolving their systems in real-time. You can imagine how this model will directly impact customer loyalty and adoption.

The responsiveness and agility of this SaaS model often translates into competitive advantages. Teams will use this agility to reach new market segments, pivoting in real-time based on competitive and general market dynamics.

The shared infrastructure model of SaaS also has natural cost benefits. When you have shared infrastructure and you can scale that infrastructure based on the actual consumption patterns of your customers, this can have a significant impact on the margins of your business. In an ideal SaaS infrastructure model, your system would essentially only consume the infrastructure that is needed to support the current load of your tenants. This can represent a real game-changer for some organizations, allowing them to take on new tenants at any pace knowing that each tenant’s infrastructure costs will only expand based on their actual consumption activities. The elastic, pay-as-you-go nature of cloud infrastructure aligns nicely with this model, supporting the pricing and scaling models that fit naturally with the varying workloads and consumption profiles of SaaS environments.

Thinking Beyond Infrastructure

While looking at SaaS through the lens of shared infrastructure makes it easier to understand the value of SaaS, the reality is that SaaS is much more than shared infrastructure. In fact, as we move forward, we’ll see that shared infrastructure is just one dimension of the SaaS story. There are economies of scale and agility that can be achieved with SaaS, with or without shared infrastructure.

In reality, we’ll eventually see that there are actually many ways to deploy and implement your SaaS application architecture. The efficiencies that are attributed to SaaS can certainly be maximized by sharing infrastructure. However, efficiency starts with surrounding your application with constructs that can streamline the customer experience and the management/operational experience of your SaaS environment. It’s these constructs that–in concert with your SaaS application architecture–enable your SaaS business to realize its fundamental operational, growth, and agility goals.

To better understand this concept, let’s look at these additional SaaS constructs. The diagram in Figure 1-4 provides a highly simplified conceptual view of these common SaaS services.

Figure 1-4. Surrounding your application with shared services

At the center of this diagram you’ll see a placeholder for application services. This is where the various components of your SaaS application are deployed. Around these services, though, are a set of services that are needed to support the needs of our overall SaaS environment. At the top, I’ve highlighted the onboarding and identity services, which provide all the functionality to introduce a new tenant into your system. On the left, you’ll see the placeholders for the common SaaS deployment and management functionality. And, on the right, you’ll see fundamental concepts like billing, metering, metrics, and analytics.

Now, for many SaaS builders, it’s tempting to view these additional services as secondary components that are needed by your application. In fact, I have seen teams that will defer the introduction of these services, putting all their initial energy into supporting tenancy in their application services.

In reality, while getting the application services right is certainly an important part of your SaaS model, the success of your SaaS business will be heavily influenced by the capabilities of these surrounding services. These services are at the core of enabling much of the operational efficiency, growth, innovation, and agility goals that are motivating companies to adopt a SaaS model. So, these components–which are common to all SaaS environments–must be put front and center when you are building your SaaS solution. This is why I have always encouraged SaaS teams to start their SaaS development at this outer edge, defining how they will automate the introduction of tenants, how they’ll connect tenants to users, how they’ll manage their tenant infrastructure, and a host of other considerations that we’ll be covering throughout this book. It’s these building blocks–which have nothing to do with the functionality of your application–that are going to have a significant influence on the SaaS footprint of your architecture, design, code, and business.

So, if we turn our attention back to the diagram in Figure 1-4, we can see this big hole in the middle that represents where we’ll ultimately place our application services. The key takeaway here is that, no matter how we design and build what lands in that space, you’ll still need some flavor of these core shared services as part of every SaaS architecture you build.

Re-Defining Multi-Tenancy

Up to this point, I’ve avoided introducing the idea of multi-tenancy. It’s a word that is used heavily in the SaaS space and will appear all throughout the remainder of this book. However, it’s a term that we have to wander into gracefully. The idea of multi-tenancy comes with lots of attached baggage and, before sorting it out, I wanted to create some foundation for the fundamentals that have driven companies toward the adoption of the SaaS delivery model. The other part of the challenge here is that the notion of multi-tenancy–as we’ll define it in this book–will move beyond some of the traditional definitions that are typically attached to this term.

For years, in many circles, the term multi-tenant was used to convey the idea that some resource was being shared by multiple tenants. This could apply in many contexts. We could say that some piece of cloud infrastructure, for example, could be deemed multi-tenant because it was allowing tenants to share some resource under the hood. In reality, many services running in the cloud may be running in a multi-tenant model to achieve their economies of scale. As a cloud consumer, this may be happening entirely outside of your view. Even outside the cloud, teams could build solutions where compute, databases, and other resources could be shared amongst customers. This created a very tight connection between multi-tenancy and the idea of a shared resource. In fact, in this context, this is a perfectly valid notion of multi-tenancy.

Now, as we start thinking about SaaS environments, it’s entirely natural for us to bring the mapping of multi-tenancy with us. After all, SaaS environments do share infrastructure and that sharing of infrastructure is certainly valid to label as being multi-tenant.

To better illustrate this point, let’s look at a sample SaaS model that brings together the concepts that we’ve been discussing in this chapter. The image in Figure 1-5 provides a view of a sample multi-tenant SaaS environment.

Figure 1-5. A sample multi-tenant environment

Here we have landed the shared infrastructure of our application services inside surrounding services that are used to introduce tenancy, manage, and operate our SaaS environment. Assuming that all of our tenants are sharing their infrastructure (compute, storage, and so on), then this would fit with the classic definition of multi-tenancy. And, to be fair, it would not be uncommon for SaaS providers to define and deliver their solution following this pattern.

The challenge here is that SaaS environments don’t exclusively conform to this model. Suppose, for example, I create a SaaS environment that looks like the drawing in Figure 1-6.

Figure 1-6. Multi-tenancy with shared and dedicated resources

Here you’ll see that we’ve morphed the footprint of some of our application microservices. The Product microservice is unchanged. Its compute and storage infrastructure is still shared by all tenants. However, as we move to the Order microservice, you’ll see that we’ve mixed things up a bit. Our domain, performance, and/or our security requirements may have required that we separate out the storage for each tenant. So, here the compute of our Order microservice is still shared, but we have separate databases for each tenant.

Finally, our Fulfillment microservice has also shifted. Here, our requirements pushed us toward a model where each tenant is running dedicated compute resources. In this case, though, the database is still shared by all tenants.

This architecture has certainly added a new wrinkle to our notion of multi-tenancy. If we’re sticking to the purest definition of multi-tenancy, we wouldn’t really be able to say everything running here conforms to the original definition of multi-tenancy. The storage of the Order service, for example, is not sharing any infrastructure between tenants. The compute of our Fulfillment microservices is also not shared here, but the database for this service is shared by all tenants.

Blurring these multi-tenant lines is common in the SaaS universe. When you’re composing your SaaS environment, you’re not sticking to any one absolute definition of multi-tenancy. You’re picking the combinations of shared and dedicated resources that best align with the business and technical requirements of your system. This is all part of optimizing the footprint of your SaaS architecture around the needs of the business.
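One way to picture this mixing of modes is as per-service configuration. The sketch below describes a deployment like the one in Figure 1-6, where each microservice declares whether its compute and storage are shared or dedicated; the service names, modes, and naming convention are illustrative assumptions.

```python
# Hypothetical sketch of per-service deployment modes. Each service picks
# the shared/dedicated combination that fits its requirements, and a helper
# resolves which database a given tenant should use.

DEPLOYMENT = {
    "product":     {"compute": "shared",    "storage": "shared"},
    "order":       {"compute": "shared",    "storage": "dedicated"},
    "fulfillment": {"compute": "dedicated", "storage": "shared"},
}

def database_for(service, tenant_id):
    """Shared storage maps every tenant to one database; dedicated storage
    gives each tenant its own (naming convention is an assumption)."""
    if DEPLOYMENT[service]["storage"] == "shared":
        return f"{service}-db"
    return f"{service}-db-{tenant_id}"

print(database_for("order", "tenant1"))    # order-db-tenant1
print(database_for("product", "tenant1"))  # product-db
```

The rest of the system can stay ignorant of these choices: callers ask for "the tenant's database" and the mapping layer absorbs the shared-versus-dedicated decision.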

Even though the resources here are not shared by all tenants, the fundamentals of the SaaS principles we outlined earlier are still valid. For example, this environment would not change our application deployment approach. All tenants in this environment would still be running the same version of the product. Also, the environment is still being onboarded, operated, and managed by the same set of shared services we relied on in our prior example. This means that we’re still extracting much of the operational efficiency and agility from this environment that would have been achieved in a fully shared infrastructure (with some caveats).

To drive this point home, let’s look at a more extreme example. Suppose we have a SaaS architecture that resembles the model shown in Figure 1-7. In this example, the domain, market, and/or legacy requirements have required us to have all compute and storage running in a dedicated model where each tenant has a completely separate set of infrastructure resources.

Figure 1-7. A multi-tenant environment with fully dedicated resources

While our tenants aren’t sharing infrastructure in this model, you’ll see that they continue to be onboarded, managed, and operated through the same set of shared services that have spanned all of the examples we’ve outlined here. That means that all tenants are still running the same version of the software and they are still being managed and operated collectively.

This may seem like an unlikely scenario. However, in the wild, SaaS providers may have any number of different factors that might require them to operate in this model. Migrating SaaS providers often employ this model as a first stepping stone to SaaS. Other industries may have such extreme isolation requirements that they’re not allowed to share infrastructure. There’s a long list of factors that could legitimately land a SaaS provider in this model.

So, given this backdrop, it seems fair to ask ourselves how we want to define multi-tenancy in the context of a SaaS environment. Using the literal shared infrastructure definition of multi-tenancy doesn’t seem to map well to the various models that can be used to deploy tenant infrastructure. Instead, these variations in SaaS models seem to demand that we evolve our definition of what it means to be multi-tenant.

For the scope of this book, at least, the term multi-tenant will definitely be extended to accommodate the realities I’ve outlined here. As we move forward, multi-tenant will refer to any environment that onboards, deploys, manages, and operates tenants through a single pane of glass. The sharedness of any infrastructure will have no correlation to the term multi-tenancy.

In the ensuing chapters, we’ll introduce new terminology that will help us overcome some of the ambiguity that is attached to multi-tenancy.

Avoiding the Single-Tenant Term

Generally, whenever we refer to something as multi-tenant, there’s a natural tendency to assume there must be some corresponding notion of what it means to be single-tenant. The idea of single tenancy seems to get mapped to those environments where no infrastructure is shared by tenants.

While I follow the logic of this approach, this term doesn’t really seem to fit anywhere in the model of SaaS that I have outlined here. If you look back to Figure 1-7, where our solution had no shared infrastructure, I also noted that we would still label this a multi-tenant environment since all tenants were still running the same version and being managed/operated collectively. Labeling this environment single-tenant would wrongly suggest that it isn’t realizing the benefits of the SaaS model.

With this in mind, you’ll find that the term single-tenant will not be used at any point beyond this chapter. Every design and architecture we discuss will still be deemed a multi-tenant architecture. Instead, we’ll attach new terms to describe the various deployment models that will still allow us to convey how/if infrastructure is being shared within a given SaaS environment. The general goal here is to disconnect the concept of multi-tenancy from the sharing of infrastructure and use it as a broader term to characterize any environment that is built, deployed, managed, and operated in a SaaS model.

This is less about what SaaS is or is not and more about establishing a vocabulary that aligns better with the concepts we’ll be exploring throughout this book.