Episode 49 — Secure containers and serverless production workloads effectively

In this episode, we’re going to make sense of containers and serverless workloads by focusing on what they are, why people use them, and what it means to secure them in production without getting lost in tool details. Containers and serverless are popular because they help teams move faster, scale more easily, and run applications in a more flexible way than traditional servers. The security challenge is that speed and flexibility can also create blind spots if you treat these technologies like magic boxes that run themselves safely. In payment environments, the stakes are higher because workloads may be connected to systems that handle sensitive data, and small mistakes can create big exposure. Containers package applications with their dependencies, while serverless lets you run code in response to events without managing servers directly, but both still require careful control over identity, configuration, and data flow. Effective security means understanding where your responsibility is, what your risks are, and what normal safe operation looks like. By the end, you should be able to explain the major risks and protections for containerized and serverless workloads in plain language, including how to reduce attack surface, protect secrets, control permissions, and keep production behavior predictable.

Before we continue, a quick note: this audio course is a companion to our course companion books. The first book is about the exam and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards that can be used on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.

Let’s start with a clear definition of containers, because the word can be confusing if you imagine physical boxes. A container is a way to package an application and its supporting libraries so it can run consistently across different environments. Instead of installing software directly on a server and hoping the right versions exist, you build a container image that includes what the application needs, and then you run that image as a container. Containers share the host operating system kernel, which makes them lighter than full virtual machines, but it also means isolation is not as complete as many beginners assume. The security model depends on treating images as software artifacts that must be built carefully and run with restrictions, rather than treating them as trusted by default. For payment-related systems, containers might host web services, application logic, or supporting utilities, and those services may connect to databases or payment components. Beginners should understand that containers do not automatically make an app secure; they mainly make it portable. Security comes from what you put in the container, how you configure it, and how you control its access to other systems.

Serverless is different, and it can also be misunderstood because it sounds like servers disappear. Serverless means you write code that runs on demand, such as when a web request arrives, a message is posted to a queue, or a scheduled task triggers, and the cloud platform handles the underlying server management. Serverless is often implemented as Function as a Service, or F a a S, where small pieces of code, called functions, run in short bursts in response to events. You do not patch the underlying operating system in the traditional sense, but you still must secure your code, your configuration, and your permissions, because those remain your responsibility. Serverless can reduce some risks, like long-lived servers that are never patched, but it introduces other risks, like complex event chains, hidden dependencies, and permission sprawl. In payment environments, serverless functions might process transactions, validate orders, generate tokens, or move data between systems, meaning they can touch sensitive workflows. Beginners should see serverless as a different way to run code, not as a way to skip security work. The security questions shift from server maintenance to identity, data handling, and safe event design.
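To make the event-driven model concrete for readers following along with the companion text, here is a minimal Python sketch of what a function handler looks like. The function name, event shape, and field names are all hypothetical; the point is that the code is invoked per event and validates its input, rather than running on a server you manage.

```python
def handle_order_event(event: dict) -> dict:
    """Hypothetical F a a S handler: the platform invokes this once per
    event, so there is no long-lived server, but input validation and
    safe data handling are still entirely the developer's job."""
    order_id = event.get("order_id")
    if order_id is None:
        # Reject malformed events instead of processing them blindly.
        return {"status": "rejected", "reason": "missing order_id"}
    return {"status": "accepted", "order_id": order_id}

# Usage sketch: one well-formed event and one malformed event.
print(handle_order_event({"order_id": "A-1001"}))
print(handle_order_event({}))
```

The design point is that the handler is small and stateless, which is what lets the platform scale it up and down, but it must still treat every incoming event as untrusted input.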

A useful way to approach security for both containers and serverless is to think in terms of a lifecycle, because risks appear at each stage. There is the build stage, where images and code are created, dependencies are chosen, and configurations are set. There is the deploy stage, where workloads are released into production, connected to networks, and given permissions. There is the runtime stage, where workloads actually execute, interact with data, and respond to events, and where attackers try to exploit weaknesses. There is also the update stage, where patches and new versions are rolled out, and where mistakes can introduce new vulnerabilities. Effective security is not just one control; it is a set of controls across the lifecycle that reduce the chance of shipping vulnerable artifacts and reduce the impact if something goes wrong. In payment environments, lifecycle thinking matters because the environment must stay secure over time, not just on launch day. Beginners should understand that most breaches are not one-time accidents; they are the result of drift and unmaintained dependencies. A lifecycle mindset keeps security continuous rather than occasional.

Container image security begins with what you include in the image, because every extra component increases attack surface. Attack surface means the amount of code and functionality that could contain vulnerabilities or be misused, and smaller images generally offer fewer opportunities for attackers. Images should include only what is needed to run the application, not debugging tools, unused libraries, or extra services that do not belong in production. It also matters where images come from, because pulling random images from unknown sources can introduce malicious or outdated components. A strong approach is to use trusted base images and to keep them updated so known vulnerabilities are removed. Beginners should think of this like cooking with ingredients; if you start with spoiled ingredients, the meal will not be safe no matter how carefully you cook. Image building should be controlled and repeatable so you can trace what went into a production workload. When image content is minimized and controlled, you reduce both the number of vulnerabilities and the complexity of investigating issues later.
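The idea of auditing image content can be sketched in a few lines. This is a simplified illustration, not a real scanner: the deny-list of debugging tools and the package names are hypothetical examples of components that generally do not belong in a production image.

```python
# Hypothetical deny-list: debugging and network tools that expand the
# attack surface of a production image without serving the application.
DISALLOWED = {"gdb", "strace", "netcat", "telnetd"}

def audit_image_packages(packages: set[str]) -> set[str]:
    """Return any installed packages that should not ship in production."""
    return packages & DISALLOWED

# Usage sketch: an image that accidentally includes a debugging tool.
print(audit_image_packages({"libssl", "python3", "strace"}))  # {'strace'}
```

Real image scanners compare installed packages against vulnerability databases as well, but the principle is the same: know what is in the image, and keep it to the minimum the application needs.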

Identity and permissions are central for both containers and serverless, because modern cloud attacks often focus on stealing credentials or abusing overly broad access. Workloads should have only the permissions they need, which is the principle of least privilege, and they should not rely on hard-coded credentials embedded in code or images. Secrets like passwords, tokens, and keys are especially sensitive because if they leak, attackers can move laterally and access data even if they never exploit a software vulnerability. Effective security means keeping secrets out of images and code repositories, and using controlled mechanisms to provide secrets at runtime with logging and access restrictions. In serverless, permission sprawl can happen when functions are given broad rights for convenience, like being allowed to access many data stores or manage infrastructure, and attackers can exploit that if the function is compromised. Beginners should understand that the easiest permission model is often too permissive, and convenience is a frequent cause of major incidents. Tight permissions reduce blast radius, meaning even if one workload is compromised, it cannot easily reach everything else. This is particularly important in payment environments, where sensitive systems should not be reachable by general-purpose components.
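The secrets-handling pattern described above can be sketched as follows. This assumes the platform injects secrets into the runtime environment (a common mechanism, though the variable name `DB_PASSWORD` here is just an example); the code fails fast if a secret is missing and never logs the full value.

```python
import os

def load_secret(name: str) -> str:
    """Fetch a secret injected at runtime by the platform, instead of
    hard-coding it in the image or committing it to a code repository."""
    value = os.environ.get(name)
    if value is None:
        # Fail fast: a missing secret is a deployment error, not
        # something to paper over with a default credential.
        raise RuntimeError(f"required secret {name!r} is not set")
    return value

def redact(value: str) -> str:
    """Show only a prefix for log correlation, never the whole secret."""
    return value[:2] + "…" if len(value) > 2 else "…"

# Usage sketch: normally the platform sets this, not the application.
os.environ["DB_PASSWORD"] = "s3cret-value"
password = load_secret("DB_PASSWORD")
print(redact(password))  # prints "s3…", never the full secret
```

Dedicated secret managers add rotation, access logging, and fine-grained permissions on top of this, but even this minimal pattern keeps credentials out of images and repositories.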

Network boundaries and segmentation still matter in these modern environments, even though the infrastructure is more dynamic. Containers often run in clusters or managed platforms where many workloads share underlying resources, and you need to control which workloads can talk to which services. Serverless functions also interact with services through network pathways, and those pathways should be restricted to only what is necessary. The concept is the same as separating rooms in a building, where you do not want every door to open into every other room. In payment environments, the cardholder data environment should be tightly controlled, and only specific approved services should be able to communicate with it. Beginners should understand that connectivity is a form of privilege, because if a workload can reach a database, it might be able to attack it or exfiltrate data. Controlling network pathways reduces the chance that a compromised workload becomes a stepping stone to sensitive systems. Strong network design also makes monitoring easier because unexpected connections stand out more clearly.
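The "connectivity is a privilege" idea boils down to an explicit allow-list rather than a default-open network. Here is a minimal sketch with hypothetical service names; real platforms express this as network policies or firewall rules, but the logic is the same.

```python
# Hypothetical allow-list of permitted service-to-service connections.
# Anything not listed is denied by default.
ALLOWED_CONNECTIONS = {
    ("web-frontend", "order-service"),
    ("order-service", "payment-gateway"),
    # Deliberately absent: web-frontend -> payment-gateway, so a
    # compromised frontend cannot reach the sensitive component directly.
}

def connection_allowed(src: str, dst: str) -> bool:
    """Default-deny: a connection is legal only if explicitly listed."""
    return (src, dst) in ALLOWED_CONNECTIONS

# Usage sketch: the approved path works, the unapproved path does not.
print(connection_allowed("order-service", "payment-gateway"))  # True
print(connection_allowed("web-frontend", "payment-gateway"))   # False
```

Note the design choice: the default is deny, and every allowed pathway is a documented, reviewable entry, which also makes unexpected connections stand out in monitoring.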

Runtime security is about detecting and limiting what workloads can do while they are running, because even well-built artifacts can be exploited if new vulnerabilities appear. For containers, runtime security includes preventing containers from running with excessive privileges, limiting access to host resources, and monitoring for unusual behavior like unexpected outbound connections or processes that do not match the normal application profile. For serverless, runtime concerns include monitoring function invocations, unusual event triggers, repeated failures that suggest probing, and unexpected downstream calls that could indicate abuse. Logging is essential, but it must be designed so it captures useful security signals without capturing sensitive data like full card numbers. Beginners should understand that runtime visibility is often harder in these environments because workloads can be short-lived and rapidly scaled, so logs and monitoring must be consistent and centralized. When you cannot see what is happening, you cannot respond effectively, and attackers benefit from that darkness. A practical mindset is to define what normal looks like and alert on deviations, rather than trying to guess every possible attack pattern. Runtime controls help reduce dwell time by turning silent exploitation into visible signals.
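The "define normal and alert on deviations" mindset can be sketched as a baseline check. The destination names below are hypothetical; a real system would build the baseline from observed history and feed alerts into a monitoring pipeline, but the core comparison looks like this.

```python
# Hypothetical baseline: the only outbound destinations this workload
# is expected to contact during normal operation.
BASELINE_DESTINATIONS = {"db.internal", "tokens.internal"}

def check_outbound(observed: list[str]) -> list[str]:
    """Return outbound destinations that fall outside the baseline,
    preserving order so analysts see alerts as they occurred."""
    return [dst for dst in observed if dst not in BASELINE_DESTINATIONS]

# Usage sketch: one unexpected external address among normal traffic.
observed = ["db.internal", "db.internal", "198.51.100.7", "tokens.internal"]
print(check_outbound(observed))  # ['198.51.100.7']
```

The advantage of baselining over signature-matching is exactly what the episode describes: you do not need to predict every attack, only to notice behavior the workload has never legitimately shown.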

Supply chain risk is a big deal for containers and serverless because both often rely heavily on open-source libraries and third-party dependencies. A vulnerability in a widely used library can quickly become a vulnerability in many images or functions, and updating can be hard if dependency management is not disciplined. Supply chain risk also includes compromised packages or build systems, where attackers insert malicious code into what appears to be a normal dependency. Effective security includes controlling where dependencies come from, tracking what versions are used, and being able to rebuild and redeploy quickly when an issue is discovered. Beginners should think of this like a recipe that uses ingredients from many suppliers; if one supplier has a contamination issue, you need to know whether you used that ingredient and where. That requires documentation and inventory of dependencies, not just guesswork. In production payment environments, the ability to patch quickly matters because attackers often exploit publicly known vulnerabilities soon after they are disclosed. When your build and deployment process is repeatable, responding to supply chain issues becomes manageable rather than chaotic.
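The "did we use that ingredient" question only has a fast answer if you keep a dependency inventory. Here is a minimal sketch, assuming pinned `name==version` entries; the package names are fictional and the version check is deliberately simple.

```python
def parse_inventory(lines: list[str]) -> dict[str, str]:
    """Build a name -> version inventory from pinned 'name==version' lines."""
    inventory = {}
    for line in lines:
        name, _, version = line.partition("==")
        inventory[name.strip()] = version.strip()
    return inventory

def affected(inventory: dict[str, str], package: str,
             bad_versions: set[str]) -> bool:
    """Answer the key supply chain question: do we ship a known-bad version?"""
    return inventory.get(package) in bad_versions

# Usage sketch with fictional packages: an advisory names examplelib 1.2.0.
deps = parse_inventory(["examplelib==1.2.0", "otherlib==3.4.1"])
print(affected(deps, "examplelib", {"1.2.0"}))  # True: rebuild and redeploy
print(affected(deps, "otherlib", {"9.9.9"}))    # False: not affected
```

In practice this inventory role is filled by a software bill of materials, but the principle is the one in the episode: documentation and inventory, not guesswork.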

Another important area is configuration and environment variables, because many serverless and container incidents come from misconfiguration rather than code flaws. Misconfiguration might include exposing a service publicly when it should be internal, disabling authentication for convenience, using overly permissive storage access, or leaving debug settings enabled in production. Serverless functions may rely on environment variables for configuration, and if those variables include secrets or sensitive values, they must be protected and audited. Containers may also use configuration injection at runtime, and improper handling can lead to accidental exposure or insecure defaults. Beginners should recognize that configuration is part of security posture, and posture can drift over time as teams make changes to solve immediate problems. A strong approach includes clear standards for production configurations and a process to review and validate changes before they go live. In payment environments, configuration mistakes can accidentally route sensitive data through the wrong pathway or store it where it should not be stored. Effective security treats configuration as controlled, documented, and reviewed, not as a casual tweak.
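Treating configuration as reviewed rather than casually tweaked implies a validation step before anything goes live. Here is a sketch of such a check; the setting names (`debug`, `require_auth`, `exposure`) are hypothetical stand-ins for whatever a real platform calls them.

```python
def validate_production_config(cfg: dict) -> list[str]:
    """Check a deployment config against hypothetical production
    standards; an empty result means the config passes review."""
    problems = []
    if cfg.get("debug", False):
        problems.append("debug mode must be disabled in production")
    if not cfg.get("require_auth", True):
        problems.append("authentication must not be disabled")
    if cfg.get("exposure") == "public" and not cfg.get("public_approved", False):
        problems.append("service exposed publicly without documented approval")
    return problems

# Usage sketch: a config with two of the misconfigurations named above.
risky = {"debug": True, "require_auth": False, "exposure": "internal"}
for problem in validate_production_config(risky):
    print(problem)
```

Running a check like this in the deployment pipeline turns configuration standards from a document people should read into a gate changes must actually pass.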

As we wrap up, the main lesson is that securing containers and serverless production workloads effectively means applying clear, layered protections across the lifecycle, from build to deployment to runtime monitoring. Containers package applications for portability, but security depends on minimizing image content, using trusted sources, and running with strong restrictions rather than excessive privileges. Serverless, often delivered as F a a S, reduces some infrastructure maintenance but shifts responsibility toward secure code, safe configuration, and tight permissions, because functions can still be abused if access is too broad. Least privilege, secrets protection, and controlled network pathways reduce blast radius and prevent compromised workloads from reaching sensitive payment systems. Runtime visibility through logging and monitoring is essential in dynamic environments where workloads are short-lived and scale quickly, and it helps detect abnormal behavior early. Supply chain discipline and configuration management prevent vulnerabilities and misconfigurations from quietly entering production and lingering there. For a new learner, the mindset shift is that modern platforms change how we run software, but they do not change the need for careful control of identity, data flow, and change; those fundamentals are what make containers and serverless safe enough to use in real production payment environments.
