In a cloud-native architecture, the network should be considered untrustworthy. The status quo - network segmentation with VPNs, in which all "internal" applications are implicitly trusted - is being supplanted by identity-aware, always-encrypted connections. This shift has rapidly expanded the use of mTLS and private PKI, creating operational and compliance headaches for developers and security engineers alike. Such users often turn to service mesh products because the risk of managing all these private keys themselves is untenable.
Using cert-manager, its CSI driver, and its trust root distribution capabilities, this workshop will show you how to issue, manage, and rotate mTLS certificates, giving users strongly attested and verified machine identities between their Kubernetes Pods - all without workload private keys ever leaving node memory!
Jake Sanders, cert-manager maintainer
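As a concrete illustration of the approach this workshop covers, the cert-manager CSI driver can mount a short-lived certificate directly into a Pod as an ephemeral volume, so the private key is generated on the node and never stored in the Kubernetes API. A minimal sketch; the Pod name, image, and issuer name are assumptions and must match resources in your cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-workload              # hypothetical workload
spec:
  containers:
    - name: app
      image: example.com/app:latest   # placeholder image
      volumeMounts:
        - name: tls
          mountPath: /var/run/tls     # cert.pem / key.pem appear here
          readOnly: true
  volumes:
    - name: tls
      csi:
        driver: csi.cert-manager.io
        readOnly: true
        volumeAttributes:
          csi.cert-manager.io/issuer-name: my-ca-issuer   # assumes this Issuer exists
          csi.cert-manager.io/dns-names: ${POD_NAME}.${POD_NAMESPACE}.svc.cluster.local
```

The driver requests the certificate on Pod creation and handles renewal before expiry, which is how rotation happens without workload involvement.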
Failover across clusters is a great way to improve the overall uptime and reliability of Kubernetes applications. While whole-cluster failover can be accomplished at the global ingress layer, failing over individual services is a little more difficult. During this session, Linkerd maintainer Eliza Weisman will walk you through how to use Linkerd, the CNCF graduated service mesh, to enable traffic failover for individual services across clusters. Attendees will learn how to combine service mesh metrics, traffic shifting, and cross-cluster communication in a cohesive and automated way using purely open-source tools, while preserving fundamental security guarantees such as mutual TLS.
Eliza Weisman, Linkerd maintainer
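The per-service traffic shifting described above can be expressed with an SMI TrafficSplit, which Linkerd honors when routing. A hedged sketch, assuming a local `checkout` service and a `checkout-west` service mirrored from a remote cluster via Linkerd multicluster (both names illustrative); failover automation would flip the weights when health metrics for the primary backend degrade:

```yaml
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: checkout-failover        # hypothetical
  namespace: shop                # hypothetical
spec:
  service: checkout              # apex service that clients address
  backends:
    - service: checkout          # local (primary) backend
      weight: 100
    - service: checkout-west     # assumed mirror of the remote cluster's service
      weight: 0
```

Because cross-cluster traffic still flows through the mesh, shifting weight to the mirrored backend preserves mutual TLS end to end.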
Learn how to scan your Kubernetes workloads to improve your resource utilization and security using the open source tools Polaris and Goldilocks. You will watch Andy Suderman, Director of R&D and Technology, and Rachel Sweeney, SRE, both at Fairwinds, as they show how to correctly configure your clusters based on Kubernetes best practices for security and efficiency.
Andy Suderman & Rachel Sweeney, Fairwinds
Serverless promises to change the way we consume software. It allows us to pay for only what we use and helps drive down operational costs to the minimum amount of resources necessary.
Architecting for serverless requires a unique look at app logic and the way it is deployed. It takes a combination of the logical and physical worlds. An architectural pattern has emerged where we can scale ephemeral compute separate from services that need to persist.
We use Kubernetes to deliver exactly this: a “serverless” experience driven and enabled by compute pods and storage pods. We have also used our experience running thousands of database clusters on Kubernetes to automate the operational expertise of managing a distributed database.
In this talk, we will take a deep dive into the architecture of our application and share:
* A definition and outline of the challenges of serverless
* How we reworked our logic for a serverless approach
* How we use Kubernetes to gain serverless autoscaling
Jim Walker, Cockroach Labs
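One way the compute/storage split above maps onto Kubernetes is to autoscale the stateless compute tier independently of the storage tier. A minimal sketch using a standard HorizontalPodAutoscaler; the names and thresholds are hypothetical, and true scale-to-zero requires additional machinery beyond the stock HPA:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sql-compute              # hypothetical compute-tier autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sql-compute            # assumed stateless compute Deployment
  minReplicas: 1                 # the stock HPA cannot scale to zero
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

The storage tier, typically a StatefulSet, is left out of this autoscaling loop so that ephemeral compute can scale with load while persistent state stays put.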
Business constraints and customer requests can often lead to the need to stand up new Kubernetes environments across multiple cloud providers. This growing infrastructure complexity incurs greater operational costs for your organization when coordinating across the multiple teams involved.
Pulumi engineers Aaron Friel and Guinevere Saenger will demonstrate standing up Kubernetes clusters, deploying applications, and automating ops tasks by building a CLI using the Pulumi Automation API. These tools empower every engineer - from application developer to site reliability engineer - to be a cloud engineer.
Aaron Friel & Guinevere Saenger, Pulumi
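A hedged sketch of the kind of CLI the session describes: ops tasks exposed as subcommands, with the spot where the Pulumi Automation API would be driven marked in comments. The program name, subcommands, and defaults are all illustrative, and the actual Pulumi calls are omitted so the sketch runs without the Pulumi SDK or CLI installed:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Build a CLI skeleton: one subcommand per ops task."""
    parser = argparse.ArgumentParser(prog="ops", description="Automate stack operations.")
    sub = parser.add_subparsers(dest="command", required=True)
    for command, help_text in [("up", "deploy the stack"), ("destroy", "tear the stack down")]:
        cmd = sub.add_parser(command, help=help_text)
        cmd.add_argument("--stack", default="dev", help="stack name (illustrative default)")
    return parser


def run(args: argparse.Namespace) -> str:
    # In a real CLI, this is where the Pulumi Automation API would be invoked,
    # e.g. `from pulumi import automation as auto`, then
    # `auto.create_or_select_stack(stack_name=args.stack, ...)` followed by
    # `.up()` or `.destroy()`. Stubbed here so the sketch stays self-contained.
    return f"{args.command}:{args.stack}"


if __name__ == "__main__":
    print(run(build_parser().parse_args()))
```

Wrapping stack operations in ordinary functions like this is what lets the same code serve application developers and SREs alike, the point the abstract makes about "every engineer" becoming a cloud engineer.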