Kubernetes Community in Amsterdam: An Interview with Alessandro Vozza

Body

KBE Insider recently had the opportunity to interview Alessandro Vozza, Developer Relations at solo.io, Founder at Kubernetes Community Days Amsterdam, and CNCF Ambassador. We discussed Alessandro’s work in fostering the Kubernetes and cloud-native communities in Amsterdam and how these communities and conferences contribute to the explosion of Kubernetes in the area.

 

Alessandro’s Background and Move to Amsterdam

Alessandro grew up in Southern Italy and moved to Amsterdam 20 years ago to pursue his PhD in Chemistry. Although his background was in science, Alessandro has been involved in open source for a long time, installing Linux in 1999 and attending Linux user groups. After finishing his PhD, Alessandro reinvented himself as an IT professional. For the past 10 years, Alessandro has been organizing tech communities and meetups in Amsterdam, starting with DevOpsDays Amsterdam 10 years ago, just a year after the first DevOpsDays event.

Alessandro explained, “Community is really what drives me...seeing people connecting to each other and being happy, that’s absolutely my thing that I do best and I think that I enjoy the most.” From there, Alessandro moved on to work with different communities, including OpenStack, and eventually founded the Kubernetes meetup in Amsterdam. Alessandro enjoyed the work so much that he didn’t want to stop, saying, “It’s my thing.”

 

Work at Microsoft and Transition to Solo.io

Alessandro spent six years working as a software engineer for Microsoft. He was part of a customer-facing engineering team working on big projects and seeing production workloads first-hand. Although Alessandro loved the culture and people at Microsoft, he left to take on a new challenge at solo.io, a company he felt inspired by for its technical product and its position in service mesh, which Alessandro sees as what “Kubernetes felt like a few years ago.” KBE Insider episode 11 with Idit Levine, founder and CEO of Solo.io, is a good watch to learn more about their story and products.

Solo.io provides a suite of tools and platforms for modern application development and deployment. With its ability to integrate diverse requirements for connectivity, observability, and security into a unified, multi-cloud application networking platform, Solo.io has become a key player in the service mesh ecosystem. In addition, the company was one of the earliest contributors to Istio, an open source service mesh that recently became a graduated Cloud Native Computing Foundation (CNCF) project. To learn more about Istio and service mesh, you can check out KBE's Istio Fundamentals learning path!

At solo.io, Alessandro works as a DevRel, helping developers and platform engineers understand solo.io’s open source technology. For Alessandro, “You talk about it because you love it and you want other people to understand it...so they can get the benefit too.” Explaining technology to others in an easy-to-digest way is what Alessandro finds most rewarding. With both an engineering and community background, Alessandro is in an ideal position to understand the technical aspects of the products as well as how to convey them to others.

 

The Kubernetes Community in Amsterdam

According to Alessandro, the Kubernetes and cloud-native communities in Amsterdam are very popular, driven by both the large companies investing in the area as well as the large community of developers. Alessandro says it feels like living in a big village where the same people are seen at all the big events and meetups. The communities in Amsterdam are tight-knit, with people establishing bonds over time and treating each other like family.

Alessandro sees further growth of the communities in Amsterdam as pivotal to the continued rise of Kubernetes and cloud-native technologies. He believes more education and training are needed, comparing the situation to OpenStack several years ago, which suffered from a lack of people who fully understood the project. The limiting factor is always people and their level of understanding.

The move to hybrid cloud and rise of cloud-native technologies like Kubernetes have been hugely disruptive, requiring people to think about software in entirely new ways. Platforms like Kubernetes introduce event-driven architectures, serverless functions, and service meshes that are far removed from traditional, serial software development. While these new approaches come with significant benefits, they can be difficult to conceptualize. That's why continued education at all levels will be key to overcoming adoption barriers.

 

Contribution of Kubernetes Community Days Amsterdam

In 2020, KubeCon Europe was supposed to come to Amsterdam but had to be cancelled just a few weeks before the event due to COVID. After waiting three years, KubeCon EU 2023 in Amsterdam was a cathartic moment for Alessandro and validated his belief that community conferences are vital for spreading knowledge about new technologies.

Alessandro founded Kubernetes Community Days (KCD) Amsterdam to bring the local community together, saying “Community is really what drives me.” KCD Amsterdam features talks that dive deep into real use cases to provide practical knowledge and lessons learned. The event is all about community, with an unconference format that allows attendees to suggest and lead discussions on the topics they care about.

 

Closing Thoughts

Alessandro has been instrumental in growing the Kubernetes and cloud-native communities in Amsterdam through his work organizing meetups, conferences, and now at solo.io. His passion for open source and desire to spread knowledge are inspiring, showing how one person really can make a difference. Although new technologies can be disruptive, communities help make the transition smoother by providing education and bringing people together. Alessandro and the Kubernetes Community Days conference are shining examples of how to build an inclusive community focused on learning and sharing.

Full video at: KBE Insider Amsterdam

 

Follow us: @kubebyexample

Leave anonymous feedback

Join the KBE community forum

Summary

KBE Insider recently had the opportunity to interview Alessandro Vozza, Developer Relations at solo.io, Founder at Kubernetes Community Days Amsterdam, and CNCF Ambassador. We discussed Alessandro’s work in fostering the Kubernetes and cloud-native communities in Amsterdam and how these communities and conferences contribute to the explosion of Kubernetes in the area.

Open Source vs. Proprietary Software: Key Advantages and Disadvantages in Enterprise Adoption

Body

In this interview, Erik van Weert, Solution Architect for OpenShift at Achmea, one of the largest insurance companies in the Netherlands, shares his perspectives on open source and proprietary enterprise software. Erik has many years of experience working with both open source and proprietary enterprise software systems.

 

Advantages of Open Source Software

According to Erik, the main advantage of open source software is the speed of innovation. The open source community is able to iterate and improve open source software much faster than commercial software vendors. With open source software, Erik said, “I can help look at the software, understand how it works [and] helps me use the software right.” In contrast, with proprietary software, “It's a big package, it's complex, I don't understand it and I can probably use 10 or 20% of it.”

The openness and transparency of open source software also makes it more usable, according to Erik. “There are so many people that are working around it. I can contact them, ask them for help, look at the documentation, [and] contribute.” The open source community provides more opportunities for support and collaboration compared to the closed-source model of proprietary software.

Erik also notes that by examining the source code, users can understand the design choices and trade-offs made in an open source project and align them with their own. This allows the software to "think like them," enabling more efficient usage. Proprietary software vendors, by contrast, make trade-offs for the "average" user, which may not suit a specific organization. Open source shifts power from the vendor to the user by exposing those trade-offs along with the source code.

 

Disadvantages of Open Source Software

However, the open source model also comes with some significant downsides, especially for enterprise adoption. The most obvious disadvantage is the lack of a single commercial entity that can provide technical support and ensure continuity. With so many open source software tools to choose from, it can also be difficult for enterprises to determine which solutions are the most mature, stable, and suitable for their needs. Proprietary software vendors aim to address these types of challenges, offering enterprises more standardized solutions and dedicated commercial support.

 

Adopting Open Source at Achmea

According to Erik, adopting open source software successfully within an enterprise requires highly skilled engineers who can assess different open source solutions, determine how to best leverage and possibly contribute to them, and provide guidance to other teams. Without proper expertise, an organization can end up simply replicating their legacy infrastructure in the cloud rather than benefiting from more modern open source technologies built for the cloud native era.

At Achmea, Erik’s team provides a Red Hat OpenShift platform as a service to enable software development teams to adopt container and Kubernetes technologies. However, Erik noted that development teams have varying levels of skills and interests in infrastructure and container orchestration. His team is working to provide more customized offerings and services to address the specific needs of different teams. This includes both offloading engineering work from development teams when needed as well as giving more control to highly skilled teams that want it. 

Erik's team initially gave developers namespaces on OpenShift but is now diversifying its portfolio to help less technical teams. The team provides namespaces for technical teams that "want control" and services for less technical teams that "don't need to think about technology". This dual approach caters to Achmea's "diversified set of engineers" with varying skill levels.
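The namespace-per-team model Erik describes can be sketched as a Kubernetes manifest. This is illustrative only: the team name, labels, and quota values below are hypothetical, and Achmea's actual configuration is not described in the interview.

```yaml
# Hypothetical namespace for a team that "wants control".
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments              # assumed team name
  labels:
    example.com/owner: team-payments
---
# A ResourceQuota keeps any one team from consuming the shared cluster.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-payments-quota
  namespace: team-payments
spec:
  hard:
    requests.cpu: "8"              # assumed limits
    requests.memory: 16Gi
    pods: "50"
```

On OpenShift specifically, `oc new-project` plus project templates is the more common entry point, but the underlying namespace-and-quota mechanics are the same.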

 

Closing Thoughts

The journey to cloud native is ongoing at most enterprises, but open source software and community support have been instrumental in enabling it. According to Erik, with the right in-house expertise, open source solutions can significantly accelerate an organization’s digital transformation. However, as with any technology, understanding the pros and cons in the context of your organization’s needs and capabilities is key to success.

Erik's insights provide a valuable glimpse into how Achmea leverages open source technologies and manages the associated complexities. Overall, open source software has clearly been very beneficial but must be approached strategically, especially within large enterprises. With platforms like Red Hat OpenShift and guidance from experts like Erik, organizations can more confidently embrace open source solutions while still maintaining the reliability and support they depend on.

Full video at: KBE Insider Amsterdam

 

Follow us: @kubebyexample

Leave anonymous feedback

Join the KBE community forum

Summary

In this interview, Erik van Weert, Solution Architect for OpenShift at Achmea, one of the largest insurance companies in the Netherlands, shares his perspectives on open source and proprietary enterprise software. Erik has many years of experience working with both open source and proprietary enterprise software systems.

Amplifying End-User Voices in Cloud Native Communities

Body

Catherine Paganini, Head of Marketing at Buoyant and CNCF TAG Contributor Strategy co-chair, recently sat down with KBE Insider for a car interview to share her community building experience. Catherine explains how prioritizing end-user voices at conferences and building cloud native resources for non-technical audiences have been a big focus for her. She also shared her perspective on the challenges faced by project maintainers and how they can save time and enable future growth. This in-depth post recaps key highlights from the interview and provides additional context around Catherine's work at the CNCF and in the wider cloud native ecosystem.

 

Background on Catherine Paganini and Linkerd

Catherine is the Head of Marketing at Buoyant, the creator of Linkerd, where she is heavily involved in growing the Linkerd community. She is also co-chair of the CNCF's TAG Contributor Strategy and its Business Value subcommittee.

Linkerd, the CNCF-graduated service mesh, is the first service mesh and the project that coined that term. Known for its operational simplicity, Linkerd eases service mesh adoption for organizations around the globe. 

When asked about Linkerd Day, a day-zero event at KubeCon, Catherine was excited to share that the event had 100% end user content. In her opinion, end users provide the most powerful stories because, unlike vendors, end users have real-world production experience and are generally unbiased. A compelling example Catherine gives is a talk by an end user who successfully implemented Linkerd in production with a team of one, powerfully demonstrating its simplicity and ease of use.

 

Fostering Cross-Project Collaboration Through CNCF

Shifting gears, the conversation moved to her work within the CNCF's TAG Contributor Strategy. The TAG's main goal is to foster a broad, cross-project open source community for CNCF projects. Today, most projects tend to be siloed, focusing on their own communities. However, project maintainers across the cloud native ecosystem face many of the same challenges and repeatedly reinvent the wheel rather than collaborating.

The CNCF provides a "common home" for projects to come together, break down silos, and share ideas and best practices. For example, when she first joined the TAG, Catherine needed to understand how to grow and foster an engaged community. So she set out to interview maintainers from various projects and document their advice in a guide that would help her and other projects follow best practices and avoid pitfalls. A great benefit of helping develop these resources within the TAG is that it provides access to people who wouldn't otherwise be so generous with their time. They are much more likely to share experiences and lessons learned if they know it will help the community rather than just one individual.

 

The Cloud Native Glossary

Catherine also spearheaded a CNCF initiative that has become a great resource for anyone new to cloud native: the Cloud Native Glossary. The Glossary defines cloud native terms in simple words so technical and non-technical audiences alike can understand them. It is community-driven and vendor-neutral, and anyone is welcome to contribute. There are also multiple localization efforts, with teams translating it into German, Spanish, Korean, and Chinese, among other languages (an effort that got a shoutout on the keynote stage an hour after the interview was recorded!) — expanding global access to cloud native concepts for non-native English speakers worldwide.

 

Empowering the Non-Technical Community

As a non-technical individual, Catherine faced challenges when learning about Kubernetes after joining a Kubernetes company in 2017. Existing content assumed too much context that was unfamiliar to beginners. At some point, she decided to buy a computer science textbook to build baseline knowledge and vocabulary.

As adoption started spreading, acquaintances outside cloud native started asking Catherine to explain Kubernetes. That's when she realized that many non-technical practitioners are in the same boat, needing to learn core cloud native concepts without having a technical background.

After one more "can you explain Kubernetes to me," Catherine decided to write a Kubernetes primer for non-technical readers, leading to various other intro articles. Positive feedback indicated a broader need, so she reached out to the CNCF to ask if they'd be interested in creating this type of content. This led to the formation of the CNCF Business Value subcommittee which focuses on creating cloud native resources for business audiences. Just like the Glossary, these resources aim to be easy to understand for anyone new to cloud native, whether they are non-technical or junior technical practitioners. 

 

Connecting End User Voices, Cross-Project Collaboration, and Non-Technical Communities

Ultimately, Catherine's work demonstrates the power of amplifying end user voices, building a cross-project community, and empowering non-technical community members. The initiatives she works on bring these elements together, fostering a more diverse, accessible cloud native ecosystem — valuable lessons for organizations looking to engage with cloud native. Looking ahead, Catherine plans to continue evangelizing initiatives that prioritize end-user voices at KubeCon and beyond. She also hopes to organize another Linkerd Day in the future showcasing more end user stories.

Full video at: KBE Insider Amsterdam

 

Follow us: @kubebyexample

Leave anonymous feedback

Join the KBE community forum

Summary

Catherine Paganini, Head of Marketing at Buoyant and CNCF TAG Contributor Strategy co-chair, recently sat down with KBE Insider for a car interview to share her community building experience. Catherine explains how prioritizing end-user voices at conferences and building cloud native resources for non-technical audiences have been a big focus for her. She also shared her perspective on the challenges faced by project maintainers and how they can save time and enable future growth. This in-depth post recaps key highlights from the interview and provides additional context around Catherine's work at the CNCF and in the wider cloud native ecosystem.

Data Protection Challenges for Kubernetes Databases

Body

In a car interview at KubeCon Europe, Gaurav Rishi, VP of Product and Cloud Native Partnerships at Kasten by Veeam, discusses the data protection challenges that customers face as databases in Kubernetes-based applications grow exponentially in variety and complexity. Kasten provides Kubernetes application backup and recovery, disaster recovery, and application mobility. Gaurav's role involves managing the product and partnerships that allow Kasten's technology to come to life in a simple way.

 

Early Kubernetes Focused on Stateless Systems

In the early days of Kubernetes, the focus was on stateless systems and the "pets vs cattle" analogy, according to Gaurav. This emphasized the simplicity and dispensability of individual containers. However, databases require state to store the data generated by applications. Cloud-native applications also leverage a variety of data services beyond a single relational database.

Gaurav notes that as Kubernetes has evolved, databases have become the most popular workload running on the platform. This shift toward stateful systems and the rise of "polyglot persistence" - using multiple data services - have created new data protection challenges for Kubernetes.

In the monolithic, on-prem world, enterprises had a single relational database to protect. Now, cloud-native applications may utilize several different databases ranging from SQL to NoSQL with their own native tools and best practices for backups. Eventual consistency models also complicate backups, requiring database vendors to define how to achieve consistency. All of this variety and state has made data protection an important issue for Kubernetes workloads. What started as a platform for simple, stateless containers must now deal with the complexities of stateful systems and databases at scale.

This transition illustrates how quickly Kubernetes has evolved from its early days and the shifting realities that platform vendors now face in providing solutions for data-centric workloads. The rise of databases as the dominant Kubernetes workload shows how state has become inescapable - and must be managed - within cloud-native environments.

 

Backups at the Storage and Database Layers

Within Kubernetes clusters, there are two main layers where backups can be performed:

  1. The storage layer through snapshots

Taking snapshots of persistent volumes is a simple way to back up data. Snapshots provide a point-in-time copy of the raw data blocks. However, snapshots may not capture data that is cached in memory but not yet flushed to disk. This can lead to data loss during recovery if changes were made between the snapshot and the system failure.

  2. The database layer through logical backups

Logical backups use the native tools provided by each database to back up the data in a consistent state. This captures data in memory as well as on disk. However, since there are over 300 different databases supported on Kubernetes, there are over 300 different logical backup tools. This adds complexity for platforms aiming to provide database backups across a variety of workloads.
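The two layers can be made concrete with a pair of sketches. Both are illustrative only: the namespace, PVC names, snapshot class, and database details below are hypothetical and not taken from the interview.

```yaml
# Layer 1: a storage-level snapshot of a database's persistent volume,
# using the stable Kubernetes VolumeSnapshot API (snapshot.storage.k8s.io/v1).
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: postgres-data-snap
  namespace: my-app                       # hypothetical namespace
spec:
  volumeSnapshotClassName: csi-snapclass  # depends on your CSI driver
  source:
    persistentVolumeClaimName: postgres-data  # hypothetical PVC
---
# Layer 2: a logical backup via the database's own tooling, here a
# CronJob running pg_dump (PostgreSQL chosen purely as an example).
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-logical-backup
  namespace: my-app
spec:
  schedule: "0 2 * * *"                   # nightly at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: pg-dump
              image: postgres:16
              command: ["sh", "-c",
                "pg_dump -h postgres -U app mydb > /backup/mydb.sql"]
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: backup-store   # hypothetical backup PVC
```

Note how the sketches mirror the trade-off described above: the snapshot is database-agnostic but may miss unflushed writes, while the pg_dump job produces a consistent backup but is specific to just one of the 300-plus supported databases.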

Eventual consistency models further complicate backups by requiring careful handling to achieve a consistent state. This often necessitates using the native tools and best practices defined by the database vendor.

The combination of these factors - applications as the unit of atomicity (that includes Kubernetes objects), snapshots vs logical backups, hundreds of database-specific tools, and eventual consistency - makes data protection for Kubernetes databases an increasingly complex challenge.

 

Platforms must balance:

  1. Providing a unified interface that hides complexity
  2. Integrating with each database's native backup tools
  3. Following best practices for different databases and consistency models

In the end, platforms that can achieve simplicity through flexibility by integrating well with diverse database tools may succeed in this space. But the variety of backup techniques, tools and features represents a minefield that database-as-a-service platforms must navigate. Kasten's approach is to provide extensible templates that use native database tools, while giving freedom of choice. Kasten works with storage, Kubernetes distributions, and security partners to make their platform work across diverse data protection needs.

 

Backup/Recovery, Disaster Recovery, and Mobility Solutions

Kasten K10 provides application backup and recovery for Kubernetes environments. It discovers applications running in clusters, defines policies for how often backups should occur, and specifies retention periods. K10 intelligently selects between snapshot-based or logical database backups to achieve consistency. During recovery, K10 rehydrates microservices in the correct order based on application requirements. For complex environments with stateful applications and databases, K10 automates the entire backup/recovery lifecycle.
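The policy-driven model described above can be illustrated with a backup policy resource. The sketch below follows the general shape of K10's Policy custom resource, but the field names should be verified against Kasten's documentation, and the application name is hypothetical:

```yaml
# Illustrative backup policy in the general shape of Kasten K10's
# Policy custom resource (verify fields against Kasten's docs).
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: my-app-backup
  namespace: kasten-io
spec:
  frequency: "@daily"        # how often backups occur
  retention:                 # how long backups are kept
    daily: 7
    weekly: 4
  actions:
    - action: backup
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: my-app   # hypothetical application
```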

Kasten also provides disaster recovery solutions to ensure business continuity. K10 covers disaster recovery through snapshot replication, storage mirroring and native cloud DR capabilities. Kasten helps replicate backups across different storage types and cloud/on-prem environments for increased redundancy.

Finally, Kasten enables application mobility through data portability across Kubernetes environments and clouds. K10 allows customers to backup applications and restore them to different Kubernetes distributions or clouds. Kasten works across managed and self-managed Kubernetes deployments, hybrid clouds, and edge environments.

 

Conclusion

As data protection needs for Kubernetes databases grow increasingly complex, platforms that can manage this complexity through simplicity and flexibility will determine success. 

Data protection platforms like Kasten K10 that integrate native database tools, provide templates, and allow flexibility can address the complex challenges of protecting applications on Kubernetes in a simple yet powerful way. Complementary open source projects such as Ceph and Kanister can accelerate innovation in managing state and offer a variety of persistence and protection techniques within these dynamic environments.

Full video at: KBE Insider Amsterdam

 

Learn more about Kasten: kasten.io

Follow us: @kubebyexample

Leave anonymous feedback

Join the KBE community forum

 

Summary

In a car interview at KubeCon Europe, Gaurav Rishi, VP of Product and Cloud Native Partnerships at Kasten by Veeam, discusses the data protection challenges that customers face as databases in Kubernetes-based applications grow exponentially in variety and complexity. Kasten provides Kubernetes application backup and recovery, disaster recovery, and application mobility. Gaurav's role involves managing the product and partnerships that allow Kasten's technology to come to life in a simple way.

Navigating Digital Transformation While Reskilling Workforces

Body

Digital transformation has become a top priority for enterprises looking to keep pace with changing technology and customer demands. However, transforming legacy systems and processes requires new skills that many companies don't have in-house. This skills gap is driving enterprises to look for ways to reskill and upskill their existing workforces to fill critical roles. 

In this webinar hosted by GlobalData, experts from Red Hat, Cisco, and GlobalData explore how enterprises are approaching reskilling and training to enable digital transformation strategies.

 

The Skills Gap Driving Reskilling Efforts 

Charlotte Dunlap, Research Director at GlobalData, explains that she first noticed the skills gap a few years ago when operations teams struggled to migrate advanced applications to distributed environments. The rise of DevOps exposed the need for new roles and skills across the application lifecycle. Frontend, backend, DevOps, and security skills were all required to build, deploy, and manage modern distributed applications. 

Meanwhile, Amy Larsen DeCarlo, Principal Analyst at GlobalData, noticed skills shortages emerging around cloud and security. As organizations adopted more distributed, virtualized environments, security expertise didn’t keep pace. In fact, the International Information System Security Certification Consortium (ISC)² projects the global cybersecurity workforce needs to expand by roughly 75% to meet future demands. In its 2022 Cybersecurity Workforce Study, it stated the field needs 3.4 million more cybersecurity professionals globally beyond the current workforce of 4.7 million.

With massive layoffs and economic uncertainty, organizations are looking inward to reskill existing employees to fill open roles. According to GlobalData's jobs analytics database, US job vacancies went from under 50,000 in 2020 to an expected 130,002 in 2021 and 138,022 in 2022.

 

Top In-Demand Skills Driving Training Efforts

According to GlobalData’s analysis of company filings and hiring trends, the top skills driving training efforts include: 

  • Application lifecycle management 
  • Application platforms and containers 
  • DevOps 
  • Kubernetes 
  • Microservices 
  • Cloud security 
  • Automation including robotic process automation (RPA) 
  • Low code/no code development 
  • Observability 

An analysis of open job postings shows the majority are in DevOps, Kubernetes, microservices, cloud security, low-code automation, and observability. GlobalData’s research also reveals AI, machine learning, data analytics, and data management are frequently mentioned in relation to training needs.

 

How Vendors Are Enabling Reskilling 

By providing interactive, hands-on resources, technology vendors like Red Hat and Cisco enable enterprises to rapidly reskill teams into highly sought-after cloud, DevOps, security, and data analytics roles needed to support transformation efforts.

Gordon Tillmore, Product Marketing Director at Red Hat, discusses how the skills gap led them to launch Kube By Example (KBE), a free Kubernetes and Cloud Native learning community supported by Red Hat. KBE addresses skill gaps across development, DevOps, security, site reliability engineering, and more with 19 different training paths that range from beginner to advanced level. The training focuses on an "absorb by doing" approach with hands-on labs and projects so learners can immediately apply skills. Red Hat customers like Ford are even contributing training content based on their own experiences. 

Ray Stephenson, Head of Developer Relations at Cisco, explains how their Developer Relations program is reskilling network engineers through guided learning, labs, and sandboxes. Many network engineers need to add automation, programming, and other software development skills to manage the scale and complexity of modern networks. Ray shares an example where automating network provisioning reduced the time to set up a new retail store location from 6 hours manually to just 7 minutes. Developer Relations creates tailored learning journeys for audiences ranging from complete beginners to experts looking to expand specific skills like automation.

Meanwhile, GlobalData offers valuable insights into the most in-demand IT skills that enterprises need to focus their reskilling efforts on. Through comprehensive analysis of key data sources, GlobalData identifies current and emerging skill gaps to provide data-driven guidance to enterprises on where they need to focus reskilling initiatives for maximum impact. Equipped with these strategic insights, enterprises can shape comprehensive reskilling programs that tightly align to actual skills gaps.

 

How Enterprises Are Evolving Training 

According to Gordon from Red Hat, enterprise training programs are becoming more flexible and people-centric. Rather than only hiring new talent for skills like AI/ML, companies are looking inward. They’re assessing how existing employees can be upskilled or reskilled to take on new roles. 

Companies are realizing they can retain more of their culture and experience by investing in reskilling internal teams. With budget constraints and a tight job market, indefinitely hiring new people for every new skill that emerges is simply not sustainable. Training programs are shifting their focus to unlocking the potential that already exists within workforces. Courses are being designed to meet learners where they currently are in their skills and knowledge, and to progress them to proficiency in new technologies through hands-on, experiential learning. Lectures and theory are taking a back seat to actively applying concepts through real-world projects and scenarios.

Enterprises are also becoming much more flexible in how they identify transferable skills within their workforces. Employees in non-technical roles may possess analytical strengths, communication abilities, and other talents that can translate well into more technical specialties with the proper training approach tailored to their aptitudes. 

The World Economic Forum projects that 50% of the global workforce will require reskilling as technology adoption accelerates. Admittedly, some jobs will be replaced by automation but wholly new roles will emerge at the intersection of people and cutting-edge technologies like AI. Enterprises that embrace comprehensive reskilling programs today will ensure they have the adaptable talent needed to compete and innovate in the future.

 

The Role of AI in Reskilling 

When asked about how AI like ChatGPT will impact reskilling, Ray from Cisco believes it will accelerate learning. Generative AI can provide code examples and templates that learners can then try out and customize. This application of AI enables faster skill building compared to learning programming from scratch.

Gordon from Red Hat notes that AI assistants like ChatGPT are the next evolution of learning tools like textbooks, online training, and Stack Overflow. However, he cautions that generative AI is only one piece of the puzzle. Humans still need to define problems, apply reasoning, and assemble solutions. AI-generated code assists with rote coding tasks but isn’t a substitute for human coders' problem-solving abilities.

 

Key Takeaways 

  • Digital transformation is driving demand for new technical skills that enterprises need to develop internally through reskilling and upskilling. 
  • Operations, security, cloud, and automation skills are particularly scarce across industries. 
  • Training vendors provide flexible learning paths, sandboxes, and community forums to help technologists learn in-demand skills. 
  • Enterprises are evolving training strategies to focus more on reskilling current employees over hiring. 
  • Generative AI can accelerate learning but humans are still needed to frame problems and assemble solutions.

Investing in workforce training and skills development is crucial for enterprises undergoing digital transformation. With the right strategies and tools, companies can reskill employees into the critical roles needed to support new technologies and processes. But technology alone is not enough: enterprises need to take a people-centric approach to reskilling, one focused on unlocking the potential of current employees. This human-centered training, enabled by resources from leading technology vendors, will fuel the next phase of digital transformation.

Full video at: GlobalData - Reskilling IT Workers Into a Digital Age

 

Follow us: @kubebyexample

Leave anonymous feedback

Join the KBE community forum

Summary

Digital transformation has become a top priority for enterprises looking to keep pace with changing technology and customer demands. However, transforming legacy systems and processes requires new skills that many companies don't have in-house. This skills gap is driving enterprises to look for ways to reskill and upskill their existing workforces to fill critical roles. 

In this webinar hosted by GlobalData, experts from Red Hat, Cisco, and GlobalData explore how enterprises are approaching reskilling and training to enable digital transformation strategies.

An Introduction to Podman Desktop: The GUI for Podman Container Engines

Podman Desktop provides a graphical interface for Podman, a container engine designed to be a drop-in replacement for Docker. Podman Desktop makes it easy to build, deploy and manage containers and container-based applications on desktop systems. KBE Insider had the opportunity to feature Podman Desktop in Episode 21 with Urvashi Mohnani, Langdon White, and Josh Wood. In this blog post, we’ll explore and summarize the key features of Podman Desktop and how it enhances the container experience.

 

Accessible Containers for All

Podman Desktop expands the reach of containers by providing an intuitive graphical interface, making containers and Podman more accessible to non-Linux and non-CLI users. The visual interface helps new users understand containers and how to work with them. For example, Podman Desktop has been used to teach students about containers: while the command line initially confused them, the graphical interface let them grasp concepts more easily and retain more information. Clicking buttons and seeing the results of their actions visually helped build understanding.

For users unfamiliar or uncomfortable with the command line, Podman Desktop provides an approachable on-ramp to using containers. The graphical interface lowers the barrier to entry, allowing more users to benefit from containers and Podman.

 

Integrated Kubernetes Support

Podman Desktop provides built-in support for deploying workloads to Kubernetes. It can generate Kubernetes YAML files from Podman workloads, allowing you to migrate workloads between the two platforms. Podman Desktop also allows you to connect to and deploy to existing Kubernetes clusters.
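The same YAML export is also scriptable from the Podman CLI. The sketch below is a minimal illustration in Python; it assumes Podman is installed locally, and the workload name used in any real invocation (e.g. "mypod") is a placeholder for a container or pod you have already created:

```python
import subprocess

def generate_kube_cmd(workload):
    # `podman generate kube <name>` prints Kubernetes YAML describing an
    # existing container or pod, which can then be applied to a cluster.
    return ["podman", "generate", "kube", workload]

def export_kube_yaml(workload):
    # Run Podman and capture the generated YAML as a string. Requires
    # Podman to be installed and a workload with this name to exist.
    result = subprocess.run(
        generate_kube_cmd(workload), capture_output=True, text=True, check=True
    )
    return result.stdout
```

Saving the returned YAML to a file lets you apply it to a cluster, such as a local Kind or MicroShift cluster started from Podman Desktop.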

You can spin up a local Kind (Kubernetes in Docker) or MicroShift (OpenShift in containers) cluster directly within Podman Desktop. This makes it easy to test workloads locally before deploying to production clusters. You can make changes, re-deploy, and experiment without worrying about impacting a production environment.

 

Support for Docker Compose

Podman recently added support for running Docker Compose files. Podman Desktop allows you to run Compose files locally, then generate the necessary Kubernetes YAML to deploy the application to Kubernetes. This provides a familiar starting point for developers used to Docker, which can then be adapted as needed to suit a Kubernetes-based environment.

The Docker/Podman integration is still ongoing, but the addition of Docker Compose support in Podman is a big step forward. We can likely expect to see more updates focused on bridging these two container platforms.

 

Bug Fixes and Runtime Improvements

Podman and Podman Desktop are actively developed open source projects, with ongoing work to fix bugs, improve stability and security, and enhance runtime features. Podman 4.6 is releasing soon, with an RC1 expected in late July 2023. Though closely related, Podman and Podman Desktop release on independent schedules: Podman Desktop picks up new versions of Podman to leverage new features and fixes, while also shipping its own stream of updates and feature additions.

Over the next six months, Podman will likely focus on optimizing for edge devices and automotive environments, including improving performance, memory usage, and resource utilization for these use cases. Additional use-case-specific features may also emerge.

 

Summary

Podman Desktop provides a graphical interface to enhance the Podman container engine experience. It expands the reach of containers and makes them more accessible to new and non-technical users. Built-in support for Kubernetes, including generating YAML and connecting to clusters, provides a path for testing locally and deploying to production environments. 

Support for Docker Compose in Podman helps bridge the gap for Docker users. And active development of Podman and Podman Desktop means ongoing improvements and new features. Over the next months, we can expect a focus on edge and automotive use cases, as well as tighter integration with Kubernetes. 

Get involved with the Podman and Podman Desktop communities and projects on GitHub to learn more and contribute! The Podman and Podman Desktop communities can be found on IRC, Matrix, Discord, Kubernetes Slack, and the newly revamped Podman website.

Full video at: KBE Insider

 



Overcoming Challenges in the Transition to Cloud Native

We had the opportunity to interview Jim Wittermans, Chapter Area Lead at ABN AMRO Clearing Bank, while in Amsterdam for KubeCon Europe. We discussed various challenges and myths surrounding the transition to cloud native platforms, as well as how Jim's team is helping ABN AMRO Clearing Bank solve these problems.

 

Overcoming Vendor Lock-in and Security Concerns

One key challenge Jim mentioned in adopting cloud native technologies is the concern over vendor lock-in. While many assume that vendor lock-in only applies to cloud-based solutions, Jim says that traditional on-premises infrastructure can also lead to lock-in. The key is visibility into where dependencies exist so you can make informed decisions.

Security concerns also often hold organizations back from moving to the cloud. Jim notes that moving to the cloud can actually expose existing security holes that were previously hidden on-premises. While cloud platforms provide robust security controls, organizations need to learn how to leverage them properly. 

An effective way to address these challenges is to take advantage of KBE's Kubernetes Security learning path. It is highly recommended for team training, as it educates the entire team on potential security risks and essential best practices. By completing it, developers learn to deploy applications safely and respond promptly to potential threats, keeping their Kubernetes environments secure. Visit Kubernetes Security to learn more.

 

Focus on Value Delivery, Not Infrastructure Management

Jim believes development teams should focus on where they can deliver value, not on managing their own infrastructure. While hosting their own servers and VMs may have been part of operations in the past, it takes developers away from writing code that delivers value to customers. Moving to cloud native services and platforms frees development teams from managing the underlying infrastructure, letting them focus on shipping the features and code that matter.

 

Training and Mindset Shifts Needed for Cloud Native Adoption

Jim notes that simply re-platforming applications into containers does not make them cloud native. Fundamentally different skills and mindsets are needed for developing cloud native applications. ABN AMRO Clearing Bank provides training to help their teams develop these skills and understand the architectural changes required. They also run proofs of concept to get teams experimenting with new technologies in an isolated environment.

 

Development Services Team Accelerates Adoption

To accelerate cloud native adoption, ABN AMRO Clearing Bank created a "Development Services" team that sits between their platform and product teams. This team produces reusable templates, Dockerfiles, YAML files, and other consumable constructs that are pre-approved for their environment. This removes friction for development teams who would otherwise each have to navigate security, governance, and architecture requirements from scratch. It has significantly boosted the speed of cloud native adoption across the bank.

 

Rotating Teams Through Development Services

Jim also rotates some developers from product teams through the Development Services team for a period of time. This gives developers exposure to user perspectives and provides a reminder of the value their work delivers. It also provides valuable feedback to the Development Services team on how they can better enable product teams.

As Jim and his team are tackling the shift to cloud native platforms, their experiences provide valuable insights for any organization looking to successfully make the transition to cloud native technologies at scale. With patience, training, and a focus on enabling developers, even traditionally risk-averse organizations like banks can begin reaping the benefits of cloud native platforms.

Full video at: KBE Insider Amsterdam

 



Bringing Innovation to Finance with Cloud-Native Technologies

Finance has traditionally been an industry slow to adapt to new technologies. However, innovative companies like Ortec Finance are utilizing cloud-native technologies like Kubernetes and microservices to provide Software-as-a-Service (SaaS) solutions to their clients. Joining us to discuss how Ortec Finance has leveraged these technologies is Joris Cramwinckel, Technologist at Ortec Finance.

 

Modernizing a 40-Year-Old Fintech Company

Ortec Finance has been a fintech company for the last 40 years, providing software made by econometricians for econometricians. However, about seven years ago, Ortec Finance’s CTO realized the technology landscape was changing rapidly and more research needed to be done. Joris was given a part-time position to investigate new technologies that could benefit Ortec Finance.

Through experiments in Ortec Finance’s innovation lab, Joris and his team started working with Kubernetes and cloud technologies as early as 2016. They were also early adopters of serverless computing, researching the cost efficiency of solutions like AWS Lambda. Eventually, Joris and Ortec Finance’s CTO revised the entire enterprise technology strategy and made the case for adopting cloud-native and event-driven technologies company-wide.

Adopting cloud-native technologies company-wide is now widely supported by learning communities like Kube By Example (KBE), which addresses training needs across skill levels and a broad range of skill sets. Completing fundamental training first, such as KBE's Kubernetes Fundamentals course, is highly recommended, since familiarity with Kubernetes and cloud-native concepts maximizes the efficiency of later training. The availability of open educational resources to upskill engineers and transform organizational culture has made the transition to modern architectures more feasible. Initial training establishes a baseline understanding and a common vocabulary across teams. With that shared understanding in place, teams can explore more advanced topics at their own pace, then put concepts into practice through collaborative hands-on workshops and hackathons. This community-first learning approach accelerates upskilling and fosters an internal community of practice.

 

Standardizing Processes with Containers

One of the main benefits Ortec Finance has gained from adopting container technologies like Kubernetes is the standardization of processes. According to Joris, “Now our CI/CD architecture is almost the same for Java web as for the back end of desktop applications. And that's way easier to maintain if everyone ships their stuff into containers.”

Ortec Finance has a mix of web applications, APIs, and traditional desktop software. In the past, they had separate tooling and processes for each type of product. By putting everything into containers, they’ve been able to reuse tooling and streamline CI/CD pipelines regardless of the application type.

 

Easing the Transition for Teams and Customers

While the technology behind Kubernetes and cloud-native architectures has become straightforward to implement, transitioning teams and customers is still challenging. When onboarding a new product or team to their cloud-native stack, Ortec Finance runs “college tours” with tailored training for both engineers and end customers.

Joris acknowledges that the transformation to cloud-native technologies takes time for most companies. For example, even Netflix took 6-7 years to fully transition from on-premises to the cloud. The real challenges are updating delivery processes, sales practices, and helping the broader workforce adapt. Ortec Finance is still working to optimize their GitHub flow and release processes to match the needs of different product teams, like those dealing with configuration and slower release cycles.

 

Visualizing Complex Cloud Architectures

Given the complexity of cloud-native architectures, Joris and his team rely heavily on diagramming tools to make processes tangible. Icons for Kubernetes, cloud providers, and other technologies quickly denote how different parts of the architecture fit together. While YAML is the standard for configuration, visualizations and whiteboarding remain important for discussing how the overall system should function.

Joris believes “it's about visioning the possible and then tailoring the processes and technologies.” Diagramming helps make that vision concrete before determining how to execute it.

 

The Future of Cloud in Finance

While finance has traditionally been slower to adopt new technologies, companies like Ortec Finance show that innovation is possible. According to Joris, “The technology is actually pretty easy at some point. The biggest challenge now is to change the humans, the organization.”

As Ortec Finance continues transitioning teams and products to their cloud-native stack, they have a lot of work left onboarding customers and ensuring all parts of the organization adapt. However, the benefits of standardization, ease of experimentation, and faster delivery far outweigh the challenges. The future of Fintech will depend on companies embracing the cloud, and Ortec Finance is positioning themselves ahead of the curve. Overall, Joris sees a lot of promise in how cloud-native technologies can bring innovation to finance.

Full video at: KBE Insider Amsterdam

 


Summary

Finance has traditionally been an industry slow to adapt to new technologies. However, innovative companies like Ortec Finance are utilizing cloud-native technologies like Kubernetes, Red Hat OpenShift, and microservices to provide Software-as-a-Service (SaaS) solutions to their clients. Joining us to discuss how Ortec Finance has leveraged these technologies is Joris Cramwinckel, Technologist at Ortec Finance.

The Ins and Outs of Keycloak: An Interview with Alexander Schwartz

Alexander Schwartz is a Principal Software Engineer at Red Hat working on the Keycloak team. He has been working with Keycloak for over 8 years, starting as a community contributor in 2015 before being employed by Red Hat in January 2022. Alexander works fully remotely from his home office near Frankfurt, Germany.

Keycloak is an open source identity and access management solution aimed at modern applications and services. KBE Insider recently had the opportunity to talk with Alex about Keycloak during his visit to Amsterdam for KubeCon Europe.

 

The Future of Keycloak

Keycloak has targeted both cloud and non-cloud environments for many years. You can download Keycloak and run it on OpenJDK, deploy it on Kubernetes and OpenShift, or use it with various cloud services. Keycloak integrates with standards like OpenID Connect and OAuth 2.0 to secure applications and APIs. Alex sees opportunities to further integrate Keycloak with cloud-native technologies, for example by adding support for CloudEvents to send notifications when users log in. He also mentions a desire to provide more deployment examples and documentation for securing Kubernetes clusters, serverless environments, and other cloud-native platforms with Keycloak.
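To make the OAuth 2.0 integration concrete, here is a minimal sketch of a service obtaining an access token from Keycloak via the client-credentials grant. The server URL, realm name, and client credentials are placeholders; the endpoint path shown applies to recent Quarkus-based Keycloak releases (older distributions prefix it with /auth):

```python
import json
import urllib.parse
import urllib.request

def token_endpoint(base_url, realm):
    # Keycloak serves each realm's OpenID Connect token endpoint at
    # /realms/<realm>/protocol/openid-connect/token.
    return f"{base_url.rstrip('/')}/realms/{realm}/protocol/openid-connect/token"

def fetch_token(base_url, realm, client_id, client_secret):
    # OAuth 2.0 client-credentials grant: the client authenticates as
    # itself and receives a JSON body containing an access_token.
    form = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()
    with urllib.request.urlopen(token_endpoint(base_url, realm), data=form) as resp:
        return json.load(resp)
```

The returned access token is then presented as a Bearer token when calling APIs secured by Keycloak.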

 

Exciting Features Coming Soon

The next release, Keycloak 22, will be based on Quarkus 3 and Hibernate 6 to benefit from the improvements in those frameworks. Following the overhaul of the administrator web UI in a previous release, the user account console will also be rebuilt on the latest set of technologies with an improved user experience.

Support for cross-site Keycloak deployments, a highly requested feature, is currently in preview. The Keycloak team is working to make active-passive, and later active-active, setups fully supported in a future release.

 

The Need for Observability 

Observability and monitoring are passions for Alex. Keycloak already supports standard metrics out of the box. With the OpenTelemetry agent added, Keycloak provides additional metrics and tracing, surfacing details like the most-used Keycloak endpoints and tracing calls with detailed timings down to individual database queries. Improving metrics and observability to help users optimize their installations is an ongoing goal.

Alex wants to provide more predefined dashboards, alerts, and monitoring tools specific to Keycloak. Observability is key for well-running software systems, especially when operating at large scale. Knowing details about usage patterns and performance helps determine when and how to scale Keycloak, optimize configurations, and troubleshoot issues. 
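As an illustration of what such monitoring data looks like, metrics endpoints of this kind typically serve the Prometheus text exposition format, which is simple to consume. The parser below is a simplified sketch (it ignores trailing timestamps and label values containing spaces), and the metric name used in the usage note is hypothetical:

```python
def parse_prometheus_line(line):
    # Prometheus text exposition: '<metric>{labels} <value>'.
    # Comment lines (# HELP, # TYPE) and blank lines yield None.
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    # Split on the last space: everything before it is the metric name
    # (possibly with labels), everything after it is the numeric value.
    name, _, value = line.rpartition(" ")
    return name, float(value)
```

For example, a scraped line such as `logins_total{realm="demo"} 42` would parse into the metric name with its labels and the float value 42.0.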

 

Serving Two Types of Users 

Keycloak serves two main types of users. Some want an IAM solution without much customization, using Keycloak out of the box to secure applications and services. Others want to deeply customize Keycloak for their needs by developing custom authenticators, event listeners, user federation mappings, and more. Keycloak aims to support both use cases, providing an easy-to-use solution as well as extension points for custom integrations. An ecosystem of extensions and integrations has developed around Keycloak to meet the needs of more customized installations, and a vibrant community builds tutorials, writes blog posts, and shares best practices.

 

The Road Ahead 

Cross-site support will hopefully be released soon, along with the other features mentioned previously. The community around Keycloak continues to drive new feature requests and find innovative ways to deploy Keycloak, keeping the project active and vibrant. Keycloak is also broadly used within Red Hat to secure many public-facing services, providing valuable real-world usage and feedback. 

Overall, the future looks bright for Keycloak, with an active community and virtually endless ways to use an IAM solution. Improving in areas like zero-downtime upgrades, cloud-native integration, customization, and observability will help serve all Keycloak users, whether they want an out-of-the-box solution or require heavy customization. The project aims to continue balancing these needs while improving the technology overall. Many thanks to Alex for sharing his time and knowledge!

Full video at: KBE Insider Amsterdam

 



Closing the Skills Gap with AI & Automation Technologies and Training Programs

The rise of cloud technologies and Kubernetes has increased demands on IT teams while also exacerbating the talent shortage. Low-code platforms, AI and automation technologies, and self-paced training programs like Kube by Example (KBE) provide resources to help close this skills gap through upskilling and reskilling.

In this KBE Insider car interview, Charlotte Dunlap from GlobalData Analyst Group notes that while Kubernetes offers a scalable and elastic platform, enterprises struggle with configuring new application architectures on the platform. This has led to a large global technology skills gap that vendors are working to address. Before Kubernetes, developers quickly deployed AWS services, but operations teams lacked visibility. Kubernetes bridges that divide while introducing new challenges.

 

Upskilling and Reskilling Existing Talent

According to Charlotte, instead of seeking candidates with specific experience, companies should leverage the institutional knowledge of existing employees. KBE learning paths, for example, help with both upskilling (expanding knowledge of new technologies) and reskilling (preparing employees for new roles using their core talents).

So instead of requiring seven years of Python experience, companies should focus on software design and problem-solving skills; with time, employees can learn new languages and tools. Close collaboration between developers and operations teams from the start, along with automation, can also help applications transition smoothly to production.

 

Democratizing Technology Adoption

Low-code platforms, RPA, and AI have broadened access to previously "elite" technologies. Products like Microsoft Power Platform, IBM Cloud Paks for Business Automation, and Salesforce Flow include capabilities that were once reserved for experienced developers and data scientists, allowing non-coding users to leverage intelligent automation and cognitive services. In this way, low-code platforms and AI have made it possible for more workers to create applications and automate processes, which in turn has created demand for training programs that help users adopt and implement these tools effectively within their organizations. Low-code and AI technologies represent an inflection point where previously "difficult" capabilities are becoming accessible to the mainstream. With guidance and best practices, they can enable digital transformation and close skills gaps by empowering a broader range of workers.

KBE offers free, on-demand learning paths to dive deeper into how developers and IT staff can start leveraging AI with Kubernetes to automate scaling and manage resources. Visit AI/ML with Jupyter on Kubernetes: JupyterHub for more information.

 

Bridging the Developer-Operations Divide

While DevOps is the ideal, silos still exist. Developers create applications, but operations teams struggle with configuration and deployment. More abstraction is needed to bridge this divide. Balancing these pendulum swings will be key: providing developer platforms with enough abstraction for operations teams without sacrificing maintainability. Better communication across roles will help break down silos and enable teams to utilize each other's strengths effectively.

In short, the focus should shift from only seeking candidates with specific experience levels to leveraging existing talent and institutional knowledge within companies. Solutions to the skills gap that help with both upskilling existing workers on new technologies like Kubernetes as well as reskilling employees for new roles using their core talents are critical. With the right approach, technologies like AI and training programs can close the skills gap by empowering workers with tools to learn and adapt. 

Full video at: KBE Insider Amsterdam

 

Upcoming Webinar on “Reskilling IT Workers Into a Digital Age”

GlobalData’s webinar, “Reskilling IT Workers Into a Digital Age”, will be live on June 13, 2023, at 11:00 AM Eastern Time. This webinar will provide further insights on the topic of closing the skills gap with AI and automation technologies. 


The webinar will examine how enterprises are navigating digital transformation while dealing with internal resource constraints by reskilling and upskilling their existing workforces. Attendees will gain valuable perspectives and advice on how their organizations can leverage AI and automation strategically, develop impactful training programs, and build a workforce equipped to support digital transformation initiatives.

Register here for GlobalData’s webinar, “Reskilling IT Workers Into a Digital Age”.

 


 

