
Tuesday, 13 June 2023

5G network rollout using DevOps: Myth or reality?


The deployment of Telecommunication Network Functions had always been a largely manual process until the advent of 5th Generation Technology (5G). 5G requires that network functions be moved from a monolithic architecture toward modularized and containerized patterns. This opened up the possibility of introducing DevOps-based deployment principles (which are well-established and adopted in the IT world) to the network domain.

Even after containerization, 5G network functions remain quite different from traditional IT applications because of strict requirements on the underlying infrastructure, including specialized accelerators (SR-IOV/DPDK) and network plugins (Multus) needed to handle mission-critical, real-time traffic. This calls for segregating the network deployment process into distinct “functional layers” of DevOps functionality that, when executed in the correct order, deliver a fully automated deployment closely aligned with established IT DevOps practices.

This post provides a view of how these layers should be managed and implemented across different teams.

The need for DevOps-based 5G network rollout


5G rollout brings the following requirements, which make it essential to rigorously automate the deployment and management process (as opposed to the largely manual processes of earlier technologies such as 4G):

◉ Pace of rollout: 5G networks are deployed at record speeds to achieve coverage and market share.

◉ Public cloud support: Many CSPs use hyperscalers like AWS to host their 5G network functions, which requires automated deployment and lifecycle management.

◉ Hybrid cloud support: Some network functions must be hosted in a private data center, which also requires the ability to place network functions dynamically and automatically.

◉ Multicloud support: In some cases, multiple hyperscalers are necessary to distribute the network.

◉ Evolving standards: New and evolving standards, such as Open RAN, require continuous updates and automated testing.

◉ Growing vendor ecosystems: Open standards and APIs mean many new vendors are developing network functions that require continuous interoperability testing support.

All the above factors require a highly automated process that can deploy, redeploy, place, terminate, and test 5G network functions on demand. This cannot be achieved with the traditional approach of manually deploying and managing network functions.

Four layers to design with DevOps principles


There are four “layers” that must be designed with DevOps processes in mind (a minimal end-to-end sketch follows the list):


1. Infrastructure: This layer is responsible for deploying the cloud (private/public) infrastructure that hosts network functions. It automates the deployment of virtual private clouds, clusters, node groups, security policies, and other resources required by the network function, and it ensures the correct infrastructure type is selected with the CNIs the network function requires (for example, SR-IOV and Multus).

2. Application/network function: This layer is responsible for installing network functions on the infrastructure by running Helm commands and post-install validation scripts. It also handles major upgrades of the network function.

3. Configuration: This layer takes care of any new Day 2 metadata/configuration that must be loaded on the network function, for example, new metadata to support slice templates in the Policy Control Function (PCF).

4. Testing: This layer is responsible for running automated tests against the various functionalities supported by network functions.
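
To make the four layers above concrete, here is a minimal, illustrative sketch of how the four pipelines might be chained in the correct order. It is not the implementation described in this post: it assumes Terraform, Helm, kubectl, and pytest are available on the pipeline runner, and all names, paths, endpoints, and payloads are placeholders.

```python
import subprocess
import requests

NF_CONFIG_API = "https://pcf.example.internal/config/v1/slice-templates"  # placeholder endpoint

def run(cmd, **kwargs):
    # Run one pipeline step and fail the whole pipeline on a non-zero exit code.
    subprocess.run(cmd, check=True, **kwargs)

def infrastructure_layer():
    # Layer 1: provision the VPC, cluster and node groups from an IaC definition.
    run(["terraform", "init"], cwd="infra/")
    run(["terraform", "apply", "-auto-approve"], cwd="infra/")

def application_layer():
    # Layer 2: install (or upgrade) the containerized network function with Helm,
    # then validate that all of its pods become ready.
    run(["helm", "upgrade", "--install", "pcf", "./charts/pcf",
         "--namespace", "5g-core", "--create-namespace", "--wait"])
    run(["kubectl", "wait", "--for=condition=Ready", "pods", "--all",
         "--namespace", "5g-core", "--timeout=300s"])

def configuration_layer(slice_template, token):
    # Layer 3: push Day 2 metadata (for example, a slice template) to the
    # network function's configuration API. Endpoint and schema are assumptions.
    resp = requests.post(NF_CONFIG_API, json=slice_template,
                         headers={"Authorization": f"Bearer {token}"}, timeout=30)
    resp.raise_for_status()

def testing_layer():
    # Layer 4: run the automated regression suite against the deployed function.
    run(["pytest", "tests/regression", "-q"])

if __name__ == "__main__":
    infrastructure_layer()
    application_layer()
    configuration_layer({"sliceTemplateId": "embb-default"}, token="...")
    testing_layer()
```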

Each of the above layers has its own implementation of DevOps toolchains. Layers 1 and 2 can be further enhanced with a GitOps-based architecture for lights-out management of the application, as sketched below.
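
The GitOps idea behind that lights-out management can be illustrated with a very small reconciliation loop: the desired state lives in Git, and the cluster is continuously converged to whatever is committed. This is a conceptual sketch only (real deployments would typically use a controller such as Argo CD or Flux); the repository URL and paths are placeholders.

```python
import subprocess
import time

REPO = "https://git.example.com/5g/nf-desired-state.git"  # placeholder repository
CLONE_DIR = "/tmp/nf-desired-state"

def reconcile_forever(interval_s=60):
    # Clone once (ignore the error if the clone already exists), then keep
    # pulling the desired state from Git and applying it to the cluster.
    subprocess.run(["git", "clone", REPO, CLONE_DIR], check=False)
    while True:
        subprocess.run(["git", "-C", CLONE_DIR, "pull", "--ff-only"], check=True)
        # kubectl apply is idempotent: re-applying unchanged manifests is a no-op.
        subprocess.run(["kubectl", "apply", "-R", "-f", CLONE_DIR], check=True)
        time.sleep(interval_s)

if __name__ == "__main__":
    reconcile_forever()
```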

Best practices


It is very important to have a well-defined framework covering the scope, dependencies, and ownership of each layer. The following table shows our view of how it should be managed:

Pipeline | Infrastructure | Application | Configuration | Testing
Scope (functionality to automate) | VPC, subnets, EKS cluster, security groups, routes | CNF installation, CNF upgrades | CSP slice templates, CSP RFS templates, releases and bug fixes | Release testing, regression testing
Phase (applicable network function lifecycle phase) | Day 1 (infrastructure setup) | Day 0 (CNF installation), Day 1 (CNF setup) | Day 2+, on demand | Day 2+, on demand
Owner (who owns development and maintenance of the pipeline?) | IBM/cloud vendor | IBM/SI | IBM/SI | IBM/SI
Source control (where source artifacts are stored; any change triggers the pipeline, depending on the use case) | Vendor detailed design | ECR repo (images), Helm package | CodeCommit (custom code) | CodeCommit (test data)
Target integration (how the pipeline interacts during execution) | IaC (e.g., Terraform), AWS APIs | Helm-based | RESTCONF/APIs | RESTCONF/APIs
Dependency between pipelines | None | Infrastructure pipeline completed | Base CNF installed | Base CNF installed, release deployed
Promotion across environments | Dev, Test/Pre-prod, Prod | Dev, Test/Pre-prod, Prod | Dev, Test/Pre-prod, Prod | Dev, Test/Pre-prod, Prod

As you can see, there are dependencies between these pipelines. To make this end-to-end process work efficiently across multiple layers, you need an intent-based orchestration solution that can manage dependencies between the various pipelines and perform supported activities in the surrounding CSP ecosystem, such as Slice Inventory and Catalog.

Telecommunications solutions from IBM


This post provides a framework and approach that, when orchestrated correctly, enables a completely automated DevOps-/GitOps-style deployment of 5G network functions.

In our experience, the deciding factor in the success of such a framework is the selection of a partner with experience and a proven orchestration solution.


Source: ibm.com

Tuesday, 15 March 2022

DevOps or DevSecOps?


Currently, security attacks are getting more sophisticated and targeting a wider array of system components. This makes preventing and recovering from them more difficult, especially when security knowledge and responsibilities are siloed within an organization. It is increasingly important to ensure that everyone in an organization has a stake in security and that the company’s experts integrate more deeply with other teams. Many companies claim to make security a pillar of culture, but rarely do they invest in more than the occasional training. To truly make security a fundamental pillar, it must be embedded deeper within the organization’s engineering teams and software development life cycles (SDLCs). The latest trend in operationalizing security within tech organizations is the melding of DevOps and security professionals into a joint DevSecOps team, bringing automation, much as in quality assurance, into the security toolset to further reduce risk.

Experienced DevOps professionals have long understood their responsibility for keeping tabs on the security risks in their purview, but in many organizations they don’t necessarily have the tools or backing to delve more broadly into security. Often team members have I-shaped skillsets and responsibility areas, and their window of involvement in security is kept very narrow and strictly within the operational activities of their team. This makes operationalizing security across the organization much more difficult. This is especially true in larger organizations, where the collaboration barriers between teams and departments are more rigid and having security personnel siloed is itself a vulnerability. When security is only a small sliver of individual or team responsibilities, issues are found much later in the process and the risk level and cost of remediation rise.

This is mitigated by embedding security within other teams in the organization. Lately, the trend towards blended DevSecOps teams with broader security oversight helps address these challenges by removing the silos and barriers to collaboration within this area of the organization. This allows security to be shifted left within company workflows, facilitating earlier discovery and mitigation of risks and vulnerabilities. It acknowledges that the later a vulnerability is discovered, the more costly remediation, spent effort, and risk exposure become. Ensuring security guardrails are in place earlier in the development cycle reduces the cost of security and compliance programs and reduces the likelihood of high-risk issues making it to production without a mitigation strategy.

A lot of companies claim security is everyone’s job. But often the culture of security ends at annual anti-phishing trainings and the occasional confidentiality discussion. Embedding security experts into teams like DevOps, and ensuring engineers are hired with enough security knowledge in their toolbox, brings it deeper into the company’s culture. Once planted, those roots will grow. In many cases converting to the DevSecOps model will be mostly painless. The actual team process will largely stay the same regardless of the development model the team is using, although this shift is especially effective in agile organizations. Workloads should also remain stable, if not decrease, as time is no longer spent on fixing vulnerabilities in production.

While putting security experts in roles on a DevOps team is a start, it is even more important to hire DevOps engineers with T-shaped skillsets. While they may not be deep experts, the critical part is that they have security and compliance skills in their repertoires. Their primary day-to-day duties may not be security focused, but in their work with other teams, their security consciousness and conscientiousness will rub off and spread. They will find issues earlier in the pipeline, and the teams they work with will learn from them and gain the knowledge to spot and even prevent issues. They bring a concern for security standards into their interactions with other engineering teams, and over time they help transform the overall culture of engineering into one that is more security conscious.

Automation also plays a key role in risk reduction. Much like automating tests to find bugs earlier, shifting security monitoring left and automating vulnerability checks and adherence to security policies will save additional effort and money down the line. While shifting to this DevSecOps model reduces risk, relying on manual processes only goes so far. Automation reduces the human element and the chance that something will be overlooked. Properly designed automation tools shift the remaining possibility of human error to points in the process where it is easy to remediate. Having skilled DevOps engineers with security experience who can automate these activities pays off.
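
As a minimal sketch of such an automated check, the step below fails a build when a container image has high-severity findings. It assumes the open source Trivy scanner is installed on the build agent; the image reference is a placeholder, and the same pattern applies to whichever scanner your organization actually uses.

```python
import subprocess
import sys

IMAGE = "registry.example.com/app:latest"  # placeholder image reference

def security_gate():
    # Trivy returns the value passed to --exit-code when findings at or above
    # the listed severities are present, so the pipeline fails automatically.
    result = subprocess.run(
        ["trivy", "image", "--severity", "HIGH,CRITICAL", "--exit-code", "1", IMAGE]
    )
    return result.returncode

if __name__ == "__main__":
    sys.exit(security_gate())
```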

In addition, team empowerment is incredibly important. By shifting activities left and automating, risk can be reduced dramatically, but it is impossible to eliminate it completely. It is critical for teams to be empowered to speak up and to have proper ways to report any issues found. Making the changes to bring security into DevOps and to automate the processes mentioned above will go a long way towards building the culture of security consciousness that this requires. Embedding security deeper in teams and making it part of their daily workflows and conversations shows employees that the company prioritizes security in its operations and cares about vigilance and conscientious reporting.

Adding security consciousness to the DevOps services available to the rest of the company will ensure that risks are considered earlier in the pipeline. This saves companies time and money in remediating those risks and comes at little to no cost to team workflows and velocities. Automation further reduces risk by shifting security even further left in the process and reducing the chance of human error. These changes also make security a deeper part of company culture. Disseminating knowledge of, and attention to, security details to the teams and employees with whom DevOps works closely empowers the entire organization to make security a priority. Vulnerabilities and compliance issues will be found much earlier in the process, reducing the cost of fixing them and the risk of production incidents.

Source: ibm.com

Tuesday, 30 November 2021

DevOps for IBM Z Firmware – with Red Hat OpenShift


Red Hat OpenShift is perhaps best known for providing a platform for developing and deploying cloud-native, microservices-based applications.

But when the IBM Z Firmware development team in Germany were looking to modernize their DevOps process, they found the ideal environment in Red Hat OpenShift running on IBM Z.

Scalability and Security

IBM Z systems sit at the heart of many of the world’s biggest companies and most critical workloads. Optimized for performance, security and reliability, IBM Z is designed to handle billions of transactions without missing a heartbeat.

The IBM Z firmware layer sits between the physical hardware and the operating system, and executes many of the low-level operations of the IBM Z system. Creating and maintaining this layer is the responsibility of the IBM Z Firmware development organization, which includes hundreds of developers.

The challenges the team faced were similar to those faced by many large development organizations – flexibility, security, and availability, especially combined with the need to scale. How could existing Jenkins setups be easily extended to add new workers? How could the existing login server be updated to support new security requirements? And how could runtime environments be designed with high availability in mind?

“The DevOps process is based around a large code pipeline, moving from source code management to binary repositories to automated testing, all managed by Jenkins scripts and workers”, commented Ralf Schaufler, IBM Z Firmware Integration Architect. “Supporting this are tools for bug tracking, access control, and backup.”

Technical Solution

The team looked at various options and decided to go with Red Hat OpenShift as this provided a secure enterprise DevOps capability, as well as a CI/CD pipeline. Although most of the IBM Z Firmware artifacts run on the IBM Z architecture (s390x), some run on x86 – and so OpenShift’s support for heterogeneous environments could offer additional benefits in the future.

The next question the team faced was whether to run OpenShift in the cloud or on-premises on Z. They determined that the cloud would be a more expensive option, especially as they have a fairly static environment of hundreds of users. In addition, running OpenShift on-prem on IBM Z enabled them to co-locate the development environment next to the test environments. This dramatically reduced the time taken to transfer IBM Z firmware images between development, simulation, and new hardware – and increased security by locating all these environments in the same protected zone with local access only.

“The first use case we implemented was to migrate their multi-user development server to ‘interactive containers’ running on Red Hat OpenShift on IBM Z”, said Edmund Breit, Senior IT Specialist, IBM Z Firmware Delivery & Support. “This enabled us to use the access control features of OpenShift and meet the IBM security requirements for developers.”

The next use case they deployed was to use Jenkins for the Continuous Integration and Development (CID) pipeline within Red Hat OpenShift, supporting greater scalability and enabling updates to be packaged and then included in the next driver update. This simplified the pipeline automation and can also potentially enable multi-arch support for both s390x and x86 firmware production in the future.

“We’re now looking at further use cases, including supporting virtual machines as well as containers, wider options for persistent storage, and additional CID services”, added Edmund.

“The migration of the DevOps process to OpenShift on Z has proved very successful and delivered a more secure and scalable approach,” continued Ralf. “This has also been helped by the Red Hat OpenShift for Z development team being close by in the IBM Boeblingen lab.”

Source: ibm.com

Wednesday, 7 July 2021

Making storage simple for containers, edge and hybrid cloud


As more companies turn to hybrid clouds to fuel their digital transformation, the need to ensure data accessibility across the enterprise—from the data center to the edge—grows.

Often geographically dispersed and disconnected from the data center, edge computing can strand vast amounts of data that could be otherwise brought to bear on analytics and AI. According to a recent report from IDC, the number of new operational processes deployed on edge infrastructure will grow from less than 20% to over 90% in 2024 as digital engineering accelerates IT/OT convergence.

IBM is taking aim at this challenge with several innovative storage products. Announced today, IBM Spectrum® Fusion is a container-native software defined storage (SDS) solution that fuses IBM’s trusted general parallel file system technology (IBM Spectrum® Scale) and its leading data protection software (IBM Spectrum® Protect Plus). This integrated product simplifies data access and availability from the data center to the edge of the network and across public cloud environments.

In addition, we announced the new IBM Elastic Storage® System 3200, an all-flash controller storage system, equipped with IBM Spectrum Scale. The new 2U model offers 100% more performance than its predecessor and up to 367TB of capacity per node.

We are committed to helping customers propel their transformations by providing solutions that make it easier to discover, access and manage data across their increasingly complex hybrid cloud environments. Today’s announcements are testaments to this strategy.

IBM Spectrum Fusion

IBM Spectrum Fusion is a hybrid cloud container-native data solution for Red Hat® OpenShift® and Red Hat OpenShift Data Foundation (formerly known as Red Hat OpenShift Container Storage). It “fuses” the storage platform with storage services and is built on the market-leading technology of IBM Spectrum Scale with advanced file management and global data access.

IBM Spectrum Fusion will be offered in two iterations: a hyperconverged infrastructure (HCI) system, due in the second half of 2021, and an SDS software solution, due in 2022.

The HCI edition will be the industry’s first container-centric, hyperconverged system. Although competitive HCI systems support containers, most are VM-centric. IBM Spectrum Fusion will come out of the box, built for and with containers, running on Red Hat OpenShift. Characteristics of IBM Spectrum Fusion HCI include:

◉ Integrated HCI appliance for both containers and VMs using Red Hat OpenShift

◉ Global data access with active file management (AFM)

◉ Data resilience for local and remote backup and recovery

◉ Simple installation and maintenance of hardware and software

◉ Global data platform stretching from public clouds to on-premises or edge locations

◉ IBM Cloud® Satellite and Red Hat ACM integration

◉ Starts small with 6 servers and scales up to 20 (with NVIDIA HPC GPU enhanced options)

“IBM Spectrum Fusion HCI will provide our customers with a powerful container-native storage foundation and enterprise-class data storage services for hybrid cloud and container deployments,” said Bob Elliott, Vice President Storage Sales, Mainline Information Systems. “In today’s world, our customers want to leverage their data from edge to core data center to cloud and with IBM Spectrum Fusion HCI our customers will be able to do this seamlessly and easily.”

In 2022, IBM Spectrum Fusion will also be available as a stand-alone software-defined storage solution.

Next-generation storage built for high-performance AI and hybrid cloud

IBM is also introducing the highest-performing scale-out file system node ever released for IBM Spectrum Scale. Including advanced file management and global data access, the new Elastic Storage System (ESS) 3200 is a two-rack unit enclosure with all-NVMe flash that delivers 80GB/s throughput, 100% faster than the previous ESS 3000 model.

“IBM’s newest member for enterprise-class storage offerings, the ESS 3200 with IBM Spectrum Scale, provides a faster, reliable data platform for HPC, AI/ML workloads enabling my clients to expedite time to results.” – John Zawistowski, Global Systems Solution Executive, Sycomp

This solution is designed to be easy to deploy and can start at 48TB configurations and scale up to 8YB (yottabytes) of global capacity in a single global name space seamlessly spanning edge, core data center and hybrid cloud environments. With options for 100Gbps ethernet or 200Gbps InfiniBand, this system is designed for the most demanding high-performance enterprise, analytics, big data and AI workloads.

Source: ibm.com

Tuesday, 29 June 2021

IBM Z Virtual Test Platform takes big stride forward with version 2.0


The growing need for digital transformation has driven organizations to aggressively transform themselves and adopt DevOps to enable accelerated application delivery, which is a key part of the transformation. While it’s great to see the adoption rate growing significantly for DevOps, testing remains a major bottleneck to a continuous software development lifecycle.


The recent survey results from Gitlab on DevSecOps confirm just that.

“Testing remains tough — for the third year in a row, a majority of survey-takers resoundingly pointed to testing as the area most likely to cause delays. The other bottlenecks include planning, code development, and code review, again reflecting what we’ve seen in our 2019 and 2020 surveys.”

In our continuous and sincere effort to address this issue, last year we announced the general availability of IBM Z® Virtual Test Platform V1.0, which fundamentally changed the game by providing testing capabilities that allow application integration, transaction and batch testing to be shifted earlier in the development cycle on a fully virtualized test platform.

For the first time, developers were able to test complete application integration before the code is even deployed to production, while automating testing as part of their build process in a CI/CD pipeline. With this week’s V2.0 release, we have further strengthened the solution, incorporating key feedback from our clients by providing an HTML-based test results viewer and the ability to effectively version-control your test cases and results, ensuring auditability and pipeline integration.

Further, with ZVTP V2.0, we are taking a big stride forward by adding test automation and deep integration testing capabilities with IBM Distribution for Galasa. Based on the open-source project ‘Galasa’, the solution allows you to test applications at scale regardless of platform, including z/OS. Galasa enables deep integration testing across platforms and technologies within a DevOps pipeline and is designed to support repeatable, reliable, agile testing at scale across your enterprise.


IBM Distribution for Galasa is not simply about providing the ability to write a test and run it in automation. It extends that to enable you to understand and manage your tests, making it easy to schedule the right tests to run at the right time, and Galasa support is provided with VTP.

We will continue to strengthen our testing capabilities and you can look forward to some exciting and new capabilities coming up in our subsequent releases.

Source: ibm.com

Thursday, 11 March 2021

Expanding our DevOps portfolio with GitLab Ultimate for IBM Cloud Paks


In the era of hybrid cloud, our clients are facing constant pressure to adapt to and adopt new models for application development and IT consumption in order to unlock the speed and agility of hybrid cloud. With even tighter budgets and more pressure to transform given the impact of our new normal, there is no better time to reimagine the way mission-critical workloads run on IBM Z.

Many clients such as State Farm, RBC and BNP Paribas have benefited by adopting DevOps on IBM Z which allows them to unlock new potential for increased speed and agility, directly influencing their digital transformation initiatives. Clients are able to leverage their investments in and the strength of their existing IT infrastructure, clouds and applications in a seamless way with people, platforms and experience they already have on hand.

Over the past year, IBM Z has taken significant strides to bring new DevOps tools to the platform, with seamless hybrid application development including IBM Wazi Developer for Red Hat CodeReady Workspaces and Red Hat Ansible Certified Content for IBM Z, as well as a series of new hybrid cloud container offerings for Red Hat OpenShift on IBM Z and LinuxONE.

Today IBM announced GitLab Ultimate for IBM Cloud Paks, expanding our DevOps offerings across the business and allowing clients to get a comprehensive solution with a DevOps toolchain for modernization. This is another significant milestone for IBM Z and IBM LinuxONE clients, bringing even more choice with cloud-native development on IBM Z.

This is a significant step forward for IBM Z clients and is designed to unlock their software innovations by reducing the cycle time between having an idea, seeing it in production and monitoring it to ensure optimal performance. It provides an innovative, open and hybrid solution for DevOps.

With GitLab Ultimate for IBM Cloud Paks, developers will be able to compose their DevOps solution using GitLab, write in any language they want and deploy it in any hybrid cloud environment they choose. Developers will also be able to take advantage of GitOps and GitLab’s orchestration automation technology, which can be used in conjunction with GitLab pipelines.

Source: ibm.com

Tuesday, 16 February 2021

How to win at application modernization on IBM Z


If you want to create new user experiences, improve development processes and unlock business opportunities, modernizing your existing enterprise applications is an important next step in your IT strategy. Application modernization can ease your overall transition to a hybrid cloud environment by introducing the flexibility to develop and run applications in the environment you choose. (If you haven’t already done so, check out our field guide to application modernization, where we’ve outlined some of the most common modernization approaches.)

Whether you’re focusing more on traditional enterprise applications or cloud-native applications, you’ll want to make sure that you are fully leveraging hybrid cloud management capabilities and DevOps automation pipelines for app deployment, configuration and updates.

With a cloud-native approach to modernization, your developers can take advantage of a microservice-based architecture. They can leverage containers and a corresponding container orchestration platform (such as Kubernetes and Red Hat® OpenShift®) to develop once and run applications anywhere — including on premises in your data center or off premises in one or more public clouds.

The benefits of modernizing on an enterprise hybrid cloud platform

At every stage of modernization, you can minimize risk and complexity by working from a platform that lets you develop, run and manage apps and workloads in a consistent way across a hybrid cloud environment. This will help ensure that everything on the app is done in a reliable, repeatable and secure manner, and it will help you to remove barriers to productivity and integration.

To that end, consider working from IBM Z® or IBM LinuxONE as your primary platform for application modernization. On either of these platforms you can continue running your existing apps while connecting them with new cloud-native applications at your own rate and pace, reducing risk and expense. You will also be able to leverage the inherent performance, reliability and security of the platform as you modernize your technology stack. They also provide a foundation for modern container-based apps (for example: web and middleware, cloud and DevOps, modern programming languages and runtimes, databases, analytics and monitoring).

Flexible, efficient utilization. IBM Z and IBM LinuxONE provide three approaches to virtualization to manage spikes and support more workloads per server: IBM Logical Partitions, IBM z/VM®, and KVM. The advanced capabilities of these hypervisors contribute to the foundation of the typically high utilization achieved by IBM Z and IBM LinuxONE.

More performance from software with fewer servers. Enable 2.3x more containers per core on an IBM z15 LPAR compared to a bare-metal x86 platform running an identical web server load, and co-locate cloud-native apps with z/OS and Linux virtual machine-based apps and enterprise data to exploit low-latency API connections to business-critical data. This translates into using fewer IBM Z and IBM LinuxONE cores to run an equivalent set of applications at comparable throughput levels compared with competing platforms.

Co-location of cloud-native applications and mission-critical data. IBM Z and IBM LinuxONE house your enterprise’s mission-critical data. Running Red Hat OpenShift in a logical partition adjacent to your z/OS® partitions provides low-latency, secure communication to your enterprise data via IBM z/OS Cloud Broker. This provides superior performance due to fewer network hops. It also allows for highly secure communication between your new cloud-native apps and your enterprise data stores, since network traffic never has to leave the physical server.

Proven security and resiliency. Utilize the most reliable mainstream server platform, with the only hypervisor among its major competitors certified at EAL5+, the highest level.

Source: ibm.com

Thursday, 17 December 2020

Three common approaches to app modernization


If you’re like many IT organizations, you’ve got application modernization on your mind. Maybe you’ve already assessed your enterprise applications and are ready to put the pedal to the pavement on your next project. If so, it’s time to begin building your roadmap.

In previous posts one and two, we helped you get started by exploring some steps you can take right now to modernize at your own pace. Becoming familiar with some common strategies can help you minimize complexity along the way. So, let’s explore three common app modernization patterns and use cases to propel your modernization effort forward.

Case 1: Embrace containers and surround existing enterprise applications

Containers give you all sorts of technological benefits. You can isolate individual components, refactor and test them, and redeploy and scale as needed without disrupting or updating the entire app. Plus, containers carry common sets of standards and security as they travel across your hybrid cloud. They’re lightweight, quick to start, and provide a consistent and portable app runtime. Developers can also easily share these assets with each other, reducing the time to build.

To that end, containers give you an easier way to approach app modernization, which is to continue running existing traditional apps while you incrementally surround them with new and innovative cloud-native services.

For example, imagine that you’re a bank that wants to create a new mobile front-end interface or leverage cloud-based location services to find the nearest ATM using a banking app. Containers provide an approachable low-risk path that won’t disrupt your existing apps, yet also pave the way for innovation and skill development with new programming languages and development methodologies.

Adopting this strategy on IBM Power Systems or IBM Z® gives you a trusted platform where you can develop, run and manage apps and workloads in a consistent way across your hybrid cloud environment.

Case 2: Transition to containers

As your app modernization journey advances further and you grow comfortable with the technology, tools and practices involved, you can evaluate packaging apps inside containers, paving a path to more portable applications across the cloud and more frequent software updates by leveraging DevOps practices.

Assuming your apps are based on portable technology (Java, for example), this is a fairly straightforward process. You usually do not have to make many changes to the app itself to reap the operational, management and monitoring benefits of containers paired with Red Hat OpenShift. For apps running native IBM AIX or IBM i technology (RPG or COBOL, for example), consider leaving them as-is and focusing on the “surround with containers” approach described previously. This provides a path to maximize innovation with new technologies while eliminating the large risk and expense of re-platforming.

Case 3: Rearchitect to cloud-native, microservices and API-first architecture

As described, the second step to application modernization is to transition your apps into containers. That does not necessarily mean those apps are truly cloud native. Each cloud-native application has a set of microservices representing each logical capability. Each microservice also has a well-defined API that sits on top of it to expose its capability. Because this approach typically requires changes to the application, it can take longer to complete than just moving your app into containers. With that in mind, taking an iterative approach to the process will keep things manageable.

Leveraging these approaches as part of your modernization journey will open doors to tremendous benefits. These include a quicker time to market, increased developer efficiency, app deployment flexibility, seamless integration with DevOps automation and access to the latest technology innovations.

Thursday, 10 December 2020

5 steps to start your enterprise app modernization


Prepare to modernize! Every well-executed plan begins with a bit of preparation. Your enterprise application modernization project should be no different. As a first question, we like to ask, “Is our project aligned with the priorities of the business?” Sure, it’s simple—but it’s an important first thought.

Understanding and articulating the business value of modernization clearly will go a long way in helping to align your project scope and deliverable goals with that of your leadership. You can only go so far alone. So, to help keep you on track, we’ve put together a rundown of some of the top tips we have for keeping your modernization project moving forward.

Step 1: Assess your applications

Are they traditional, composite or cloud-native applications? Categorize them. This will help you see the full scope of your application landscape so you can start making decisions about where to focus your efforts. Identify applications that can be readily deployed in the cloud and take note of those that will require refactoring. This is an ongoing process. And, as you continually reassess your journey and address the most impactful projects, there will be new prioritizations that must take place. 

Step 2: Be realistic with your scope

As you prepare to build your business case, narrow the scope. For example, it’s not advisable to create one massive business case to modernize hundreds of apps in one fell swoop and to create a project timeline that spans several years. Rather, focus your initial effort on a specific application . . . or even a specific component of a more complex application. By narrowing your project scope, you can make an immediate impact and lay the groundwork for modernizing other applications.

Step 3: Build your business case

Build your case around an app that will provide the biggest ROI. This will help you secure executive approval for the modernization project. For example, an online retailer may need to get a mobile user interface into the hands of users as soon as possible, while a financial institution might need to release new versions of a web interface weekly instead of monthly, without sacrificing software quality. Ensure that your own business case includes the desired outcomes and benefits from both a business perspective (that is, long-term financial savings) and a technical perspective, the estimated cost to perform the project, and the timeframe in which the project should be completed.

Step 4: Execute

You’ve identified a business need, you’ve narrowed your scope, you’ve convinced leadership and now it’s time to begin your project. Well done! If along the way you realize that your initial assumptions about either the business value or project timeline were incorrect, revisit the business case and adjust the scope accordingly. An advantage of narrowing your modernization project scope to one app or business need is that you can be flexible in your execution.

Step 5: Evaluate and repeat

As you complete each project, you will learn a lot about the technologies, what worked well, and what didn’t. Perform a post-mortem to note what went well and what went sideways. You’ll have more DevOps experience and can use that knowledge to inform your next modernization project.


What four actions can you take right now to modernize your apps? In our next post we will provide you with the technical know-how you need to initiate the process of modernizing your core applications. We’ll help you define a roadmap so you can tackle this project one piece at a time rather than attempting to transform your entire enterprise infrastructure all at once.

Source: ibm.com

Thursday, 19 November 2020

Red Hat Ansible: A 101 Guide


You’ve probably heard a lot about Ansible recently. Are you curious to learn more about this rapidly growing DevOps tool? Here’s a guide to help you get started.

What is Red Hat Ansible?

Red Hat® Ansible® is an open source IT deployment, orchestration and configuration management tool. It allows IT teams to define how a client (or group of clients) should be configured, and then Ansible issues the commands needed to match that stated configuration. Using Ansible’s built-in modules, the clients we control need not be only compute resources; they can also include storage controllers, network switches, and on-premises and public clouds.

Why do we need Ansible?

If you think back to how we used to manage computing, it was often a very time-consuming and error-prone process. If we needed a new server to be built, we would ask the system administrators to manually build that instance. We might then require a certain software stack to be installed, along with a required configuration and user IDs and so forth. Once the server is active, we have the added complexity of keeping up with the developers requiring new software releases, configuration changes or issues around scaling. Ansible enables us to automate all these stages into efficient, repeatable tasks, allowing IT to align to flexible business requirements in this increasingly DevOps-driven world.

In the 2019 Gartner I&O Management Survey, 52% of respondents said they were investing and will continue to invest in infrastructure and operations automation. The same study highlighted that 42% of respondents said they planned to start investing within the next two years.

What can Ansible do?

Ansible gives us the ability to:

1. Provision servers or virtual machines, both on premises and in the cloud

2. Manage configuration of the clients, set up users, storage, permissions and the like

3. Prevent configuration drift of clients by comparing desired state to current state

4. Orchestrate new workloads, restarting application dependencies and so forth

5. Automate application deployment and lifecycle management
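
As a small illustration of points 2 and 3 above, the snippet below runs an Ansible ad-hoc command to ensure a user exists on every host in a group. It assumes Ansible is installed on the control node, and the inventory file and group name are placeholders.

```python
import subprocess

# Ensure the "deploy" user exists on every host in the "webservers" group.
# Re-running the same command is safe: hosts already in the desired state
# report "ok" rather than "changed". The -b flag escalates privileges.
subprocess.run(
    ["ansible", "webservers", "-i", "inventory.ini", "-b",
     "-m", "ansible.builtin.user", "-a", "name=deploy state=present"],
    check=True,
)
```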

Advantages of Ansible

There are a number of advantages to using Ansible over other orchestration and configuration management tools:

Agentless: No agents are needed on the client systems you want to manage. Ansible communicates with the clients over the standard SSH port.

Simple: Ansible uses “playbooks” to define the required state of the client(s). Those playbooks are written in human-readable YAML format, so no programming skills are required.

Declarative: Ansible playbooks define a declarative state required for the client, and the key principle that enables this is idempotency. Idempotency is a concept borrowed from the world of mathematics meaning that an operation can be applied multiple times without changing the result beyond the initial application. For example, if you declare that a client should have a software package installed at a certain version, Ansible will only install it if it isn’t already installed at that version (see the sketch after this list).

Reusable: We have the ability to break tasks into repeatable items called roles. These can then be called from multiple playbooks. For example, we might have a role that installs a database, one that configures users and one that adds additional storage. As we write new playbooks, we just include the roles we require, meaning we don’t have to rewrite that functionality multiple times.

Open: Ansible is a vibrant and reliable open source project and as such has a very active community writing roles and modules for everyone to use. Galaxy Hub is one such repository where, for example, over 25,000 roles are available for anyone to download.
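
The declarative, idempotent behaviour described above is easy to see with a tiny playbook run twice: the first run installs the package, the second reports no changes because the desired state already holds. This sketch assumes Ansible is installed locally and targets localhost; the package name is just an example.

```python
import subprocess
import textwrap

# A minimal playbook expressed as YAML text; ansible.builtin.package picks
# the right package manager for the target host.
PLAYBOOK = textwrap.dedent("""\
    - hosts: localhost
      connection: local
      become: true
      tasks:
        - name: Ensure nginx is installed
          ansible.builtin.package:
            name: nginx
            state: present
""")

with open("ensure_nginx.yml", "w") as f:
    f.write(PLAYBOOK)

# Run the playbook twice: the second run should report changed=0.
for attempt in (1, 2):
    print(f"--- run {attempt} ---")
    subprocess.run(["ansible-playbook", "ensure_nginx.yml"], check=True)
```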

Ansible Tower

Ansible Tower provides an intuitive UI and dashboard to allow role-based access so users can run and monitor their own template.


Using REST APIs, Ansible Tower also allows us to integrate with existing clouds, tools and processes. These include hybrid cloud endpoints such as IBM PowerVC, Amazon EC2, Google Cloud, Microsoft Azure and VMware vCenter. In addition to endpoints, Ansible Tower provides the ability to interact with source code management repositories including GitHub.
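
For example, launching a job template from an external pipeline takes only an HTTP call. The sketch below follows the REST layout used by AWX/Ansible Tower (/api/v2/...), but the host, token and template ID are placeholders, and whether extra variables are accepted depends on how the job template is configured, so treat it as an assumption to verify against your own installation.

```python
import requests

TOWER_URL = "https://tower.example.com"   # placeholder Tower/AWX host
TOKEN = "REPLACE_ME"                      # placeholder OAuth2 token
JOB_TEMPLATE_ID = 42                      # placeholder job template ID

# Launch the job template; Tower/AWX responds with the ID of the job it started.
resp = requests.post(
    f"{TOWER_URL}/api/v2/job_templates/{JOB_TEMPLATE_ID}/launch/",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"extra_vars": {"target_env": "dev"}},  # honored only if the template prompts for extra vars
    timeout=30,
)
resp.raise_for_status()
print("Launched job:", resp.json().get("job"))
```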

Ansible and IBM Systems


Ansible and Ansible Tower can be used to deploy and manage workloads across the IBM Systems portfolio. Using the core OpenStack modules, we can create playbooks to build virtual machines running AIX, IBM i or Linux across IBM Power Systems estates. Using the extensive IBM collections on Galaxy Hub, we can build instances in the IBM Cloud, manage AIX, IBM i, Linux and z/OS instances both on premises and in a public cloud (where appropriate). We can also manage IBM Spectrum Virtualize storage using the modules freely available in Galaxy Hub.

Source: ibm.com

Tuesday, 15 September 2020

4 ways to transform your mainframe for a hybrid cloud world


The IBM mainframe remains a widely used enterprise computing workhorse, hosting essential IT for the majority of the world’s top banks, airlines, insurers and more. As the mainframe continues to evolve, the newest IBM Z servers offer solutions for AI and analytics, blockchain, cloud, DevOps, security and resiliency, with the aim of making the client experience similar to that of using cloud services.

Many organizations today face challenges with their core IT infrastructure:

◉ Complexity and stability: An environment might have years of history and be seen as too complex to maintain or update. Problems with system stability can impact operations and be considered a high risk for the business.

◉ Workforce challenges: Many data center teams are anticipating a skills shortage within the next 5 years due to a retiring and declining workforce specialized in the mainframe, not to mention the difficulty of attracting new talent.

◉ Total cost of ownership: Some infrastructure solutions are seen as too expensive, and it’s not always easy to balance up-front costs with the life expectancy and benefits of a given platform.

◉ Lack of speed and agility: Older applications can be seen as too slow and monolithic as organizations face an increasing need for faster turnaround and release cycles.

Some software vendors suggest addressing these challenges with the “big bang” approach of moving your entire environment to public cloud. But public cloud isn’t the best option for every workload, and a hybrid multicloud approach can offer the best of both worlds. IBM Z is constantly being developed to address the real challenges businesses today face, and every day we’re helping clients modernize their IT environments.

There are 4 strategic elements to consider when modernizing your mainframe environment:

◉ Infrastructure
◉ Applications
◉ Data access
◉ DevOps chain

Let’s take a closer look at each one.

1. Infrastructure modernization


Most IBM clients’ mainframe systems are operating on the latest IBM Z hardware, but some are using earlier systems. The first step in updating your mainframe environment is adopting the newest features that can help you get the most from your infrastructure. Many technical innovations were introduced in IBM z15. This platform has been engineered to encrypt data everywhere, provide for cloud-native development and offer a high level of stability and availability so workloads can run continuously.

2. Application modernization


Core system applications — implemented as monolithic applications — form the backbone of many enterprises’ IT. The key characteristic of these monolithic applications is the hyper integration of the main components of the application, which makes it difficult to understand and update them. Modernizing your mainframe applications starts with creating a map to identify which apps should follow a modularization process and which should be refactored. This implies working on APIs and microservices for better integration of the mainframe with other IT systems and often redefining the business rules. You might also move some modules or applications to the cloud using containerization.

3. Data access modernization


For years, some businesses have chosen to move their sensitive data off IBM Z to platforms that include data lakes and warehouses for analytics processing. Modern businesses need actionable and timely insight from their most current data; they can’t afford the time required to copy and transform data. To address the need for actionable insights from data in real time and the cost of the security exposures due to data movement off the mainframe, IBM Z offers modern data management solutions, such as production data virtualization, production data replication in memory, and data acceleration for data warehouse and machine learning solutions.

4. DevOps chain modernization


The pressure to develop, debug, test, release and deploy applications quickly is increasing. IT teams that don’t embrace DevOps are slower to deliver software and less responsive to the business’s needs. IBM Z can help clients learn how to modernize through new DevOps tools and processes to create a lean and agile DevOps pipeline from modern source-code management to the provisioning of environments and the deployment of the artifacts.

Sunday, 1 March 2020

Maximizing Impact with Personalized Accessibility Training


There are many outstanding accessibility tutorials online. Yet, I’m finding that designers and developers are reluctant to go through them.

I always wondered why?

As I travel to many different IBM offices around the world training product design and development teams and understanding their needs, I believe I have found what makes them more interested and engaged. The key is to make the content as personal as possible.

Steps to Improve Inclusive Design and Development


Whether the training takes place onsite and face-to-face, or via online education materials, it is always useful to anticipate the needs of the learner.

For example, let’s say there is an informative image that needs to be labeled. When we specifically look at the product or offering that the learner works with, and find an image that is not labeled, we can brainstorm on the best description of what we are trying to communicate. The exercise immediately becomes more engaging.

It is slightly harder to achieve the same connection online, but we can at least anticipate the most common scenarios.

Also, I find that for a designer or developer, integrating accessibility into a solution often comes down to, “What’s in it for me?”.

It’s frustrating that this is the general attitude. However, instead of insisting upon “requirements,” it is important to connect it to a user need or a fulfilling experience. Before we even start thinking about accessibility, it is important to understand why.

We can connect with the learner through a variety of ways:

◉ Through the experience of having a friend or a relative with a disability;

◉ The learner’s own loss of vision or hearing due to aging; or,

◉ The desire and ability of doing something that benefits society, in general.

For example, when we talk about good contrast or readable content it gets more interesting when we ask the audience if they can think of a grandparent using glasses while on the computer. Instead of talking about the letter size or contrast ratio, once we can connect with the kind of text your grandma would be able to read, the specs become secondary, and much easier to meet.

Finally, it’s best to discuss the “usefulness” of accessibility. Accessible solutions can reach more people, thereby increasing the audience and sales for a product. And accessible solutions can help minimize the likelihood of legal risk or unsatisfied users.

When we hear about litigation costs and impact, it is hard to argue against reducing corporate risk by developing solutions that are accessible to all.

Accessibility is an Investment in Customers and Employees


It is easy to look at accessibility as an additional time or cost commitment, and there is truth to it. Accessibility doesn’t come for free, as much as we would like it to be.

But let’s face it, providing an accessible solution is a good investment and, for that matter, a marketing tool. According to the World Health Organization, 15 percent of the worldwide population has some form of disability. This means our products can reach more people if they are accessible.

Or, we can just leave that audience for our competitors.

My favorite argument is that as our life expectancy increases, the likelihood of acquiring age-related disabilities does as well. What we make accessible today will benefit us personally tomorrow. The question then becomes: How can I design something today that can adapt and be useful to me when I’m 80 years old?

There is always a connecting point with a learner, be it a designer, developer, tester, manager or another role. After the connection is established, accessibility turns into a personal challenge instead of just being another requirement and box that needs to get checked.

Source: ibm.com

Tuesday, 18 February 2020

Top 5 DevOps predictions for 2020

There are five DevOps trends that I believe will leave a mark in 2020. I’ll walk you through all five, plus some recommended next steps to take full advantage of these trends.

In 2019, Accenture’s disruptability index discovered that at least two-thirds of large organizations are facing high levels of industry disruption. Driven by pervasive technology change, organizations pursued new and more agile business models and new opportunities. Organizations that delivered applications and services faster were able to react more swiftly to those market changes and were better equipped to disrupt, rather than becoming disrupted. A study by the DevOps Research and Assessment Group (DORA) shows that the best-performing teams deliver applications 46 times more frequently than the lowest performing teams. That means delivering value to customers every hour, rather than monthly or quarterly.

2020 will be the year of delivering software at speed and with high quality, but the big change will be the focus on strong DevOps governance. The desire to take a DevOps approach is the new normal. We are entering a new chapter that calls for DevOps scalability, for better ways to manage multiple tools and platforms, and for tighter IT alignment to the business. DevOps culture and tools are critical, but without strong governance, you can’t scale. To succeed, the business needs must be the driver. The end state, after all, is one where increased IT agility enables maximum business agility. To improve trust across formerly disconnected teams, common visibility and insights into the end-to-end pipeline will be needed by all DevOps stakeholders, including the business.

DevOps trends in 2020


What will be the enablers and catalysts in 2020 driving DevOps agility?

Prediction 1: DevOps champions will enable business innovation at scale. From leaders to practitioners, DevOps champions will coexist and share desires, concerns and requirements. This collaboration will include the following:

◉ A desire to speed the flow of software

◉ Concerns about the quality of releases, release management, and how quality activities impact the delivery lifecycle and customer expectations

◉ Continual optimization of the delivery process, including visualization and governance requirements

Prediction 2: More fragmentation of DevOps toolchains will motivate organizations to turn to value streams. 2020 will be the year of more DevOps for X, DevOps for Kubernetes, DevOps for clouds, DevOps for mobile, DevOps for databases, DevOps for SAP, etc. In the coming year, expect to see DevOps for anything involved in the production and delivery of software updates, application modernization, service delivery and integration. Developers, platform owners and site reliability engineers (SREs) will be given more control and visibility over the architectural and infrastructural components of the lifecycle. Governance will be established, and the growing set of stakeholders will get a positive return from having access and visibility to the delivery pipeline.


Figure 1: UrbanCode Velocity and its Value Stream Management screen enable full DevOps governance.

Prediction 3: Tekton will have a significant impact on cloud-native continuous delivery. Tekton is a set of shared open-source components for building continuous integration and continuous delivery systems. What if you were able to build, test and deploy apps to Kubernetes using an open source, vendor-neutral, Kubernetes-native framework? That’s the Tekton promise, under a framework of composable, declarative, reproducible and cloud-native principles. Tekton has a bright future now that it is strongly embraced by a large community of users along with organizations like Google, CloudBees, Red Hat and IBM.
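To make the Tekton model concrete, the sketch below declares a pipeline step as a Kubernetes custom resource and applies it programmatically with the Kubernetes Python client. It is a minimal illustration only, assuming a cluster that already has Tekton Pipelines installed; the task name, namespace and step contents are hypothetical.

```python
# Minimal sketch: declare a Tekton Task (a Kubernetes custom resource)
# and apply it with the official Kubernetes Python client.
# Assumes Tekton Pipelines is installed in the target cluster;
# the task name, namespace and step are illustrative only.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

task = {
    "apiVersion": "tekton.dev/v1beta1",
    "kind": "Task",
    "metadata": {"name": "build-and-test"},
    "spec": {
        "steps": [
            {
                "name": "run-unit-tests",
                "image": "python:3.11",
                "script": "#!/bin/sh\npip install -r requirements.txt\npytest",
            }
        ]
    },
}

api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="tekton.dev",
    version="v1beta1",
    namespace="default",
    plural="tasks",
    body=task,
)
```

Because a Task is just another Kubernetes object, it can be versioned, reviewed and promoted through environments the same way application manifests are, which is the composable, declarative promise the prediction refers to.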

Prediction 4: DevOps accelerators will make DevOps kings. In the search for holistic optimization, organizations will move from providing integrations to creating sets of “best practices in a box.” These will deliver what is needed for systems to talk fluidly while remaining auditable for compliance. These assets will become easier to discover, adopt and customize. Test assets that have traditionally been developed and maintained by software and system integrators will be provided by ambitious user communities, vendors, service providers, regulatory services and domain specialists.

Prediction 5: Artificial intelligence (AI) and machine learning in DevOps will go from marketing to reality. Tech giants such as Google and IBM will continue researching how to bring the benefits of DevOps to quantum computing, blockchain, AI, bots, 5G and edge technologies. They will also continue to look at how these technologies can be applied to continuous deployment, continuous software testing, prediction, performance testing and other parts of the DevOps pipeline. DevOps solutions will be able to detect, highlight or act independently when opportunities for improvement or risk mitigation surface, from the moment an idea becomes a story until the story becomes a solution in the hands of its users.
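As a simplified illustration of what acting on risk signals in the pipeline can look like, the sketch below scores a proposed release from a few change attributes and gates promotion on that score. The attributes, weights and threshold are invented for illustration and are not taken from any vendor's product.

```python
# Illustrative sketch only: a toy risk score for a proposed release,
# used to decide whether to promote automatically or hold for review.
# The attributes, weights and threshold are hypothetical.
from dataclasses import dataclass


@dataclass
class ChangeSet:
    lines_changed: int
    files_touched: int
    test_coverage_delta: float   # negative means coverage dropped
    recent_incident: bool        # was there a recent incident in this service?


def risk_score(change: ChangeSet) -> float:
    """Combine simple signals into a 0..1 risk score."""
    score = 0.0
    score += min(change.lines_changed / 2000, 1.0) * 0.4
    score += min(change.files_touched / 50, 1.0) * 0.2
    score += 0.2 if change.test_coverage_delta < 0 else 0.0
    score += 0.2 if change.recent_incident else 0.0
    return round(score, 2)


def promotion_decision(change: ChangeSet, threshold: float = 0.6) -> str:
    return "auto-promote" if risk_score(change) < threshold else "hold for review"


if __name__ == "__main__":
    change = ChangeSet(lines_changed=850, files_touched=12,
                       test_coverage_delta=-1.5, recent_incident=False)
    print(risk_score(change), promotion_decision(change))
```

In practice, the prediction refers to models trained on historical delivery data rather than hand-tuned weights, but the gating pattern is the same.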

Next steps


Companies embracing DevOps will need to carefully evaluate their current internal landscape, then prioritize next steps for DevOps success.

First, identify a DevOps champion to lead the efforts, beginning with automation. Establishing an automated and controlled path to production is the starting point for many DevOps transformations and one where leaders can show ROI clearly.

Then, the focus should turn toward scaling best practices across the enterprise and introducing governance and optimization. This includes reducing waste, optimizing flow and shifting security, quality and governance to the left. It also means increasing the frequency of complex releases by simplifying, digitizing and streamlining execution.

Figure 2: Scaling best practices across the enterprise.

These are big steps, so accelerate your DevOps journey by aligning with a vendor that has a long-term vision and a reputation for helping organizations navigate transformational journeys successfully. OVUM and Forrester have identified organizations that can help support your modernization in the following OVUM report, OVUM webinar and Forrester report.

Tuesday, 31 December 2019

Guard your virtual environment with IBM Spectrum Protect Plus

The server virtualization market is expected to grow to approximately $8 billion by 2023, a compound annual growth rate (CAGR) of 7 percent between 2017 and 2023. Protection of these virtual environments is therefore a significant and critical component of an organization's data protection strategy.

This growth in virtual environments necessitates that data protection be managed centrally, with better policies and reporting systems. However, in an environment consisting of heterogeneous applications, databases and operating systems, overall data protection can become a complex task.

In my day-to-day meetings with clients from the automobile, retail, financial and other industries, the most common data protection requirements include simplified management, better recovery times, data reuse and the ability to efficiently protect data for varied use cases such as testing and development, DevOps, reporting and analytics.

Data protection challenges for a major auto company


I recently worked with a major automobile company in India that has a large virtual machine estate. Management was looking for disk-based, agentless data protection software that could protect its virtual and physical environments, was easy to deploy, could operate from a single window with better recovery times, and provided reporting and analytics.

At the time, the company had three data centers and was facing challenges with data protection management and disaster recovery. Backup agents had to be installed individually, which required expertise in the operating system, database and applications involved. Dedicated infrastructure was required at each site to manage the virtual environment, along with several other tools to perform granular recovery as needed. The company was also using virtual tape libraries to replicate data to other sites.

The benefits of data protection and availability for virtual environments


For the automobile company in India, IBM Systems Lab Services proposed a solution that would replace its siloed and complex backup architecture with a simpler IBM Spectrum Protect Plus deployment. The proposed strategy included storing short-term data on vSnap storage, which comes with IBM Spectrum Protect Plus, and storing long-term data in IBM Cloud Object Storage to reduce backup storage costs. The multisite, geo-dispersed configuration of IBM Cloud Object Storage could help the auto company reduce its dependency on the replication it had previously performed with virtual tape libraries.

Integrating IBM Spectrum Protect Plus with IBM Spectrum Protect and offloading data to IBM Cloud Object Storage was also proposed so the client could retire more expensive backup hardware, such as virtual tape libraries, from its data centers. This was all part of a roadmap to transform its backup environment.

IBM Spectrum Protect Plus is easy to install, eliminating the need for advanced technical expertise to deploy the backup software. Its backup policy creation and scheduling features helped the auto company reduce its dependency on backup administrators. Its role-based access control (RBAC) feature let the company cleanly divide backup and restore responsibilities among VM admins, backup admins, database administrators and others. The ability to manage the data protection environment from a single place also allowed the company to manage all three of its data centers from one location.


The company can now use Spectrum Protect Plus to recover data quickly during disaster scenarios and shorten its development cycle by creating clones from backup data in minutes or even seconds. Spectrum Protect Plus's single-file search and recovery functionality made the company's granular file recovery requirements easy to address.

One of the company's major challenges was improving performance while managing the protection of databases and applications. Spectrum Protect Plus resolved this with its agentless backup of databases and applications running in physical and virtual environments. It also offers better reporting and analytics functionality, which allowed the client to send intuitive reports to top management.

Friday, 27 December 2019

Three ways to intelligently scale your IT systems

Digital transformation, with its new paradigms, presents new challenges to your IT infrastructure. To stay ahead of competitors, your organization must launch innovations and release upgrades in cycles measured in weeks if not days, while boldly embracing efficiency practices such as DevOps. Success in this environment requires an enterprise IT environment that is scalable and agile – able to swiftly accommodate growing and shifting business needs.

As an IT manager, you constantly face the challenge of scaling your systems. How can you quickly meet growing and fluctuating resource demands while remaining efficient, cost-effective and secure? Here are three ways to meet the systems scalability challenge head-on.

1. Know when to scale up instead of out

If you're like many IT managers, this scenario may sound familiar. You run your systems of record on a distributed x86 architecture. You're adding servers at a steady rate as usage of applications, storage and bandwidth steadily increases. As your data center fills up with servers, you contemplate how to accommodate further expansion. As your IT footprint grows and server sprawl continues despite virtualization, you wonder whether there's a more efficient solution.

There often is. IBM studies have found that once an organization's workloads reach a certain threshold of processing cores in the data center, it often becomes more cost-efficient to scale up instead of out. Hundreds of workloads, each running on 50 or so processing cores, require significant hardware, storage space and staffing; these requirements create draining costs and inefficiencies. Such a configuration causes not only server sprawl across the data center, but sprawl at the rack level as racks fill with servers.

By consolidating your sprawling x86 architecture into a single server or a handful of scale-up servers, you can reduce total cost of ownership, simplify operations and free up valuable data center space. A mix of virtual scale-up and scale-out capabilities, in effect allowing for diagonal scaling, will enable your workloads to flexibly respond to business demand.
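The "diagonal scaling" idea can be expressed as a simple planning rule: grow an existing server (scale up) until it reaches its capacity, and only then add another server (scale out). The sketch below is a hypothetical illustration of that rule; the core counts and per-server limit are invented, not sizing guidance.

```python
# Hypothetical sketch of a "diagonal" capacity plan: scale up first,
# then scale out once a single server's core limit is reached.
# The numbers are illustrative only.

def diagonal_plan(required_cores: int,
                  cores_per_server_max: int = 190) -> list[int]:
    """Return the core count allocated to each server in the plan."""
    servers = []
    remaining = required_cores
    while remaining > 0:
        allocation = min(remaining, cores_per_server_max)  # fill one server first
        servers.append(allocation)
        remaining -= allocation                            # then start another
    return servers

# Example: 500 required cores -> [190, 190, 120]
print(diagonal_plan(500))
```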

IBM studies have found, and clients who have embraced these studies have proven, that moving from a scale-out to a scale-up architecture can save enterprises up to 50 percent on server costs over a three-year period. As your system needs continue to grow, a scale-up enterprise server will expand with them while running at extremely high utilization rates and meeting all service-level agreements. Eventually a scale-up server will need to scale out too, but not before providing ample extra capacity for growth.
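To see how a figure like "up to 50 percent over three years" can arise, the back-of-the-envelope comparison below contrasts a scale-out estate of many small servers with a consolidated scale-up configuration. Every number in it (server counts, unit costs, admin headcount and salary) is a hypothetical assumption for illustration, not a quoted result from the IBM studies; with these made-up inputs the savings happen to land near the 50 percent mark, but real results depend entirely on the actual estate.

```python
# Back-of-the-envelope sketch: three-year cost of a scale-out estate vs.
# a consolidated scale-up configuration. All figures are hypothetical.

def three_year_cost(servers: int, hw_cost_each: float,
                    annual_opex_each: float, admins: int,
                    admin_salary: float = 120_000, years: int = 3) -> float:
    hardware = servers * hw_cost_each
    operations = servers * annual_opex_each * years
    staffing = admins * admin_salary * years
    return hardware + operations + staffing

# Hypothetical scale-out estate: 200 x86 servers, 6 administrators.
scale_out = three_year_cost(servers=200, hw_cost_each=15_000,
                            annual_opex_each=3_000, admins=6)

# Hypothetical consolidation onto 2 large scale-up servers, 3 administrators.
scale_up = three_year_cost(servers=2, hw_cost_each=1_000_000,
                           annual_opex_each=60_000, admins=3)

savings = 1 - scale_up / scale_out
print(f"scale-out: ${scale_out:,.0f}  scale-up: ${scale_up:,.0f}  savings: {savings:.0%}")
```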

2. Think strategically about scalability

In the IT systems world, scalability is often thought of primarily in terms of processing power. As enterprise resource demands increase, you scale your processing capabilities, typically through either the scale-out or scale-up model.

Scalability for enterprises, however, is much broader than adding servers. For instance, your operations and IT leaders are always driving toward increased efficiency by scaling processes and workloads across the enterprise. And your CSO may want to extend the encryption used to protect the most sensitive data to all systems-of-record data. As your systems scale, every aspect must scale with them: service-level agreements, networking resources, data replication and management, and the IT staff required to operate and administer those additional systems. Don't forget the need to decommission resources, whether physical or virtual, as you outgrow them. Your IT systems should enable these aspects of scalability as well. Considering scalability in the strategic business sense will help you choose IT solutions that meet the needs of all enterprise stakeholders.

3. Meet scale demands with agile enterprise computing

An enterprise computing platform can drive efficiency and cost savings by helping you scale up instead of out. Yet the platform’s scalability and agility benefits go well beyond this. Research by Solitaire Interglobal Limited (SIL) has found that enterprise computing platforms provide superior agility to distributed architectures. New applications or systems can be deployed nearly three times faster on enterprise computing platforms than on competing platforms. This nimbleness allows you to stay ahead of competitors by more quickly launching innovations and upgrades. Also notably, enterprise platforms are 7.41 times more resilient than competing platforms. This means that these platforms can more effectively cope when resource demands drastically change.

Techcombank, a leading bank in Vietnam, has used an enterprise computing platform to meet scalability needs. As the Vietnamese banking industry grows rapidly, Techcombank is growing with it – with 30 percent more customers and 70 percent more online traffic annually. To support rapid business growth, Techcombank migrated its systems to an enterprise computing platform. The platform enables Techcombank to scale as demand grows while experiencing enhanced performance, reliability and security.