Tuesday, 29 September 2020

Five key things to know about IBM Garage for Systems


What if you had access to experts who specialize in the IT solutions you were considering for your business? What if you could test-drive new IT hardware or software to make sure it’s the right fit?

IBM Garage for Systems gives businesses the opportunity to see firsthand how IBM IT infrastructure can support their business needs. Whether it’s through a co-creation workshop, a solution benchmark or a Redbooks training course, IBM Garage for Systems has infrastructure specialists who can help you gain confidence in your next infrastructure investment. We’ve helped organizations find the best platforms for their critical applications, take advantage of the security features in their systems to make sure their data is protected, put AI models to work on complex analytics and much more.

Interested in learning more? Here are the five essential things to know about us.

What sets IBM Garage for Systems apart?

1. Our mission is to deliver IT expertise that helps give you the best possible experience with IBM IT infrastructure.

We know that planning and implementing IT solutions is challenging, and our offerings are designed to help in the decision-making process. Working with IBM Garage for Systems gives you access to top-notch subject matter experts in IBM infrastructure technologies and solutions — from AI to hybrid cloud to enterprise security. We provide demos, benchmarks, proofs of concept and proofs of technology to help you find the right infrastructure solution for your business needs.

2. We co-create innovative solutions side-by-side with you.

Co-creation is a collaborative process for creating shared value between you and IBM, and it’s a fundamental approach to our work. The IBM Systems Co-Creation Lab helps organizations move IT projects from idea to implementation through workshops covering opportunity discovery, design and development. Co-creation gives you a chance to work side-by-side with IBM experts to create first-of-a-kind strategies and solutions.

3. Our technical training content helps IT professionals build knowledge and skills on IBM Power Systems, IBM Storage, IBM Z and LinuxONE.

Experts from IBM Garage for Systems develop technical training courses and content to deliver the latest information about IBM Systems solutions to clients, Business Partners and IBM employees. IBM Redbooks content formats include online courses, videos, papers, books and other media. Here’s where you can find us:


◉ Online courses and certification programs are available through IBM Skills Gateway.

◉ Our Expert Speaker Series on YouTube provides short explanations of tech topics straight from the experts.

◉ IBM Redbooks offers print and web publications to help you grow skills and technical knowledge, and you can follow Redbooks on YouTube, Facebook, Twitter and LinkedIn to see the latest.

4. We offer expertise in AI, hybrid cloud and quantum computing.

IBM Garage has skilled data scientists with expertise in AI who are ready to help you develop a new AI solution or advance an existing one. We also have teams specializing in hybrid multicloud and multivendor environments, bringing the capabilities of IBM Cloud and Red Hat to you wherever your business is on the cloud journey. Additionally, we host an active IBM Quantum System in partnership with IBM Research. We provide tours and briefings on IBM Quantum offerings and have several certified IBM Quantum Ambassadors ready to engage with you. In all these areas, our experts can meet you where you are and help you go to the next level.

5. We have expert teams located around the world and can offer most services virtually.

IBM Garage for Systems is a worldwide team with the capability to service businesses all over the planet. We offer virtual services leveraging our IBM Systems infrastructure hubs in North America and France, and we have skilled teams in key geographies around the world as well, so wherever you are, you can access our support. In addition to online training and consultations, Co-creation Lab workshops, benchmarks and proofs of concept can be done virtually.

Source: ibm.com

Thursday, 24 September 2020

IBM Cloud Hyper Protect Services are now HIPAA ready


In the era of hybrid cloud, much of the discussion has focused on the challenges of maintaining data security and privacy driven by the movement of data between partners and third parties. There is, however, a piece to the security puzzle that requires attention: compliance.

To help clients accelerate their compliance journey, we’re announcing that IBM Hyper Protect Services are now HIPAA-ready. Building on our announcement bringing Hyper Protect Services to Apple CareKit with the IBM Hyper Protect Software Development Kit (SDK) for iOS, this is an exciting step for developers, who can now meet the security characteristics required for HIPAA readiness while building healthcare applications that run on Apple devices.

For IT leaders, the shift to hybrid cloud introduces new complexity associated with managing multiple clouds and on-premises environments. This complexity can quickly increase the time and effort required to meet compliance requirements. By choosing the right platform for highly secure workloads, IT leaders can establish a strong foundation for addressing compliance requirements.

Enter IBM LinuxONE, the platform working behind the scenes to power IBM Cloud Hyper Protect Services. We’re seeing increased interest from clients big and small, from the world’s largest banks and ISVs to startups in emerging spaces like digital asset custody. They are choosing IBM Cloud Hyper Protect Services and IBM LinuxONE as they seek to simplify their compliance audits by taking advantage of encryption everywhere to address the risk of internal and external threats.

At the heart of the strength of LinuxONE is pervasive encryption: a consumable approach that enables extensive encryption of data in flight and at rest, designed to substantially simplify encryption and reduce the costs associated with protecting data and meeting compliance mandates. With pervasive encryption, IT leaders can:

◉ Encrypt everything and eliminate questions of data scope from compliance consideration

◉ Deliver cost-effective compliance while maintaining workload performance in the IBM Cloud by leveraging LinuxONE hardware-accelerated cryptography capabilities

◉ Focus on business value by protecting critical data without costly application changes

With the industry’s first and only FIPS 140-2 Level 4 certified cloud hardware security module (HSM), production, development and test teams can all work together and share resources while being separated by cryptographic isolation capabilities that address audit requirements.
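
To make that idea concrete, here is a minimal, purely illustrative Python sketch of envelope encryption, the pattern underlying this kind of key protection: data is encrypted with a per-record data key, and the data key itself is wrapped by a master key that, in a LinuxONE deployment, would be protected by the HSM rather than held in application memory. The function names and in-memory keys are assumptions for illustration, not the Hyper Protect API.

    # Conceptual sketch of envelope encryption. The in-memory "master key" is a
    # stand-in for a key that, on LinuxONE, would live inside the FIPS 140-2
    # Level 4 certified HSM and never appear in application memory.
    from cryptography.fernet import Fernet

    master_key = Fernet(Fernet.generate_key())

    def encrypt_record(plaintext: bytes):
        data_key = Fernet.generate_key()              # per-record data key
        ciphertext = Fernet(data_key).encrypt(plaintext)
        wrapped_key = master_key.encrypt(data_key)    # data key never stored in the clear
        return ciphertext, wrapped_key

    def decrypt_record(ciphertext: bytes, wrapped_key: bytes) -> bytes:
        data_key = master_key.decrypt(wrapped_key)    # unwrap under the master key
        return Fernet(data_key).decrypt(ciphertext)

    ciphertext, wrapped = encrypt_record(b"patient-id=1234")
    assert decrypt_record(ciphertext, wrapped) == b"patient-id=1234"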

With time and resources at a premium, investing in the right platform for compliance support can significantly reduce the time and effort required to meet your compliance requirements, freeing your employees to focus on work that impacts the bottom line.

Source: ibm.com

Wednesday, 23 September 2020

IBM Cognos vs QlikView: Major Factors to Finding Your Best Tool

Qlik and IBM offer their customers a broad range of products. Two popular BI tools among them are QlikView and IBM Cognos. Some of you might be familiar with one or both of these tools. Although both are business intelligence platforms, certain things make them distinctly different from each other.

So, are you ready to explore the differences between IBM Cognos and QlikView?

What Is IBM Cognos?

IBM Cognos is an integrated business intelligence suite offered by IBM. It is essentially a web-based service composed of many individual services working together. Cognos provides a comprehensive set of data analysis and visualization capabilities such as dashboarding, reporting, analytics and advanced analytics. It comes as a combination of multiple components that work together to form an efficient business intelligence solution.

Elements of IBM Cognos:

IBM Cognos Connection, IBM Cognos Insight, IBM Cognos Workspace, IBM Cognos Workspace Advanced, IBM Cognos Report Studio, IBM Cognos Event Studio, IBM Cognos Metric Studio, IBM Cognos Query Studio, IBM Cognos Analysis Studio, IBM Cognos Framework Manager, IBM Cognos Transformer, and IBM Cognos Cube Designer.

Characteristics of IBM Cognos:

  • Simple and intuitive interface
  • Personalized experience with customization capabilities
  • Scheduling and alerts
  • Interactive content available (online or offline)
  • A complete web-based experience
  • Easy upload of personal/external data
  • Report directly off a data source
  • Effortlessly merge data sources
  • Dashboards work using drag and drop on mobile device or desktop
  • Automatic best-fit visualizations
  • Various reporting templates and styles

What Is QlikView?

QlikView is a data analysis and visualization tool through which users can fetch, mix, process and analyze data from various sources. It is used primarily by developers, who build data models, analytical applications, dashboards and visualizations to create analytical reports and deliver them to end users via the access point. Through this access point, end users can access data, run searches, explore data models and visualizations, and discover data trends.

Features of QlikView:

  • Dynamic BI ecosystem
  • Default and custom connectors
  • Data visualizations
  • Guided and advanced analytics
  • Serves as an application development platform
  • Supports devices such as Windows, iPhone, Mac, iPad, and web-based tools and apps
  • English is the only supported language
  • It is deployable on-premise, on-cloud, and on mobile

IBM Cognos Vs. QlikView

Here, we’ll discuss the major factors that distinguish IBM Cognos from QlikView.

1. Ease of Use

  • QlikView wins points over Cognos when it comes to usability and ease of use. QlikView has more user-friendly dashboards with HTML5 graphical capabilities, whereas Cognos is sometimes reported to have bugs.

2. Supported Platform

  • QlikView: QlikView offers platform compatibility with Windows, web-based, iPhone/iPad, Mac and Android devices.
  • IBM Cognos: Cognos is compatible with Windows, Linux, Mac and web-based platforms.

3. Reporting

  • QlikView: It is stronger than Cognos in data transformation and data modeling.
  • IBM Cognos: Cognos is stronger in functionalities such as regular reporting, ad hoc reporting, report output, scheduling, etc.
  • Both QlikView and Cognos have solid user and reporting interfaces.

4. Access Control and Security

  • IBM Cognos is better than QlikView in access control and security functionalities.

5. Self-service Capabilities

  • QlikView: QlikView is better than Cognos in specific self-service capabilities such as data exploration and search.
  • IBM Cognos: Cognos is somewhat better than QlikView in features like added fields, data column filtering, auto modeling, collaboration, and workflow.

6. Reliability and Availability

  • QlikView is better in terms of reliability and availability, as the QlikView server is very stable and causes only sporadic outages or errors. However, Cognos lags behind Qlik, as it still does not offer compatibility with some popular platforms and has limitations with browsing and dashboarding.

7. Performance

  • QlikView: In terms of performance, QlikView is better than Cognos in customization, security, mobile user support and sandbox/test environments.
  • IBM Cognos: However, Cognos is reported to be better in user and role management, access management, internationalization, and the size of its partner application ecosystem.

8. Support and Training

  • QlikView: It provides good customer support through email, phone, live support and tickets. QlikView also conducts helpful online and in-person training.
  • IBM Cognos: Cognos also provides customer support through all of these channels except live support. However, Cognos is known for its active online support and training, which is better than QlikView’s.

9. Company Size

  • QlikView: QlikView finds its place in businesses of almost all sizes, from small and medium-scale companies to enterprise-level organizations.
  • IBM Cognos: Similarly, Cognos is used in small, mid-sized and enterprise-level setups. However, it is favored more by large-scale enterprises.

IBM Cognos Vs. QlikView: Verdict

Both QlikView and IBM Cognos are successful business intelligence tools offering a wide range of data analysis, data modeling, and reporting capabilities.

The most fundamental difference between QlikView and IBM Cognos is that QlikView is user-friendly and concentrates more on the end user and data discovery, while Cognos is a comprehensive reporting tool aimed mainly at the IT department’s control of data and advanced reporting and metrics.

However, both tools have their downsides as well, so the choice ultimately depends on what the user wants from the tool. The cost of the two tools is also more or less similar. The only difference is that QlikView is a bit cheaper to license initially, but the total price adds up once support costs are included. With IBM Cognos, the upfront price is higher than QlikView’s, but it offers better customer support at minimal or no additional fee.

Tuesday, 22 September 2020

Tackling volatility, uncertainty, complexity and ambiguity in IT


2020 has brought unexpected challenges for individuals, governments, businesses and IT teams. Even before COVID-19, organizations had to be prepared for a range of risks: unplanned outages, security breaches, hardware and software challenges, natural disasters and unknown risks. The concept of VUCA — which stands for volatility, uncertainty, complexity and ambiguity — offers a useful way to think about how we can be prepared and respond to unanticipated situations.

VUCA accurately reflects the situation businesses are in today: we face international markets, restructuring, growth and downsizing, technological change, and cultural and societal shifts, to name just a few. Business and IT leaders have to be prepared to act quickly, with clarity and wisdom, in the face of these challenges. Recognizing the reality of VUCA gives teams the opportunity to identify likely and unlikely setbacks and mobilize around common objectives when the need arises.

Despite VUCA, collectively, we need to be hopeful and thoughtful about our future and foster a sense of trust. Trust can change everything. It is essential to build a culture of trust and transparency in our workplaces and our society. Leadership consultant Edward Marshall says that “Speed happens when people … truly trust each other.” Trust should be essential for our governments, companies, organizations and teams, especially in times of VUCA.

Pivoting from onsite to remote service delivery

Let me share a recent story that highlights the lessons of VUCA. In early 2020, as the pandemic was beginning to affect geographies around the world, some leaders started telling me that we needed to change our services approach — and quickly. Our teams provide onsite technical services to IBM clients across the globe, so our whole business model was based on the ability to travel and be in our clients’ data centers. Before we even realized how much COVID-19 would affect the world, it became clear that our services approach would have to evolve to protect our clients and employees. In a matter of weeks, we reimagined how to do our work, radically transforming to offer remote service delivery for the majority of engagements. It was a whirlwind, but I’m so proud of how quickly and effectively the team pivoted to virtual services. It was critical that I trust the team during this time, and that trust paid off. My thinking was, “Let it go to grow.”

The pandemic has exemplified all four elements of VUCA: it’s been volatile, uncertain, complex and ambiguous. But having leaders who didn’t panic and instead looked for opportunity in the face of those challenges empowered us to act quickly to ensure that we could continue meeting our clients’ IT needs while keeping everyone as safe as possible. It’s one of the most powerful lessons I’ve learned in listening, trusting my team, staying open and flexible, and letting go of the status quo in order to come out ahead in a challenging situation.

Lessons of VUCA for IT

In IT, change has always been fast-paced, and uncertainty is nothing new. Reflecting on VUCA can help us mitigate the challenges we face today and in the future. It’s crucial in times like this that we build trust and mobilize for the greater good.

Volatility

No matter how much we plan, sometimes reality confounds our expectations. Volatile situations are stressful, and they challenge our implicit assumptions. The companies that prosper in spite of a crisis are often those that can adapt quickly to bring stability, without clinging to the past or freezing up when things go wrong.

Uncertainty

Uncertainty can be daunting, but trust can be an antidote to it. In IT, we can respond to uncertainty by seeking the collective wisdom of our teams and advisors. To me, that means listening to all voices and trusting the experts to counsel wisely.

Complexity

IT environments can already be complex, and businesses are always striving to bring greater simplicity to IT. When you add a crisis to that, things can get really complicated. I spent more than 20 years helping businesses with IT “crit sits” (critical situations), and one of the things I learned was how important health checks are to ensure you have a clear view of your technology estate and can address potential issues before they turn into a crisis.

Ambiguity

Ambiguity is my favorite component of VUCA because it makes me think about opportunity. The definition of ambiguity is “the quality of being open to more than one interpretation; inexactness.” We live in a world that’s many shades of grey, and there can be multiple right answers to a single problem. Accepting the ambiguity and staying open to possibilities positions us to succeed.

IT support for uncertain times

In IBM Systems Lab Services, we’re a trusted partner for our clients in uncertain times. Every day, we help businesses reduce unknowns in their IT environments. You can’t always predict what might become an IT challenge in the future, but we can help you reduce risk and be proactive about the health of your technology estate.

Saturday, 19 September 2020

Security and safety key in customer experience and economic recovery


COVID-19 changed just about everything in our daily lives, across both the supply and demand side of our economy. Nearly everyone yearns for a return to “the old normal.”

According to the surveys I’ve seen, consumers are pivoting toward brands that “take care of the basics”:

◉ Saving money

◉ Customer service

◉ Helping them feel secure

◉ Saving time

Consumers need to feel safe in order to resume buying in person. Merchants who minimize risks will benefit from the coming wave of pent-up demand and shifting brand loyalties.

Restaurants and shops creating safe environments whilst delighting customers with exceptional experiences will be the biggest winners of all, as several industries have seen an uptick in consumer propensity to switch preferred brands.

Technology’s role in the “new normal” customer experience

Consumer brands hyper-focusing on security and safety are reaping the rewards of their strategies. Security by design is always in style. However, some measures come at the expense of convenience. Think long queues, sanitization requirements and hand-washing stations.

Technology can be used to improve the overall customer experience by streamlining other activities. A security-by-design approach unlocks new ways to reduce touchpoints. One such fresh approach is KodyPay, a UK-based startup transforming financial transactions with a twist that my firm, TES Enterprise Solutions, recently helped launch.

KodyPay, a fintech startup co-founded by 20-year-old wunderkind Yao-Yun (Yoyo) Chang, aims to give small to medium-sized retailers and restaurants the means to do exactly that using “socially distanced payments.”


The simplicity of the design is impressive. KodyPay removes all the complexity and expense of financial transactions. In a matter of minutes, and without IT skills, a merchant can start accepting socially distanced payments from a wide variety of credit cards, debit cards, e-wallets, and pay-later providers. One more COVID-19 problem solved, dead simple.

While contactless payment systems are not new, KodyPay’s approach is novel in its aim to be “secure by design.” KodyPay see themselves as stewards of customer data and believe a data breach is detrimental to business. Data must be secured, always, from end to end. Their choice of infrastructure underscores their strategic prowess.

Rather than simply spinning up a few servers on the public cloud and starting to sign up clients, KodyPay decided to get the infrastructure correct from day one. I’ve been impressed from the beginning with the maturity of Yoyo, and his focus on protecting his customers’ data as the number one priority, recognizing the role hardware can play in creating a competitive differentiator.

With the support and guidance from TES, KodyPay runs on IBM LinuxONE and IBM DS8000. For companies like KodyPay with sensitive data that cannot be compromised, breached or exposed, I see this as the ideal stack.

All combined, it’s no surprise partnerships have been made with industry leaders like Cybersource, a Visa solution. Furthermore, Hank Uberoi, a former partner at Goldman Sachs, joined KodyPay as Chairman earlier this year. Most recently, Uberoi was the Executive Chairman of cross-border payments company Earthport, which Visa acquired in 2019 for approximately £247 million.

Technology as revenue driver and cost reducer


Contactless payment systems eliminate the need to sanitize POS equipment, resulting in faster checkout times and lower operating costs. Faster checkout times mean more customers per hour, reducing wait times to enter shops. Less waiting means less cart abandonment or waning desire to shop. Speed equals convenience. This convenience delivers the double impact of higher revenues per hour and lower operating costs, much needed by SMBs right now. KodyPay helps bring convenience and safety back together.

This is what excites me about KodyPay. They have a transformative solution for today, and future positioning for tomorrow that will help address our other global crisis: climate change.

The overall design of KodyPay is quite elegant. Less POS hardware means less stuff. Fewer materials extracted from the earth, less electricity consumed, less waste.

As university students return to class at the University of York and beyond, I’m hopeful the students will have one less thing to worry about and have faster checkout times thanks to KodyPay.

Source: ibm.com

Tuesday, 15 September 2020

4 ways to transform your mainframe for a hybrid cloud world


The IBM mainframe remains a widely used enterprise computing workhorse, hosting essential IT for the majority of the world’s top banks, airlines, insurers and more. As the mainframe continues to evolve, the newest IBM Z servers offer solutions for AI and analytics, blockchain, cloud, DevOps, security and resiliency, with the aim of making the client experience similar to that of using cloud services.

Many organizations today face challenges with their core IT infrastructure:

◉ Complexity and stability: An environment might have years of history and be seen as too complex to maintain or update. Problems with system stability can impact operations and be considered a high risk for the business.

◉ Workforce challenges: Many data center teams are anticipating a skills shortage within the next 5 years due to a retiring and declining workforce specialized in the mainframe, not to mention the difficulty of attracting new talent.

◉ Total cost of ownership: Some infrastructure solutions are seen as too expensive, and it’s not always easy to balance up-front costs with the life expectancy and benefits of a given platform.

◉ Lack of speed and agility: Older applications can be seen as too slow and monolithic as organizations face an increasing need for faster turnaround and release cycles.

Some software vendors suggest addressing these challenges with the “big bang” approach of moving your entire environment to public cloud. But public cloud isn’t the best option for every workload, and a hybrid multicloud approach can offer the best of both worlds. IBM Z is constantly being developed to address the real challenges businesses today face, and every day we’re helping clients modernize their IT environments.

There are 4 strategic elements to consider when modernizing your mainframe environment:

◉ Infrastructure
◉ Applications
◉ Data access
◉ DevOps chain

Let’s take a closer look at each one.

1. Infrastructure modernization


Most IBM clients’ mainframe systems are operating on the latest IBM Z hardware, but some are using earlier systems. The first step in updating your mainframe environment is adopting the newest features that can help you get the most from your infrastructure. Many technical innovations were introduced in IBM z15. This platform has been engineered to encrypt data everywhere, provide for cloud-native development and offer a high level of stability and availability so workloads can run continuously.

2. Application modernization


Core system applications — implemented as monolithic applications — form the backbone of many enterprises’ IT. The key characteristic of these monoliths is the hyper-integration of the application’s main components, which makes them difficult to understand and update. Modernizing your mainframe applications starts with creating a map to identify which apps should follow a modularization process and which should be refactored. This implies working on APIs and microservices for better integration of the mainframe with other IT systems, and often redefining the business rules. You might also move some modules or applications to the cloud using containerization.
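
As a simplified, hypothetical sketch of that API step, the Python snippet below puts a thin REST layer in front of a core transaction. The call_core_transaction function is a placeholder for whatever connector actually reaches the mainframe application; the route and data are invented for illustration.

    # Hypothetical sketch: exposing a legacy transaction through a REST API.
    from flask import Flask, jsonify

    app = Flask(__name__)

    def call_core_transaction(account_id):
        # Placeholder for the real mainframe call (connector, queue, etc.).
        return {"account": account_id, "balance": "1024.42"}

    @app.route("/accounts/<account_id>/balance")
    def get_balance(account_id):
        # The monolith's business logic stays where it is; only a thin,
        # well-defined API surface is added in front of it.
        return jsonify(call_core_transaction(account_id))

    if __name__ == "__main__":
        app.run(port=8080)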

3. Data access modernization


For years, some businesses have chosen to move their sensitive data off IBM Z to platforms that include data lakes and warehouses for analytics processing. Modern businesses need actionable and timely insight from their most current data; they can’t afford the time required to copy and transform data. To address the need for actionable insights from data in real time and the cost of the security exposures due to data movement off the mainframe, IBM Z offers modern data management solutions, such as production data virtualization, production data replication in memory, and data acceleration for data warehouse and machine learning solutions.
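
To illustrate the idea of querying data in place rather than copying it, here is a hedged Python/ODBC sketch. The DSN, table and column names are invented for illustration and do not refer to a specific IBM product interface; the point is that one federated query reaches live data without an ETL copy leaving the platform.

    # Illustrative only: query virtualized production data in place.
    import pyodbc

    conn = pyodbc.connect("DSN=ZDATA_VIRT")   # assumed data virtualization endpoint
    cursor = conn.cursor()
    # One federated query joins live transactional data with reference data,
    # with no ETL copy of the sensitive records off the mainframe.
    cursor.execute("""
        SELECT t.account_id, t.amount, r.risk_band
        FROM   live_transactions t
        JOIN   risk_reference   r ON r.account_id = t.account_id
        WHERE  t.posted_date = CURRENT DATE
    """)
    for row in cursor.fetchall():
        print(row)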

4. DevOps chain modernization


The pressure to develop, debug, test, release and deploy applications quickly is increasing. IT teams that don’t embrace DevOps are slower to deliver software and less responsive to the business’s needs. IBM Z can help clients modernize with new DevOps tools and processes, creating a lean and agile DevOps pipeline that runs from modern source-code management through environment provisioning to artifact deployment.
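
As a generic illustration of such a pipeline, the snippet below uses GitHub Actions syntax purely as a familiar example; the stage scripts are hypothetical placeholders rather than IBM Z tooling.

    # Illustrative CI/CD pipeline: source management through build, test, deploy.
    name: build-test-deploy
    on: [push]
    jobs:
      pipeline:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4     # modern source-code management
          - name: Build
            run: ./build.sh               # compile and package the artifacts
          - name: Test
            run: ./run-tests.sh           # automated unit and integration tests
          - name: Deploy
            run: ./deploy.sh staging      # provision the environment, deploy artifacts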

Saturday, 12 September 2020

Put AI to work in your business with metadata management


You’ve likely heard it before, but it’s worth repeating that more than 80% of all data collected by organizations is not in a standard relational database. Instead, it’s trapped in unstructured documents, social media posts, machine logs, images and other sources. Many organizations face challenges to manage this deluge of unstructured data. For example, if you want to use large-scale analytics to gain insights for your business priorities, how are you going to pinpoint and activate the relevant data? Furthermore, how do you go about identifying and classifying sensitive data while removing data that’s redundant and obsolete?

Metadata management software like IBM Spectrum® Discover can help you manage unstructured data by lessening data storage costs, uncovering hidden data value and reducing the risk of massive data stores. Using such a product can enable you to make better business decisions and gain and maintain a competitive advantage.
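
To make the concept concrete, here is a toy Python sketch of the kind of cataloging such software performs at much larger scale. The directory path and tag values are illustrative; a real product adds connectors, content inspection and policy engines on top.

    # Toy metadata catalog: walk a file tree and record system metadata plus tags.
    import os, mimetypes, json
    from datetime import datetime, timezone

    catalog = []
    for root, _dirs, files in os.walk("/data"):       # illustrative root directory
        for name in files:
            path = os.path.join(root, name)
            info = os.stat(path)
            catalog.append({
                "path": path,
                "size_bytes": info.st_size,
                "modified": datetime.fromtimestamp(info.st_mtime, timezone.utc).isoformat(),
                "mime_type": mimetypes.guess_type(name)[0] or "unknown",
                "tags": ["unclassified"],             # a steward or policy would refine this
            })

    print(json.dumps(catalog[:3], indent=2))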

Metadata management for AI solutions


Today, many businesses are looking for opportunities to take advantage of machine learning, deep learning and other AI technologies. Some of the most common tasks AI performs include:

◉ Extracting information from pictures (computer vision)

◉ Transcribing or understanding spoken words (speech to text and natural language processing)

◉ Pulling insights and patterns out of written text (natural language understanding)

◉ Speaking what’s been written (text to speech, natural language processing)

◉ Autonomously moving through spaces based on its senses (robotics)

◉ Generally looking for patterns in heaps of data (machine learning)

Real-world examples of these AI solutions include managing medical imaging data and “AI Doctors” in the healthcare industry; identifying fraud, algorithmic trading and portfolio management in financial services; automated claims handling in the insurance industry; and predictive maintenance and AI-assisted designs in the manufacturing industry.

Metadata management solutions like Spectrum Discover are particularly useful to businesses interested in using machine learning to gain more insights from their data. By helping you identify and prepare the data for analysis through machine learning, the software can help you fast track your AI projects.

New IBM Redbooks on AI and IBM Spectrum Discover


If you’re interested in learning more about metadata management software, 2 recent IBM Redbooks cover practical AI use cases with IBM Spectrum Discover and other IBM Storage software:

Making Data Smarter with IBM Spectrum Discover: Practical AI Solutions explores 6 use cases for AI solutions using Spectrum Discover in technical depth:

◉ Categorizing medical imaging data with content-search capability

◉ Extracting metadata from LIDAR imagery with custom applications

◉ Organizing training data sets for artificial intelligence

◉ Using artificial intelligence in medical imaging – JFR Challenge

◉ Data governance use case: Data staging for high-performance processing

◉ Data optimization use case: Data migration to tape for cost-efficient archiving

In addition, this book offers a reference architecture on how to design and implement an AI data pipeline using IBM Spectrum Discover.

In the second Redbooks publication, Cataloging Unstructured Data in IBM Watson Knowledge Catalog with IBM Spectrum Discover, you’ll find in-depth use cases from healthcare, life sciences and financial services. This paper explains how IBM Spectrum Discover integrates with the IBM Watson® Knowledge Catalog component of IBM Cloud Pak® for Data. This integration enables storage administrators, data stewards and data scientists to efficiently manage, classify and gain insights from massive amounts of data. The integration improves storage economics, helps mitigate risk and accelerates large-scale analytics to create competitive advantage and speed critical research.

You can explore other technical content at the IBM Redbooks website.

Source: ibm.com

Thursday, 10 September 2020


The creation of the IBM Advanced Control System


This is the story of a global IBM process control system, conceived in Canada 48 years ago, that, incredibly, is still serving customers successfully.

In early 1972, IBM Canada received an RFP from Imperial Oil Limited (IOL)/Exxon for a process control system that would have the capability to operate IOL’s ever larger and more complex refinery systems, like the one they were going to start up at Edmonton, Alberta. Given IOL’s requirements, IBM’s then-current process control system solutions (the 1800 DAIS system, developed in Sarnia in 1968, and the IBM 1800 system at Ontario Hydro in 1971) could not measure up. A new solution was required, one built on new technology and a more robust design. A plus for IBM was that IOL had confidence in IBM’s ability to implement process control systems, as had been demonstrated at IOL’s Sarnia refinery.

A core team led by John Buchan, comprising John Barron, Doug McWhirter, Larry Baker, Art Caston, Ken Sutherland and Peter Long, together with Jack Ferren of A/FE Contract Practices, prepared IBM’s response to the Exxon/IOL RFP. IBM’s “turnkey, fixed-price” proposal offered an innovative solution known as the Advanced Control System (ACS).


Doug McWhirter recalls: “John Buchan’s role was key to our proposal. When presented with this RFP, IBM had little relevant experience in 1972 with complex, commercial, fixed price, turnkey projects. It was tempting to reject this as representing an unacceptably high-risk opportunity. John was the driving force which pushed the company to overcome technical, financial and contractual obstacles.”

Bernie Kuehn, IBM Canada Sales VP, spearheaded this initiative, with support from IBM Canada’s ORC (Operations Review Committee), which included Jack Brent, Lorne Lodge and Carl Corcoran. Art Caston recalls: “Bernie Kuehn made the strong point, after the loss to Univac at Ontario Hydro, that to be successful in future bids of this type, it would require forward pricing of the software development over multiple future prospects.”

IOL and Exxon Research & Engineering (ER&E) selected the ACS proposal in September 1972, over competitive bids from Foxboro and Honeywell. The ACS solution was to be deployed at the IOL Strathcona refinery being built at Edmonton, Alberta.

Gerry Ebker, retired Chairman and CEO of FSD recalls: “Vin Learson was Chairman of IBM Corp. and also on the board of EXXON Corp. at that time. Vin had said that every time EXXON had an IT opportunity, IBM lost. He directed that the next time there was a bid opportunity, IBM had better win. When the automation of refineries in Edmonton, Canada came up for bid, IBM bid a fixed-price turn-key solution and won.”

The ACS solution was visualized as a process control system operating on dual IBM mainframes with IBM front-end processors offering high-speed online input/output data streams to and from industrial plant equipment in real time. ACS was designed to automatically perform process control functions across thousands of plant process operations, with hundreds of operations being controlled every second. The IOL Strathcona installation employed a pair of System/370 Model 145 mainframes to provide a backup/failover capability, ensuring continuous operation if the primary mainframe failed. Color graphic terminals provided plant operators with a variety of vivid and easy-to-use “live” displays from which to monitor and control the plant. The diagram below broadly illustrates the ACS hardware components as initially installed at Exxon’s Strathcona refinery.


Advanced Control System hardware components (Strathcona Refinery)

Late in 1972, the ACS project came to life at the Federal Systems Division (FSD) facility at Clear Lake City. This facility housed the FSD staff that developed the Real Time Operating System (RTOS) for monitoring and guiding NASA’s Apollo and Gemini missions. Their experience and knowhow formed the basis for the ACS Special Real Time Operating System (SRTOS). Ira Saxe, project manager, formed the initial ACS team that included FSD development staff, five Canadian IBMers (Peter Long, Ash Abhyankar, John Mathewson, Clark Starrie and Gerry Kirk) and two UK IBMers (Dave Vesty and Tony Hamilton).

At the same time, several Exxon and ER&E staff, led by Roy Lieber, also joined the ACS project in order to monitor project status, clarify functional requirements and provide timely feedback on the software being developed by IBM.  Eventually, the ACS project brought together a diverse and talented group from numerous organizations, including Exxon Research and Engineering, Imperial Oil Canada, IOL Strathcona refinery, IBM Canada, IBM Federal Systems division, IBM Corporation, IBM UK, and IBM Italy. A veritable United Nations development team!

In June 1973, Esso Antwerp and IBM Belgium joined the ACS project. This added a second Exxon refinery installation and along with it came additional requirements.

At the outset, the ACS project was fraught with burdensome challenges. Communications and cooperation faltered among IBM internal organizations and externally with ER&E, the project lacked accountability, solution requirements were changed without control, and deliverables and deadlines were ill defined. As a result, development effort and associated expenses grew immensely. The project was headed for disaster.

Eventually, IBM Assistant Treasurer John Stewart was alerted that the ACS project was out of control and expending enormous amounts of money.  At that time, spending on development had passed five times the payment amount of the two contracts with Exxon, with no estimate of the cost to completion. As a result of Stewart’s intervention, Frank Cary, IBM Chairman and CEO, summoned the key IBM executives responsible for the project to explain the situation at a meeting of the Corporate Management Committee.  Frank Cary then directed Ralph Pfeiffer to take charge of the project for both A/FE and EMEA and to tell Exxon that the ACS project cost was capped at a given amount, and after that cap, any further work would be to Exxon’s account.  A meeting with Exxon executives was held at Florham Park, New Jersey, a few days later, to convey the ACS project status, IBM’s decision and to seek Exxon’s co-operation to finalize the specifications and contain development costs.

Doug McWhirter recollects: “The Exxon contract limited IBM’s liability to the quoted turnkey price. When costs began to far exceed the ability to complete the project within the contracted price, it was Bernie Kuehn who championed the position that IBM shouldn’t default on its commitments to its customer and that the project should be completed for the committed price. Following extensive executive reviews, Bernie’s resolve prevailed, resulting in IBM absorbing a significant cost over-run.”

Early in 1974, the project was shored up with the appointment of Brian Finn, as Systems Manager, with direct access to Ralph Pfeiffer, and later the appointment of Gerry Ebker, as the senior FSD project manager.

Brian Finn recalls: “Following Ralph Pfeiffer’s meeting with Exxon senior management, a revamped implementation plan, covering each of the ACS software components was developed and agreed with the parties involved – A/FE, EMEA, FSD, ER&E, IOL and Exxon Belgium.  From that point on, the drive to complete the project on-time and on-budget began in earnest.”

The ACS software development team was organized across several functional components. The sub-teams worked together closely to bring the following software components into a single integrated system:

◉ Special Real Time Operating System (SRTOS) – provided real-time extensions of the normal OS/VS1 operating system functions, such as task management, storage management, etc.

◉ Display management (DISM) – provided display management functions for operator/engineer communications, using colour graphic terminals to display process variable information, operating targets, performance or response graphs and process schematics (see sample schematic below).

◉ Supervisory services – provided services to access process variable data, utilities to report process variable information and alarm history, and track functional changes to variables.

◉ File builder – provided forms to define process variables, general algorithms, and process commands without programming.

◉ Process unit control (PUC) – provided the engine that did the second-by-second processing of process variable inputs, using various defined algorithms to generate output values, at a rate of 200-300 variables per second.

◉ Data Acquisition and Monitoring Option (DAMO) – provided the interface between the S/370 and the front-end System/7 processor. This included streaming of process variable inputs and outputs, sending instrumentation language, and error handling.

◉ Front-end processor – resident in the System/7, this processor provided the analog and digital read/write interface to industrial plant instrumentation.

◉ Oil Movements and Storage (OM&S) – provided automated scheduling, direction and control of the movement of products through refinery components and to external pipelines, storage tanks and ships.

Roy Lieber, Exxon Research and Engineering manager, remarked on development of the Oil Movements and Storage software: “Probably most noteworthy about the project was that Exxon had significant experience in the process control portion, but the Oil Movement and Storage (OM&S) aspect was something a bit more challenging. Not to say that the PUC development was a piece of cake, but the potential problem areas were better known, and the debugging became the larger problem. OM&S, however, was relatively new technology, and the Path Select function was a large element to be perfected. Today, of course, the equivalent of Path Select is trivial, but 46 years ago it was another story.”


Sample process unit control schematic

The first customer installation of ACS at the newly built IOL Strathcona refinery adjacent to Edmonton, Alberta, began in summer of 1974 and was completed February 1976. Roy Lieber was appointed Exxon’s project manager for the installation. IBMers John Mathewson, Gerry Kirk and Gord Stretch formed the onsite support team. Gord supported the IBM hardware installation and ensured proper connectivity between the System/7 and refinery instrumentation. John and Gerry worked closely with the Clear Lake City development team to configure software and fix software problems.

The ACS installation faced a stringent acceptance test, where the system had to run without failure for 30 consecutive days. Gerry Kirk remembers: “As the ACS installation progressed, the software became more and more stable. The 30-day no-fail target looked to be within grasp, when the ACS system unexpectedly crashed on the 26th day due to a lack of virtual storage. The irony was that the storage problem was caused by the OS/VS1 operating system.”

Mike Burns, IBM Western Region VP at the time of the Strathcona install, recalls: “The unique thing that stood out in my mind till today was not the hardware or the software, rather it was the culture of our client EXXON. The professionalism and integrity that they demonstrated in every situation was really admirable.”

Esso Belgium’s plans to modernize and expand its refinery at Antwerp led to a second ACS installation that began in January 1976 and was completed December 1976. Ulen Jackson served as Exxon’s team leader for the installation.   IBMers Mauro Castelpietra, Joe Davis, Earl Ellisor, and Lenny Koll, led by Gary Young, formed the onsite support team.

Mauro Castelpietra recalled his ACS development and installation experience: “Failure was not an option. In addition to supporting the Antwerp installation, we developed several ACS enhancements and diagnostic tools which benefited many subsequent ACS installations.” Mauro went on to build a tremendous career around installing and supporting ACS throughout the world.

ACS also went on to play a key role in education, where IBM partnered with universities (Waterloo, Purdue, Tulane, Notre Dame, etc.), to make ACS installations available to their chemical engineering departments and students. Gerry Sullivan, with the University of Waterloo Chemical Engineering Faculty, promoted ACS at his and other universities in North America.

He remembers ACS: “In the past, process control courses were very theoretical and the methodology could only be applied to very simple processes. ACS allowed us to support our theoretical process control lectures with real time simulations and control of typical complex petrochemical processes. Industry applauded the new approach and were delighted with the knowledge and practical skills exhibited by our graduates.”

The ACS project was blessed with exceptional leaders, who steered it to great success. Three of those leaders went on to contribute to IBM’s global growth:

◉ Brian Finn, who represented IBM World Trade, went on to become leader for IBM in India and South Asia, then IBM Asia Pacific and eventually retired as Chairman and CEO of IBM Australia and New Zealand.

◉ Garry Rasmussen, who represented IBM Canada, went on to become VP and CIO at Merrill Lynch and subsequently President and CEO at IBM Canada’s joint venture with Canadian telecommunication organizations.

◉ Gerry Ebker, who represented IBM FSD, went on to become Chairman and CEO of IBM FSD, managing IBM’s significant business growth with the U.S. federal government, leading over 20,000 IBM FSD employees.

ACS development and support continued through the 1980s and 1990s, initially led by Gerry Kirk, with responsibilities centered in the IBM Canada Toronto Lab. IBM Canada personnel were also involved in worldwide marketing of ACS. Over time, ACS customer installations grew to number over 120 worldwide, in the following countries: Canada, USA, Brazil, Venezuela, UK, Spain, France, Saudi Arabia, Belgium, Netherlands, Germany, Italy, Sweden, Japan, Philippines, Singapore, and Australia. ACS proved its adaptability by controlling plants and refineries across several industries: petroleum, chemical, pulp and paper, mining, steel, glass, pharmaceutical, utilities, and gas distribution. And after 45 years at the Strathcona refinery, ACS is still in operation today in at least 10 industrial plants across the globe!

In July 2020, a few of the original ACS team members, from 1975, got together for a virtual Zoom reunion. Once the participants from Canada, USA and Australia adjusted to their aged appearances, stories flooded back of the “good old days” some 45 plus years ago. The birth of the Advanced Control System has formed many lasting memories. Memories of colleagues, friendship and achievement, enveloped by something special called ACS. More virtual reunions are planned for the fall, with larger global participation.


ACS Zoom reunion

Source: ibm.com

Tuesday, 8 September 2020

Modernize and scale your enterprise IT architecture with containers technology


If 2020 has confirmed anything, it’s that businesses today can’t operate in silos. Their relationships with customers, partners and suppliers need to be built on a strong foundation of collaboration. B2B collaboration refers to a network of organizations working together to plan and execute operations and meet shared objectives like improved efficiency, cost reduction, product innovation and customer service improvement. B2B collaboration solutions simplify the connectivity between people, systems and data that are managing an increasingly complex, multi-enterprise ecosystem overrun with disparate systems, processes and tools. These enterprise-grade solutions are critical to the success of any business. They are usually supported by strong, experienced development teams that adopt industry-leading deployment tools to ensure reliability and scalability of their deployments.

As enterprises shift to drive more operational efficiencies, reduce their infrastructure footprint and attempt to provide always-on and scalable B2B platforms, advancements in cloud technologies are helping drive innovation and faster time to value. About a decade ago, everyone expected organizations to move to the cloud – with the assumption this meant a public cloud. Fast forward to today, most businesses still maintain significant on-premises environments with limited public cloud deployments. However, with enhanced support for hybrid cloud deployment models, industry experts maintain that it is only a matter of time before enterprises transition to the cloud.

What is a container?


A container is an executable unit of software in which application code is packaged, along with its libraries and dependencies, in a standard way so that it can run anywhere, whether on a desktop, in traditional IT or in the cloud. Containers, which are used to package and partition parts of a solution, are helping drive cloud deployment and adoption.
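
As a minimal illustration of that packaging, here is a hypothetical Dockerfile; the base image and file names are placeholders, but the shape is typical: code, runtime and dependencies become one portable, executable unit.

    # Package the application with its runtime and dependencies.
    FROM python:3.9-slim                  # base image supplies the runtime
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt   # libraries and dependencies baked in
    COPY . .
    CMD ["python", "app.py"]              # runs anywhere a container runtime exists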

Enterprises worldwide are taking advantage of container technology to enable virtualization while deploying B2B collaboration solutions in one of the following ways:

◉ As an on-premises solution inside their firewall
◉ Onto a hosted (or public dedicated) cloud infrastructure
◉ In a hybrid fashion where the workloads and deployments are spread across multiple data centers or multi-cloud environments

Early movers to the cloud started by creating and managing their own containers to transition their on-premises software to the cloud. Some of these early movers took advantage of Docker-based deployments. As cloud adoption grew, organizations started leaning towards open platforms that support flexible deployment across multiple cloud providers, thereby helping avoid vendor lock-in. One such widely used platform is the Red Hat® OpenShift® Container Platform. OpenShift helps you manage the lifecycle of developing, deploying and managing container-based applications. It provides a hybrid cloud application platform for deploying new and existing applications on secure, scalable infrastructure resources with minimal configuration and management overhead.

Container technology built on platforms like OpenShift helps enterprises modernize their B2B collaboration applications by providing:

◉ Speed and agility for quick-paced development processes and deployment flexibility

◉ Significant savings in IT operational costs through containerized architecture models

◉ Incremental value from existing software investments while simultaneously accelerating innovation

IBM Certified Containers


You can run IBM Certified Containers on a different cloud service provider’s Kubernetes software, or on the Red Hat OpenShift Container Platform (RHOCP), an even simpler, more efficient way to deploy, manage and scale secure, enterprise-grade software across multiple environments.

IBM Certified Containers are available across all of the IBM B2B Collaboration solutions bringing together Red Hat-certified containers and streamlined deployment instructions for OpenShift. These containers, orchestrated on the OpenShift container platform, provide a secure, compliant, scalable enterprise-grade application. This lets you free up your valuable support resources to focus on the more important aspects of building innovative extensions to the core application, helping serve the needs of your customers. Thus, these containers not only help modernize and simplify your current implementation but also help build operational efficiencies and reduce infrastructure costs.
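
For illustration, a deployment session with the OpenShift command-line client might look like the sketch below; the cluster URL, project and image names are placeholders, not a specific IBM offering.

    # Illustrative OpenShift CLI session (placeholder names throughout).
    oc login https://api.cluster.example.com:6443   # authenticate to the cluster
    oc new-project b2b-collaboration                # create an isolated project
    oc new-app quay.io/example/b2b-gateway:latest   # deploy the container image
    oc expose service/b2b-gateway                   # publish a route for external traffic
    oc get pods                                     # verify the pods are running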

Source: ibm.com

Friday, 4 September 2020

Automate AIX and IBM i operations with Ansible on IBM Power Systems


IT organizations are actively looking for ways to modernize software development and IT operations. However, most of them run multiple operating environments. According to a 2020 HelpSystems survey of IBM i users, more than 80 percent of respondents run other environments like Linux, AIX and Windows alongside IBM i. Each environment comes with its own set of administrative tools and processes, which makes it challenging to establish modern agile and DevOps processes consistently. Often these processes are adopted only in new implementations. So, how can you successfully apply them across the entire IT stack?

Start by establishing a consistent approach to automating IT operations across the various operating system (OS) environments. A manual OS build takes significant admin hours, and its reliability depends on the admin’s experience and skills. In fact, one of the common challenges we hear from AIX and IBM i users is that this skills gap is widening. Most organizations run hundreds of these environments, which means admins need to repeat their build processes across multiple environments and validate that the correct security baseline is applied. With manual processes, there is a high likelihood of errors and delays. Automating these processes enables admins to quickly deliver reliable OS images on demand.

However, consistent automation across multiple OS environments requires a tool that works across all environments. Red Hat Ansible Automation Platform is that tool. It is built on the open source community project sponsored by Red Hat and can be used across IT teams — from systems and network administrators to developers and managers. Red Hat Ansible provides enterprise-ready solutions to automate your entire application lifecycle — from servers to clouds to containers and everything in between.

Ansible content for AIX and IBM i helps enable IBM Power Systems users to integrate these operating systems into their existing Ansible-based enterprise automation approach. This also helps address the AIX and IBM i skills gap, since admins can leverage their existing Ansible skills to automate these environments. Red Hat Ansible Certified Content for IBM Power Systems, delivered with Red Hat Ansible Automation Platform, is designed to provide easy-to-use modules that can accelerate the automation of operating system configuration management. Users can also take advantage of the open source Ansible community-provided content (i.e., no enterprise support available) to automate hybrid cloud operations on IBM Power Systems.
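
As a minimal sketch of what this looks like in practice, the playbook below targets AIX and IBM i hosts from a single inventory. The inventory group names are assumptions, and only generic ansible.builtin modules are shown; the IBM-provided collections layer OS-specific modules on top of this same pattern.

    # Minimal illustrative playbook spanning AIX and IBM i hosts.
    ---
    - name: Collect OS levels across the Power estate
      hosts: aix:ibmi                  # assumed inventory groups
      gather_facts: false
      tasks:
        - name: Check connectivity
          ansible.builtin.ping:

        - name: Report AIX OS level
          ansible.builtin.command: oslevel -s
          register: oslevel_out
          when: "'aix' in group_names"

        - name: Show the result
          ansible.builtin.debug:
            var: oslevel_out.stdout
          when: "'aix' in group_names"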

Source: ibm.com

Thursday, 3 September 2020

4 ways Red Hat OpenShift is helping IBM Power Systems clients


OpenShift® on IBM Power® Systems takes advantage of hybrid cloud flexibility, enterprise AI, and the security and robustness of the Power platform for private and public clouds. OpenShift is the Red Hat® cloud development platform as a service (PaaS) that enables developers to develop and deploy applications on public, private or hybrid cloud infrastructure. OpenShift 4, its latest version, is an Operator-driven platform that delivers full-stack automation from top to bottom. From Kubernetes to the core services that support the OpenShift cluster to the application services deployed by users, everything is managed throughout its lifecycle with Operators.

In this blog post, I’ll highlight 4 ways that OpenShift on IBM Power Systems is helping clients as they modernize applications and move to hybrid cloud.

4 benefits of Red Hat OpenShift 4 on Power


Red Hat OpenShift on Power Systems can be a building block in your journey to hybrid cloud. It’s well known that IBM Power Systems hosts mission-critical workloads and delivers excellent workload performance and reliability across industries. Now you can take advantage of this performance with container workloads using Red Hat OpenShift Container Platform. You can rapidly deploy OpenShift clusters using IBM PowerVC on Power Systems enterprise servers to help modernize your existing workloads.

Here are some of its advantages:

1. Flexibility


OpenShift on Power Systems can be deployed on the IBM PowerVM or Red Hat KVM hypervisors, enabling you to use either scale-up or scale-out servers as required. You can also install Red Hat OpenShift on bare metal Power Systems.

2. Performance


Thanks to simultaneous multithreading (SMT) on Power Systems, each core can run more threads than an x86 core, so fewer cores are needed than on a comparable x86 system, achieving 3.2X the container density per POWER9 core. Depending on the type of workload (for example, a large-scale database, an AI or machine learning application, or training workloads), there’s a significant performance boost when using IBM Power Systems.

3. Better storage ROI


With the introduction of the IBM PowerVC Container Storage Interface (CSI) driver, you can use your existing block storage subsystems as persistent volumes in the container world. The IBM PowerVC CSI driver meets the need for persistent storage in a container environment by building on the storage infrastructure you already own.

Also, with the new IBM Block Storage CSI driver, clients that don’t have PowerVC can access their existing IBM storage directly. The IBM Block Storage CSI driver for IBM Storage systems can dynamically provision persistent volumes for block or file storage to be used with stateful containers running in Red Hat OpenShift Container Platform.

The savings show up in several ways: new storage purchases, technology refreshes and training costs can all be reduced or avoided entirely.
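As a rough illustration of what this looks like from the OpenShift side, here’s a hedged sketch of a StorageClass backed by the IBM Block Storage CSI driver, plus a claim a stateful container can mount. The class and pool names are placeholders, and the secret wiring the driver needs for array credentials is omitted; your storage team supplies the real values.

# Hedged sketch: dynamic provisioning from existing IBM block storage.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibm-block-gold              # placeholder class name
provisioner: block.csi.ibm.com      # IBM Block Storage CSI driver
parameters:
  pool: gold_pool                   # hypothetical array pool
  csi.storage.k8s.io/fstype: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: ibm-block-gold

Any container that mounts the db-data claim then gets a volume carved from storage you already own.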

4. Modernization and the hybrid cloud journey


Modernizing applications has become more critical than ever. Legacy apps can be difficult and costly to maintain, require antiquated or hard-to-find developer skills, and create a maze of disparate platforms that grows in complexity over time. OpenShift is built to help you make the shift to app modernization with greater ease, efficiency and precision. This Kubernetes-based platform enables shorter application development cycles and better-quality software.

In addition to the many benefits of OpenShift on Power Systems, your journey to hybrid cloud can be further enhanced by IBM Cloud Pak® solutions built on Red Hat OpenShift, such as the Cloud Pak for Multicloud Management, Cloud Pak for Data and Cloud Pak for Applications. Together they provide a complete solution for AI, machine learning and cloud-native application workloads and management.

Red Hat OpenShift running on Power Systems is currently available on premises and in IBM Cloud data centers across the globe, as well as in partner cloud solutions (like Google Cloud and Skytap), creating a strong synergy for your hybrid cloud environment. You can now deploy and manage your applications and services in your own data center or in a public cloud. By bringing IBM Power Systems into the heterogeneous container and virtualization landscape, OpenShift supports a hybrid, open cloud architecture and brings harmony to the divergent hardware platforms that must coexist to make the business run.

Source: ibm.com

Wednesday, 2 September 2020

Modernize your apps with IBM Power Systems in hybrid multicloud


Most people working in IT today have probably heard loud and clear that hybrid multicloud is the new “new normal.” In this blog post, I’ll share an example of how you can use IBM Power® Systems technology in a hybrid multicloud environment with management software available in Red Hat® OpenShift® Container Platform and IBM Cloud Pak® for Multicloud Management.

By now we’ve all either read about, experimented with or implemented workloads in a container environment. These new container technologies can work together with virtual machines on Power Systems within your data center and on multiple public clouds and still be managed through a single control system. So let’s jump in and take a quick look at how you can get real business value with software that lets you integrate these technologies.

Let’s talk speed and agility


You’ve just completed that design workshop for modernizing your business application. The new design may have components that can take advantage of the new container technology but also need to integrate with existing VM-based applications. IBM Cloud Pak for Multicloud Management running on Red Hat OpenShift can provide a single view and control into a truly hybrid multicloud environment.


Let’s focus on a specific feature that can help speed the development of our new hybrid application. The Cloud Pak for Multicloud Management includes the Terraform and Service Automation Module, which provides the following key capabilities for developing a hybrid multicloud solution:

◉ Can deploy VMs to private or public clouds

◉ Can deploy VMs to different cloud architectures, one of which is OpenStack — by providing an OpenStack interface, you can easily connect to your in-house PowerVC environment or to a public cloud that hosts Power architecture, like IBM Cloud. The product has built-in connection templates for over a dozen of the most popular types of clouds

◉ Can orchestrate a workflow that can combine the capabilities described above to create deployments into multiple clouds in a single action

◉ Can integrate with the Multicloud Management catalog, which can then facilitate both container and VM-based deployments from a single view and control point

Now, let’s look at a specific deployment example.

5 steps to a new hybrid app


I have a new application that will ultimately run in a Red Hat Enterprise Linux® (RHEL) OpenShift container in a public cloud and will request data from a Power VM (AIX, IBM i or Linux) running in my private data center.
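Before walking through the VM side, here’s a hedged sketch of how the container side of this pattern can reach that on-premises VM: an ExternalName Service gives application pods a stable in-cluster DNS name for the Power VM’s data service. The hostname below is a placeholder for whatever your VM deployment registers.

# Pods ask for "power-data-service"; cluster DNS answers with a
# CNAME to the on-premises Power VM, so the endpoint can change
# without touching the application.
apiVersion: v1
kind: Service
metadata:
  name: power-data-service
spec:
  type: ExternalName
  externalName: powervm01.datacenter.example.com  # hypothetical VM host

If the data service later moves, say into IBM Cloud, only the externalName changes.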

Step 1 – Let’s create a connection profile to use our on-premises private cloud (that is, PowerVC) as a target for deployments from the Cloud Pak for Multicloud Management Terraform and Service Automation Module. It’s straightforward, requiring only a few connection and authentication parameters.


Step 2 – Now we need to create a Terraform template that describes the image on PowerVC from which the newly developed VM is sourced. The Cloud Pak includes a graphical tool called “Template Designer,” which offers many tailored templates to use as a starting point. In this template we also define deployment attributes, such as CPU, memory, network and SSH keys, that can either be hard coded or left open to be specified at deploy time. Once you’ve finished, the integrated designer lets you push the template straight into the Terraform and Service Automation Module.


Step 3 – Once the template reflecting the VM image and deployment attributes has been created, it can be used to create a “service.” This is where you have the option to orchestrate other activities as part of the deployment. Examples include integrating with a DNS network IP registration product to obtain an IP address and hostname, generating an approval process, or sending an email notification. Maybe you need two VMs deployed together for high availability. The tool includes multiple prebuilt templates for composing orchestration steps in a graphical workflow: you drag and drop actions in sequence, then fill in the specific parameters for each. The workflow can also include a decision tree that changes the flow based on action outcomes.


Step 4 – Now that we have a service defined, it can be published to the Multicloud Manager Catalog. This is done by simply clicking on a publish button and providing a little information on how you’d like the service to appear in the catalog.


Step 5 – Publishing this new VM-based application to the Multicloud Manager catalog makes it visible right alongside our companion container and many other services. Different teams can now deploy both the VM to your private PowerVC-based cloud and your container workload to a public cloud.

Now that the initial setup is done, multiple teams have the flexibility to deploy these workloads over and over, in whatever ways make the most sense. Maybe you don’t have enough compute or storage resources in your data center and need to burst quickly into the public cloud: you can import your VM image into IBM Cloud PowerVS, add the connection information and modify the target within the service deployment. Maybe you’d like to develop that container-based workload on premises: you can use Red Hat OpenShift running on Power servers within your data center to keep everything internal until you’re ready to move it out to the public cloud. The flexibility stays within your control, using the same tools.