Wednesday, 14 November 2018

Top IBM Power Systems myths: Linux on x86 is different from Linux on Power

There are many misconceptions about IBM Power Systems in the marketplace today, and this blog series is all about dispelling some of the top myths. In the last post, I put to rest the myth that IBM Power Systems has no cloud strategy. In this post, we’ll look at a myth that has been propagated by many of our x86-based competitors and consultants. This myth wants you to believe that if you invest in a Linux on IBM Power Systems solution, you’ll be getting an inferior product, one that’s not “real” Linux, or that your applications won’t work the way they should.


There is one aspect of this myth that’s true: Linux on Power solutions really are different. They are:

◈ Almost always faster
◈ More reliable
◈ Usually smaller (require fewer cores and fewer physical systems)
◈ More secure

In all other aspects, from Linux distributions and release levels to system management and monitoring tools and development environments, they are the same.

Consider the following points:

Endianness


Not long ago, there was a difference between Power and x86-based systems that affected not only Linux distributions but all operating systems, applications and databases that ran on those systems. Our industry borrowed from Jonathan Swift’s Gulliver’s Travels, using the terms “big endian” and “little endian” to describe the way computers represent data. x86 systems have always been “little endian,” and Power was always “big endian.” That caused a problem because software created for little endian systems could not run on big endian systems without modification, and vice versa.

In April 2014, IBM announced little endian (LE) support for POWER8, and today all of our POWER9 systems support a 64-bit LE environment.

While LE support removes the endian differences, LE code compiled for an x86 system still needs to be recompiled for a Power-based LE system, because the two platforms use different processor instruction sets.
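To make the difference concrete, here is a minimal Python sketch (values are purely illustrative) that packs the same 32-bit integer in big endian and little endian order and reports the byte order of the host it runs on:

import struct
import sys

# The same 32-bit integer is laid out differently in memory depending on byte order.
value = 0x01020304

big = struct.pack(">I", value)     # big endian:    01 02 03 04
little = struct.pack("<I", value)  # little endian: 04 03 02 01

print("big endian bytes:   ", big.hex())
print("little endian bytes:", little.hex())
print("this host is:", sys.byteorder)  # 'little' on x86 and on ppc64le (Power LE)

Software that writes binary data in one byte order and reads it back in the other gets scrambled values, which is why unmodified little endian binaries and data formats historically could not move to big endian systems.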

Linux distributions


There are hundreds of Linux-based distributions available in the market today. While many can run on IBM Power Systems, IBM officially supports Red Hat Enterprise Linux, Ubuntu and SUSE Linux Enterprise Server. Community versions of Linux like Debian, openSUSE, CentOS, Fedora and others are also available.

These distributions are built around a package management system and include the Linux kernel, GNU shell utilities, system management tools, a desktop environment and a collection of open source (and sometimes proprietary) packages.

There are Linux distributions with features designed to take advantage of some of the unique capabilities of the IBM Power Systems architecture, and the startup process for a Linux distribution may be a bit different between x86 and Power. However, once installed, the features and functionality are identical.
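As a small illustration, here is a Python sketch (assuming a Linux host with /etc/os-release, which all of the supported distributions provide) showing that the only routinely visible difference is the reported machine architecture:

import platform

# Kernel, distribution tooling and interfaces are the same on both platforms;
# the visible difference is the machine architecture string.
print(platform.system())    # 'Linux' on both x86 and Power
print(platform.machine())   # 'x86_64' on x86, 'ppc64le' on little endian Power

# Distribution identification works identically.
with open("/etc/os-release") as f:
    for line in f:
        if line.startswith(("NAME=", "VERSION=")):
            print(line.strip())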

Virtualization options


There are usually several options for virtualization within a Linux distribution. KVM is the default option for Ubuntu and is the foundation for Red Hat virtualization. PowerVM is the only virtualization option for SUSE Linux Enterprise Server 12 at this time. KVM on Power and RHEV provide the same functionality on Power as they do on an x86 platform, while PowerVM is designed to take advantage of the IBM Power architecture.
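As a sketch of how that sameness looks from a management point of view, the libvirt Python bindings talk to a KVM host with exactly the same calls on Power as on x86 (this assumes the libvirt-python package and a running libvirtd; the connection URI is the standard local one):

import libvirt  # pip install libvirt-python

# Connect read-only to the local KVM/QEMU hypervisor and list defined guests.
conn = libvirt.openReadOnly("qemu:///system")

for dom in conn.listAllDomains(0):
    state = "running" if dom.isActive() else "shut off"
    print(f"{dom.name():20s} {state}")

conn.close()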

System administration and monitoring tools


All Linux distributions supported on IBM Power platforms provide system administration and monitoring tools. Cockpit, Performance Co-Pilot (PCP) and Nmap are examples of the tools provided by Red Hat. SUSE Linux Enterprise Server supplies an advanced system management module that includes CFEngine, Puppet, Salt and the Machinery tool. Landscape is Ubuntu’s systems management package.

There are many open source and proprietary solutions available like Chef, Juju, Ansible, Wireshark and countless others that can be added to a Linux distribution. In fact, the top five open source tools for Linux systems administration are available on Power Linux systems, providing the same level of functionality and support as they do for x86-based systems.
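For example, a cross-platform library such as psutil exposes the same metrics a monitoring tool would collect, regardless of the underlying architecture (a minimal sketch; psutil must be installed):

import psutil  # pip install psutil; behaves the same on x86_64 and ppc64le

print("CPU utilization:", psutil.cpu_percent(interval=1), "%")
print("Memory in use  :", psutil.virtual_memory().percent, "%")
print("Root fs usage  :", psutil.disk_usage("/").percent, "%")

# Top five processes by resident memory.
procs = [p for p in psutil.process_iter(attrs=["pid", "name", "memory_info"])
         if p.info["memory_info"] is not None]
for p in sorted(procs, key=lambda p: p.info["memory_info"].rss, reverse=True)[:5]:
    print(p.info["pid"], p.info["name"], p.info["memory_info"].rss // (1024 * 1024), "MiB")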

Application development environments


Linux on Power solutions have all of the tools developers need to build their application portfolios. For example:

◈ Container management solutions like Docker, Kubernetes and Rancher (a short sketch follows this list)

◈ Programming languages including Python, PHP, Ruby, Scala, Java, Erlang and C/C++

◈ Atom, Bluefish, Eclipse, NetBeans, Geany, Anjuta and Glade—all considered to be among the top integrated development environments

◈ Git/GitHub, Apache Subversion, Darcs, Mercurial, Monotone and CVS—some of the best version control systems available

◈ Eight of the top ten text editors, including Atom, Vim, gedit, GNU Emacs and nano

◈ Many other capabilities, including diff tools like Meld and KDiff3, debug tools and multimedia editors like Ardour, Audacity and GIMP
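As a quick illustration of the container point above, official images are published as multi-architecture manifests, so the same Docker SDK code pulls the right variant on either platform (a minimal sketch; it assumes the docker Python package and a local Docker daemon):

import docker  # pip install docker

client = docker.from_env()

# The ubuntu:18.04 manifest includes amd64 and ppc64le variants, so this code
# is identical on both platforms and simply reports the architecture it runs on.
output = client.containers.run("ubuntu:18.04", "uname -m", remove=True)
print(output.decode().strip())   # 'x86_64' on x86, 'ppc64le' on Power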

Desktop environments and application portfolios


There is also a wealth of open source and proprietary applications, databases and desktop options available for Power Systems clients to choose from. Here are a few examples:

◈ KDE, Mate, GNOME, Cinnamon, Budgie, LXDE and XFCE, which are considered to be the top desktop environments in 2018 for Linux

◈ Thunderbird, Geary and Evolution e-mail clients

◈ Pidgin and Empathy instant messaging applications

◈ Calligra Suite, LibreOffice and WPS Office productivity suites

◈ MariaDB, PostgreSQL, EnterpriseDB, MongoDB, Cassandra and Redis, among the top open source databases available, alongside IBM Db2

◈ Dolibarr and Odoo, two of the most popular open source ERP packages

◈ Spark, Elasticsearch, Apache Solr and Hadoop for analytics

◈ SAP HANA, with more than 2,000 clients, many of whom left the x86 world to take advantage of the IBM Power architecture

Is Linux on x86 different from Linux on Power?


Linux on Power is the same as Linux on x86, except for better performance, reliability and security and a smaller physical footprint. These are the differences that should matter most, and they are the key reasons why a Linux on Power solution is a better choice than an x86-based Linux solution.

IBM Systems Lab Services has a team of experienced consultants ready to help you get the most out of your Linux on Power system.

Thursday, 8 November 2018

IBM Spectrum LSF goes multicloud

IBM is moving swiftly to implement multicloud capabilities across both our IBM Spectrum Storage and IBM Spectrum Computing portfolios. In an important step for our high-performance computing (HPC) solutions, today we’re announcing the release of a deployment guide that facilitates the use of IBM Spectrum LSF Suite with Amazon Web Services (AWS).

IBM Spectrum LSF Suite is a comprehensive set of solutions supporting traditional HPC and high-throughput environments, as well as big data, artificial intelligence (AI), GPU-accelerated, machine learning and containerized workloads, among many others. IBM Spectrum LSF, the core of the Suite, is a workload and resource management platform for demanding, distributed HPC environments. It provides a comprehensive set of intelligent, policy-driven scheduling features that help maximize utilization of compute infrastructure resources while optimizing application performance.
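To give a feel for how jobs reach that scheduler, here is a minimal Python sketch that wraps the standard LSF bsub submission command (it assumes an LSF cluster with bsub on the PATH; the job name, queue and executable are illustrative):

import subprocess

cmd = [
    "bsub",
    "-J", "demo_job",         # job name
    "-n", "16",               # number of job slots requested
    "-q", "normal",           # target queue (site specific)
    "-o", "demo_job.%J.out",  # stdout file; %J expands to the LSF job ID
    "./my_app",               # the executable to run
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout.strip())  # e.g. "Job <12345> is submitted to queue <normal>."

The scheduler then places the job according to the policies configured for the cluster, whether the slots it lands on are on premises or in the cloud.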

IBM Spectrum LSF Suite comes in three editions and includes additional capabilities such as LSF resource connector, which enables policy-driven cloud bursting to all major cloud services, including IBM Cloud, AWS, Google Cloud and Microsoft Azure.

The newly released deployment guide builds on an existing relationship with AWS. IBM is providing expertise, services, and management capabilities that will give IBM Spectrum LSF customers fast, flexible access to AWS offerings.

The new deployment guide helps users build a wide range of customizable IBM Spectrum LSF cluster configurations that take advantage of cloud computing. In particular, HPC environments can leverage the cloud during times of peak activity. To accommodate spikes in demand, traditional HPC environments often divide up jobs and stretch out scheduling, but this can lengthen time to insight. IBM Spectrum LSF solutions can help address this challenge by enabling dynamic access to cloud resources.


Two of the most common IBM Spectrum LSF cluster solutions are the LSF Stretch Cluster and the LSF Multi Cluster configurations. In the LSF Stretch Cluster architecture, the master scheduler and other core functionality remain with the on-premises IBM Spectrum LSF cluster, but the cluster resources can be dynamically “stretched” over a WAN to include cloud resources.

The Multi Cluster configuration, on the other hand, essentially creates two clusters, one on premises and one in the cloud. This architecture can simplify communication and coordination between the on-premises and cloud-based clusters.


Both configurations offer certain advantages and trade-offs, and both configurations are covered in detail by the new deployment guide.

With the release of the new LSF cloud deployment guide, enterprises and HPC facilities can more easily build the IBM Spectrum LSF cluster architectures that are best for them. Then they can leverage the power of IBM Spectrum Scale, the high-performance data management member of the IBM Spectrum Storage family, to enable the storage portion of the overall solution. The multicloud reach of IBM Spectrum Scale includes Spectrum Scale on AWS, available on AWS Marketplace. Currently available as a Bring Your Own License offering, the service is targeted at IBM customers who want to gain access to the elasticity of AWS for their high-performance computing workloads, allowing deployment of highly available, scalable cluster file systems on AWS. IBM provides an AWS CloudFormation script that deploys IBM Spectrum Scale across a cluster of AWS virtual server instances.

Release of the new LSF cloud deployment guide marks yet another milestone in the ongoing expansion of IBM Spectrum LSF multicloud capabilities, but it’s not the only important news for IBM customers. IBM is also announcing variable use licensing for IBM Spectrum LSF. This new “bite-sized” licensing will allow users to purchase licenses for IBM Spectrum LSF in blocks of CPU “core hours,” with one block equal to 1,000 core hours. The new licensing will make IBM Spectrum LSF even easier and more flexible to deploy. IBM Spectrum LSF users will be able to run the solution almost anywhere and pay only for what they use, rather than having to predict their usage in advance.
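As a back-of-the-envelope illustration of the new licensing model (using only the one-block-equals-1,000-core-hours figure above; the workload numbers are made up):

import math

CORE_HOURS_PER_BLOCK = 1000  # one block = 1,000 core hours

def blocks_needed(cores, hours):
    """Whole licensing blocks required for a given burst of usage."""
    return math.ceil(cores * hours / CORE_HOURS_PER_BLOCK)

# Example: bursting 128 cores for 36 hours.
print(blocks_needed(128, 36))  # 4,608 core hours -> 5 blocks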

Across the entire IBM Spectrum Computing portfolio, plenty of innovation is occurring, with plenty more on the roadmap. As enterprises and HPC facilities search for new cloud usage paradigms, they will likely look to intelligent solutions that leverage multicloud architectures and make them easier and less complex to adopt.

Sunday, 30 September 2018

How to transform customer experiences with cognitive call centers

Customer data and insights can help steer companies to new levels of innovation, engagement and profit. And, most organizations are sitting on a gold mine of customer data. But, it’s how customer data is used that matters. How is your organization collecting and using customer insights? Are you using it to create the most engaging customer interactions? And, are you creating a cognitive conversation with your customers? These are critical questions that not all companies are yet considering, but should.

Understanding the importance of cognitive conversations


A cognitive conversation takes advantage of data from external, internal, structured and unstructured voice and multichannel sources to deliver a customer response that is more conversational, relevant and personal. Organizations can meet changing customer preferences and behaviors by learning from every interaction. All parts of the organization can take advantage of data collected from different areas of the company to improve customer loyalty.


Businesses are recognizing that while customer interactions often begin on one channel, valuable insight and feedback are also being gathered from customers on other channels across the business. Unifying customer information across channels gives businesses a more complete context to resolve customer issues more fully and more quickly.

With the speed at which customer expectations are changing, it is conceivable that customers will prefer to manage their relationship with businesses without interacting with a live agent. Customers are using more channels to interact with businesses and are doing more research through websites and referrals before ever engaging.

A more conversational, personal, seamless and device-independent approach to customer engagement is critical. Customers are likely to commit to a brand or product after a satisfactory experience, so it is time the whole company focused on adopting capabilities to enable a more cognitive experience for clients and potentially surpass competitors.

Taking full advantage of the gold mine of customer data


Every department in your organization, from marketing to sales to customer service, has data on customer behavior. That collective data can be used to meet and even exceed the demands of customers. An organization’s brand, website, notifications and certain self-service channels, whether voice or chat, demonstrate commitment to the customer.

It may seem overwhelming to gather all this information across your organization at every entry point of interaction and also deliver a seamless, consistent and cognitive experience. But, it doesn’t need to be.

If you can tap into cognitive capabilities, they will turbocharge interactions across channels. This combination can transform traditional self-service brand engagement into a more relevant and relational experience for customers. Customers can combine traditional interactive voice response (IVR) features with cognitive capabilities, which can be integrated with an existing contact center environment as well as with third-party applications such as computer-telephony integration (CTI). Combining these capabilities is a unique approach that transforms the customer experience into a more conversational and cognitive interaction. As a result, resolutions can be achieved more quickly than with interactions handled by traditional IVR systems alone.
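As a rough sketch of what the cognitive half of such an integration can look like in code, the ibm-watson Python SDK lets an application pass a caller utterance to Watson Assistant and read back the conversational response (the API key, service URL and assistant ID below are placeholders, and the IVR/CTI plumbing that feeds the utterance in is out of scope here):

from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")
assistant = AssistantV2(version="2018-09-20", authenticator=authenticator)
assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")

session = assistant.create_session(assistant_id="YOUR_ASSISTANT_ID").get_result()

# A caller utterance handed off from the IVR / CTI layer.
response = assistant.message(
    assistant_id="YOUR_ASSISTANT_ID",
    session_id=session["session_id"],
    input={"message_type": "text", "text": "I want to check my order status"},
).get_result()

for item in response["output"]["generic"]:
    if item["response_type"] == "text":
        print(item["text"])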

Transforming the customer experience


Changes in the customer experience journey are happening fast. There is no slowing down or stopping the convergence of technology and increasing demand for quick, relevant and personalized customer interactions. The stakes are high when it comes to the customer experience, and it’s not just the contact center that needs to take notice. The customer experience is a whole-company issue.

Customer experience is at a crossroads of change and transformation, adding a new level of engagement across the organization. Taking advantage of the forces pressuring organizations to evaluate their strategy, technology and general understanding of customer behavior will set the pace for the cognitive revolution.

Four key factors are making transforming the customer experience a hot topic:

1. Increased value of customer experience as a market differentiator
2. Speed of changing customer demands
3. Cognitive capabilities make collecting, learning and understanding data in near real time a reality
4. Market leaders figuring out how to combine knowledge with technology to magnify the customer experience

Thursday, 27 September 2018

6 common DevOps myths

There are a lot of DevOps myths floating around the IT world. That’s not surprising, given how much hype the term — a combination of “development” and “operations” — has built up over the past decade.


DevOps is more than worthy of the hype. When done properly, the DevOps approach can deliver massive positive impact for businesses. It can reduce costs, improve performance and break down silos between teams.

To understand the power of this approach, however, it’s important to know what DevOps is and what it is not. Let’s start by correcting six common DevOps myths.

1. DevOps is only for shops born on the web.


It’s true that DevOps mostly started at companies that were born on the web. Maybe that’s why people get the idea that this methodology will only work at internet firms such as Netflix or Etsy. That idea turns out to be a myth.

Large enterprises have been successfully using DevOps principles to deliver software for decades.

2. DevOps only matters to engineering and operations.


The name DevOps clearly reveals the origin of the approach. DevOps started as a better way for operations and development teams to work together.

Today, the approach can empower the entire organization. Everyone involved in the delivery of software has a stake in this methodology.

3. DevOps can’t work for regulated industries.


Regulated industries have an overarching need for checks and balances, as well as approvals from stakeholders. This doesn’t mean DevOps is a problem, however.

Adopting DevOps actually improves compliance, if it’s done properly. Automating process flows and using tools that have built-in capability to capture audit trails can help.

Of course, organizations in regulated industries will always have manual checkpoints or gates, but these elements can be compatible with DevOps.

4. You can’t have DevOps without cloud.


When many people think of DevOps they think of cloud. There is a good reason for this. Cloud technology provides the ability to dynamically provision infrastructure resources for developers and testers to rapidly obtain test environments without waiting for a manual request to be fulfilled.

That doesn’t mean cloud is necessary to adopt DevOps practices, though. As long as an organization has efficient processes for obtaining resources to deploy and test application changes, it can adopt a DevOps approach.

Virtualization itself is optional.

5. DevOps means operations learning to code.


Operations teams have a long history of writing scripts to manage environments and repetitive tasks. With the evolution of infrastructure as code, operations teams saw a need to manage these large amounts of code with software engineering practices such as versioning code, check-in/check-out, branching and merging.

Today, operations teams can create a new version of an environment by creating a new version of the code that defines it. This doesn’t mean, however, that operations teams must learn how to code in Java or C#. Most infrastructure-as-code technologies use languages such as Ruby, which is relatively easy to pick up for people who have scripting experience.

6. DevOps doesn’t work for large, complex systems.


This myth is totally off-base. The opposite is actually true: complex systems often require the discipline and collaboration that DevOps provides. Large systems typically have multiple software or hardware components, each of which has its own delivery cycles and timelines. DevOps facilitates better coordination of these delivery cycles and system-level release planning.

Tuesday, 4 September 2018

Social Learning in Practice at IBM


What is social learning and how can it help drive engagement and develop a culture of learning?


The social learning theory of Bandura emphasizes the importance of observing and modeling the behaviors, attitudes, and emotional reactions of others. Bandura (1977) states: “Learning would be exceedingly laborious, not to mention hazardous, if people had to rely solely on the effects of their own actions to inform them what to do. Fortunately, most human behavior is learned observationally through modeling: from observing others one forms an idea of how new behaviors are performed, and on later occasions this coded information serves as a guide for action.” Basically, Bandura’s theory is that human beings can learn by example.

Why does social learning matter?


Research suggests that in typical training environments, most people recall only 10 percent of what they learned within just 72 hours. Social learning can reverse this curve. In fact, research shows retention rates as high as 70 percent when social learning approaches are employed. Rather than relying on typical training environments with low recollection rates, social learning allows learning to happen in the working environment. Learners can pull knowledge from experts within the organization rather than have it pushed on them. Learning becomes part of the organization’s culture.

An example of social learning at IBM


The Data Analytics Center of Excellence (COE) at IBM continuously provides data science training for our employees and decided to pilot the use of the recently released IBM Data Science Professional Certificate on Coursera. They identified two controlled study groups: 1) a group of individuals who would otherwise have gone through a five-day, full-time, face-to-face bootcamp, and 2) a group of instructors who would typically teach that bootcamp. One of the biggest problems with using MOOCs for enablement is the high dropout rate; research shows that only about 5 percent of learners complete a course. Here are a few ways in which we are keeping this group of learners engaged:

FAQs and Forum


A dedicated Slack channel has been established for the pilot participants, in which employees can pose questions and receive answers from within the group. This promotes collaborative learning, as individuals can learn from their peers and also from questions posed by others. Apart from the pilot, there is also a large IBM Data Science Community that hosts events on a regular basis and offers plenty of enriching forum discussions.

Organization Wikis


The participants are encouraged to blog about their experience. Bernard Freund, STSM on the Data Analytics CoE, writes a blog post at the end of each week as he completes a course. These posts not only provide users with a thorough review of each course, but also highlight issues along with workarounds, which has been extremely useful for other learners attempting the course later.

Utilize expert knowledge


Besides the Slack channel, we have also instituted checkpoint calls with the Coursera and course development team. Not everyone attends these calls, but they do give the participants an opportunity to get some 1:1 time with the SMEs to overcome any obstacles that may be preventing them from completing the program.

Gamification and rewards


You can’t force people to learn but you can give them the right tools and incentives to make sure they don’t waste opportunities. IBM does this through the Open Badge program. The program awards badges upon the completion of each of the 9 courses and a certificate upon program completion. These badges provide a way for the administrators and users to track their learning progress.

Currently, we are one month into the three-month pilot, and the learners seem very engaged and invested in their progress. On average, participants have completed at least two of the nine courses, which puts them on track to complete the certificate within the pilot timeline. Stay tuned as we report further results in the coming months.

Friday, 31 August 2018

Get a health check for your SAP HANA on IBM Power Systems

Are you confident your SAP HANA on IBM Power Systems environment is delivering optimal performance?

SAP HANA has been available on IBM Power Systems for a few years, and many organizations have migrated to it, gaining advantages such as flexibility, efficient resource utilization, server consolidation and reduced costs. Because SAP HANA on Power follows the Tailored Data Center Integration (TDI) model, an SAP-certified person is required to install and configure HANA. During deployment, a certified HANA engineer sets up the system following IBM Power server and SAP HANA best practices and runs the SAP HANA Hardware Configuration Check Tool (HWCCT), which verifies that the environment meets HANA prerequisites and that hardware performance meets HANA KPIs.

After deployment, however, organizations will eventually need to make changes to their workloads and infrastructure. The monitoring tools you use might not capture deviations from best practices. Some components of your system might require periodic checks like firmware updates, patches, backups, cluster operations and so on. Hence the need arises for a periodic health check for SAP HANA on Power Systems. Without periodic health checks, you might not be getting the best availability and performance from your systems, and you could be at greater risk for an unplanned outage.

What is an SAP HANA on Power Systems health check?


A health check involves inspecting your system in several key areas, such as:

◈ Ensuring up-to-date software levels
◈ Examining the adequacy of hardware resources
◈ Looking at system tuning based on your current workload pattern
◈ Doing checks for best practices in virtualization
◈ Checking the feasibility of adopting newly released features in the Power server/OS/HANA

Your HANA configuration, error logs, high availability and backup policies are also validated.
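To give a flavor of the automated portion of such a check, here is a standard-library-only Python sketch that records a few basic inventory items (it assumes a Linux host; the real checks, such as HWCCT runs, firmware levels and HA and backup validation, go well beyond this):

import platform
import shutil

def collect_basics():
    """Gather a few of the inventory items a health check would record."""
    report = {
        "kernel": platform.release(),
        "arch": platform.machine(),   # 'ppc64le' is expected for HANA on Power
    }
    with open("/etc/os-release") as f:
        for line in f:
            if line.startswith("PRETTY_NAME="):
                report["distro"] = line.split("=", 1)[1].strip().strip('"')
    with open("/proc/meminfo") as f:
        meminfo = dict(line.split(":", 1) for line in f)
    report["mem_total_gib"] = round(int(meminfo["MemTotal"].split()[0]) / 1024 ** 2, 1)
    usage = shutil.disk_usage("/")
    report["rootfs_used_pct"] = round(usage.used / usage.total * 100, 1)
    return report

for key, value in collect_basics().items():
    print(f"{key:16}: {value}")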

Minimum checks that need to be carried out as part of an SAP HANA on Power Systems health check


The following chart shows a list of the minimum checks that must be covered as a part of an SAP HANA on Power Systems health check. This is only a high-level list; additional checks based on your results may be needed.

[Chart: minimum SAP HANA on Power Systems health check items]

Benefits of an SAP HANA system health check


An SAP HANA on Power Systems health check offers numerous benefits:

◈ Helps you identify any single point of failure and fix it

◈ Helps prepare you for handling unexpected downtime

◈ Demonstrates current hardware utilization and growth trends, helping you plan for future growth or free up a portion of your hardware for other workloads, saving budget that would otherwise go to additional purchases

◈ Helps you get better support by staying up-to-date with software versions

◈ Helps you better manage your IT budget by knowing growth trends

◈ Helps you identify new technologies that could be applied to your environment

◈ Improves productivity, improves your confidence and may reduce the cost of acquiring additional hardware for new workloads

Who can perform an SAP HANA on Power Systems health check?


An SAP HANA on Power Systems health check can be done by anyone who has good knowledge of IBM Power Systems, Linux and HANA. You may do it yourself or engage a team of experienced consultants like IBM Systems Lab Services. Lab Services helps organizations build and optimize SAP HANA on Power Systems solutions with a tailored data center infrastructure strategy, and health checks are among the many services we offer to help clients optimize their SAP HANA environments.

IBM X-Force Red Security Team takes on security challenges with the help of IBM Cloud

Unless you live under a rock, you’ve likely seen a recent top news headline with the words “security breach” somewhere in there. This is not the type of press companies want to be recognized for, and it is even worse for the millions of customers who are left out in the cold when their private information is exposed without authorization.


High-profile security breaches are becoming more common every year as cybercriminals become more sophisticated at finding and exploiting new security vulnerabilities to access protected data. These hackers aren’t planning to ease up on businesses anytime soon, either. With that in mind, the best course of action for organizations is to rapidly test, identify and fix where they are most vulnerable.

Security penetration testing to better manage vulnerable data


IBM recognized this need two years ago when it launched IBM X-Force Red, a team of security professionals and ethical hackers whose goal is to help businesses discover vulnerabilities in their computer networks, hardware and software applications before cybercriminals find those same vulnerable areas. The security testing expertise that IBM X-Force Red brings to the table spans multiple industries including healthcare, financial services, retail, manufacturing, government and the public sector.

Although there are unique security vulnerabilities in each industry, password security issues remain among the top areas of concern for every enterprise, no matter the industry. It only takes one weak password for a cybercriminal to breach an entire business. The need for greater password security has given rise to an entire segment of “password auditing” solutions that test for password weaknesses within an enterprise, particularly among website applications.

Password auditing, or password cracking, is the act of running candidate plaintexts through a hashing algorithm and comparing the resulting hashes against the stored hashes. When a match occurs, the hash is considered cracked, and once the hash is cracked, so is the password. This assumes nothing has been added to the password before hashing (referred to as a password “salt”), which is done to slow attackers down.
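Here is a minimal Python sketch of that matching step (the password, candidate list and use of SHA-256 are purely illustrative; production password storage uses salted, deliberately slow algorithms):

import hashlib
import os

password = "Spring2018!"

# An unsalted hash: the same input always produces the same output.
stored_hash = hashlib.sha256(password.encode()).hexdigest()

# The cracking loop: hash candidate plaintexts until one matches the stored hash.
for guess in ["123456", "P@ssw0rd", "Spring2018!", "letmein"]:
    if hashlib.sha256(guess.encode()).hexdigest() == stored_hash:
        print("cracked:", guess)

# With a random salt, precomputed tables of unsalted hashes are useless and the
# attacker has to redo the guessing work for every salt value.
salt = os.urandom(16)
salted_hash = hashlib.sha256(salt + password.encode()).hexdigest()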

Hacking anything to secure everything


In the world of password auditing, there is little that the IBM X-Force Red team doesn’t know. The team put this on full display recently at the Black Hat Security Conference in Las Vegas, Nevada. However, as the team prepared for the security event, members realized that, to rapidly test all aspects of an organization’s password security vulnerabilities, they would need a strong compute foundation to run their tests at scale.

Dustin Heywood, also known as EvilMog, from the IBM X-Force Red team and a member of Team Hashcat — a group of password security researchers and the contest team for the open source Hashcat project — led both teams, first in a demo of their “Cracken” password cracking application, then in the Black Hat “Crack me if you can” password cracking contest. He decided to turn to IBM Cloud infrastructure as a service (IaaS) for high-performance compute and scalability. In preparation for both the demo and the contest, Heywood and his team provisioned and tested a complex, 32-server virtual server environment with 64 NVIDIA Tesla P100 graphics processing units (GPUs), all in under a day. In the words of one Hashcat team member, “it was a little like bringing a nuke to a gunfight.”

Big results


The IBM Cloud environment provided a fivefold increase in capacity over the existing IBM X-Force Red 16-server GPU-based infrastructure to fuel the “Cracken” password cracking application and demonstrate real-time, eight-character password cracking in an average of two to three minutes, a feat that would normally take the X-Force Red GPU-based infrastructure alone about eight to twelve hours per password.

The IBM X-Force Red team didn’t stop there. With the DEF CON 26 conference coming hot on the heels of Black Hat, EvilMog used the same combined IBM Cloud and Cracken infrastructure to tackle the “Crack Me If You Can” contest, which is essentially the World Series of password cracking contests. Over a two-day period, Team Hashcat cracked more passwords than any other team.

The team’s performance shows that the IBM Cloud is an ideal environment to consider for quickly running complex, compute-intensive applications.

Thursday, 19 July 2018

PowerAI for systems integrators

Business owners for enterprises of all sizes are struggling to find the next generation of solutions that will unlock the hidden patterns and value in their data. Many organizations are turning to artificial intelligence (AI), machine learning (ML) and deep learning (DL) to provide higher levels of value and increased accuracy from a broader range of data than ever before. They are looking to AI to provide the basis for the next generation of transformative business applications that span hundreds of use cases across a variety of industry verticals.

AI, ML and DL have become hot topics with global IT clients, driven by the confluence of next-generation ML and DL algorithms, new accelerated hardware, and more efficient tools to store, process and extract value from vast and diverse data sources, all of which help ensure high levels of AI accuracy. However, client AI initiatives are complex and often require specialized skills, hardware and software that are not readily available.

Trusted advisors such as systems integrators (SIs) are building the next generation of AI solutions for clients. SIs are being called upon to integrate best-of-breed components to accelerate AI projects, driving the need to rapidly ramp up their own internal skills, capabilities and thought leadership around the multiple components of AI solutions. In addition, clients rely heavily on trusted SIs to clarify and demonstrate which business problems can truly benefit from today’s AI solutions and which are ‘not quite there yet’. Taking a new AI project, with a broad suite of AI models, through the entire AI lifecycle of design, development, proof of concept, deployment and production, as well as integrating the new AI functionality into existing client transactional systems, is the ‘sweet spot’ for SIs.

As SIs ramp up the AI skills, experience and intellectual property associated with integrated and complex solutions, it becomes very important for them to leverage highly skilled partners such as IBM. IBM provides a broad range of industry-leading AI solutions, including both the software and the hardware infrastructure, deeply optimized for a complete production AI system. Partnering with IBM’s AI offering teams sets the stage for SIs to establish their AI leadership by enabling delivery of an entire suite of brand-new AI assets.

IBM’s new PowerAI Enterprise is a unique solution which makes DL and ML more accessible to clients. PowerAI Enterprise is a complete environment for “data science as a service”, enabling SIs to accelerate the build of more accurate AI applications for clients. It also accelerates the performance of those applications when running in production. PowerAI Enterprise combines popular open source DL frameworks and efficient AI development tools, and enables AI applications to run on accelerated IBM Power Systems™ servers.
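As a small illustration, the deep learning code itself does not change when it moves to Power; this sketch uses the TensorFlow 1.x API of the era that shipped with PowerAI and simply confirms a GPU is visible before running a matrix multiply on it:

import tensorflow as tf  # the same code runs on x86 or on PowerAI on Power

print("GPU available:", tf.test.is_gpu_available())

with tf.device("/GPU:0"):
    a = tf.random_normal([2048, 2048])
    b = tf.random_normal([2048, 2048])
    c = tf.matmul(a, b)

with tf.Session() as sess:
    print("result shape:", sess.run(c).shape)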

PowerAI’s focus is to provide the comprehensive hardware and software infrastructure required to support new and demanding client AI applications, what IBM calls ‘The AI Infrastructure Stack’. This stack spans components from servers all the way to ML/DL software. Since modern AI methods usually rely on GPU acceleration, IBM has built a server optimized for AI, the IBM Power Systems AC922 with POWER9 architecture, which has a high-speed connection between the POWER9 CPUs and the NVIDIA GPUs.

PowerAI running on Power Systems, combined with NVDIA GPUs and NVLink technology, enables SIs to rapidly deploy a fully optimized AI platform that delivers blazing performance, proven dependability and resilience, and it is fully supported by IBM. SIs can easily add their own unique incremental value on top of PowerAI, resulting in their own competitive advantage in the rapidly growing AI systems integrator marketplace.