Monday, 30 March 2020

A tale of two security models

The approach of security as a choice


In your journey to cloud, there are many approaches to securing your workloads. One approach is common across a variety of cloud vendors: you choose how your workloads are secured.

In a world where a breach of personal information can cost your enterprise millions of dollars in fines arising from regulatory non-compliance, on top of the potential costs derived from loss of consumer confidence, inadequate security poses threats that cannot be taken lightly. The average cost of a data breach is $3.92 million, not to mention the impact on your reputation.

Data breaches, in particular massively impactful ones, are becoming more and more common. The largest factor in reducing the cost of a data breach is the use of encryption, which can render stolen data useless.

Don’t get me wrong, choice and flexibility are wonderful things to have when selecting a cloud provider. An approach that does not enforce the encryption of all data may work for non-sensitive, less critical workloads like DevTest. For workloads with sensitive data, there must be a better way to protect your business.

The approach of security at the core


Companies want to stay out of the news: they do not want publicized breaches, and they can help avoid them by applying the strongest security possible to protect business and customer information. Designed with secure engineering best practices in mind, the IBM Cloud™ Hyper Protect platform has layered security controls across network and infrastructure, which are designed to help you protect your data from both internal and external threats.

Rather than leaving you to decide how content is secured, IBM Cloud™ Hyper Protect is engineered to enforce encryption by default for all data in transit and at rest. You do not have to decide whether you want your data encrypted and then pay for and add on additional services to do so.

You do not have to choose to turn security on; it is always there, providing 24x7x365 protection.

IBM Cloud™ Hyper Protect Services, in conjunction with strong operational security practices, are designed to help you secure your most valuable assets. IBM Cloud™ Hyper Protect Services are built upon a foundation of security and trust based on over 50 years of experience in enterprise computing. With Hyper Protect, clients choose who is entitled to view their content; everyone else is blocked from access, giving clients full sovereignty over their data.

How does it work?


IBM Cloud™ Hyper Protect Services are built on top of the IBM Secure Service Container. The IBM Secure Service Container (SSC) is a specialized technology for installing and executing specific firmware or software appliances. These appliances host cloud workloads on IBM LinuxONE in the IBM Cloud™. Secure Service Container is designed to deliver:

◉ Tamper protection during installation time

◉ Restricted administrator access to help prevent the misuse of privileged user credentials

◉ Automatic encryption of data both in flight and at rest

In addition, Hyper Protect offerings are engineered to radically reduce the likelihood of internal breach by ensuring that:

◉ There is no system administrator access. This removes the potential for all sorts of intentional or unintentional data leakage

◉ Memory access from outside the Secure Service Container is disabled

◉ Storage volumes are encrypted

◉ Debug data (dumps) are encrypted

And, building on top of those, the external attack surface can also be radically reduced by:

◉ Preventing OS-level access to the services (whether for IBM or external parties); there is no shell access to the services

◉ Exposing only secured remote APIs, which helps prevent attackers from “fishing” around for vulnerabilities in the underlying infrastructure, as the sketch below illustrates
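
To make that last point concrete, here is a minimal sketch, in Python, of what calling such a secured remote API over mutually authenticated TLS might look like from a client application. The hostname, endpoint path and certificate files are hypothetical placeholders, not actual Hyper Protect endpoints.

import requests

# Hypothetical endpoint of a Hyper Protect service; with no shell or OS-level access,
# all interaction happens through authenticated, encrypted API calls like this one.
API_URL = "https://hyperprotect.example.com/api/v1/keys"

response = requests.get(
    API_URL,
    cert=("client.pem", "client-key.pem"),  # client certificate and key for mutual TLS
    verify="ca-bundle.pem",                 # verify the server against a trusted CA bundle
    timeout=10,
)
response.raise_for_status()
print(response.json())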

What’s the key question?


“Do I need to secure my data?”

If the answer is ‘yes,’ then a suggested approach is to choose a platform that starts with security at its core by implementing encryption by default, one that removes the common attack vectors available to cloud and application administrators as well as to external malicious actors.

Combined with robust secure engineering and operational strategies that incorporate static and dynamic code analysis, monitoring, threat intelligence and other security-related best practices, the IBM Cloud™ Hyper Protect portfolio of services can help advance your cloud and cloud security strategies to the next level.

◉ Off the shelf, IBM Cloud™ Hyper Protect DBaaS provides data confidentiality for sensitive data stored in cloud native databases such as Postgres and MongoDB (see the connection sketch after this list)

◉ IBM Cloud™ Hyper Protect Crypto Services, built on the industry-leading FIPS 140-2 Level 4 certified Hardware Security Module (HSM), is designed to provide exclusive control of your encryption keys and the entire key hierarchy, including the HSM master key

◉ IBM Cloud™ Hyper Protect Virtual Servers, in beta as of October 2019, is designed for industry-leading workload protection
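
As promised above, here is a minimal sketch of how an application might consume a Hyper Protect DBaaS for PostgreSQL instance over an encrypted, certificate-verified connection. The hostname, port, credentials and CA bundle are hypothetical placeholders, not real service values.

import psycopg2

# Hypothetical connection details; TLS is enforced by verifying the server certificate
# against the service's CA bundle (sslmode="verify-full").
conn = psycopg2.connect(
    host="dbaas.example.com",
    port=31234,
    dbname="claims",
    user="app_user",
    password="app_password",
    sslmode="verify-full",
    sslrootcert="dbaas-ca.pem",
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())

conn.close()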

Source: ibm.com

Friday, 27 March 2020

Preparing for Instant Payments in a Digital Economy

As the commercial banking landscape transforms rapidly to adapt to an increasingly digital world, it’s clear that client-demand-led schemes like faster payments are challenging the traditional paradigms we’ve held sacred for decades. This is resulting in direct pressure on financial institutions to adapt faster and place a premium on innovation now – more than ever before.

At Sibos in Geneva a couple of weeks back, it was my pleasure to moderate a discussion on the topic of “preparing for instant payments in a digital economy” with esteemed panelists. The two different viewpoints, one from a U.S. central bank and the other from a European commercial bank, made for a very interesting and engaging discussion.

My key takeaways were as follows:

◉ Demand for new payments schemes and services in an omnichannel world is driving the need to be more efficient, faster, and nimble, to serve clients effectively. The true business case for faster/immediate/instant payments is serving customer needs and providing them the option to send and receive payments almost instantaneously. But this also opens up the avenue for innovative players to provide value-added services on the back of this scheme.

◉ Central banking authorities like the U.S. Federal Reserve Bank are enabling their member banks with guidelines by acting as an advocate, educator, and, more importantly, a change agent to influence payments system policy and direction. In Europe, directives like PSD2 and the Pan-European (SEPA) Instant Payments scheme are introducing new dynamics that are driving intermediation and the emergence of fintechs.

◉ Security and risk reduction are very high on the priority list for participants offering Faster/Instant Payments. These schemes highlight the need to provide this service together with real-time counter fraud, financial crime detection, and vastly improved and simplified regulatory compliance capabilities.

◉ As banks invest in their modernization, there’s a trend toward reliance on common industry standards like ISO 20022, to ensure the ability to support multiple requirements and proceed with progressive renovation of their legacy infrastructure.

◉ Being a truly digital bank is real. Financial institutions like DNB in Norway are leading the way. In a country with a 90% digital customer base, where cash transactions account for approximately 9% of retail (C2B) spending, banks support clients who opt for an electronic channel for self-service, resulting in less than 3% reliance on call centers or traditional branch banking.

Strong alliances between industry participants and their trusted partners will help make this journey to a fully digital (and very soon…) cognitive bank, easier and increasingly profitable. Becoming digital will be the new minimum, as banks apply machine learning and other related capabilities to better understand their business and, more importantly, their clients. These are truly interesting times. Banks are making the leap to becoming a digital bank – and, on the horizon, transforming to a cognitive bank.

Wednesday, 25 March 2020

French insurer teams with IBM Services to develop fraud detection solution

Auto insurance fraud costs companies billions of dollars every year. Those losses trickle down to policyholders who absorb some of that risk in policy rate increases.

Thélem assurances, a French property and casualty insurer whose motto is “Thelem innovates for you”, has launched an artificial intelligence program, prioritizing a fraud detection use case as its initial project.

Fraud detection lends itself well to machine learning modeling and is a project that would allow us to enter artificial intelligence starting with the analytical field that we have prioritized. A successful fraud detection project would deliver immediate, significant financial gains for the company.

Tapping into IBM Services


We carried out a few preliminary tests and experiments internally with our data scientists and data engineers but encountered problems with tools and with the environment. Therefore, in order to go a step further, we decided two things. First, we needed to find a solution that would make it possible for us to free ourselves from storage and performance constraints. Second, to increase our expertise we realized we needed to engage experts in the field.

During the course of our research, we met with various representatives from IBM who showed us the advanced analytics capabilities of IBM Watson Studio and IBM Cloud. We discovered that the value proposition they presented corresponded exactly to our needs.

At the beginning of the collaboration, an IBM Global Business Services (GBS) team met with different Thélem assurances teams including marketing and claims management to identify use cases for artificial intelligence. Car insurance is the area in which we experienced the majority of our cases of fraud, so we chose to begin there.

In addition to IBM Watson Studio, which is used as the development environment for analytical models and cases, additional solutions we employed include IBM Cloud with Secure Gateway Service to transfer data from Thélem to the IBM core; IBM Cloud Object Storage, which hosts data stored in the cloud; and IBM Watson Machine Learning, used for deploying IT scripts.
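
For a sense of what the analytical models developed in such an environment look like, below is a minimal, generic sketch of a fraud-classification prototype in Python using scikit-learn. The data file, feature names and algorithm choice are illustrative assumptions, not the actual Thélem model.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical, anonymized claims extract; "is_fraud" is the label assigned by investigators.
claims = pd.read_csv("claims_sample.csv")
features = ["claim_amount", "days_since_policy_start", "prior_claims", "garage_distance_km"]

X_train, X_test, y_train, y_test = train_test_split(
    claims[features], claims["is_fraud"],
    test_size=0.2, random_state=42, stratify=claims["is_fraud"],
)

# A simple baseline classifier; a real project would iterate on features and algorithms.
model = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=42)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))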

We also had to take into account GDPR legislation and regulations in Europe. Because of these regulations, we paid special attention to minimizing the amount of personal data uploaded and to securing the personal data we received throughout the initiatives implemented.

IBM GBS worked with us on the architecture definition and the addition of data in a secure manner to the cloud. Then, we worked together to define the method to implement use cases and followed up with training using data science models.

Discovering five times more potential cases of fraud


We realized an advantage right from the start: flexibility. The flexibility in launching the fraud detection solution, without having to concern ourselves with storage capacity, machine performance or services used, was phenomenal.

The IBM solution also facilitates joint work among the different data scientists working on initiatives. More than one of them can work on a single initiative, sharing their work and the progress of their algorithms.

More concrete, tangible advantages include the fact that we’ve increased fivefold the relevance of cases identified as potentially fraudulent. Additionally, IBM GBS helped us develop a methodology that we can use again on our own. We now have a tool, the methodology and data modeling know-how. This makes it possible for us to enrich our models over time, make better progress and ultimately increase the relevance rate, which should allow us to save an additional several hundred thousand euros every year over the next few years.

Going forward, we plan to begin exploring additional Watson tools such as testing aspects of image recognition or of chatbot services.

Source: ibm.com

Tuesday, 24 March 2020

Jump-start your AI adoption with AutoML on IBM Power Systems

Are you eager to adopt AI for your business but bogged down by the complexity of machine learning algorithms, the plethora of software technologies and the dearth of personnel with specialized skills? If so, you’re not alone! Even seasoned technologists are overwhelmed by this vast and fragmented AI ecosystem.

Automatic machine learning (AutoML) refers to the methodology of automatically building machine learning pipelines with minimal engineering input, enabling non-specialists to build AI solutions from raw data. AutoML and the associated software offer a promising direction to reduce the complexity in AI development, at least for regular applications, and boost the adoption of AI in enterprises. Read on to learn more about AutoML and why IBM Power Systems is ideal for deploying AutoML frameworks.

The automatic machine learning paradigm


Most often, complete AI solutions are built by teams with complementary expertise in statistics, machine learning, programming, data engineering and the vertical domain. The teams iterate through a complex process involving data preprocessing, feature engineering, machine learning algorithm selection, hyperparameter optimization and validation. This process is typically encapsulated in the form of an end-to-end machine learning pipeline, as shown in figure 1. AutoML aims to automate one or more of these pipeline components based upon the data characteristics and pre-built features (meta learning), and by using esoteric machine learning methods such as Bayesian optimization, genetic programming and reinforcement learning (AI for AI).

Figure 1. Typical end-to-end machine learning pipeline built by AutoML

There is growing interest in AutoML in industry and academia, as evidenced by the recent surge of commercial software (such as IBM AutoAI, H2O Driverless AI) and open source implementations (such as featuretools, auto-sklearn, auto-WEKA, TPOT, AutoKeras and auto-PyTorch). These frameworks functionally differ in the length of the machine learning pipeline and the quantitative methods employed to automate each component. They are capable of automatically building end-to-end machine learning pipelines fitting classical machine learning algorithms as well as deep neural networks (IBM Visual Insights, for example) for industry-scale supervised learning problems.
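
To make the idea concrete, here is a minimal sketch using TPOT, one of the open source frameworks named above, on a stand-in dataset. The search budget is deliberately small; as discussed in the next section, the time allowed for the search strongly influences the quality of the resulting pipeline.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# TPOT uses genetic programming to search over preprocessing steps, algorithms and
# hyperparameters; generations, population size and time budget bound the search.
automl = TPOTClassifier(generations=5, population_size=20, max_time_mins=10,
                        random_state=42, verbosity=2)
automl.fit(X_train, y_train)

print("Held-out accuracy:", automl.score(X_test, y_test))
automl.export("best_pipeline.py")  # writes Python code for the winning pipeline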

AutoML is compute bound


Loosely speaking, AutoML is solving a giant optimization problem, treating feature engineering, data processing and machine learning algorithmic choices as yet other dimensions in the hyperparameter space. Benchmark studies have not been able to conclude that any one framework is superior to another in all dimensions. Thus, the accuracies that can be achieved using AutoML are sensitive to the compute time allocated for search and the underlying hardware capabilities.

It is no exaggeration to say that IBM Power Systems AC922 servers are the most suitable hardware infrastructure for AutoML frameworks. After all, they’re the building blocks for the world’s most powerful supercomputers, the US Department of Energy’s Summit and Sierra systems. The major open source AutoML frameworks and H2O Driverless AI can leverage the exclusive compute capabilities of Power Systems, as depicted in figure 2. The Anaconda distribution makes the installation of open source tools a breeze on Power hardware.

Figure 2. The AutoML tools take advantage of Power hardware

Human in the loop


Though AutoML tools can engineer the machine learning pipelines for routine machine learning tasks, that alone is insufficient for building enterprise-grade AI systems. The data cleaning and preprocessing capabilities of the current AutoML tools are quite limited. The wealth of knowledge brought by domain experts from years of experience is unparalleled in data processing and feature engineering. In addition, AutoML tools are prone to overfitting, making engineering intervention necessary.

IBM Systems Lab Services can help jump-start AI adoption in your organization through rapid prototyping and fail fast methodology. Our experienced consultants help you make the right choices of machine learning tools, work with your domain experts in designing features and train your engineers to quickly become data scientists.

Monday, 23 March 2020

IBM Connections 6 CR3 now available for download

IBM has released “Cumulative Refresh 3”, the latest update to IBM Connections 6. CR3 centers our efforts on streamlining the user experience and providing a more contemporary UI with features focused on usability, engagement, and maintenance.

This update helps end users find relevant content faster than before by remembering the last view used in Files and returning them to where they left off on the next visit. The new “Recently Visited” views available in Files and on the Community Catalog page show the files and communities a user looked at most recently. This makes it easier to find the things users are currently working on.

In addition to IBM Connections CR3, the mobile app for IBM Connections has also been updated with a number of end user focused improvements – look for it on the app stores.

Here is a list of the most important enhancements in the CR3 release:

Focus on your work


◉ Quickly return to the Communities and Files you viewed recently.

◉ Instantly filter recent content in Communities and Files.

◉ Pick up where you left off – Files will return you to the last view where you were working.

◉ Simplified navigation and a full-screen option for Files make it easier for both new and experienced users to find the content they are looking for faster.

Recently viewed Files and Communities with type-ahead; rich Communities on Mobile

Increase User Adoption


◉ Deploy “Visual Update 1” with IBM Connections Customizer for a beautiful, modern user experience.

◉ Mobile Connections users have a consistent experience with the web user interface.

◉ Upload images and files with a consistent user experience throughout Connections.

◉ New filters for Metrics provide additional insights into usage activity and community health, and Metrics now allows for the import of additional historic data.

◉ Use the new Highlights app (preview) to engage users with easier to design, fit-to-purpose Communities.

Easier to install and customize IBM Connections


◉ New quick filters and the universal search capability can now be powered by Elasticsearch instead of SOLR – install it once to simplify the environment.

◉ Deploy “Visual Update 1” to provide a clean, modern user interface for IBM Connections and as a basis for easy customization of colors, fonts, icons, and other attributes based on the new design language for IBM Connections.

Sunday, 22 March 2020

IBM Is Named a Leader in IDC’s Newest Commerce MarketScape

We are not going away, we are going strong!


There is a lot of concern in the market, much of it manufactured by our competitors, about the long-term viability of WebSphere Commerce. We, of course, have never doubted our path forward and with the help of our customers we are continuing to enhance the platform and position our clients for success in the future, whatever that future may look like. We have built a modern, flexible and agile platform that makes it easy to manage the commerce experience and adapt it to new technologies and market demands, without sacrificing customer experience, security or reliability.

In acknowledgement of this continued focus on our customers, we are excited to share that IBM has been named a Leader in the IDC MarketScape: Worldwide Retail Commerce Platform Providers 2018 report.

IBM was recognized as a leader for:

◉ Delivering a very broad and deep portfolio of capabilities for retail commerce platform services, and actively working on its expansion and innovation

◉ A rich partner ecosystem with 60 global partners validated as ready for IBM Commerce; IBM’s third-party business partners have built integrations from their solutions into IBM’s commerce platform

Speaking of our rich partner ecosystem, we have a few great examples of partners who have seen the power of IBM’s flexible, agile platform to create amazing experiences.

Zobrist Consulting Group was able to create a modern storefront that delivers a fantastic customer experience while making it fast and easy for any retailer to set up a powerful and engaging site.

“I applaud IBM’s work to rearchitect the Commerce platform for the cloud and its investment in providing microservices for all of its commerce capabilities. This is a bold and daring move, but this innovation will propel retailers into the future where commerce is everywhere, frictionless and smart,” said Teresa Zobrist, CEO of Zobrist Consulting Group. “We saw the opportunity with IBM microservices and built a next-generation SPA (Single Page Application) storefront, Mobiecom. It is blazing fast and validated for V7, V8, V9 and Digital Commerce! Customers can use Mobiecom as a starter store for any version of their choice,” suggested Zobrist.

Wunderman Thompson Commerce partnered with IBM for commerce technology solutions that have an API-first approach for the flexibility to create custom experiences, with the reliability of a tried and tested leader in the commerce space.

“IBM has long been a proponent of customer-first solutions so leading organizations can build engaging, differentiated experiences that express their brand,” said Patrick Munden, Chief Growth Officer, Wunderman Thompson Commerce. “By leveraging the flexible, API-first commerce platform we’ve been able to create unique experiences for our clients, so we’re very excited about V9’s modern architecture.”

Saturday, 21 March 2020

Announcing Red Hat Ansible Certified Content for IBM Z

We know that speed and agility are of utmost importance for our clients on their journey to cloud. But what we’ve also heard from them is that they’re looking for an enterprise-wide strategy to provide consistent automation across their hybrid environment — spanning their IT infrastructure, clouds and applications.

The reality of hybrid IT is here. Our clients are looking for solutions that leverage their investments in — and the strengths of — their existing IT infrastructure, clouds and applications in a seamless way. To deliver on this we’re focusing on three areas: developer experience, automation and operations that bring value to our clients no matter where they are on the hybrid IT continuum.

To help make this a reality, today IBM announced the availability of Red Hat Ansible Certified Content for IBM Z, enabling Ansible users to automate IBM Z applications and IT infrastructure. The Certified Content will be available in Automation Hub, with an upstream open source version offered on Ansible Galaxy. This means that no matter what mix of infrastructure our clients are working with, IBM is bringing automation for IBM Z into the fold to help you manage across your hybrid environment through a single control panel.

Ansible functionality for z/OS will empower IBM Z clients to simplify configuration and access of resources, leverage existing automation and streamline automation of operations using the same technology stack that they can use across their entire enterprise. Delivered as a fully supported enterprise-grade solution with Content Collections, Red Hat Ansible Certified Content for IBM Z provides easy-to-use automation building blocks that can accelerate the automation of z/OS and z/OS-based software. These initial core collections include connection plugins, action plugins, modules and a sample playbook to automate tasks for z/OS such as creating data sets, retrieving job output and submitting jobs.
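
As a rough illustration of how this automation can be driven programmatically, the sketch below uses the ansible_runner Python library to launch a playbook. The directory layout and playbook name are hypothetical; the playbook itself would reference modules from the certified z/OS collection (for example, to create data sets or submit jobs).

import ansible_runner

# Hypothetical project layout: ./project/zos_provision.yml is a playbook whose tasks
# call modules from the Red Hat Ansible Certified Content for IBM Z collection.
result = ansible_runner.run(
    private_data_dir=".",            # directory holding project/, inventory/ and env/
    playbook="zos_provision.yml",
)

print("Status:", result.status)      # for example, "successful" or "failed"
print("Return code:", result.rc)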

Over the last several months, we’ve made significant strides to improve the developer experience by bringing DevOps and industry-standard tools like Git and Jenkins to the platform. We’ve announced IBM Z Open Editor, IBM Developer for z/OS V14.2.1, and, of course, we are a founding member of Zowe™.  In February we announced the availability of Red Hat OpenShift on IBM Z, which enables developers to run, build, manage and modernize cloud native workloads on their choice of architecture.

And with today’s announcement, we’re taking the next step toward this commitment. Ansible allows developers and operations teams to break down traditional internal and historical technology silos to centralize automation — all while leveraging the performance, scale, control and security provided by IBM Z. This brings the best of both worlds together with a practical and more economical solution. We’re excited about this important step both for our clients and for our shared mission with Red Hat to provide a flexible, open and secured enterprise platform for mission-critical workloads.

Source: ibm.com

Friday, 20 March 2020

Register for the Think Digital Event Experience

Think 2020 — the most comprehensive, can’t-miss IT event of the year — has been reimagined as a next-generation, one-of-a-kind online event delivering the best of IBM to you, wherever you are. True to Think’s mission to help you reimagine the modern world and make a meaningful impact on the human experience, the Think Digital Event Experience will come to life on May 5-7 for a global audience of tens of thousands of thinkers like you, delivered on Watson Media, and you can register right now!

IBM is redefining the Think conference for the digital space, with an AI-enabled experience running on multiple channels, including video content only available during the live event (such as featured sessions, client and technical presentations, and innovation talks optimized for online consumption), individual and group chats, executive one-on-one meetings, “second screen” experiences, and access to labs and certifications (for an added fee).

From the comfort of your own computer screen you’ll be able to learn all about the latest advancements in open technologies — from hybrid multicloud to data and AI — and interact with the luminaries who are using this tech to transform our lives. Featured session speakers include IBM and Red Hat executives Ginni Rometty, Arvind Krishna, and Jim Whitehurst, along with media and cultural trailblazers such as Amal Clooney, Mayim Bialik, and Imogen Heap.

Plus, Think will start off with a personal touch on May 5th, thanks to multiple live meetups with local executives, featured session viewings, subject matter experts, and in-person meetings with IBM personnel.

With all of these exciting innovations and opportunities, Think 2020 is the digital destination where you can solve your most pressing issues, innovate with cutting-edge technology, and turn inspiration into reality. Here’s why you can’t miss this unprecedented event:

◉ Experience the future of tech: Build your credentials and gain hands-on experience in hundreds of business and technical sessions, featuring content that is only available during the live event.

◉ Interact with experts and luminaries: Solve the most critical issues facing your organization and speak to the subject matter experts who are leading the way.

◉ Network with the world’s best: Connect with industry leaders and peers from around the world in one-on-one meetings and group discussions.

Source: ibm.com

Thursday, 19 March 2020

Top IBM Power Systems myths: “IBM i is no longer relevant in today’s marketplace”

It’s not surprising that there’s a perception in the marketplace that the IBM i operating system on IBM Power Systems is a legacy platform that’s no longer relevant. How could a technology like IBM i, introduced more than 30 years ago, still be relevant today, when cloud, AI, blockchain and cognitive dominate the conversation?

There’s a lot of evidence to counter this myth though, from the myriad clients who use it to the latest third-party market research on its affordability. Let’s take a look.

IBM i is not a “legacy platform”


Let’s dispel this part of the myth first. I’ve heard many stories that characterize IBM i as a legacy platform, but they overlook the fact that numerous companies around the world have significant investments in IBM i solutions and rely on it every day to run their core business applications.

These companies range from small to large businesses and government agencies.

Consider the following IBM i stories:

Stonetales Properties migrated applications from an x86 Windows environment, upgraded older Power Systems and centralized its real estate portfolio management system on IBM i in a POWER9 cloud.

Brain Staff Consultants built a state-of-the-art AI chatbot in the cloud to respond immediately to student requests.

IRIS Financial Services, a managed cloud service provider, introduced an archiving service to help financial institutions streamline compliance for legal archiving of documents.

These cloud and AI solutions aren’t legacy applications; they take advantage of the latest technologies and integrate them into an IBM i on Power Systems environment, known for its performance, reliability, scalability and security.

What is relevant to clients in today’s marketplace?


Now let’s talk about relevance. Cloud is relevant to every company today, as are AI and machine learning, open source software and application development environments. Performance, reliability, scalability and security also top the list of priority targets, and are all fundamental strengths of IBM i on Power Systems.

Consider how IBM i integrates cloud technology into its strategy. There are many cloud-based options available for IBM i clients and more on the way. IBM Cloud, Google Cloud, SkyTap and others understood that integrating cloud capabilities into a rock-solid platform that is reliable, secure and high performance is a very smart investment, and that’s why IBM i on Power Systems is available on their clouds today.

Likewise, IBM i is relevant to today’s developers, offering top open source programming tools like Git, Node.js, Python, Apache, Angular and Ruby. It offers enhanced RPG and COBOL environments and augments them with languages like SQL, Java 8, .NET, PHP, C and C++ to name a few. Then, tie it all together with Rational Developer for i, a world-class software development environment. Complementing this very strong development environment is an ecosystem of independent software vendor (ISV) solutions that number in the thousands.

What about cost?


Cost is also relevant to clients’ buying decisions, but it’s usually not the top consideration when choosing a system. That said, IBM i, running on Power Systems, is subject to the same myth that AIX on IBM Power Systems has long faced: that compared to comparable x86 systems, it’s too expensive. While there are scenarios where that may be true, what surprises many clients is that IBM i usually provides a lower total cost of ownership (TCO) and sometimes a lower total cost of acquisition (TCA) as well.

In fact, the 2020 IBM i Marketplace Survey, conducted by HelpSystems, reported that 90 percent of the clients surveyed believe IBM i provides a better ROI than other platforms. Given that most of these companies have environments with platforms from other vendors, they’re in a much better position to assess the TCO and ROI of other options.

That same study highlighted another characteristic of IBM i on Power Systems, one it is known for throughout the industry, and that’s its ease of use and low system administration costs. Most of these environments only require one or two system administrators, and nearly 50 percent of the clients surveyed run in an unattended mode.

One more client example


Carhartt is a good story to close with. One of America’s leading workwear brands, Carhartt needed to grow its international presence but questioned how to best scale its sales capacity and keep operational costs as low as possible. Teaming with IBM Business Partner Mainline, it upgraded and centralized its environment on IBM i on Power Systems E980, reduced its data center footprint by 70 percent and saved an estimated $1.1 million based on reduced operational costs.

It’s especially impressive when you consider that Carhartt manages this environment with a team of just three people.

Source: ibm.com

Wednesday, 18 March 2020

Forrester study shows how on-premises investments support hybrid multicloud strategy

When hybrid cloud first emerged, it offered a new way for enterprises and organizations to use cloud technology in optimizing workloads and improving customer experience. Now it has matured into a complex ecosystem covering on-premises, private cloud, and public cloud components.

IBM and Forrester recently teamed up to investigate the ways organizations develop their hybrid multicloud implementations and strategies. The published research unveiled some interesting findings. With the expectation that 50% of critical workloads will run on premises or in an internal private cloud in the next two years, the survey identified how engaging in a hybrid strategy can be important for overall business health. However, the study also found that knowing where to invest in existing infrastructure and understanding how to modernize strategically is essential if IT decision makers are to navigate the many choices available to them.

If the kind of technical problem these decision makers face seems abstract, consider a familiar one: stepping out the door on your way to work and hitting the rush-hour slowdown. Imagine if residents tried to solve the traffic problem by buying faster cars, but instead of widening lanes on the highway to support faster-moving vehicles, the city installed a new bus lane. Would traffic improve at all? Probably not, because transit infrastructure is a combination of systems that solve different pieces of the overall problem in helping people get to where they need to go. A new bus lane solves a part of the traffic problem by dedicating resources for bus use, but it doesn’t make the road itself better suited for faster cars. Hybrid multicloud strategies apply a similar concept to IT infrastructure by leveraging a mixture of new and existing technologies to cover multiple business use cases and workload needs.

There are several components to consider in building a successful hybrid multicloud strategy that achieves performance, security, and resilience. Enterprise leaders and IT decision makers must balance their investment in existing infrastructure against the costs of infrastructure modernization. Forrester found that 90% of IT decision makers surveyed agree that on-premises infrastructure is part of their organization’s hybrid cloud strategies. The study reports that when IT organizations prioritize other IT initiatives over infrastructure refreshes, they leave themselves exposed to security risks.

Much like our traffic example, when enterprises invest in modernizing just a piece of their system and choose to delay refreshes on infrastructure like their on-premises systems, it can potentially increase security and compliance risks. IT decision makers surveyed reported that there has been reluctance to invest in existing infrastructure, despite the risk of losing the ability to conduct business due to an outage. A robust public cloud investment might provide an excellent mobile experience for customers, but with a constrained back-end system full of security vulnerabilities, enterprises might not advance their security strategy.

The Forrester study highlights a number of other important points, but the top-of-mind concerns we hear from the clients surveyed center on security and data privacy compliance requirements, in addition to problems with performance.

◉ 40% of respondents find that public cloud does not meet their security needs

◉ 43% of respondents cite an inability to meet increasing customer and employee performance expectations due to delayed infrastructure refreshes

◉ 39% of respondents feel a loss of competitive edge as an IT organization due to delayed infrastructure refreshes

◉ 75% of IT decision makers surveyed have received pushback for advocating strategies other than cloud environments

This study shares how IT decision makers assess the strengths and differences between public and private cloud implementations, so that enterprises can optimize the benefits that come from their public cloud, private cloud and on-premises resources. IBM Systems has the ability to support on-premises and hybrid multicloud infrastructure modernization. We know data strategy is key because it helps companies address both their compliance and security objectives and is a factor for deciding where to place workloads. We learned this from our experiences supporting clients as they build a hybrid multicloud strategy that addresses the modernization of its components.

Tuesday, 17 March 2020

To improve customer experience, agents need a new best friend: AI

Customer service representatives (CSRs) are the face of your brand; skilled and empowered reps are essential to improve customer experience and enhance consumer loyalty. Whether customers walk into a retail store where brand reps help them compare products, make recommendations and deliver a personalized sales experience — or speak to a call center agent to resolve an issue or assist with online purchase journeys, the experience delivered by CSRs is fundamental to brand success.

The challenge? Customer reps are under massive pressure to deliver exceptional service on-demand — like ducks on the water, they’re calm on the surface but paddling like mad underneath. According to recent AdWeek data, 52 percent of CSRs feel their company isn’t doing enough to prevent burnout and 35 percent are considering leaving their jobs. In the age of big data, personalized experience and rising consumer expectations, agents need a new best friend to help keep them afloat: Artificial intelligence (AI).

The challenge of staying afloat


Being a CSR isn’t easy. Agent attrition is steep — ranging from 30 to 45 percent — and on-boarding new staff to fill the gaps often causes dips in customer satisfaction. So why the churn? For customer care agents, it’s easy to end up overwhelmed and underwater. Some of their top challenges include:

◉ Lacking tools: 44 percent of CSRs say they lack the right tools, the AdWeek report found; 34 percent of contact center decision-makers say they don’t have access to knowledge management solutions, making it difficult to find, sort and leverage customer data on-demand.

◉ Missing data: 34 percent of respondents to the AdWeek study point to a lack of pertinent customer data, in turn frustrating their efforts to deliver a personalized experience.

◉ Increasing pressure: Call volumes have increased 39 percent in the last 18 months, AdWeek reports — and 92 percent of customers will stop buying from brands after three (or fewer) poor interactions. This puts massive pressure on agents to deliver delightful service with high first call resolution (FCR) rates.

◉ Siloed systems: Data systems are often disparate and difficult to access, meaning CSRs need to rapidly cycle through multiple applications to research the issue — resulting in longer hold times for customers.

◉ Limited context: Context is everything. Cold-start conversations and evolving product lines make it difficult to gauge exactly what consumers want on first contact, leaving CSRs stuck in a loop even as customers get frustrated having to repeat their concerns. A well informed acknowledgement of the issue in the opening greeting from the agent can set a positive tone for the rest of the interaction.

How AI can help


Artificial intelligence tools offer a way to bridge the gap between customer expectations and agent abilities. By leveraging intelligent tools capable of collating disparate data sources, analyzing consumer calls for context and delivering on-demand assistance to agents, there’s potential for companies to improve customer satisfaction scores (CSAT) by 20 to 30 points, and eliminate up to 30 percent of call center costs by implementing AI tools to assist agents. For large organizations handling 100 million+ calls each year, this could mean savings into the billions over time.

No fear: why agents want AI


Fear of AI takeover is dwindling; Tech Republic notes that just 27 percent of call center staff worry that AI will eliminate their jobs. Instead, agents want AI to help them where it matters most: Eliminating redundant processes. Meanwhile, Tech Republic also reports that 70 percent of agents believe that effective automation of routine tasks would allow them to focus on higher-value work. Put simply, human agents and AI working in tandem is the CSR dream.

In their personal lives, CSRs are increasingly exposed to delightful AI-augmented experiences via their mobile phones, such as auto-complete, tailored social media content, digital assistants and contextual personalized search. CSRs expect this same AI augmentation at work to improve their productivity.

Improving AI customer care with human input


More than 50 percent of call center managers recognize the need for new technology, but almost 60 percent highlight ongoing IT issues. This paradox emerges full-force when companies look to implement AI; despite big potential, new deployments don’t always improve customer experience as predicted. The problem? Missing input from the experts — your customer service reps.

While it’s tempting to offload everything to AI, many agents have in-depth knowledge of specific products, existing service models and customer-facing processes. The result? They don’t want AI to attempt to automate the entire workflow end-to-end. What agents really want are tools capable of collecting key consumer data up front, automating some mundane simple tasks and offering help and recommendations to agents when appropriate. Put simply, this isn’t an all-or-nothing scenario: When AI and human agents work as a team, they can split individual tasks of the workflow.

Clients across industries have partnered with IBM to deploy five key approaches for effective AI-agent teaming. Stay tuned for the second piece in this AI series, to be published shortly.

Source: ibm.com

Monday, 16 March 2020

IBM’s comprehensive and innovative Digital Experience Platform rises above

Comprehensive. Innovative. These are just two of the words the 2019 Gartner Magic Quadrant for Digital Experience Platforms report uses to describe IBM’s Digital Experience software.

IBM Digital Experience is again part of the Leader quadrant for the Gartner Digital Experience Platforms report, as it had been for many years in the former Gartner Horizontal Portals report.

Why this continued recognition from the analyst community?

Gartner says, “IBM’s Digital Experience platform, which is a combination of IBM Digital Experience Manager (which includes WebSphere Portal and Web Content Manager) and Watson Content Hub, is comprehensive and innovative due to the vendor’s focus on AI, cloud capabilities, the UX/UI and multichannel deployments.”

A great digital experience platform has to be both comprehensive and innovative. Left brain and right brain-friendly. IT and Marketing-friendly. Capable of powerful integration with the back-end systems you have and easy-to-use, so as not to kill the creative spark. The IBM Digital Experience platform stands apart in the ability to execute in both powerful and creative ways.

Here’s just some of what IBM Digital Experience delivers:
◉ Omnichannel experience. You can deliver the experiences your customers expect by easily offering engaging omnichannel experiences, responsive content, targeted offers and consistent branding.

◉ Intuitive, rich content management. Eliminate data silos with a centralized content strategy that allows for the easy identification and selection of content that can be used across multiple environments. This helps maximize the use of your content and ensures that you can deliver the right content every time.

◉ Analytics and insights. Help identify, analyze and act on user behavior patterns to improve the experience and drive loyalty.

◉ Personalized content delivery. Identify user needs through profiling and data prediction. Regardless of channel or device, the right digital experience can then be created in real time with the right content, offer and emotional appeal. This will lead to a better, faster and more efficient customer journey.

◉ Open platform with strong APIs and integration. Find a platform that helps aggregate content, data and applications quickly and easily. Through 2021, 85 percent of effort and cost for a DXP program will be spent on integrations with internal and external systems, including the DXP’s own built-in capabilities.

◉ Highly scalable. Make the most of new technology and the ability to handle more users and their digital interactions without reducing performance.

Saturday, 14 March 2020

Keep customer data secured, private and encrypted with the latest IBM Z enhancements

In my discussions with CIOs and CISOs in organizations around the globe, I’ve noticed a common concern. How can these organizations keep business and customer data private and protected as they transform for hybrid multicloud?

As the volume and complexity of data sharing grows, this concern is increasingly validated. According to the 2018 Third-Party Data Risk Study by Opus and Ponemon, 59 percent of companies experienced a data breach caused by a third party. As research company Enterprise Management Associates (EMA) notes in a recent paper sponsored by IBM, data sharing is a common part of business today and data theft is almost as common. And because applications span hybrid multicloud environments, much of your customer data may live in the public cloud and may be shared frequently with your external business partners.

Data security solutions to address these concerns exist, but many are siloed. As data moves from one place to another, that data must be independently protected at every stop along the way, resulting in protection that can be fragmented, rather than end-to-end.  Organizations moving more workloads to hybrid multicloud environments must ensure that data within these environments is protected effectively.

Extend data privacy and protection with Data Privacy Passports


One advantage IBM Z enjoys when it comes to security is that we own the z/OS operating system and software stack. This allows us to design security into the platform from the chip to the software stack, and continuously innovate and react to or anticipate customer needs by adding new capabilities. Recently we announced IBM Data Privacy Passports, a data privacy and security enforcement solution with off-platform access revocation. Now you can protect data and provide need-to-know access to data as it moves away from the system of record. Just as a passport allows you to travel beyond your home country’s borders with your government’s protection, Data Privacy Passports allows data to move beyond your data center while retaining the protection provided on IBM Z.

Securely build, deploy and manage mission-critical applications with IBM Hyper Protect Virtual Servers


Many technologies aim to protect applications in production, but the build phase may expose applications to vulnerabilities. IBM Hyper Protect Virtual Servers are designed to protect Linux® workloads on IBM Z and LinuxONE throughout the application lifecycle by combining several built-in capabilities from the hardware, firmware and operating system. You can build applications with integrity through a secure build Continuous Integration Continuous Delivery (CICD) pipeline flow. Through this CICD pipeline, developers can validate the code that is used to build their images, which helps reassure their users of the integrity level of their applications. After deploying, administrators can use RESTful APIs to manage the application infrastructure — without having access to those applications or their sensitive data.
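
To illustrate that management pattern (REST calls rather than shell access), here is a minimal Python sketch of an authenticated API request. The base URL, path and token are hypothetical placeholders; the actual endpoints and payloads are defined by the product’s API documentation.

import requests

# Hypothetical management endpoint and bearer token.
BASE_URL = "https://hpvs-management.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <access-token>", "Content-Type": "application/json"}

# List the virtual servers the caller is entitled to see; no shell access is involved.
response = requests.get(f"{BASE_URL}/virtual-servers", headers=HEADERS, timeout=30)
response.raise_for_status()

for server in response.json():
    print(server.get("name"), server.get("state"))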

Clients such as KORE Technologies and Phoenix Systems can address tampering and unauthorized access to data by isolating memory and restricting command-line access for administrators. “It’s crucial that we can push code out to our customer environments quickly and efficiently,” says Isabella Brom, COO at KORE Technologies. “With IBM Hyper Protect Virtual Servers we can do that, while protecting our clients’ digital assets from compromise either from outside or from within.”

Protect data in flight with IBM Fibre Channel Endpoint Security


With pervasive encryption, you can decouple data protection from data classification by encrypting data for an application or database without requiring costly application changes. The design of new IBM Fibre Channel Endpoint Security for IBM z15™ extends the value of pervasive encryption by protecting data flowing through the Storage Area Network (SAN) from IBM z15™ to IBM DS8900F or between Z platforms. This occurs independent of the operating system, file system, or access method in use, and can be used in combination with full disk encryption to ensure SAN data is protected both in-flight and at-rest.

Redact sensitive data with IBM Z Data Privacy for Diagnostics


Even though IBM Z has earned a reputation for being a stable platform, problems do occur, and diagnosing these problems often requires organizations to send diagnostic reports to IBM or other vendors. It is possible for sensitive data to be captured as part of the error reporting process, and there is no easy way for an organization to determine what data has been captured. This can pose a problem for compliance with data privacy regulations. With IBM Z Data Privacy for Diagnostics, a z/OS capability available on IBM z15™, you maintain control when working with third-party vendors by redacting data tagged as sensitive and creating a protected diagnostic dump that can be shared externally.
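
Conceptually, the redaction step works along the lines of the short Python sketch below, which simply replaces fields tagged as sensitive before a diagnostic document is shared. This is a generic illustration only, not the z/OS implementation; the field names and tags are hypothetical.

# Fields tagged as sensitive are redacted before a diagnostic record is shared externally.
SENSITIVE_TAGS = {"pii", "payment"}

def redact(record: dict, tags: dict) -> dict:
    """Return a copy of the record with sensitive fields replaced by a marker."""
    return {
        key: "***REDACTED***" if tags.get(key) in SENSITIVE_TAGS else value
        for key, value in record.items()
    }

diagnostic = {"account_number": "4111-1111-1111-1111", "error_code": "0C4", "module": "PAYRUN01"}
field_tags = {"account_number": "payment", "error_code": "diagnostic", "module": "diagnostic"}

print(redact(diagnostic, field_tags))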

Source: ibm.com

Friday, 13 March 2020

How to build your IT infrastructure for 5G-enabled edge computing

The global 5G-enabled edge computing market is growing rapidly, fueled by major technology changes that are disrupting the traditional networking industry. By 2025, this market is expected to exceed $50 billion.

5G has the potential to deliver a new generation of services, thanks to higher data rates and ultra-low latency. To take advantage of this potential, communications service providers are looking to move data processing and compute power closer to the end user — closer to the edge.

While the digital boom provides many opportunities for IT leaders, it comes with challenges: a growing number of smart devices, the need for faster processing and increased pressure on enterprise networks. To harness all this potential power, organizations need to modernize their networks to effectively consume these new services at the edge.

Some key trends are empowering this shift toward the 5G-enabled edge.

Shift to cloud-based virtualized networking


Ever-increasing amounts of data and video, mobile workload volatility, the rising number of connections and demand for lower latency are all driving organizations to develop transformative strategies. To embrace the 5G-enabled edge future, businesses must transition to networks with cloud-based virtualized networking.

Cloud-based networking allows for simplified management and expansion of network capabilities, which helps accelerate innovation, service fulfillment and operations.

Virtualization and cloudification are paving the way for an unprecedented level of cognitive automation, allowing networks to conduct intelligent, agile, responsive network and service operations.

The rise of NFV and SDN technologies


Network function virtualization (NFV) and software-defined networking technologies are expected to present the greatest opportunities in the 5G infrastructure market. As a key enabler of 5G infrastructure, NFV is the next logical step in network evolution. NFV replaces dedicated physical network appliances with virtualized network functions linked together across virtual machines.

Software-defined networking (SDN), on the other hand, is designed to make networks more flexible and agile. SDN redefines network architecture to support unique requirements of the 5G ecosystem. 5G SDN will play a crucial role in the design of 5G networks. In particular, it will provide an intelligent architecture for network programmability, as well as the creation of multiple network hierarchies.

AI and automation-enabled network management


The move toward virtualization and cloudification will require new levels of network automation, especially in a world where workloads are increasingly dynamic and many IoT applications require low latency. This would warrant a shift to AI-enabled network management platforms.

Network DevOps


The adoption of cloud-based virtualized networks has initiated a need for a continuous development and operations (DevOps) methodology. This is important in order to facilitate an automated, factory-based approach for end-to-end service lifecycle management.

Adopting a DevOps methodology for network operations is crucial to future network evolution because it provides an environment for continuously engineering (building, onboarding, testing and managing) new services — not to mention ongoing updates for existing services.

Network DevOps enables a lean, effective way to implement functionality and services that help improve customer experience and drive revenue by automating the services lifecycle and driving resiliency. As networks become increasingly software-based, this provides a straightforward way to create new services by assembling and chaining software components together. In fact, it allows for a DevOps-like model for network service development, network operations and end-to-end management.

There is no doubt that the explosion of the 5G-enabled edge market presents incredible opportunities for businesses to streamline IT operations and reap new value from existing resources. Implementing transformative network strategies and modernizing enterprise networks will help innovative business get the most from this opportunity and deliver new services at the edge.

Wednesday, 11 March 2020

How Red Hat OpenShift can support your hybrid multicloud environment

We know that enterprises have moved, or are moving, their workloads to the cloud. It’s rare, however, that a company wants to move all of its applications to the cloud — and even rarer that it would choose a single cloud provider to host those applications. The most common strategy is a mixture of on-premises and public cloud providers, or what is referred to as hybrid multicloud. Most organizations today are already some way along this complex journey.

In this blog post, I’ll explain how enterprise workloads are changing, what it means for businesses, and how Red Hat OpenShift can help organizations with hybrid multicloud.

First, a few definitions


A multicloud strategy encompasses cloud services from more than one cloud provider. As cloud adoption has increased, lines of business often find new ways to consume cloud capabilities to meet their specific demands for compliance, security, scalability or cost. That, along with clients’ reluctance to rely on a single provider, has led the majority of organizations to adopt a multicloud approach.

Hybrid cloud refers to a computing environment that combines private and public clouds within a single organization. Some businesses use a hybrid cloud solution to increase security, whereas others use it to match required performance to a particular application. It allows applications and data to move between these environments.

A hybrid multicloud combines these strategies, providing private and public cloud solutions from more than one provider.

A recent report by Flexera indicated that 84 percent of enterprises have a multicloud strategy, while the share planning a hybrid cloud strategy grew to 58 percent.

Our workloads are changing


Not only do we have numerous options on where we can deploy our applications; we now have multiple options on how we can deploy them.

Traditionally, we had stand-alone systems that ran a single operating system, used all the hardware owned by that system and typically hosted a single-function application. Hypervisor virtualization then brought us the ability to “share” hardware resources such as processing, memory and I/O. This meant we could pack more virtual machines onto a single server; however, each one still tended to have its own operating system and application. Containerization introduced the concept of operating system virtualization, which allows us to deploy and run applications in a “trimmed down” environment that consumes far fewer resources than a virtual machine, because a container includes only the runtime components required for its application.


Figure 1: Comparing virtual machines against containers

What does that mean to an enterprise customer?


Enterprise customers are looking for ways to transform some of their traditional applications into cloud-native applications, but at the same time realize there is a need to keep certain workloads on virtual machines, be it on-premises or in the cloud. This introduces the need to deploy and manage both containerized and virtual machine workloads on the most suited hybrid multicloud environment.

Not only that, but what about managing these containers after deployment? How do we ensure they started correctly? How do we monitor them to confirm they are performing as expected? How do we give users access to those applications, and how do we upgrade them? All of this gets more complex in environments running hundreds or thousands of individual containers.

This is where Red Hat OpenShift can help.

How Red Hat OpenShift can help


Red Hat OpenShift Container Platform is an enterprise-ready platform as a service built on Kubernetes. Its primary purpose is to provide users with a platform to build and scale containerized microservice applications across a hybrid cloud; we could write numerous blog posts on the features available within OpenShift alone. It can be installed on premises, in a public cloud or delivered through a managed service.

OpenShift Container Platform architecture is based around master hosts and worker nodes.

Master hosts contain the control plane components, such as the API server, cluster store (etcd), controller manager, HAProxy and the like, and should be deployed in a highly available configuration to avoid single points of failure. With OpenShift Container Platform v3.11, master hosts run RHEL or RHEL Atomic on IBM Power Systems or x86.

The worker nodes are managed by the master hosts and are responsible for running the application containers; application workloads are scheduled onto the worker nodes by the master hosts. With OCP v3.11, worker nodes run RHEL, RHEL Atomic or Linux on IBM Z, hosted on IBM Power Systems, x86 or IBM Z servers. Currently, you cannot mix node architectures within the same OCP cluster.
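
To make the master/worker split concrete, here is a minimal sketch, assuming the open-source kubernetes Python client is installed and a kubeconfig is available from an oc login session, that lists each node in the cluster along with the roles reported by its node-role.kubernetes.io labels. It is illustrative only, not an IBM or Red Hat tool.

# Minimal sketch: list cluster nodes and the roles reported by their labels.
# Assumes the kubernetes Python client is installed and a kubeconfig exists.
from kubernetes import client, config

config.load_kube_config()          # use the current kubeconfig / login context
core = client.CoreV1Api()

for node in core.list_node().items:
    labels = node.metadata.labels or {}
    # Kubernetes/OpenShift mark roles with node-role.kubernetes.io/<role> labels
    roles = [key.split("/", 1)[1] for key in labels
             if key.startswith("node-role.kubernetes.io/")]
    print(f"{node.metadata.name:30s} roles: {', '.join(roles) or 'worker'}")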

What does OpenShift Container Platform offer?


OpenShift provides a number of deployment methods, such as DeploymentConfigs, StatefulSets and DaemonSets. These allow us to define how our containerized application should be deployed, including key details such as the number of pod replicas, which images to use for those pods, scaling options, upgrade options, health checks, monitoring, service IP and routing information, the ports to listen on and so forth. We can then add that application template to the catalogue and allow self-service portal users to deploy it within their own project space.
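
As a rough illustration of what such a definition contains, the following sketch uses the open-source kubernetes Python client to declare a plain Kubernetes Deployment (rather than an OpenShift-specific DeploymentConfig) with two replicas, a container port and a readiness health check. The demo-project namespace, image name and labels are hypothetical.

# Minimal sketch: declare an application with 2 replicas, an image, a port
# and a readiness probe, then submit it to the cluster.
# The namespace, image and names below are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

container = client.V1Container(
    name="web",
    image="image-registry.example.com/demo/web:1.0",   # hypothetical image
    ports=[client.V1ContainerPort(container_port=8080)],
    readiness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        initial_delay_seconds=5,
        period_seconds=10,
    ),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web", labels={"app": "web"}),
    spec=client.V1DeploymentSpec(
        replicas=2,                                      # desired state: 2 pods
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="demo-project", body=deployment)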

We now have a declarative state describing what we want that application to look like, and OpenShift will monitor it to ensure it matches the defined state. Should it deviate from that desired state, OpenShift takes action to resolve the issue.

Let’s take an example where an application was defined as requiring two pods in its configuration template and for some reason one of those pods terminated. The OpenShift master would notice this deviation and take action (in this case, create a new pod), as shown in the following chart:


Figure 2: OpenShift recreating terminated pod
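
Continuing that example, here is a minimal sketch of how you could watch this reconciliation happen from outside the cluster, again using the kubernetes Python client and the hypothetical demo-project namespace and app=web label from the sketch above:

# Minimal sketch: watch pod events for the example application and print
# terminations and replacements as the platform reconciles the desired state.
from kubernetes import client, config, watch

config.load_kube_config()
core = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(core.list_namespaced_pod,
                      namespace="demo-project",
                      label_selector="app=web",
                      timeout_seconds=120):
    pod = event["object"]
    print(f"{event['type']:10s} {pod.metadata.name:40s} {pod.status.phase}")
    # Deleting one pod by hand should shortly be followed by an ADDED event
    # for a replacement pod, restoring the two replicas declared above.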

IBM Cloud Pak for Multicloud Management and Cloud Automation Manager


Not only can Red Hat OpenShift deliver containerized applications; it can also be extended to manage a multi-cluster environment and to drive more traditional IT environments such as on-premises or public cloud virtual machines. IBM Cloud Pak for Multicloud Management allows us to manage multiple Kubernetes clusters both on premises and in the cloud, giving us a single view of all of our clusters and enabling us to perform multi-cluster management tasks.


Figure 3: IBM Multicloud Manager overview

From the Multicloud Management Cloud Pak, we’re able to deploy IBM Cloud Automation Manager (CAM) within our OpenShift cluster. Cloud Automation Manager gives us the ability to provision virtual machine-based applications across multiple hybrid clouds by registering additional cloud providers, such as on-premises IBM PowerVC (based on OpenStack) and VMware vSphere environments, or public cloud providers such as IBM Cloud, Amazon EC2, Microsoft Azure, Google Cloud and the like.

Once we have added our cloud providers, we can configure Terraform-based templates that define how a VM should look in the target environment. These templates can be published as service offerings that appear in the OCP catalogue, as shown in the following graphic:


Figure 4: OpenShift catalog, including virtual machine options

Source: ibm.com

Monday, 9 March 2020

AI turns untapped data into tech support insights

IBM Tutorial and Material, IBM Guides, IBM Exam Prep, IBM Certification

As data continues to grow at staggering rates, enterprises are grappling with how to manage it. They’re sitting on virtual goldmines of tech support insights, yet deploying artificial intelligence and analytics effectively can be a struggle. In fact, over 70% of leaders surveyed in a NewVantage Partners study say adoption of AI is a challenge.

IBM AI and analytics leader Milena Arsova said computational power and analytics alone aren’t enough to improve operations. “You need to know where to start in the context of the problem and the business goal, otherwise it’s like buying a sports car in separate parts and having to assemble it yourself,” she said.

Becoming a data-centric organization requires a shift in mindset. Lay the foundation with combined data science and technology expertise and AI methods for data capture and analysis. Some industries such as banking are ahead of the curve and use AI for personalized support or automating repetitive tasks.

Here are four ways to apply AI and analytics to tech support to pinpoint insights, increase productivity and assess the cost impact.

1. Improve uptime and availability for service instances

Stay ahead of issues that can cause disruptions by using predictive maintenance and analytics to assess downtime patterns, predict failures and refine device management processes. Address routine issues that plague your infrastructure, so more time is spent on critical tech support operations.
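
As a rough, hypothetical illustration of the idea rather than a description of IBM’s tooling, a simple failure-prediction model could be trained on historical device telemetry. The file name and columns below are placeholders.

# Rough sketch: predict device failures from historical telemetry.
# 'support_telemetry.csv' and its column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("support_telemetry.csv")
features = ["device_age_months", "error_count_30d", "avg_temp_c", "firmware_lag_days"]

# Label: whether the device failed within the following 90 days
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["failed_within_90d"], test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))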

2. Sharpen root cause analysis

AI helps you find patterns that may not be obvious in volumes of complex data. For example, you might find that 40% of technical support’s time is spent resolving a specific problem. You can look at cause drivers and support models to assess if introducing a different support model such as remote support or a customer-facing smart chatbot will provide quicker resolution at a lower cost.
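
The kind of aggregation behind such a finding can be sketched in a few lines; the ticket export, column names and categories below are hypothetical placeholders.

# Minimal sketch: find which issue categories consume the most support time.
# 'tickets.csv' and its columns are hypothetical placeholders.
import pandas as pd

tickets = pd.read_csv("tickets.csv")  # columns: issue_category, minutes_to_resolve
time_share = (tickets.groupby("issue_category")["minutes_to_resolve"]
              .sum()
              .sort_values(ascending=False))
# Top five categories by share of total resolution time, in percent
print((time_share / time_share.sum() * 100).round(1).head(5))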

3. Tackle infrastructure inefficiencies

With AI, you can determine if a technical support device has a higher failure rate and make device refresh recommendations. This can have a direct, positive impact on the overall cost of support and risk exposure to potential failures.

4. Monitor inventory and supply chains

In terms of inventory management, use automation and virtualization to monitor infrastructure health and locate installed bases with expiring warranties or support contracts.

“Enterprise installed bases can range from a few hundred to over 90,000 devices in the case of large international banks, so the potential cost savings of reducing the risk of failure is significant,” said Arsova.

Similar efficiencies can be applied to supply-chain cycles. Analyze tech support devices to identify where there’s a high demand for specific parts. Then tie those devices to parts-distribution centers so you can supply the appropriate stock to meet demand.

Looking to the future

A recent IDC report estimates 41.6 billion IoT devices will generate 79.4 zettabytes of data in 2025. Given soaring data rates, understanding data structure and increasing analytics efficiency are quickly becoming necessities for smarter technical support operations. Digital platforms record every transaction and experience, and data insights can reveal opportunities and competitive advantages.

“AI and analytics need to be continuous, fast and flexible to affect business outcomes,” said Arsova.

Source: ibm.com

Saturday, 7 March 2020

EachforEqual: Advancing gender equality in business & technology

IBM Prep, IBM Guides, IBM Tutorial and Material, IBM Learning, IBM Certification

Every year on March 8, people around the world observe International Women’s Day — a celebration of the social, economic, cultural and political achievements of women, and a call to action to continue working toward greater gender equality in our world.

While some organizations have prioritized workforce diversity and inclusion for a long time, there’s been a growing push in recent years to address the now well-documented shortage of women in leadership and technical roles — two spaces that are vital to the work of IBM Systems.

Data reported by Catalyst on women in management indicates that in 2019, 29 percent of senior management roles globally were held by women. This number has been trending upward, but it remains far from parity with women’s overall representation in the workforce.

Gender representation in technology fields is a bit grimmer. According to recent research on the technology industry from McKinsey & Company, women are still underrepresented in computer sciences at every stage — from K–12 schooling to higher education to the workforce. Women hold 26 percent of roles in the computing workforce and just 11 percent of senior leadership roles in tech — with women of color experiencing the greatest barriers to entry in the technology sector.

We have a long way to go in our industry, and in society more generally, and awareness of the problem is only a starting point. Each year, International Women’s Day brings an opportunity to celebrate women’s accomplishments and renew our commitment to keep working toward gender equality. This year’s theme, #EachforEqual, is about collective individualism — the idea that the combined contributions of many can truly shift the tide of inequality. It’s an opportune moment to reflect on the part each of us plays in building a more gender-balanced world.

We know that gender equality won’t be achieved by just a few leaders speaking out; it requires everyone to pledge to challenge biases, call out gendered assumptions and boost the visibility of women. For those in leadership roles, a commitment to mentor and promote women, provide the educational tools for their success, and create a supportive and inclusive work environment is imperative. And to address the underrepresentation of women in the tech industry, we need to invest in challenging biases that begin in the home and early childhood classroom and continue throughout women’s careers.

In IBM Systems Lab Services, a global technical services organization in IBM Systems, women fulfill critical leadership and technical roles — from consultants to managers to our vice president. As an organization, we’re committed to being vocal on behalf of gender equality in our industry and to enabling our female colleagues to succeed.

Source: ibm.com