Saturday, 30 May 2020


What it will take to enable trusted, transparent and efficient supplier collaboration


Today’s supply chains are a complex, global network of networks. Thanks to the increasing sophistication of everyday products and services – from cell phones to automobiles – supply chains often rely on four or more tiers of suppliers to deliver finished goods.

That volume of suppliers, the global spread of supply chains, and a lingering dependence on manual processes make answering simple questions hard. “What’s the status of my order, shipment or invoice?” becomes an hours-long endeavor involving emails, Excel spreadsheets and piecing together a trail of individual business documents, often in the form of Electronic Data Interchange (EDI). Identifying and effectively managing disruption is also harder, especially when you consider that while most supply chain professionals collaborate primarily with tier 1 suppliers, a whopping 40 percent of supply chain disruptions occur among tier 2 to tier 10 suppliers.

Finally, for all its benefits in terms of lower-cost labor and materials, globalization has also introduced new complexity and risk. The current global pandemic is the ultimate example. More than 980 of the Fortune 1000 companies have tier 2 suppliers in China. When lockdown went into effect in the region, many companies were unable to pivot quickly enough. Since most didn’t have visibility into tier 2 suppliers, they didn’t realize they had dependencies until they were informed by tier 1 suppliers. Then, they were unable to find, validate and onboard reliable new suppliers in other regions – and it showed, on production lines, in warehouses and on supermarket shelves worldwide.

The good news is that there is a practical path forward to enable trusted, transparent and efficient supplier collaboration. Two key recommendations coming out of the World Economic Forum right now are to digitize and to prioritize data security and privacy across your multi-enterprise supply chain. I couldn’t agree more.

The future of supplier collaboration is intelligent, digitized and self-serve to improve data quality, ensure information immediacy, and reduce disputes across your supply chain. It is built on a secure foundation of transparency and trust, which enables more strategic relationships to help you reduce cost, mitigate risk and drive innovation.

IBM clients are already enabling deeper supplier collaboration to create net-new value for their customers. My favorite recent example is a leading global logistics provider that is helping a major manufacturing customer shift from producing cars to ventilators. With IBM Sterling Supply Chain Business Network, this logistics provider can offer its customer frictionless connectivity with countless new suppliers to quickly and cost-effectively obtain the hundreds of unique parts needed to make nearly 30,000 ventilators. Together, they’re helping close the gap on critical medical equipment shortages across the US.

To continue helping clients minimize the complexity of supplier onboarding and collaboration, I’m excited to announce several innovative new capabilities in this space – along with key offers that IBM is making available to clients at a discount or at no charge for a limited time, in response to the COVID-19 pandemic.

◉ IBM Sterling Business Transaction Intelligence Multi-Enterprise provides your line-of-business users with real-time visibility into the lifecycle of B2B transactions. This enhanced edition also leverages IBM Blockchain so you can extend that visibility across your supplier networks, creating a secure, shared single version of the truth for B2B transactions. Immediately deepen trust, transparency and collaboration with trading partners – and minimize disputes – with invitation-only blockchain access that allows everyone to see the same chronological history of events and documents. Embedded AI also provides proactive discrepancy alerts to help you quickly identify and get ahead of disruptions.

◉ IBM Sterling Transaction Manager and IBM Sterling Catalog Manager are designed to help you improve collaboration by simplifying onboarding and digitizing and automating B2B transactions with trading partners – including smaller partners that lack the technology and expertise to support EDI transactions. Sterling Transaction Manager provides a digital solution for procure-to-pay and order-to-cash cycles. Non-EDI partners simply log in to a web portal to receive purchase orders and send order acknowledgements, shipping notices and invoices. Sterling Catalog Manager lets non-EDI suppliers and trading partners easily upload and maintain product information across multiple catalogs via the cloud-based web portal.

IBM Rapid Supplier Connect is a new, COVID-driven digital solution that makes it possible for healthcare and government buyers and suppliers to quickly find each other – including non-traditional suppliers collaborating to help keep hospitals and key support organizations ready. It brings together existing solutions like Trust Your Supplier from Chainyard, IBM Sterling Inventory Visibility and Sterling Supply Chain Business Network, to provide emergency supplier onboarding and inventory availability so buyers can find new critical goods suppliers with speed and confidence.

These solutions are just a few of the ways we’re helping clients across industries to deepen collaboration with suppliers to create smarter, more innovative and resilient supply chains.

Source: ibm.com

Friday, 29 May 2020

IBM Power Systems for Linux: What sets Power apart


The myth that “Linux on x86 is different from Linux on Power” has been promoted by those who wanted you to believe that if you invested in Linux on IBM Power Systems you would get an inferior product that wasn’t “real” Linux, and your applications wouldn’t work as they should. In an earlier post, I dispelled that myth by showing that the basic components of any Linux distribution, from the kernel to the tools used to administer the system and develop applications, are the same, regardless of the underlying hardware platform.

That said, the underlying hardware platforms (the system architectures) are not the same. The chip architectures and instruction sets are different, and while these differences don’t affect the application functionality (HANA is HANA and Mongo is Mongo), they do affect the performance, reliability and scalability of these systems.

Linux on x86 is different from Linux on Power because applications running on the IBM Power/Linux platform are usually faster, more reliable, more scalable, more secure and require fewer cores and physical systems. The underlying system architecture is the reason why.

The Linux kernel


In a Linux kernel, there are sections of code that take advantage of unique features of a system architecture. When the Linux kernel is built and compiled for IBM Power Systems, it’s aware of these attributes, like reliability, availability, serviceability (RAS) and security options not found on x86 systems. It uses them to provide mission-critical capabilities that can result in significantly higher performance, reliability and capacity.

There’s a similar section of code available in a Linux kernel to exploit x86 architectural features. The challenge x86 vendors face is that the Linux kernel doesn’t create a section of code for each vendor and their special features. So, Linux distributions for x86 systems have to be built to the lowest common denominator for the x86 system architecture.
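
To make this concrete, here is a minimal sketch, in Python and purely for illustration, showing that the same script runs unchanged on either platform while revealing which architecture the kernel was built for:

# The same code runs on x86 and Power; only the reported architecture differs.
import platform

arch = platform.machine()      # e.g. 'x86_64' on Intel/AMD, 'ppc64le' on POWER9
kernel = platform.release()    # the same Linux kernel versioning scheme on both

print(f"Architecture: {arch}")
print(f"Kernel release: {kernel}")

if arch.startswith("ppc64"):
    print("IBM Power: the kernel was built with Power-specific code paths")
else:
    print("x86: the kernel was built with generic x86 code paths")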

The analytics and big data advantage


Analytics and big data are at the heart of applications driving innovation around the world today. The efficient management of data within these applications is critical to their performance, and this is where IBM Power Systems have a significant advantage over x86 systems. Consider the following:

◉ Power Systems support up to eight-way simultaneous multithreading, while x86 architecture only supports two-way hyperthreading. Because of this, Power Systems are more highly utilized, providing more throughput per core, a key advantage given that most databases are multi-threaded. (A quick way to check a host’s SMT level is sketched after this list.)

◉ Firmware in Power Systems helps address latency issues found in nonuniform memory access (NUMA) architectures by attempting to keep core and memory allocations together when virtual machines (VMs) are created, providing significant benefits to database applications where memory access is high.

◉ System performance is greatly increased by efficiently managing the movement of data from main memory to cache memory. Power Systems provides a highly efficient cache system that includes a very large on-processor victim cache shared by all cores on the processor, offering high probability that data will be in cache when needed.

◉ GPUs are critical to the performance of analytics, AI and high-performance computing (HPC) applications. On x86 systems, GPUs are connected through the PCIe interface and are limited to 32 GB/s. For Power Systems, a direct, high-speed, memory-coherent connection to integrated NVIDIA GPUs running at 300 GB/s is available. This can provide up to a 7x performance increase for analytics, AI and HPC applications.
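
The first point is easy to verify on any Linux host. Here’s a small Python sketch, assuming the standard lscpu utility is installed, that reports the hardware threads per core; expect up to 8 on POWER9 with SMT8 and typically 2 on x86 with hyperthreading enabled:

# Parse "Thread(s) per core" from lscpu to see the active SMT level.
import subprocess

def threads_per_core() -> int:
    out = subprocess.run(["lscpu"], capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if line.startswith("Thread(s) per core"):
            return int(line.split(":")[1].strip())
    raise RuntimeError("lscpu output did not include a thread count")

print(f"SMT level: {threads_per_core()} hardware threads per core")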

Get the most from your system architecture


It’s critical that Linux application development, execution and management be guaranteed across different system architectures. It’s also important that Linux distributions have the ability to exploit features that are unique to a system architecture while not compromising that guarantee. Otherwise, Linux distributions would be reduced to supporting the least common denominator of a system architecture, eliminating the rich set of features at the heart of IBM Power Systems and the benefits those features bring to your solution.

Source: ibm.com

Thursday, 28 May 2020

No two supply chains are the same – IBM Sterling has a platform for that


Historically, most supply chains have been closed systems locked behind four walls. They are a stitched-together series of heterogeneous systems, networks and applications, with many different data formats. Because no two supply chains are the same, and because each comprises many systems beyond its control, developers and partners tell me that one of their biggest challenges is orchestrating solutions to seamlessly enable end-to-end transactional flows, like order to cash.

At IBM, we believe the only way to create end-to-end supply chain solutions is with an open platform that you can extend and enhance. This platform should provide building blocks so you can customize and configure solutions and bridge to other networks and services involved in the supply chain – and enable you to bring in data and insights from other domains to solve supply chain problems unique to your business.

What open means for supply chain


This commitment to open technology is in our DNA and that’s why I’m excited to tell you about the new IBM Sterling Supply Chain Suite that we’re launching today. It’s part of our broader multi-enterprise business network (MEBN) strategy that acknowledges there are many types of networks and applications that must work together across different enterprises – with applications and expert services on top of data to bring added value – to not only solve problems but get ahead of them.

As part of the launch, we’re introducing the Sterling Developer Hub and Developer Advocacy Program to provide support across the entire development lifecycle and as you engage in the ecosystem.

How open works for you


The IBM Sterling Supply Chain Suite delivers the following new extension points:

◉ Business service creation with public APIs. You can access public APIs for data ingest and query. A canonical map for data insulates the services from the particulars of the data origin, and new data origins don’t require changes to your applications. (A hedged sketch of such a call follows this list.)

◉ Open AI to build your own AI agents. The IBM Sterling Supply Chain Business Assistant gives you access to our AI agents, which are pre-trained in supply chain, as well as the ability to build your own supply chain business agents and introduce machine reasoning skills against external data sources. This allows you to teach the AI platform about problem domains specific to your supply chain.

◉ Assets for interconnecting with API-driven systems. Most new apps, services and networks are API-driven. As part of our network-of-networks strategy, the developer platform has iPaaS-based reusable assets to help you create the connective tissue to interconnect with other applications and services that are important to your supply chain.

◉ IBM Cloud Paks to run services wherever your business is. You can use the IBM Cloud Pak for Data to host custom analytics, AI and data science services using our InfoHub as a data source. You can also use the IBM Cloud Pak for Integration to interconnect your networks with other networks and systems. With Red Hat OpenShift you can run these Cloud Paks anywhere – in any cloud or on premises.
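
As promised above, here is a hedged sketch of what a call to a public data-ingest API could look like. It’s written in Python with the requests library, and the host, endpoint path, payload shape and auth header are all illustrative assumptions, not the documented Sterling API:

# Hypothetical call to a public data-ingest endpoint; all names are placeholders.
import requests

BASE_URL = "https://api.example.com/supply-chain/v1"   # placeholder host
API_KEY = "YOUR_API_KEY"                               # placeholder credential

order_event = {
    "documentType": "purchaseOrder",   # canonical document type
    "orderId": "PO-12345",
    "status": "shipped",
}

resp = requests.post(
    f"{BASE_URL}/data/ingest",
    json=order_event,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
print("Ingest accepted:", resp.json())

Because a canonical map insulates services from the particulars of the data origin, a payload like this could in principle come from EDI, a web portal or an API without changes to downstream applications.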

Source: ibm.com

Tuesday, 26 May 2020

Behind the scenes with tech trailblazers: Meet Florin Manaila


How did you first get interested in AI and supercomputing?


I’ve always been fascinated by the learning curve of how we discover and adapt to new things. Ever since I attended university, I have wanted to come up with better answers for automating repetitive tasks (like measuring how much water there is on a crop parcel based on a satellite image). The magic of supercomputing for AI isn’t that you need to carefully design complex elements such as compute, network and storage. It’s more about how such systems are architected to support hundreds of data scientists and hundreds of experiments (natural language processing, sound, computer vision and more) running in parallel. Those AI supercomputing systems become live entities with their own rules, constraints and demands, augmenting human research. This is like space exploration for me.

What aspects of cognitive systems do you specialize in?


I specialize in architecting and designing AI clusters for distributed deep learning training at scale. Among all the areas of AI, I like computer vision most. Images are universal; they’re not bound to language or culture. Therefore, the process of learning from images can be applied everywhere. For example, you can train a deep neural network in Germany and use it in Iceland by retraining it with additional image categories or objects. This is called transfer learning, where a model developed for one task is reused as the starting point for a model on a second task. Even a small data set of 650 GB with 2 million files can challenge your design, but think about a 1 petabyte data set. What will happen?
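
As a minimal illustration of the transfer learning just described, here is a short Python sketch using PyTorch and torchvision; the number of new classes is an assumption for the example:

# Reuse an ImageNet-trained network for a new task: freeze the backbone
# and retrain only a replacement classification layer.
import torch.nn as nn
from torchvision import models

num_new_classes = 10                      # categories in the new data set

model = models.resnet18(pretrained=True)  # weights learned on ImageNet
for param in model.parameters():
    param.requires_grad = False           # freeze the pretrained backbone

# Replace the classifier head; only this layer will be trained on new images.
model.fc = nn.Linear(model.fc.in_features, num_new_classes)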

What are some of the most interesting AI projects you’ve seen in your work?


Each project is very interesting, because each organization using AI today is innovating on completely new ideas. Some clients are looking for small-scale AI deployments but using existing enterprise AI technology for fast adoption, such as to identify plant diseases by training convolutional neural networks to classify leaves, to track the activity of various beetles in a lab, or to accelerate healthcare diagnostics with deep neural network models based on X-rays or CT scans.

I got to work on a large AI implementation for research with IBM Fellow John Cohn, and nearly every aspect of the original design was challenged by the users and the client team, until it evolved into something unexpected, driven by innovation and the needs of the client. When you start a new project, it can be hard to anticipate where you might end up, and that’s part of the excitement. An AI supercomputing system might need to accommodate the needs of hundreds of researchers.

I’m also interested in the energy efficiency and carbon offset of large AI clusters. If you train a generative adversarial network (GAN) model on eight IBM Power System AC922 servers (32 GPUs in total) for a month, you’ll need to offset the equivalent of 20 trees. How do you design a system at a scale that will have hundreds of users and help lower the CO2e?
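
The arithmetic behind such an estimate is simple to sketch, although every figure in it is an assumption (GPU power draw, grid carbon intensity, CO2 absorbed per tree), so the sketch below illustrates the method rather than deriving the exact number above:

# Back-of-the-envelope energy and CO2e math; every constant is an assumption.
GPUS = 32                      # 8 x AC922 servers with 4 GPUs each
GPU_WATTS = 300                # assumed average draw per GPU
HOURS = 30 * 24                # one month of continuous training
GRID_KG_CO2_PER_KWH = 0.4      # assumed grid carbon intensity
TREE_KG_CO2_PER_YEAR = 21      # commonly cited absorption per tree per year

energy_kwh = GPUS * GPU_WATTS * HOURS / 1000
co2_kg = energy_kwh * GRID_KG_CO2_PER_KWH
trees = co2_kg / TREE_KG_CO2_PER_YEAR

print(f"{energy_kwh:.0f} kWh -> {co2_kg:.0f} kg CO2e -> ~{trees:.0f} tree-years to offset")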

How does IT infrastructure factor into the success of an AI project?


The AI infrastructure (hardware and software) you’re using influences your research. Therefore, it’s essential that it meet the following criteria:

◉ User-centric (freedom of choice)
◉ Simple and open
◉ Scalable and efficient
◉ Reliable
◉ Federated

At scale, simplicity presents an advantage, and open source technology drives smoother, faster adoption among data scientists.

Can you give some examples of ways you’ve seen AI transform business for IBM clients?


Take a client that’s using IBM Visual Insights to count bad seeds and predict the quality of its products for better pricing and positioning in the market. This project is changing perspectives on how technology can be used to shape the future of company strategy. Another example I can think of is a client using Watson Machine Learning Community Edition on the Power AC922 for an entirely new business creating audiobooks from e-books to help users with vision impairment. Isn’t this transforming the way we create products and strategies?

What’s your favorite part of your job?


I like working on client challenges and coming up with inventive ways to deliver AI solutions that meet their needs. Sometimes I need to go back and run experiments and benchmarks to understand the limits and come up with new ideas that will fit the original project requirements. Our ability to be successful, in my view, isn’t just about understanding the problem but experimenting with it to fully understand the constraints. I need to know what problems I am solving with my design. Only in this way can you come up with a good solution.

When you think about the future of AI for business, what do you imagine?


Today, innovation in AI is driven by hardware, followed by the software ecosystem. Nothing is possible without accelerated computing (GPUs, TPUs, FPGAs, ASICs). Even if those massive accelerated systems can prove that they can perform a specific prediction in a few seconds compared with conventional workflows, there’s still resistance from many companies in adopting AI. Time will prove that machine learning solutions really can be better than conventional tools. I imagine a future for AI where it will be a part of every business process in every company, which will open different challenges than those we face today.

Source: ibm.com

Sunday, 24 May 2020


Enabling distributed AI for quality inspection in manufacturing with edge computing

While artificial intelligence (AI)-assisted quality inspection significantly improves inspection cycle time and inspection accuracy, management and support for hundreds of thousands of cameras, robotic arms, and robots can be a challenge. A discussion on the implementation and deployment of AI-assisted quality inspection in an actual manufacturing production environment naturally leads to an edge solution that efficiently and securely deploys trained models to the edge, manages the lifecycle of the models and edge devices, and archives edge data. As Rob High, CTO of IBM® Edge Computing, described: “Edge computing brings AI and Analytics workloads closer to where data is created and actions are taken — reducing latency, decreasing network demands, increasing the privacy of personal and sensitive information, and improving operational resilience. IDC estimates that we will see the number of intelligent edge devices in the market grow to 150 billion by 2025.”

The technology stack described in this blog explains how to efficiently scale model runtimes and simplify the inference process for quality inspection in manufacturing. The same approach can be applied to different use cases with minimal changes.

Use case in IBM system hardware manufacturing


In 2018, the IBM Systems supply chain successfully adopted on-premises PowerAI Vision for quality inspection in IBM system hardware manufacturing for mainframe, IBM Power Systems™, and IBM storage systems. Models trained by PowerAI Vision have been used for production-level quality inspection, resulting in improvements in both efficiency and quality assurance:

◉ Reduced manual visual inspection time from 10 minutes to 1 minute per product

◉ Reduced field issues and improved customer satisfaction

The project presented in this blog started in late 2018 to tackle the following challenges in automation and scaling:

◉ Scalability in model deployment within the plant, across plants in different geographies, or inter-company deployment (for example, supplier quality management)

◉ Large-scale model life cycle management

◉ Large-scale edge device life cycle management (including device setup, monitoring, and recovery)

◉ Edge data archive

◉ GPU usage and segregation between model training and inferencing for better resource utilization

◉ Separation of the responsibilities among personas for quality inspector, modeling engineer, and edge manager

The system context diagram in Figure 1 illustrates the intended functions of the final solution. This blog, the first of the series, focuses on the work completed for the use case goal of deploying a trained model to the edge. We plan to share the other use cases in upcoming blogs.

Figure 1. System context diagram of quality inspection system


Why PowerAI Vision?


PowerAI Vision was used for model training because of its significant speed advantage. It has integrated Representational State Transfer (REST) APIs to automate training, retraining, and model exporting. It makes computer vision with deep learning more accessible to business users by including an intuitive toolset for labelling, training, and deploying deep learning vision models, without coding or deep learning expertise. It includes the most popular deep learning frameworks and their dependencies, and it is built for easy and rapid deployment, which translates into increased productivity. By combining PowerAI Vision software with IBM Power Systems, enterprises can rapidly deploy a fully optimized and supported platform with blazing performance.
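
Those REST APIs make it straightforward to script the export step. Here is a hypothetical Python sketch; the URL path, header name and identifiers are illustrative assumptions rather than the documented routes:

# Download a trained model archive from PowerAI Vision for deployment.
import requests

VISION_URL = "https://powerai-vision.example.com/api"  # placeholder host
TOKEN = "YOUR_SESSION_TOKEN"                           # placeholder auth token
MODEL_ID = "your-trained-model-id"                     # placeholder model ID

resp = requests.get(
    f"{VISION_URL}/trained-models/{MODEL_ID}/export",
    headers={"X-Auth-Token": TOKEN},
    timeout=300,
)
resp.raise_for_status()

with open("model_export.zip", "wb") as f:
    f.write(resp.content)                              # archive ready to deploy
print("Exported model saved to model_export.zip")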

Why IBM Edge Computing for Devices?


The IBM Edge Computing service was used to manage distributed nodes and to deliver, update, and remove models for quality inspection. IBM Edge Computing for Devices provides users with a new architecture for node management. It is designed specifically to minimize the risks that are inherent in the deployment of either a global or local fleet of edge nodes. Users can also manage the service software lifecycle on edge nodes fully autonomously. More specifically, IBM Edge Computing for Devices supports model management through the Sync Service, which facilitates the storage, delivery, and security of models and metadata packages.

The reference architecture can be used as a guideline for designing an edge solution.

Figure 2. IBM Edge Computing reference architecture


Model management system proof of concept architecture


This proof of concept (POC) architecture was created with the ability to deploy in the cloud or on site.

All components can be scaled to support multiple edge nodes with different types of models. In the current POC, we used Fast R-CNN models to show the object detection functionality. Figure 3 illustrates the architecture we used to build the model management system POC.

Figure 3. Proof of concept architecture


The architecture consists of the following three main parts:

◉ PowerAI Vision for model training
◉ Cloud or on-premises application stack for central control and management
◉ NVIDIA Jetson TX2 as an edge node for quality inspection

The main process flow for model training that yields the best automated inspection is:
  1. Modeling engineer uses PowerAI Vision to train the object detection model
  2. Modeling engineer initiates the model export function in the main dashboard (after the authorization process)
  3. Dashboard invokes the model exporting service with storage specified as a cache for models
  4. Model exporting service communicates with the PowerAI Vision REST API to download the model
  5. Model exporting service stores the model in the IBM Edge Computing for Devices object storage of the Model Management Service (MMS); steps 4 and 5 are sketched in code after this list
  6. Modeling engineer initiates model deployment to specified edge nodes using the main dashboard
  7. Main dashboard communicates with the edge connector, which is responsible for working with the IBM Edge Computing Horizon exchange REST API
  8. Edge connector initiates deployment of patterns using the IBM Edge Computing for Devices exchange REST API
  9. IBM Edge Computing for Devices agent receives configuration from the IBM Edge Computing for Devices exchange to deploy a new container with the specific model
  10. IBM Edge Computing for Devices agent initiates Docker container startup with our IBM Edge Computing service
  11. IBM Edge Computing for Devices service downloads the model content from the IBM Edge Computing for Devices object storage
  12. Docker downloads the required Docker image, which provides the runtime for the model
  13. Docker starts the model with a REST API
  14. Quality inspector uses the edge dashboard to analyze photos of the product
  15. In the edge dashboard, the user initiates quality inspection using the REST API of the deployed model and the results are displayed, thereby automating the inspection process
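
To ground the flow, here is a hedged Python sketch of steps 4 and 5, in which the model exporting service downloads a trained model from PowerAI Vision and publishes it to the MMS object storage. Both endpoint shapes are illustrative assumptions, not documented APIs:

# Step 4: download the trained model from PowerAI Vision.
# Step 5: store it in MMS object storage so edge agents can sync it.
import requests

VISION_API = "https://powerai-vision.example.com/api"    # placeholder
MMS_API = "https://edge-mms.example.com/api/v1/objects"  # placeholder
MODEL_ID = "defect-detector-v3"                          # placeholder

model_bytes = requests.get(
    f"{VISION_API}/trained-models/{MODEL_ID}/export", timeout=300
).content

resp = requests.put(
    f"{MMS_API}/model/{MODEL_ID}",
    data=model_bytes,
    headers={"Content-Type": "application/octet-stream"},
    timeout=300,
)
resp.raise_for_status()
print(f"Model {MODEL_ID} published to MMS for edge deployment")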

Friday, 22 May 2020

Why having full control of your digital identity matters


Showing a passport at the border, a driver’s license to the police, or your ID card when buying a house are all examples of how we prove who we are today. It seems we take for granted everything that a central authority (in this case our local government) has done to securely store our identifying data to issue our physical ID documents.

We trust our “analog” IDs and automatically extend this trust into the digital world. We easily disclose sensitive personal data to get an extra discount while shopping online. We believe that our Facebook profiles are secure enough to provide to local service providers. Or, we send sensitive business and personal information over insecure websites when applying for COVID-19 emergency aid funding, to name just a few examples.

But the online world is different. The number of devices connected to the Internet, including the machines, sensors and cameras that make up the Internet of Things, all talking to each other and to us, will soon exceed three times the earth’s population, and the same is true of social media identities. COVID-19 has locked down, in one way or another, more than 2.6 billion people, and security incidents have grown by 40 percent as a result of working from home. And while investments in IT security are growing, the number of cyber threats and incidents is rising even faster. So is their scale and the damage they cause. Human error is one of the key factors behind the success of these attacks.

So, how can we identify ourselves online securely and effectively? How can we work, shop, “meet” a doctor or trade cryptocurrency in a safe and secure manner?

We have no choice but to reclaim full control of our digital identity and our sensitive data.

I am very proud that IBM is part of a solution that helps us do exactly that.

Based on the IBM POWER platform, the Swiss startup Vereign has built a blockchain that enables a decentralized, self-managed identity solution. Vereign chose the IBM POWER architecture because of its completely open system stack, from the hardware foundation up through the software. Such a digital identity relies on open source and open standards and allows users to verify authenticity across the web while keeping data sovereignty. It can be added to any type of service and automatically meets GDPR and other key compliance requirements.

Thursday, 21 May 2020

Four tips to strengthen the IT and business alliance


Every year for the past 20 years, IBM has conducted a Global C-suite Study to better understand what’s on the minds of business leaders. This succession of studies has shown IT evolving into a key provider of competitive differentiation for most organizations. The specific challenges and opportunities may change year to year, but the need to link IT projects tightly with business projects has only grown in importance over time.

Some of the challenges named in the 2020 Global C-suite Study highlight the importance of technology across the business. For example:

◉ Serving customers in the trust economy: Leading organizations are challenged to prove transparency on data usage, to establish reciprocity with customers who share information in exchange for personalized service, to demonstrate data accountability, and to do this all while adjusting rapidly to market dynamics. This challenge carries data, infrastructure and governance implications.

◉ Leveraging the human-technology partnership: Leaders want to use AI for “augmented” rather than “artificial” intelligence so as to provide a humanized, not just a personalized, experience. This challenge carries implications on data governance and trust in addition to innovation and infrastructure.

◉ Implementing a platform business model: Many businesses are becoming disruptors or being disrupted by companies with platform business models, facilitating exchanges of goods and services between large interdependent groups. This challenge carries infrastructure as well as data sharing and governance implications.

In the same 20-year period, we’ve seen the relationship between businesses and their IT organizations evolve from IT providing low-cost utility services to IT being an essential partner with the business. IT went from being a necessary cost businesses tried to minimize to being an asset in which they invest to enable competitive differentiation. You can see that technology weaves the fabric of each of the business challenges described above. A stronger reliance on IT dictates having a stronger alliance with IT.

Four tips to strengthen the IT and business alliance


So how can CIOs foster and accelerate strengthening this alliance between the business and IT? To begin with, CIOs and IT leaders need to be involved early in the business planning process, and more resources should be dedicated to IT enablement.

Here are four tips for CIOs:

1. Understand your organization’s business model, objectives and strategy.

Business objectives almost always carry IT implications, and IT needs tend to stem from business objectives. A few examples might be providing humanized customer experiences; reducing business risk; and gathering, protecting and sharing data. Getting on board early in the business strategy and planning process to understand your company’s business objectives is the starting point for strengthening the IT and business alliance.

2. Ideate on high-impact projects that deliver a human-centered “wow” factor.

Hosting human-centered rapid problem-solving sessions involving both business and IT people is one way to not only have IT involved early but ensure it is seen as directly partnering with business. You can use a structured methodology like Design Thinking to facilitate these sessions. The outcome will be a series of snack-sized, high-impact projects.

3. Know your existing IT environment; assess its capabilities and ideate on solutions.

Conduct a pinpointed assessment of your IT environment based upon the highest priority high-impact projects you identified. Assess only the IT capabilities and business processes relevant to those projects rather than conducting an exhaustive gap analysis. This speeds the analysis step and keeps the team from being overwhelmed by an expansive set of recommendations that are too complex to implement. It also links your project and subsequent resource requests directly to the anticipated high-impact business outcome.

4. Use a structured approach to develop an IT strategy that aligns investment and focuses on business priorities.

Component business modeling is a framework that can help businesses understand areas of potential differentiation and balance resources. A component business model can be created for the business overall but also for IT, to ensure key strategic business areas offering competitive differentiation have proper IT support.

This approach strengthens the alliance between IT and the business on both near-term tactical projects and the overall business strategy.

Monday, 18 May 2020

Making life easier for system admins: The IBM PowerVM LPM/SRR Automation tool


If you’re a system administrator working with IBM Power Systems servers, you know that Live Partition Mobility (LPM) and Simplified Remote Restart (SRR) are essential capabilities for managing your server environment. In the normal course of maintaining your systems, you might need to move partitions because of hardware repairs, firmware updates or VIOS updates. LPM allows you to keep partitions running during such moves. It can also help with load balancing and energy conservation. And what if one of your servers unexpectedly goes down? SRR can help you move partitions within minutes of a crash, thus helping you reduce downtime.

IBM created a tool that makes the use of LPM and SRR faster and easier: The IBM PowerVM LPM/SRR Automation tool. Its purpose-designed graphical user interface (GUI) is easy to use and allows admins to move and restart many partitions at once, instead of spending lots of time navigating through the Hardware Management Console. For example, moving 16 partitions can take more than 190 clicks on the HMC GUI, while the LPM/SRR Automation tool can move any quantity of partitions in as few as five clicks. The tool also makes it easy to move all the partitions back to their original server with their original configuration after the crashed server is repaired, something that’s not even possible with the HMC GUI.

The tool was first delivered to IBM Power Systems clients in 2015 and has been used throughout the world by over 600 organizations to simplify and enhance the LPM and SRR process.


Figure 1: Home screen of the PowerVM LPM/SRR Automation Tool

Taking advantage of PowerVM features


PowerVM has added many features to basic LPM and SRR to enhance their functionality and performance. A few are available in the HMC GUI, but the selection is limited, and using them requires many clicks.


Figure 2: HMC GUI LPM screen showing the features available

The HMC command line interface (CLI) exposes more features of LPM and SRR but is more complicated to use. It requires logging in to the HMC using Secure Shell (SSH) and using a terminal session to issue commands to the HMC. The following is an example command to move a partition over a high-speed MSP connection while keeping the virtual fibre channel adapters mapped to the proper VIOS pair:

migrlpar -o m -m 'kurtkP8' -t 'bobfP8' -p 'bf_client1' --ip bbhmc2.rchland.ibm.com -u hscroot -i "\"redundant_msps=53/kk1vios1//172.28.10.70/bb1vios1//172.28.10.55,53/kk1vios2//172.28.10.71/bb1vios2//172.28.10.56\",\"virtual_fc_mappings=3//1//,4//1//,5//6//,6//6//\"" --requirerr 2

The PowerVM LPM/SRR Automation tool allows you to take advantage of many of these features more quickly and easily through its GUI or spreadsheet support. For example, the same functionality shown in the CLI example above can be done using the tool’s GUI.

Also built into the tool are back-end capabilities that have no equivalent on the HMC, features like:

◉ Return all the partitions back after LPM or SRR operations

◉ Daily health checks of LPM/SRR readiness

◉ Scripting capabilities

◉ Automatic movement of Mobile Capacity (aka Power Enterprise Pools – PEP)

The following table offers a comparison of LPM and SRR capabilities available from the HMC GUI, HMC CLI and the tool.


Figure 3: Comparison of LPM/SRR features

As you can see, there are quite a few features the tool offers that go above and beyond what’s available from the HMC. Setting up a default MSP is a good example (see this video demonstrating how to do it using the tool).

SRR — another function not available from the HMC GUI — is invaluable to clients that don’t have high availability on their partitions, and the LPM/SRR Automation tool’s GUI makes it easy to take advantage of it. Daily SRR validation is also unique to the tool (see this video for a demo of the feature).

Sunday, 17 May 2020

Hyper protect your business for digital transformation


Your customers may be demanding new and better digital products and services. To meet these evolving needs, we see organizations across many industries changing the way they operate by using digital technologies to build new services and improve efficiencies.

In the financial services industry, for example, distributed ledgers and smart contracts have the potential to revolutionize value transfer. The transparency of blockchain combined with the convenience of transacting tokens can make previously illiquid asset classes such as corporate bonds more accessible and attractive. Institutional investors, exchanges, banks and corporations have indicated they have started using digital assets for lending, trading, payment and issuance services, with crypto-specific applications such as staking and blockchain governance emerging.

Financial institutions have a regulatory and fiduciary responsibility to know who has access to digital assets encrypted by private keys. To participate in this new financial landscape and unlock the transformative power of distributed ledger technologies, enterprises need the ability to store and transfer digital assets securely and quickly and maintain control of their encryption keys.

Announcing IBM Hyper Protect Digital Assets Platform


To provide clients with an environment well-suited for digital asset management, we are excited to announce the general availability of the IBM Hyper Protect Digital Assets Platform. It is designed to provide a secured basis for financial institutions and enterprises to deploy their commercial or custom solutions for managing digital assets. Powered by IBM Hyper Protect Virtual Servers, the platform is available on premises with IBM LinuxONE or Linux on IBM Z. The platform is also available in the public cloud with IBM Cloud Hyper Protect Services.

Customers such as Hex Trust are enjoying the benefits of the platform. Hex Trust is developing a new institutional custody solution using IBM Hyper Protect Virtual Servers and IBM LinuxONE. Rafal Czerniawski, CTO of Hex Trust, says “The reason we selected IBM’s platform is so that we will be able to sleep at night.” He continues, “We have to make sure that nobody within or outside our company can circumvent approvals, authentications, or policies. The protection offered by IBM Hyper Protect Virtual Servers allows Hex Trust to innovate and to bring more digital assets, like complex derivatives, to the blockchain.”

The platform’s signature near-line storage capability provides digital asset enterprises with the ability to access the benefits of hot wallet liquidity and the security of cold storage by using cryptographic shredding techniques, envelope encryption and hardened custom policies. Digital asset and permissioned blockchain applications are deployed into protected memory enclaves that are designed to maintain isolated execution, application integrity and confidentiality, and defend against internal and external threats. The enclaves leverage tamper-resistant secure boot technology and are backed by FIPS 140-2 level 4 certified Hardware Security Modules (HSM), the highest level of security certification commercially available, featuring a 360-degree envelopment of tamper detection and response.
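
Envelope encryption, one of the techniques named above, is worth unpacking. Here is a minimal Python sketch using the cryptography library: data is encrypted with a per-object data key, and that key is itself wrapped by a master key. In the platform the master key material lives inside the HSM; a local Fernet key stands in for it here purely for illustration:

# Envelope encryption sketch: wrap a per-object data key with a master key.
from cryptography.fernet import Fernet

master = Fernet(Fernet.generate_key())   # stand-in for the HSM-held master key

data_key = Fernet.generate_key()                                  # per-object key
ciphertext = Fernet(data_key).encrypt(b"signed digital-asset payload")
wrapped_data_key = master.encrypt(data_key)                       # wrap the key

# To decrypt: unwrap the data key with the master key, then decrypt the data.
plaintext = Fernet(master.decrypt(wrapped_data_key)).decrypt(ciphertext)
assert plaintext == b"signed digital-asset payload"

# Cryptographic shredding: destroying the wrapped key (or the master key)
# renders the ciphertext permanently unreadable.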

Digitally transform with cloud-native development


To stay relevant in the eyes of customers, your organization should build and modernize services so you can innovate quickly and efficiently. Recently, we announced that Red Hat® OpenShift® Container Platform is generally available for IBM Z and IBM LinuxONE, reinforcing the cloud-native world of containers and Kubernetes with the security, scalability and reliability of IBM enterprise servers.

Several new enhancements further your ability to create cloud-native experiences. With Pivotal Cloud Foundry support for IBM z/OS® Cloud Broker, you can now integrate mission-critical apps and data services available on z/OS with container-based applications on another popular open-source cloud platform. And with the latest release of IBM Blockchain v2.1.3, you can extend multicloud capabilities with support for blockchain on Red Hat OpenShift on LinuxONE and Linux on Z.

Look to your IT infrastructure to help you deliver a secured yet open hybrid multicloud environment. With IBM Z and IBM LinuxONE, you can deploy the cloud operating model you want with the privacy and security you need.

Friday, 15 May 2020

Five key things to know about IBM Systems Client Experience Centers


What if you had access to experts who specialize in the IT solutions you were considering for your business? What if you could test-drive new IT hardware or software to make sure it’s the right fit?

IBM Systems Worldwide Client Experience Centers give businesses the opportunity to see firsthand how IBM IT infrastructure can support their business needs. Whether it’s through a co-creation workshop, a solution benchmark or a Redbooks training course, Client Experience Centers have infrastructure specialists who can help you gain confidence in your next infrastructure investment. We’ve helped organizations find the best platforms for their critical applications, take advantage of the security features in their systems to make sure their data is protected, put AI models to work on complex analytics and much more.

Interested to learn more? Here are the five essential things to know about us.

What sets the Client Experience Centers apart?


1. Our mission is to deliver IT expertise that helps give you the best possible experience with IBM IT infrastructure.

We know that planning and implementing IT solutions is challenging, and our offerings are designed to help in the decision-making process. Working with Client Experience Centers gives you access to top-notch subject matter experts in IBM infrastructure technologies and solutions — from AI to hybrid cloud to enterprise security. We provide demos, benchmarks, proofs of concept and proofs of technology to help you find the right infrastructure solution for your business needs.

2. We co-create innovative solutions side-by-side with you.

Co-creation is a collaborative process for creating shared value between you and IBM Systems, and it’s a fundamental approach to our work. The IBM Systems Co-Creation Lab helps organizations move IT projects from idea to implementation through workshops covering opportunity discovery, design and development. Co-creation gives you a chance to work side-by-side with IBM experts to create first-of-a-kind strategies and solutions together with IBM.

3. Our technical training content helps IT professionals build knowledge and skills on IBM Power Systems, IBM Storage, IBM Z and LinuxONE.

Experts from Client Experience Centers develop technical training courses and content to deliver the latest information about IBM Systems solutions to clients, Business Partners and IBM employees. Redbooks content formats include online courses, videos, papers, books and other media. Here’s where you can find us:

◉ Online courses and certification programs are available through IBM Skills Gateway.
◉ Our Expert Speaker Series on YouTube provides short explanations of tech topics straight from the experts.
◉ IBM Redbooks offers print and web publications to help you grow skills and technical knowledge, and you can follow Redbooks on YouTube, Facebook, Twitter and LinkedIn to see the latest.

4. We offer expertise in AI, hybrid cloud and quantum computing.

The Client Experience Centers have skilled data scientists with expertise in AI who are ready to help you develop a new AI solution or advance an existing one. We also have teams specializing in hybrid multicloud and multivendor environments, bringing the capabilities of IBM Cloud and Red Hat to you wherever your business is on the cloud journey. Additionally, we host an active IBM Quantum System in partnership with IBM Research. We provide tours and briefings on IBM Quantum offerings and have several certified IBM Quantum Ambassadors ready to engage with you. In all these areas, our experts can meet you where you are and help you go to the next level.

5. We have expert teams located around the world and can offer most services virtually.

Client Experience Centers is a worldwide team with the capability to serve businesses all over the planet. We offer virtual services leveraging our IBM Systems infrastructure hubs in North America and France, and we have skilled teams in key geographies around the world as well, so wherever you are, you can access our support. In addition to online training and consultations, Co-creation Lab workshops, benchmarks and proofs of concept can be done virtually.

Come innovate with us


You understand the needs of your business; we understand IBM Systems infrastructure solutions inside and out. Together we can find the best approach to help you solve business challenges, plan for digital transformation, introduce new technologies and so much more. IBM Systems Client Experience Centers are here to help.

Source: ibm.com

Thursday, 14 May 2020

What is edge computing and how can it help my business?


How we compute has substantially shifted over the years. One innovative method enables real-time action, improves user experience and reduces costs by putting compute power closer to the source of data. It’s called edge computing, and we’re only at the beginning of what it can do.

Edge computing is a distributed computing paradigm that moves compute and AI to the edge, where data is generated and action is taken. This allows faster insights and actions, better data security and control and more continuous operation.

IBM has designed infrastructure that is optimized for edge computing, including servers purpose-built for AI workloads both on the edge and near the edge. Paired with storage solutions that provide industry-leading performance and essentially unlimited scalability, these servers give businesses an edge platform that enables them to do the most with their data.

Edge computing for manufacturing: A case study


IBM recently built an end-to-end edge solution for manufacturing. It uses IBM Visual Insights running on the IBM Power System AC922 for model training, IBM Cloud or on-premises applications for centralized control, IBM Edge Application Manager for deploying and managing the models, and NVIDIA Jetson TX2 for inferencing. IBM Cloud Object Store and IBM Elastic Storage System 3000 provide unified storage for data and models. For heavier workloads, the IBM Power System IC922 is ideal for inferencing compared to the Jetson TX2.

With this solution, IBM was able to:

◉ reduce inspection time from 10 minutes to less than one minute per product
◉ reduce field issues
◉ improve customer satisfaction
◉ ensure large-scale model and edge device life-cycle management

Many businesses are already seeing the benefits of edge computing. Success stories include improving streaming video and producing high-resolution images from an assembly line. Such data has to be processed in real time, with machine learning at the edge or near edge.

How can edge computing benefit your industry?


IBM has taken the lead in edge computing, bringing high-performance computing and AI solutions to many different industries. IBM is particularly specialized in cases where a significant amount of workload is run in micro data centers or edge networks such as at airports, factories and large retail stores.

Many businesses are seeing the impact edge computing can have. It can improve production monitoring for manufacturing, drive predictive maintenance and asset management for oil and gas, and even power autonomous vehicles. Cities looking to take advantage of technology to better serve their citizens can improve traffic control measures.

Edge computing can bring measurable benefits to business from the get-go, and more opportunities will continue to emerge. With 5G rolling out in many countries in 2020, edge computing will get an additional boost from 5G’s unmatched bandwidth, high speed and low latency, transmitting large amounts of data at high speed to and from hybrid multicloud and edge locations. The possibilities through edge computing and IBM will only continue to grow.

Click here to learn more about edge and IT infrastructure, or see what’s possible with edge computing in your industry.

Source: ibm.com

Tuesday, 12 May 2020

AI at the edge: Driving new business opportunities at the point of data creation


Data has always come from outside the data center. As the number of smart devices generating new data grows, there is a need for faster processing and increased pressure on networks to process data efficiently. The processing must leave the data center and move closer to the edge, where the data is created. By 2022, 50 percent of enterprise data will be processed at the edge, compared to only 10 percent today.

Edge computing enables faster insights and actions as running inference closer to the data source improves speed to action. In AI inferencing, where sub-second latency is the expectation, the latency accumulated by sending data back to a central processing facility becomes the bottleneck in business operations.

With dedicated infrastructure, better data security and control are possible, as minimizing data transport to central hubs reduces both vulnerabilities and the aforementioned latency. Finally, putting processing at the edge enables continuous operations to reduce disruption and cost, running autonomously even when disconnected.

When the creators of the Mayflower Autonomous Ship faced these exact challenges, they reached out to IBM to deliver a solution that would meet their difficult requirements. The team at ProMare was looking to create a fully autonomous version of the Mayflower ship to recreate the historic 1620 Atlantic Ocean crossing from Plymouth, England, to America. Sailing across the Atlantic represents the ultimate edge application, thousands of miles from any human contact or data center.

The team decided to put IBM at the helm, building an AI captain using IBM Power Systems AC922 and IBM Visual Insights. Without human intervention, the ship will navigate the Atlantic after having been trained using a literal ocean of images to recognize and maneuver past anything it encounters on the way.

“Many of today’s autonomous ships are really just automated robots [that] do not dynamically adapt to new situations and rely heavily on operator override,” says Don Scott, CTO of the Mayflower Autonomous Ship. “Using an integrated suite of IBM’s AI, cloud, and edge technologies, we are aiming to give the Mayflower full autonomy and are pushing the boundaries of what’s currently possible.”

We also recently announced IBM Power Systems IC922, built specifically for inference workloads at the edge. Power Systems IC922 provides the compute-intensive and low-latency infrastructure needed to unlock business insights from trained AI models. POWER9-based, it supports faster data throughput and decreased latency through advanced interconnects such as PCIe Gen4 and OpenCAPI. Both are important for edge AI applications.

As 5G rollout continues across the world, edge computing combined with 5G creates opportunities to enhance digital experiences, improve performance and data security, and enable continuous operations in every industry. Edge brings computation and data storage closer to wherever data is created by people, places and things. We can provide an autonomous management offering that addresses the scale, variability and rate of change in edge environments, in edge-enabled industry solutions and services, and in the solutions that can help you modernize your networks and deliver new services at the edge.

Source: ibm.com

Friday, 8 May 2020

How GRC sustainability can help support cost efficiency


Sustainability and resilience are about how an organization designs and carries out strategies that can be adaptable to help address long-term global trends, crises, threats, changing regulations or customer needs. As the requirements for success change, so can the enterprise.

During the current pandemic, many organizations should ask business continuity questions about capacity, bandwidth and critical functions. For example, do the existing network bandwidth, VPN capacity, laptops and IP addressing for remote access solutions support the additional load? If a full complement of remote services is unavailable, what partial services are available? Who will handle customer, third-party, business partner, vendor and supplier communications about the crisis – and what should those communications look like? Have all outsourced critical functions been identified, and how are they impacted by the current situation? How effective is the resiliency planning of service providers and third-party vendors?

As we’ve seen, many banks and financial institutions desire specific insights into what comes next. If an event occurs that interrupts the normal operations of a business, a business continuity plan should be in place. For governance, risk and compliance professionals, which tactics, tools or solutions today can offer the flexibility, cost-efficiency and sustainability to help manage known and unknown risks to come?

Potential to cut costs, increase efficiencies and stem losses


A classic resilience and sustainability tactic is to adopt GRC technology that can help support overall cost savings. The idea is to cut expenses while increasing efficiencies and stemming losses. As threats and regulatory requirements grow, so can the related expenses, fees and penalties. A key approach to consider is to shift risk and regulatory compliance processes to agile platforms that use analytics, AI and machine learning and to tools that are codeless and quick-start enabled.

For instance, agile GRC technologies are designed to adapt to ongoing regulatory changes, yet are simple enough to scale up to many users without a lot of training. Things like pre-built options, flexible configurations, integrated questionnaires, automated workflows and drag-and-drop functionality drive efficiency.

A horizontal view of multiple risk disciplines


Sustainability in GRC: Long-term value strategies that help the enterprise adapt with agility to meet success requirements. Practices that are forward-looking to help address global trends, crises, threats, changing regulations or customer needs.

Legacy systems typically work in silos, patched into interactions with other systems. Views are vertical and sometimes fragmented; processes may be semi-automated or even manual. As organizations look for an integrated data model and single source of truth for risk and compliance data, we recommend moving away from siloed on-premises solutions and into lower cost models with a cloud and SaaS focus. They can offer fast deployment and greater interoperability. Open standards and cloud-native technologies can provide flexibility, agility, and cost efficiency.

More importantly, the intelligent technologies and flexibility they provide can help create an enterprise-wide view of multiple risk disciplines and reduce data silos. Much like warmth escaping through open windows in winter, data silos leak value. Data inaccuracy, false negatives and potential fines stemming from system unavailability or authentication failures can all leave customers irritated. Silos and limited interoperability between systems can also result in breaches, loss of market share, fraud losses and non-compliance fines; the list can be long.

Organizations can drive faster, more accurate decisions with an integrated, agile GRC platform. An integrated GRC platform can help automatically monitor regulatory events, manage model inventory and provide model assessments with AI and predictive analytics.
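To make that concrete, here is a minimal, hypothetical sketch of the kind of automation such a platform performs: taking a feed of regulatory events, mapping each event to the internal controls it touches, and escalating the ones nobody owns. The event shape, topic-to-control mapping and identifiers are invented for illustration; this is not the API of any specific GRC product.

```python
# Hypothetical sketch of automated regulatory event triage.
# Feed format, control mapping and identifiers are illustrative only.
from dataclasses import dataclass

@dataclass
class RegEvent:
    event_id: str
    regulator: str
    topic: str
    summary: str

# Map regulatory topics to the internal controls they affect (hypothetical).
CONTROL_MAP = {
    "aml": ["CTRL-017 transaction monitoring", "CTRL-042 KYC refresh"],
    "data-privacy": ["CTRL-101 data retention", "CTRL-108 consent capture"],
}

def triage(events):
    """Route each event to its mapped controls; flag unmapped topics."""
    routed, unmapped = [], []
    for ev in events:
        controls = CONTROL_MAP.get(ev.topic)
        (routed if controls else unmapped).append((ev, controls))
    return routed, unmapped

events = [
    RegEvent("E1", "FinCEN", "aml", "Updated SAR filing guidance"),
    RegEvent("E2", "ESMA", "short-selling", "Temporary reporting threshold"),
]
routed, unmapped = triage(events)
for ev, controls in routed:
    print(f"{ev.event_id}: notify owners of {', '.join(controls)}")
for ev, _ in unmapped:
    print(f"{ev.event_id}: no mapped control; escalate to compliance team")
```

In a real platform, the mapping step is where AI earns its keep: classifying free-text regulatory updates into topics and suggesting control associations, rather than relying on a hand-maintained dictionary like the one above.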

Cost-efficiency and sustainability


IBM has worked with customers through many worldwide events. Business continuity in the face of volatility has always been a primary goal. With these challenges top-of-mind, IBM OpenPages with Watson offers an integrated GRC solution. IBM OpenPages with Watson is designed to enable organizations to meet their business objectives in a world of dynamic risk and threat.

OpenPages with Watson can help reduce total cost of ownership and improve risk assessments inside the enterprise. With AI, advanced analytics and automation tools and techniques, it can help organizations meet compliance goals. A secure, scalable GRC platform hosted on the IBM Cloud can help clients reduce IT infrastructure overhead with speed and agility.

Standardization in three key areas


Clients can look for cost savings and risk reduction through standardization in three key areas: operational efficiency, risk reduction and organizational performance improvement.

Operational efficiency – The OpenPages zero-training interface for the first line of defense can help reduce the inefficiencies of manual rework by enforcing data quality and using AI to assist with classification and association. In-context guidance for each user task gives first-line users a clear objective and completion criteria. The task oversight view lets managers see the current state of tasks and where work can be rebalanced to hit key dates.

Risk reduction – OpenPages provides an agile designer environment that allows new programs to be introduced quickly. Integrated workflow and UX design studios let risk managers collaborate with program technical designers to build out the full user experience for each line of defense. This helps organizations respond to opportunities and threats more quickly.

Collaborations with leading risk and compliance data suppliers such as Thomson Reuters, Wolters Kluwer and Ascent deliver alerts that are triaged, mapped to internal business controls and filtered to those most relevant and timely for each user or community of users in your organization. This can mean fewer surprises and more awareness of emerging areas of concern. Furthermore, our focus on business continuity planning and impact analysis, integrated directly with the risk areas of vendor management and IT governance, can allow organizations to prepare for business disruptions associated with unforeseen events.

Organizational performance improvement – Risk and performance are two sides of the same coin. When risk is well managed, organizations can focus on clients, employees, growth and operational excellence. OpenPages supports the transparency organizations need to operate in a world of dynamic change.

The goal of a well-managed risk program that combines a risk-aware culture, data-driven decisions and a holistic view of all risk activity is to help improve the reputation of an organization, reduce the cost of capital and improve its valuation.

Source: ibm.com

Thursday, 7 May 2020

IBM RegTech hosts risk and compliance thought leaders at virtual summit

The COVID-19 pandemic is one of those rare civilizational disruptions that is profoundly affecting nearly all human institutions. Organizations are scrambling to respond to threats to the health and safety of their stakeholders, and to grapple with new constraints and shifting realities in this crisis.

IBM Tutorial and Material, IBM Learning, IBM Certifications, IBM Exam Prep, IBM Guides

While the human toll of the pandemic is unequivocally a tragedy that surpasses any damage done to enterprises and institutions, it’s still important for organizations to pause and consider how readiness and response strategies can help them avoid some of the pitfalls that many organizations are facing in this difficult time.

Even in these challenging times, IBM believes institutions will continue to be expected to meet their regulatory obligations and to maintain and innovate on their risk and compliance frameworks. In that spirit, IBM hosted a virtual RegTech Summit, bringing together leading thinkers in the risk and compliance space to discuss how organizations should approach risk in general and how to deal with precisely the kind of scenario that all organizations are now experiencing with COVID-19. The panel discussed how financial institutions in particular are coping with these challenges and using innovative risk management programs, methodologies and advisory services to manage risk, fight financial crime, maintain business continuity, and improve processes and outcomes.

The panel opened with a discussion about how IBM has been responding to the pandemic. First and foremost, IBM has focused on the safety and security of its employees by accelerating work-from-home capabilities and educating team members about best practices for work during times of suggested social isolation.

IBM has also spent over $200 million to date addressing challenges posed by COVID-19. For example, it has partnered with 19 other companies and 14 industry-leading science communities to develop a High Performance Computing Consortium, which leverages 360 petaflops of computing power to support COVID-19 research. IBM is also delivering information via the Weather Channel app and Weather.com, drawn from trusted data sources such as state and local governments, the World Health Organization and the Centers for Disease Control and Prevention, to help users track the virus. The company released IBM Watson Assistant for Citizens, a virtual assistant that can help large civic institutions handle high volumes of stakeholder queries about COVID-19.

In a time when the day-to-day is anything but “business as usual,” it’s up to leaders in business continuity to plan for and respond to disruptive events that threaten to derail crucial business functions. Michael Puldy, IBM’s Director of Business Continuity, contributed his perspective to the conversation, using the COVID-19 challenge as a means of testing the organization’s risk awareness in general. He broke down the core challenges into three categories:

1. What happens when the workplace is unavailable?
2. What happens when the workforce is unavailable?
3. What happens when IT resources are unavailable?

In a sense, said Puldy, the current crisis can be seen as the “World Cup” for Business Continuity Management (BCM). The goal is to proactively plan ways to route around roadblocks in these categories, which sometimes occur in tandem with one another, so that the organization can do right by its employees, clients and stakeholders.

“Don’t necessarily try to solve the pandemic issue, but look at an event with a more holistic perspective,” says Michael.

What happens when all three scenarios (workforce unavailable, workplace unavailable and IT unavailable) occur at the same time? How can you move back and forth among them to accommodate your recovery and maintain an element of business continuity?

Solutions include work-from-home policies and technologies, comprehensive documentation, and testable response processes. Clearly defined chains of command, action plans and knowledge bases can help teams respond quickly and effectively to unforeseen challenges.

Andrew Yuille, VP of Partnerships and Alliances at Thomson Reuters, discussed the task force that the company developed to provide resources to its community. As a leading provider of business information services, the company is well positioned to provide insights for many industries facing the COVID-19 crisis.

Thomson Reuters’ regulatory intelligence platform monitors the activities of regulators so the company can inform its customers about the high volume of regulatory changes. It is also working with digital channels to help identify fake news and common scams, not limited to the world of finance. These include bogus investments, “miracle” cures, stimulus-themed phishing schemes, charity scams, price gouging and other criminal activities.

“It’s very easy to get flanked, both as businesses and individuals, while we’re focused on the particular crisis in front of us,” says Andrew. “Actually, all the old issues are out there. Some of them have got new labels on the outside . . . but they’re the same old scams.” In an effort to educate the public, the company has also developed a free COVID-19 resource center.

Michael Dawson, Managing Director at Promontory Financial Group, provided insight into the future regulatory climate in the U.S. and abroad. Governments are currently working to implement stimulus measures, such as the huge number of payments to individuals and loans to small businesses. In the short term, many governments are employing some form of temporary regulatory relief to ease the burden. The U.S. federal government has deferred a new risk and control framework, and an EU regulator deferred a critical report. But banks and other institutions shouldn’t rely too heavily on such relief. Such organizations should expect increased scrutiny on consumer protection requirements, for example mortgage borrower protections and forbearance for active military service members.

“With such a massive event, and such a massive impact on so many people around the world, the volumes are going to be huge,” says Michael. “No matter how well-intentioned you may be in terms of wanting to comply, the actual act of complying with the requirements in the volume spike can be a real challenge.” It will be crucial for these companies to pay close attention to the latest regulatory developments in the places in which they operate.

In the financial industry, security measures must be implemented to ensure work-from-home trading activity is performed in a manner compliant with financial regulations. Promontory is in “all hands on deck” mode to enable remote work, even in sensitive areas where data security is paramount, for example, for employees who monitor financial behavior in order to spot money laundering and terrorist financing.

The threats businesses face in the current crisis are often the same old problems they have always faced; there is simply more distraction now. Insurance fraud, financial scams, cybersecurity threats, hacking and phishing are just a few of the security challenges that continue to threaten individuals and businesses, as they have since the advent of digitally networked commerce and finance. It’s crucial that risk and compliance departments don’t let COVID-19 issues distract them from the usual dangers.

Wednesday, 6 May 2020

Storage solutions for the edge

IBM Exam Study Materials, IBM Tutorial and Material, IBM Certification

In a previous post, we described some of the critical infrastructure requirements for edge computing. Here, we will discuss some practical solutions from IBM Storage.

IBM Storage has decades of experience offering a wide range of edge solutions. Interestingly, industry-leading IBM Storage technologies for data and AI, normally associated with central computing installations, are also well suited to edge computing.


IBM Spectrum Scale has been an industry leader in file management for over 25 years. It is state-of-the-art software-defined storage (SDS) that offers a long list of leading-edge data management and security features. It has been deployed in some of the most demanding commercial and research environments.

IBM Cloud Object Storage is another industry leader from IBM Storage for data and AI, including edge solutions. It is a highly scalable, cost-efficient cloud storage solution with concurrent parallel access for edge deployments in the data center and in the public cloud. It enables enterprises to store and manage massive amounts of data more efficiently and securely, with extreme system reliability and accessibility from any location.

While IBM Spectrum Scale brings decades of successful production deployments in large, centralized installations, it also offers features and capabilities that make it exceptional for edge computing use cases. For example, the Active File Management (AFM) feature automatically shares and caches data across geographically distributed sites, delivering performance similar to local data. Innovative transparent cloud tiering allows non-disruptive, intelligent data migration between on-premises and cloud storage, enabling IBM Spectrum Scale to integrate edge locations and central data centers with simplified single-pane management.
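Stripped to its essence, the AFM pattern is a local cache that serves reads at edge speed and asynchronously pushes writes back to a central home site. The conceptual sketch below illustrates that pattern in plain Python; it is an illustration of the caching idea only, not Spectrum Scale’s implementation or API.

```python
# Conceptual illustration of an AFM-style edge cache with asynchronous
# write-back to a central "home" site. Not the Spectrum Scale API.
from collections import deque

class EdgeCache:
    def __init__(self, home_site):
        self.home = home_site           # dict standing in for the central site
        self.cache = {}                 # local copies served at edge speed
        self.writeback_queue = deque()  # dirty entries awaiting replication

    def read(self, path):
        if path not in self.cache:      # cache miss: fetch from home once
            self.cache[path] = self.home[path]
        return self.cache[path]

    def write(self, path, data):
        self.cache[path] = data         # local write completes immediately
        self.writeback_queue.append(path)

    def replicate(self):
        """Asynchronously flush dirty entries back to the home site."""
        while self.writeback_queue:
            path = self.writeback_queue.popleft()
            self.home[path] = self.cache[path]

home = {"/data/model.bin": b"v1"}
edge = EdgeCache(home)
print(edge.read("/data/model.bin"))      # first read fetched from home, then cached
edge.write("/data/telemetry.log", b"sensor readings")
edge.replicate()                         # home now holds the edge's updates
print(home["/data/telemetry.log"])
```

The point of the pattern is that edge applications see local latency on both reads and writes, while the home site eventually converges, which is exactly the behavior an intermittently connected edge location needs.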

A key requirement of edge computing solutions likely to be far from central IT resources is ease of management. Edge installations must also have the ability to start small and grow as needed.

IBM Storage offers a unique and powerful system implementation of IBM Spectrum Scale that addresses both these requirements. IBM Elastic Storage System 3000 (ESS 3000) is designed to deploy world-class data management technology simply and easily. It leverages the ultra-low latency and massive throughput advantages offered by NVMe-accelerated flash storage.

ESS 3000 utilizes containerized delivery of IBM Spectrum Scale software for simplified installation, management and upgrades. To address the requirement for agile scalability, you can start with a single 2U building block delivering world-class 40 GB/sec performance and as little as 25 TB of storage, then grow capacity, performance and resilience linearly as needed by adding more units.
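Given those per-unit figures, the linear scaling claim is easy to sanity-check. The short calculation below sizes a hypothetical deployment; the per-unit throughput and minimum capacity come from the paragraph above, while the target workload is invented for illustration.

```python
# Illustrative ESS 3000 scaling estimate using the per-unit figures above.
# The target workload is hypothetical.
UNIT_THROUGHPUT_GBPS = 40   # GB/sec per 2U building block (from above)
UNIT_MIN_CAPACITY_TB = 25   # minimum capacity per unit (from above)

target_throughput = 150     # GB/sec the hypothetical workload needs
units = -(-target_throughput // UNIT_THROUGHPUT_GBPS)  # ceiling division

print(f"{units} building blocks -> {units * UNIT_THROUGHPUT_GBPS} GB/sec, "
      f"{units * UNIT_MIN_CAPACITY_TB} TB minimum, {units * 2}U of rack space")
# Output: 4 building blocks -> 160 GB/sec, 100 TB minimum, 8U of rack space
```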

IBM Cloud Object Storage has been bringing the edge to the data center and the core enterprise for many years. We partner with customers to bring data from mobile and connected devices, where local processing can occur, closer to their core data centers using object storage.

IBM Cloud Object Storage brings data from the edge faster and more efficiently with geo-dispersed data protection. In an IDC report based on interviews with IBM Cloud Object Storage customers, organizations achieved a 255 percent return on their investment and broke even on that investment in an average of eight months.

Customers can start with less than 100 TB and grow to hundreds of petabytes or even exabyte-scale configurations. A tremendous amount of data is being created at the edge, and much of it needs to be stored securely in the core. IBM has multiple customers storing hundreds of petabytes of data from edge devices simply and efficiently as data requirements continue to grow. IBM Cloud Object Storage turns storage challenges into business value by simplifying and optimizing the infrastructure for edge data requirements.
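Because IBM Cloud Object Storage exposes an S3-compatible API, moving data captured at an edge site into a core bucket can be as simple as the sketch below. The endpoint URL, credentials, bucket and object names are all placeholders; this example uses the generic boto3 client with HMAC credentials rather than the IBM-specific SDK.

```python
# Minimal sketch: upload edge-collected data to IBM Cloud Object Storage
# via its S3-compatible API. Endpoint, credentials and names are placeholders.
import boto3

cos = boto3.client(
    "s3",
    endpoint_url="https://s3.us-south.cloud-object-storage.appdomain.cloud",
    aws_access_key_id="YOUR_HMAC_ACCESS_KEY",
    aws_secret_access_key="YOUR_HMAC_SECRET_KEY",
)

# Push a locally buffered sensor file from the edge site into the core bucket.
cos.upload_file(
    Filename="/var/edge/sensor-batch-0421.json",
    Bucket="edge-ingest",
    Key="site-042/sensor-batch-0421.json",
)
print("upload complete")
```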

IBM Spectrum Scale and IBM Cloud Object Storage are also available as key components of IBM Storage Suite for IBM Cloud Paks, a comprehensive container-ready solution. The suite delivers pre-tested reference architectures with data resources for file, block and object requirements. It can fast-track your journey to private cloud with cloud-native acceleration and help you build agile, secure persistent storage resources for your containers, as the sketch below illustrates.
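As a small illustration of what persistent storage for containers looks like in practice, the following sketch requests a PersistentVolumeClaim through the Kubernetes Python client. The storage class name is a placeholder; in a real deployment it would be supplied by whichever file, block or object storage you provision.

```python
# Hypothetical sketch: request container-persistent storage via a
# Kubernetes PersistentVolumeClaim. The storage class name is a
# placeholder supplied by your storage deployment.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],  # shared access for scale-out apps
        storage_class_name="example-file-storage-class",  # placeholder
        resources=client.V1ResourceRequirements(
            requests={"storage": "100Gi"}
        ),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```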

Source: ibm.com