Tuesday, 31 December 2019

Guard your virtual environment with IBM Spectrum Protect Plus

The server virtualization market is expected to grow to approximately 8 billion dollars by 2023. This would mean a compound annual growth rate (CAGR) of 7 percent between 2017 and 2023. Protection of these virtual environments is therefore a significant and critical component of an organization’s data protection strategy.

This growth in virtual environments necessitates that data protection be managed centrally, with better policies and reporting systems. However, in an environment consisting of heterogeneous applications, databases and operating systems, overall data protection can become a complex task.

In my day-to-day meetings with clients from the automobile, retail, financial and other industries, the most common requirements for data protection include simplified management, better recovery time, data re-use and the ability to efficiently protect data for varied requirements such as testing and development, DevOps, reporting, analytics and so on.

Data protection challenges for a major auto company


I recently worked with a major automobile company in India that has a large virtual machine environment. Management was looking for disk-based, agentless data protection software that could protect its virtual and physical environments, was easy to deploy, could be operated from a single window, delivered better recovery times and provided reporting and analytics.

The company has three data centers and was facing challenges with data protection management and disaster recovery. Backup agents had to be installed individually, which required expertise in each operating system, database and application. A dedicated infrastructure was required at each site to manage the virtual environment, along with several other tools to perform granular recovery as needed. The company was also using virtual tape libraries to replicate data to other sites.

The benefits of data protection and availability for virtual environments


For the automobile company in India, IBM Systems Lab Services proposed a solution that would replace its siloed and complex backup architecture with a simple IBM Spectrum Protect Plus deployment. The proposed strategy included storing short-term data on vSnap storage, which comes with IBM Spectrum Protect Plus, and storing long-term data in IBM Cloud Object Storage to reduce backup storage costs. The multisite, geo-dispersed configuration of IBM Cloud Object Storage could also help the auto company reduce its dependence on the replication previously handled by its virtual tape libraries.

Integrating IBM Spectrum Protect Plus with IBM Spectrum Protect and offloading data to IBM Cloud Object Storage was also proposed, so the client could retire more expensive backup hardware, such as virtual tape libraries, from its data centers. This was all part of a roadmap to transform its backup environment.

IBM Spectrum Protect Plus is easy to install, eliminating the need for advanced technical expertise to deploy the backup software. Its backup policy creation and scheduling features helped the auto company reduce its dependency on backup administrators. Its role-based access control (RBAC) feature enabled a clearer division of backup and restore responsibilities among VM admins, backup admins, database administrators and others. Being able to manage the data protection environment from a single place also let the auto company manage all three of its data centers from one location.


The company can now take advantage of Spectrum Protect Plus to recover data quickly during disaster scenarios and shorten its development cycle by creating clones from backup data in minutes or even seconds. Spectrum Protect Plus's single-file search and recovery functionality made the company's granular file recovery requirements easy to address.

One of its major challenges was improving the performance and manageability of database and application protection. Spectrum Protect Plus addressed this with its agentless backup of databases and applications running in physical and virtual environments. Spectrum Protect Plus also offers better reporting and analytics functionality, which allowed the client to send intuitive reports to the company's top management.

Sunday, 29 December 2019

Exploring AI Fairness for People with Disabilities

This year’s International Day of Persons with Disabilities emphasizes participation and leadership. In today’s fast-paced world, more and more decisions affecting participation and opportunities for leadership are automated. This includes selecting candidates for a job interview, approving loan applicants, or admitting students into college. There is a trend towards using artificial intelligence (AI) methods such as machine learning models to inform or even make these decisions. This raises important questions around how such systems can be designed to treat all people fairly, especially people who already face barriers to participation in society.


AI Fairness Silhouette

A Diverse Abilities Lens on AI Fairness


Machine learning finds patterns in data, and compares a new input against these learned patterns.  The potential of these models to encode bias is well-known. In response, researchers are beginning to explore what this means in the context of disability and neurodiversity. Mathematical methods for identifying and addressing bias are effective when a disadvantaged group can be clearly identified. However, in some contexts it is illegal to gather data relating to disabilities. Adding to the challenge, individuals may choose not to disclose a disability or other difference, but their data may still reflect their status. This can lead to biased treatment that is difficult to detect. We need new methods for handling potential hidden biases.
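To make the "clearly identified group" condition concrete, here is a minimal, hypothetical sketch in plain Python of one common check, the disparate impact ratio. The decisions, the protected-group flag and the numbers are all illustrative assumptions; as noted above, exactly this kind of group flag is often unavailable or undisclosed in the disability context, which is what makes the problem hard.

```python
# Minimal sketch of a group fairness check, assuming the protected-group
# labels are available. All names and data here are hypothetical.
import numpy as np

def selection_rate(predictions, group_mask):
    """Fraction of people in a group who receive the favorable outcome (1)."""
    return predictions[group_mask].mean()

def disparate_impact(predictions, protected_mask):
    """Ratio of selection rates: protected group vs. everyone else.
    Values far below 1.0 suggest the model disadvantages the protected group."""
    protected_rate = selection_rate(predictions, protected_mask)
    other_rate = selection_rate(predictions, ~protected_mask)
    return protected_rate / other_rate

# Hypothetical binary hiring-screen decisions for 10 applicants.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
# Hypothetical flag marking members of a disadvantaged group.
protected = np.array([True, True, False, False, True, False, True, False, False, False])

print(f"Disparate impact ratio: {disparate_impact(preds, protected):.2f}")
```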

Our diverse abilities, and combinations of abilities, pose a challenge to machine learning solutions that depend on recognizing common patterns. It's important to consider small groups that are not strongly represented in training data. Even more challenging, some individuals are so unique that their data does not look like anyone else's.

First Workshop on AI Fairness


To stimulate progress on this important topic, IBM sponsored two workshops on AI Fairness for People with Disabilities. The first workshop, held in 2018, gathered individuals with lived experience of disability, advocates and researchers. Participants identified important areas of opportunity and risk, such as employment, education, public safety and healthcare. That workshop resulted in a recently published report outlining practical steps toward accommodating people with diverse abilities throughout the AI development lifecycle: for example, reviewing proposed AI systems for potential impact and designing in ways to correct errors and raise fairness concerns. Perhaps the most important step is to include diverse communities in both development and testing. This should improve robustness and help develop algorithms that support inclusion.

ASSETS 2019 Workshop on AI Fairness


The second workshop was held at this year’s ACM SIGACCESS ASSETS Conference on Computers and Accessibility, and brought together thinkers from academia, industry, government, and non-profit groups.  The organizing team of accessibility researchers from industry and academia selected seventeen papers and posters. These represent the latest research on AI methods and fair treatment of people with disabilities in society. Alexandra Givens of Georgetown University kicked off the program with a keynote talk outlining the legal tools currently available in the United States to address algorithmic fairness for people with disabilities. Next, the speakers explored topics including: fairness in AI models as applied to disability groups, reflections on definitions of fairness and justice, and research directions to pursue.  Going forward, key topics in continuing these discussions are:

◉ The complex interplay between diversity, disclosure and bias.

◉ Approaches to gathering datasets that represent people with diverse abilities while protecting privacy.

◉ The intersection of ableism with racism and other forms of discrimination.

◉ Oversight of AI applications.

Saturday, 28 December 2019

AI, machine learning and deep learning: What’s the difference?


It’s not unusual today to see people talking about artificial intelligence (AI). It’s in the media, popular culture, advertising and more. When I was a kid in the 1980s, AI was depicted in Hollywood movies, but its real-world use was unimaginable given the state of technology at that time. While we don’t have robots or androids that can think like a person or are likely to take over the world, AI is a reality now, and to understand what we mean when we talk about AI today we have to go through a — quick, I promise — introduction on some important terms.

AI is…


Simply put, AI is anything capable of mimicking human behavior. From the simplest application — say, a talking doll or an automated telemarketing call — to more robust algorithms like the deep neural networks in IBM Watson, they’re all trying to mimic human behavior.

Today, AI is a term being applied broadly in the technology world to describe solutions that can learn on their own. These algorithms are capable of looking at vast amounts of data and finding trends in it, trends that unveil insights, insights that would be extremely hard for a human to find. However, AI algorithms can’t think like you and me. They are trained to perform very specialized tasks, whereas the human brain is a pretty generic thinking system.


Fig 1: Specialization of AI algorithms

Machine learning


Now we know that anything capable of mimicking human behavior is called AI. If we start to narrow down to the algorithms that can “think” and provide an answer or decision, we’re talking about a subset of AI called “machine learning.” Machine learning algorithms apply statistical methodologies to identify patterns in past human behavior and make decisions. They’re good at predicting, such as predicting if someone will default on a loan being requested, predicting your next online purchase and offering multiple products as a bundle, or predicting fraudulent behavior. They get better at their predictions every time they acquire new data. However, even though they can get better and better at predicting, they only explore data based on programmed data feature extraction; that is, they only look at data in the way we programmed them to do so. They don’t adapt on their own to look at data in a different way.
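As a rough illustration of that "programmed feature extraction," here is a minimal sketch of a classic machine learning model, assuming scikit-learn is installed. The loan features, labels and synthetic data are hypothetical; the point is that the model only ever sees the three columns we chose for it.

```python
# Minimal sketch of classic machine learning with hand-chosen features.
# The columns (income, debt ratio, late payments) and the synthetic data are
# hypothetical; we decide up front how the data is viewed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1_000
income = rng.normal(50_000, 15_000, n)
debt_ratio = rng.uniform(0, 1, n)
late_payments = rng.poisson(1.5, n)

# Hand-engineered feature matrix: the model looks at the data only this way.
X = np.column_stack([income, debt_ratio, late_payments])
# Hypothetical label: 1 = the customer defaulted on the loan.
y = (debt_ratio + 0.1 * late_payments - income / 200_000 + rng.normal(0, 0.2, n) > 0.7).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)
print(f"Held-out accuracy predicting loan default: {model.score(X_test, y_test):.2f}")
```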

Deep learning


Going a step narrower, we can look at the class of algorithms that can learn on their own — the “deep learning” algorithms. Deep learning essentially means that, when exposed to different situations or patterns of data, these algorithms adapt. That’s right, they can adapt on their own, uncovering features in data that we never specifically programmed them to find, and therefore we say they learn on their own. This behavior is what people are often describing when they talk about AI these days.
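For contrast with the previous example, here is a minimal sketch of a small deep learning model, assuming PyTorch is installed: instead of hand-picked features, it is fed raw inputs and its hidden layers adapt to whatever intermediate features reduce its error. The architecture, data and training settings are illustrative assumptions, not a recipe.

```python
# Minimal sketch of a network that learns its own features from raw inputs.
import torch
from torch import nn

torch.manual_seed(0)

# Hypothetical raw inputs: 256 samples of 64 "pixels" each, with labels 0/1.
X = torch.rand(256, 64)
y = (X[:, :32].sum(dim=1) > X[:, 32:].sum(dim=1)).long()

model = nn.Sequential(                   # hidden layers learn their own representation
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):                 # each pass nudges the weights
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()                      # find out how each weight contributed to the error
    optimizer.step()                     # adapt the weights accordingly

print(f"Final training loss: {loss.item():.3f}")
```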

Is deep learning a new capability?


Deep learning algorithms are not new. They use techniques developed decades ago. I’m a computer engineer, and I recall having programmed deep learning algorithms in one of my university classes. Back then, my AI programs had to run for days to give me an answer, and most of the time it wasn’t very precise. There are a few reasons why:

◉ Deep learning algorithms are based on neural networks, which require a lot of processing power to get trained — processing power that didn’t exist back when I was in school.
◉ Deep learning algorithms require lots of data to get trained, and I didn’t have that much data back then.

So, even though the concepts have been around, it wasn’t until recently that we could really put deep learning to good use.

What has changed since then? We now have the computing power to process neural networks much faster, and we have tons of data to use as training data to feed these neural networks.

Figure 2 depicts a little bit of history of the excitement around AI.


Fig 2: The excitement around AI began a long time ago

Hopefully now you have a clear understanding of some of the key terms circulating in discussions of AI and a good sense of how AI, machine learning and deep learning relate and differ. In my next post, I’ll do a deep dive into a framework you can follow for your AI efforts — called the data, training and inferencing (DTI) AI model. So please stay tuned.

Friday, 27 December 2019

Three ways to intelligently scale your IT systems

Digital transformation, with its new paradigms, presents new challenges to your IT infrastructure. To stay ahead of competitors, your organization must launch innovations and release upgrades in cycles measured in weeks if not days, while boldly embracing efficiency practices such as DevOps. Success in this environment requires an enterprise IT environment that is scalable and agile – able to swiftly accommodate growing and shifting business needs.

As an IT manager, you constantly face the challenge of scaling your systems. How can you quickly meet growing and fluctuating resource demands while remaining efficient, cost-effective and secure? Here are three ways to meet the systems scalability challenge head-on.

1. Know when to scale up instead of out

If you’re like many IT managers, this scenario may sound familiar. You run your systems of record on a distributed x86 architecture. You’re adding servers at a steady rate as usage of applications, storage and bandwidth steadily increases. As your data center fills up with servers, you contemplate how to accommodate further expansion. As your IT footprint grows and you can’t avoid increasing server sprawl despite using virtualization, you wonder if there’s a more efficient solution.

There often is. IBM studies have found that once an environment grows past a certain threshold of processing cores in the data center, it often becomes more cost efficient to scale up instead of out. Hundreds of workloads, each running on 50 or so processing cores, require significant hardware, storage space and staffing resources; these requirements create draining costs and inefficiencies. Such a configuration causes not only server sprawl across the data center, but sprawl at the rack level as racks fill with servers.

By consolidating your sprawling x86 architecture into a single server or a handful of scale-up servers, you can reduce total cost of ownership, simplify operations and free up valuable data center space. A mix of virtual scale-up and scale-out capabilities, in effect allowing for diagonal scaling, will enable your workloads to flexibly respond to business demand.

IBM studies have found, and clients who have embraced these studies have proven, that moving from a scale-out to a scale-up architecture can save enterprises up to 50 percent on server costs over a three-year period. As your system needs continue to grow, a scale-up enterprise server will expand with them while running at extremely high utilization rates and meeting all service-level agreements. Eventually a scale-up server will need to scale out too, but not before providing ample extra capacity for growth.

 2. Think strategically about scalability

In the IT systems world, scalability is often thought of primarily in terms of processing power. As enterprise resource demands increase, you scale your processing capabilities, typically through either the scale-out or scale-up model.

Scalability for enterprises, however, is much broader than adding servers. For instance, your operations and IT leaders are always driving toward increased efficiency by scaling processes and workloads across the enterprise. And your CSO may want to extend the encryption used to protect the most sensitive data to all systems-of-record data. As your systems scale, every aspect must scale with them, from service-level agreements to networking resources to data replication and management to the IT staff required to operate and administer those additional systems. Don't forget about the need to decommission resources, whether physical or virtual, as you outgrow them. Your IT systems should enable these aspects of scalability as well. Considering scalability in the strategic business sense will help you determine IT solutions that meet the needs of all enterprise stakeholders.

3. Meet scale demands with agile enterprise computing

An enterprise computing platform can drive efficiency and cost savings by helping you scale up instead of out. Yet the platform’s scalability and agility benefits go well beyond this. Research by Solitaire Interglobal Limited (SIL) has found that enterprise computing platforms provide superior agility to distributed architectures. New applications or systems can be deployed nearly three times faster on enterprise computing platforms than on competing platforms. This nimbleness allows you to stay ahead of competitors by more quickly launching innovations and upgrades. Also notably, enterprise platforms are 7.41 times more resilient than competing platforms. This means that these platforms can more effectively cope when resource demands drastically change.

Techcombank, a leading bank in Vietnam, has used an enterprise computing platform to meet scalability needs. As the Vietnamese banking industry grows rapidly, Techcombank is growing with it – with 30 percent more customers and 70 percent more online traffic annually. To support rapid business growth, Techcombank migrated its systems to an enterprise computing platform. The platform enables Techcombank to scale as demand grows while experiencing enhanced performance, reliability and security.

Thursday, 26 December 2019

Your onramp to IBM PowerAI

As offering managers on IBM’s PowerAI team, my colleagues and I assist worldwide clients, Business Partners, sellers and systems integrators daily to accelerate their journey to adopt machine learning (ML), deep learning (DL) and other AI applications, specifically based on IBM PowerAI and PowerAI Vision.

Thanks to our collective efforts, we have been able to establish an adoption roadmap for our PowerAI offering, which we'd like to pass along to our future clients and partners. This fine-tuned roadmap is based on years of experience and has proven useful in hundreds of client engagements. An additional benefit is that it distills the processes we recommend into a small number of high-value steps. These five steps, once adapted to each client's needs, will accelerate your journey to trying AI and assessing its benefits in your environment, with actions in place to reduce risk and cost:

1. Provide important background info: First, make sure all members of your staff involved in future AI projects are offered the means to learn more about AI, ML and DL. We recommend starting here; beyond that, feel free to leverage the vast amount of additional information that is publicly available.

2. Identify priorities: Next, working closely with a team of both business and technical leaders, identify the top-priority business problems or growth opportunities that your client or company views as candidates for AI, machine learning or deep learning solutions. During this phase of the adoption effort, it can be very beneficial to augment your current staff or in-house skill base by bringing in additional subject matter experts in AI/ML/DL, sourcing them from hardware and software vendors that offer AI solutions, consulting firms with AI practices, or systems integrators.

3. Select data sources: Identify the wide variety of data sources, including big data, that you can and will leverage when applying ML/DL algorithms to solve the business problems your team has identified.

4. Find the right software/hardware: Select and gain access to machine learning or deep learning software along with accelerated hardware so that you can start your AI project development, tests and proof of concept (POC) process. An effective POC journey will leverage your data along with your success criteria to help ensure that AI projects can move forward at full speed and expand rapidly at scale. Build and train the ML/DL models and then demonstrate where the AI POC effort was successful. Other areas may either need newer approaches or more work to yield the required incremental business benefits from AI.

5. Create the Production Environment: Plan out your production machine learning and/or deep learning deployment environment. Now would be a good time to establish a complete reference architecture for your production AI systems, and you can start identifying and planning to change any in-house business processes or workflows as a result of better-leveraged and newer AI algorithms, big data and the next generation of accelerated IT infrastructure.

Finally, as part of your onramp process, we recommend you learn more about IBM's PowerAI. A good starting point to begin your journey is here. Feel free to reach out to us for more information and for more detailed guidance. As a reference point, you may also want to take a moment to review the top reasons that hundreds of clients have already chosen to run IBM's PowerAI.


Wednesday, 25 December 2019

A future of powerful clouds


In a very good way, the future is filled with clouds.

In the realm of information technology, this statement is especially true. Already, the majority of organizations worldwide are taking advantage of more than one cloud provider. IBM calls this a “hybrid multicloud” environment – “hybrid” meaning both on- and off-premises resources are involved, and “multicloud” denoting more than one cloud provider.

Businesses, research facilities and government entities are rapidly moving to hybrid multicloud environments for some very compelling reasons. Customers and constituents are online and mobile. Substantial CapEx can be saved by leveraging cloud-based infrastructure. Driven by new microservices-based architectures, application development can be faster and less complex. Research datasets may be shared more easily. Many business applications are now only available from the cloud.

Container technologies are the foundation of microservices-based architectures and a key enabler of hybrid multicloud environments. Microservices are a development approach where large applications are built as a suite of modular components or services. Over 90 percent of surveyed enterprises are using or have plans to use microservices.



Containers enable applications to be packaged with everything needed to run identically in any environment. Designed to be very flexible, lightweight and portable, containers will be used to run applications in everything from traditional and cloud data centers, to cars, cruise ships, airport terminals and even gateways to the Internet of Things (IoT).

Container technologies offer many benefits. Because of their lower overhead, containers offer better application start-up performance. They provide near bare metal speeds so management operations (boot, reboot, stop, etc.) can be done in seconds — or even milliseconds — while typical virtual machine (VM) operations may take minutes to complete. And the benefits don’t stop there. Applications typically depend on numerous libraries for correct execution. Seemingly minor changes in library versions can result in applications failing, or even worse, providing inconsistent results. This can make moving applications from one system to another — or out on to the cloud — problematic.

Containers, on the other hand, can make it very easy to package and move an application from one system to another. Users can run the applications they need, where they need them, while administrators can stop worrying about library clashes or helping users get their applications working in specific environments.
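As a small illustration of that portability, here is a hedged sketch using the Docker SDK for Python. It assumes a local Docker daemon is running and the docker package is installed, and the image and command shown are merely examples; the output is the same wherever the container runs, because the image carries its own Python and libraries.

```python
# Minimal sketch of container portability, assuming Docker is running locally
# and the "docker" Python SDK is installed (pip install docker). The image and
# command are illustrative; the point is that the container ships its own
# runtime and libraries, so the host's library versions never matter.
import docker

client = docker.from_env()

# Run the same command inside a self-contained image; the output is identical
# on a laptop, a data-center host, or a cloud VM, because everything the
# program needs travels with the image.
output = client.containers.run(
    "python:3.11-slim",                       # image pinned to an exact runtime
    ["python", "-c", "import sys; print(sys.version)"],
    remove=True,                              # clean up the container afterwards
)
print(output.decode().strip())
```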

Nearly half of all enterprises are planning to start utilizing containers as soon as practical. In terms of use cases, the majority of these IT leaders say they will employ containers to build cloud-native applications. Nearly a third of surveyed organizations plan to use containers for cloud migrations and modernizing legacy applications. That suggests that beyond using them to build new microservices-based applications, containers are starting to play a critical role in migrating applications to the cloud.

But no one will be leveraging containers to build and manage enterprise hybrid multicloud environments without a powerful, purpose-built infrastructure.

IBM and Red Hat are two industry giants that have recognized the crucial role that IT infrastructure will play in enabling the container-driven multicloud architectures needed to support the ERP, database, big data and artificial intelligence (AI)-based applications that will power business and research far into the future.

Along with proven reliability and leading-edge functionality, two key ingredients of any effective infrastructure supporting and enabling multicloud environments are simplicity and automation. Red Hat OpenShift Container Platform and the many IBM Storage Solutions designed to support the Platform are purpose-engineered to automate and simplify the majority of management, monitoring and configuration tasks associated with the new multicloud environments. Thus, IT operations and application development staff can spend less time keeping the lights on and more time innovating.


Red Hat is the market leader in providing enterprise container platform software. The Red Hat OpenShift Container Platform is an enterprise-ready Kubernetes container platform with full-stack automated operations to manage hybrid cloud and multicloud deployments. The Platform includes an enterprise-grade Linux operating system plus container runtime, networking, monitoring, container registry, authentication and authorization solutions. These components are tested together for unified operations on a complete Kubernetes platform spanning virtually any cloud.  

IBM and Red Hat have been working together to develop and offer storage solutions that support and enhance OpenShift functionality. In fact, IBM was one of the first enterprise storage vendors on Red Hat’s OperatorHub. IBM Storage for Red Hat OpenShift solutions provide a comprehensive, validated set of tools, integrated systems and flexible architectures that enable enterprises to implement modern container-driven hybrid multicloud environments that can reduce IT costs and enhance business agility.

IBM Storage solutions are designed to address modern IT infrastructure requirements. They incorporate the latest technologies, including NVMe, high performance scalable file systems and intelligent volume mapping for container deployments. These solutions provide pre-tested and validated deployment and configuration blueprints designed to facilitate implementation and reduce deployment risks and costs.

Everything from best practices to configuration and deployment guidance is available to make IBM Storage solutions easier and faster to deploy. IBM Storage provides solutions for a very wide range of container-based IT environments, including Kubernetes, Red Hat OpenShift, and the new IBM Cloud Paks. IBM is continually designing, testing, and adding to the performance, functionality, and cost-efficiency of solutions such as IBM Spectrum Virtualize and IBM Spectrum Scale software, IBM FlashSystem and Elastic Storage Server data systems and IBM Cloud Object Storage.

To accelerate business agility and gain more value from the full spectrum of ERP, database, AI, and big data applications, organizations of all types and sizes are rapidly moving to hybrid multicloud environments. Container technologies are helping to drive this transformation. IBM Storage for Red Hat OpenShift automates and simplifies container-driven hybrid multicloud environments.

Tuesday, 24 December 2019

AI today: Data, training and inferencing


In my previous post, I discussed artificial intelligence, machine learning and deep learning and some of the terms used when discussing them. Today, I'll focus on how data, training and inferencing are key aspects of those solutions.

The large amounts of data available to organizations today have made possible many AI capabilities that once seemed like science fiction. In the IT industry, we’ve been talking for years about “big data” and the challenges businesses face in figuring out how to process and use all of their data. Most of it — around 80 percent — is unstructured, so traditional algorithms are not capable of analyzing it.

A few decades ago, researchers came up with neural networks, the deep learning algorithms that can unveil insights from data, sometimes insights we could never imagine. If we can run those algorithms in a feasible time frame, they can be used to analyze our data and uncover patterns in it, which might in turn aid in business decisions. These algorithms, however, are compute intensive.

Training neural networks


A deep learning algorithm is one that uses a neural network to solve a particular problem. A neural network is a type of AI algorithm that takes an input, has this input go through its network of neurons — called layers — and provides an output. The more layers of neurons it has, the deeper the network is. If the output is right, great. If the output is wrong, the algorithm learns it was wrong and “adapts” its neuron connections in such a way that, hopefully, the next time you provide that particular input it gives you the right answer.


Fig 1: Illustration of computer neural networks

This ability to retrain a neural network until it learns how to give you the right answer is an important aspect of cognitive computing. Neural networks learn from data they’re exposed to and rearrange the connection between the neurons.

The connections between the neurons are another important aspect, and the strength of each connection can vary (that is, the bond can be strong, weak or anywhere in between). So, when a neural network adapts itself, it's really adjusting the strength of the connections among its neurons so that next time it can provide a more accurate answer. To get a neural network to provide a good answer to a problem, these connections need to be adjusted through repeated, exhaustive training of the network (that is, by exposing it to data). There can be zillions of neurons involved, and adjusting their connections is a compute-intensive, matrix-based mathematical procedure.
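To show what "adjusting the connections" means mechanically, here is a bare-bones sketch in plain NumPy. The layer sizes, data and learning rate are illustrative assumptions, not a recipe; the weights are matrices, and each training step is matrix arithmetic that nudges them toward a smaller error.

```python
# Bare-bones sketch: the "connections" are weight matrices, and training is
# repeated matrix math that nudges them to reduce the error. All sizes, data
# and the learning rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 3))                          # 100 training inputs, 3 features each
y = (X.sum(axis=1) > 1.5).astype(float).reshape(-1, 1)

W1 = rng.normal(0, 0.5, (3, 8))                   # connections: input layer -> hidden layer
W2 = rng.normal(0, 0.5, (8, 1))                   # connections: hidden layer -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
learning_rate = 1.0

for step in range(5_000):
    hidden = sigmoid(X @ W1)                      # forward pass through the layers
    output = sigmoid(hidden @ W2)
    error = output - y                            # how wrong were the answers?

    # Backward pass: matrix math that says how to adjust each connection.
    delta_out = error * output * (1 - output)
    grad_W2 = hidden.T @ delta_out / len(X)
    delta_hidden = (delta_out @ W2.T) * hidden * (1 - hidden)
    grad_W1 = X.T @ delta_hidden / len(X)

    W2 -= learning_rate * grad_W2                 # "adapt" the connection strengths
    W1 -= learning_rate * grad_W1

print(f"Mean absolute error after training: {np.abs(error).mean():.3f}")
```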

We need data and compute power


Most organizations today, as we discussed, have tons of data that can be used to train these neural networks. But there’s still the problem of all of the massive and intensive math required to calculate the neuron connections during training. As powerful as today’s processors are, they can only perform so many math operations per second. A neural network with a zillion neurons trained over thousands of training iterations will still require a zillion thousand operations to be calculated. So now what?

Thanks to the advancements in industry (and I personally like to think that the gaming industry played a major role here), there’s a piece of hardware that’s excellent at handling matrix-based operations called the Graphics Processing Unit (GPU). GPUs can calculate virtually zillions of pixels in matrix-like operations in order to show high-quality graphics on a screen. And, as it turns out, the GPU can work on neural network math operations in the same way.

Please, allow me to introduce our top math student in the class: the GPU!


Fig 2: An NVIDIA SMX2 GPU module

A GPU is a piece of hardware capable of performing math computations over a huge amount of data at the same time. Each individual operation is slower than on a central processing unit (CPU), but give a GPU a ton of data to process and it works on it massively in parallel. Applying math operations to far more data at once beats the CPU's performance by a wide margin, so you get your answers faster.
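Here is a minimal sketch of that trade-off, assuming PyTorch is installed: the same large matrix multiplication is timed on the CPU and, if a CUDA GPU is present, on the GPU. The matrix size is an arbitrary illustration, and actual timings depend entirely on the hardware.

```python
# Minimal sketch of CPU vs. GPU matrix math, assuming PyTorch is installed.
import time
import torch

a = torch.rand(4_000, 4_000)
b = torch.rand(4_000, 4_000)

start = time.perf_counter()
_ = a @ b                                      # matrix multiply on the CPU
print(f"CPU:  {time.perf_counter() - start:.3f} s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()                   # make sure the copies have finished
    start = time.perf_counter()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()                   # wait for the GPU to finish before timing
    print(f"GPU:  {time.perf_counter() - start:.3f} s")
else:
    print("No CUDA GPU available; running on CPU only.")
```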

Big data and the GPU have provided the breakthroughs we needed to put neural networks to good practice. And that brings us to where we are with AI today. Organizations can now apply this combination to their business and uncover insights from their vast universe of data by training a neural network for that.

To successfully apply AI in your business, the first step is to make sure you have lots of data; a neural network performs poorly if trained with too little or inadequate data. The second step is to prepare the data. If you're creating a model capable of detecting malfunctioning insulators on power lines, you must provide it with data about working insulators and all types of malfunctioning ones. The third step is to train a neural network, which requires lots of computation power. Once the trained neural network performs satisfactorily, it can be put into production to do inferencing.

Inferencing


Inferencing is the term that describes the act of using a neural network to provide insights after it has been trained. Think of it like someone who studies (is trained) and then, after graduation, goes to work in a real-world scenario (inferencing). It takes years of study to become a doctor, just as it takes lots of processing power to train a neural network. But doctors don't take years to perform surgery on a patient, and, likewise, a trained neural network takes sub-seconds to provide an answer given real-world data. This is because the inferencing phase of a neural network-based solution doesn't require much processing power; it requires only a fraction of the processing power needed for training. As a consequence, you don't need a powerful piece of hardware to put a trained neural network into production. You can use a more modest server, called an inference server, whose only purpose is to execute a trained AI model.
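A minimal sketch of that inferencing step, assuming PyTorch: a network trained elsewhere is loaded on a modest machine and asked for a single answer. The architecture, the input and the commented-out weights file are hypothetical placeholders.

```python
# Minimal sketch of the inferencing side: no training, just answering.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
# In practice the trained weights would be loaded from the training system:
# model.load_state_dict(torch.load("trained_model.pt"))
model.eval()                                   # switch off training-only behavior

with torch.no_grad():                          # no gradients needed for inference
    sample = torch.rand(1, 64)                 # one new, real-world input
    scores = model(sample)
    prediction = scores.argmax(dim=1).item()

print(f"Predicted class: {prediction}")
```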

What the AI lifecycle looks like:


Deep learning projects have a peculiar lifecycle because of the way the training process works.



Fig 3: A deep learning project’s lifecycle

Organizations these days face the challenge of how to apply deep learning to analyze their data and obtain insights from it. They need enough data to train a neural network model, and that data has to be representative of the problem they're trying to solve; otherwise the results won't be accurate. They also need a robust IT infrastructure made up of GPU-rich clusters of servers on which to train their AI models. The training phase may go on for several iterations until the results are satisfactory and accurate. Once that happens, the trained neural network is put into production on much less powerful hardware. The data processed during the inferencing phase can be fed back into the neural network model to correct or enhance it according to the latest trends in newly acquired data. This process of training and retraining therefore happens iteratively over time. A neural network that's never retrained will age and can become inaccurate as new data arrives.

Sunday, 22 December 2019

Making the move to value-based health


A few years ago, the IBM Institute for Business Value (IBV) published a report that predicted the convergence of population health management and precision medicine into a new healthcare model we called Precision Health and Wellness. We believed a key component of that model would be a continued transition to outcome-based results and lower costs.

Fast forward to 2019, and our latest research found not only that those predictions had become real but that the speed of change had increased. Globally, healthcare systems are looking at how to maintain access, quality and efficiency. Emphasis has shifted from volume of services toward patient outcomes, efficiency, wellness and cost savings. And there is a recognition that a focus on “care” alone will not deliver the degree of outcome improvement and cost reduction needed by providers or payers. Instead, they will need to engage individuals, employers, communities and social organizations as key partners in the process.

By using collaboration models, shared information, and innovative technology solutions across these stakeholders, better outcomes can be achieved across the whole lifespan of the individual—not just in doctors’ offices and hospitals, but in their daily lives, homes, and communities. It is this extension of health and wellness beyond the traditional clinical environment that takes care to the next level – value-based health.


Value-based health entails keeping individuals healthy and well even when they are not receiving healthcare services. Engaging people and communities in health, identifying and addressing social determinants of health, and making sure community resources are available and accessible are cornerstones of value-based health.

In order to determine what is needed to transition from traditional value-based care towards value-based health, we spoke with a thousand healthcare executives in payer and provider organizations around the world.

Thursday, 21 November 2019

Accelerating data for NVIDIA GPUs


These days, most AI and big data workloads need more compute power and memory than one node can provide. As both the number of computing nodes and the horsepower of processors and GPUs increases, so does the demand for I/O bandwidth. What was once a computing challenge can now become an I/O challenge.


For those scaling up their AI workloads and teams, high-performance file systems are being deployed because they address that I/O challenge, delivering the bandwidth needed to keep the systems fed and busy.

This week, NVIDIA announced a new solution, Magnum IO, that complements the capabilities of leading-edge data management systems such as IBM Spectrum Scale and helps address AI and big data analytics I/O challenges.

NVIDIA Magnum IO is a collection of software APIs and libraries to optimize storage and network I/O performance in multi-GPU, multi-node processing environments. NVIDIA developed Magnum IO in close collaboration with storage industry leaders, including IBM. The NVIDIA Magnum IO innovative software stack includes several NVIDIA GPUDirect technologies (Peer-to-Peer, RDMA, Storage, and Video) and communications APIs (NCCL, OpenMPI, and UCX). NVIDIA​ GPUDirect Storage is a key feature of Magnum IO, enabling a direct path between GPU memory and storage to improve system throughput and latency, therefore enhancing GPU and CPU utilization.
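For contrast, here is a hedged sketch, written with ordinary NumPy and PyTorch rather than the Magnum IO APIs, of the conventional read path that GPUDirect Storage is designed to shortcut: data is staged in host (CPU) memory and then copied again to GPU memory. The file written here is just a stand-in so the example is self-contained.

```python
# Conventional read path: storage -> host memory -> GPU memory.
# GPUDirect Storage aims to remove the intermediate hop through host memory.
import numpy as np
import torch

# Create a small stand-in data file so the sketch is self-contained.
np.random.rand(1_000_000).astype(np.float32).tofile("training_shard.bin")

# Step 1: storage -> host memory (a bounce buffer in CPU RAM).
host_array = np.fromfile("training_shard.bin", dtype=np.float32)

# Step 2: host memory -> GPU memory (a second copy across the bus).
device = "cuda" if torch.cuda.is_available() else "cpu"
gpu_tensor = torch.from_numpy(host_array).to(device)

print(f"Loaded {gpu_tensor.numel():,} values onto {device} via host memory")
```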

NVIDIA Magnum IO is designed to be a powerful complement to the IBM Spectrum Storage family. IBM Spectrum Scale, for example, was developed from the beginning for very high-performance environments. It incorporates support for Direct Memory Access technologies. Now, NVIDIA Magnum IO  is extending I/O technologies to speed NVIDIA GPU I/O.

For technology solution providers such as IBM and NVIDIA, the key is to integrate processors, GPUs, and appropriate software stacks into a unified platform designed specifically for AI. NVIDIA is also a major player in this space – 90 percent of accelerator-based systems incorporate NVIDIA GPUs for computation.

Recently, IBM and NVIDIA have been working together to develop modern IT infrastructure solutions that can help power AI well into the future. The synergies created by the IBM and NVIDIA collaboration have already been demonstrated at the largest scales. Currently, the two most powerful supercomputers on the planet, Summit at Oak Ridge and Sierra at Lawrence Livermore National Labs, are built from IBM Power processors, NVIDIA GPUs and IBM Storage. A key to these installations is the fact that they were assembled using only commercially available components. Leveraging this crucial ingredient, IBM and our Business Partners offer a range of solutions, from IBM-supported versions of the complete stack to SuperPOD reference architectures featuring IBM Spectrum Scale and NVIDIA DGX systems.

Developing technology designed to increase data pipeline bandwidth and throughput is only part of the story. These solutions provide comprehensive reference architectures that incorporate a wide range of IBM Spectrum Storage family members, including IBM Cloud Object Storage for scalable data, IBM Spectrum Discover to manage and enhance metadata, and IBM Spectrum Protect to provide modern multicloud system security. The focus is on user productivity and support for the entire data pipeline.

The announcement of NVIDIA Magnum IO highlights the benefits of ecosystem collaboration in bringing innovation to AI. As enterprises move rapidly toward adopting AI, they can do so with the confidence and support of IBM Storage.


Magnum IO SC19: Accelerating data for NVIDIA GPUs with IBM Spectrum Scale

Source: ibm.com

Monday, 18 November 2019

My Strategy to Prepare for IBM Cognos Controller Developer (C2020-605) Certification Exam

About IBM Cognos Controller Developer Certification

IBM Cognos Controller ships with an integration component, Financial Analytics Publisher, that automates the process of moving data in near real time from IBM Cognos Controller into IBM Cognos TM1. Once the data is available in IBM Cognos TM1, it can be used as a data source for IBM Cognos BI for enterprise reporting purposes.

The IBM Cognos Controller data in IBM Cognos TM1 is refreshed in near real time through an incremental publishing process from the transactional Controller database.


The Financial Analytics Publisher component sits on top of IBM Cognos Controller and manages a temporary storage area before populating an IBM Cognos TM1 cube. Once configured, the IBM Cognos TM1 cube is continuously updated, and you can set how often the service runs.

From the IBM Cognos TM1 cube, the IBM Cognos Controller data can be accessed by several reporting tools, including the IBM Cognos studios.

IBM C2020-605 Exam Summary:

  • Name: IBM Certified Developer - Cognos 10 Controller
  • Code: C2020-605
  • Duration: 120 minutes
  • Exam Questions: 94
  • Passing Score: 65%
  • Exam Price: 200 USD

IBM C2020-605 Exam Topics:

  • Create Company Structures (5%)
  • Create Account Structures (12%)
  • Set up General Configuration (14%)
  • Enable Data Entry and Data Import (19%)
  • Create Journals and Closing Versions (5%)
  • Prepare for Currency Conversion (11%)
  • Configure the Control Tables (9%)
  • Eliminate and Reconcile Intercompany transactions and acquisitions (10%)
  • Consolidate a Group's Reported Values (5%)
  • Secure the Application and the Data (4%)
  • Create Reports to Analyze Data (6%)

Preparation Tips for IBM Cognos Controller Developer (C2020-605) Certification Exam

1. If You Cannot Do the Past Papers, Ask Someone for Help

Study groups work well, provided you don't treat them as a way to have other people do your studying for you.

You have to study a subject or attempt an exam paper by yourself first, then meet to compare and explain your answers. Don't work through the past exam questions as a group.


The temptation to let other people do the work is too strong otherwise. You need to learn to do it yourself.

2. Do Not Be Tired

If you have to stay up all night doing last-minute revision, you have already failed. It does not work. You end up so tired in the exam that you cannot work anything out. It might get you through the odd exam, but you would not be able to keep it up through the C2020-605 exam.

3. Eat Protein Before a Long Exam

An exam is almost as much a physical exercise as a race. Perhaps not quite as much, but you cannot ignore your body if you want your brain to work at its best. Filling it full of sugar or an energy drink just beforehand will work fine for the first hour or so, but by the end of a C2020-605 exam you will have run entirely out of energy. You need food that releases its energy slowly.

4. Get the Important Facts into Short-term Memory

In the last 24 hours, it is too late to try to learn anything new. What you can do is load some facts into short-term memory. This is the time to go through your notes, looking at the key-points sections. If you have not already done so as part of your revision (and you should have), write out a sheet with just the key facts.

5. Practice Tests

If you want to prepare with highly up-to-date questions, then I strongly suggest you start your preparation now with AnalyticsExam.com. It offers valid, authentic and verified practice questions to prepare you for the IBM Cognos 10 Controller Developer C2020-605 exam.

The IBM Cognos 10 Controller Developer C2020-605 practice exams are prepared by certified professionals. They offer real, comprehensive and verified exam questions, tested and organized to help you pass the C2020-605 exam on your first attempt with excellent grades.


Conclusion

Prepare thoroughly for the IBM Cognos 10 Controller Developer C2020-605 exam and you will give yourself the best chance of success in IT certification exams. The practice material described above is aligned with the topics you will see on the actual test.

It is a proven aid to success in the IBM Cognos 10 Controller Developer C2020-605 exam. So what are you waiting for? Go to AnalyticsExam.com and get your study material now.

Monday, 29 July 2019

The Evolving Role of the Data Architect – Lift Up Your Career

Data architects are generally senior-level professionals and are highly valued at large companies. A data architect is an individual who is responsible for designing, creating, expanding and managing an organization's data architecture.


Data architects describe how the data will be collected, utilized, integrated, and managed by various data entities and IT systems, as well as any applications using or processing that data in some way.

Data architects must be creative problem-solvers who use a wide range of programming tools to innovate and create new solutions for storing and managing data.

At larger organizations, data architects are more removed from the physical storage and implementation of the data. They may have a team of database administrators, data analysts and data modelers working for or alongside them.

Data Architect Duties:

A data architect may be needed to:


  • Collaborate with IT teams and management to devise a data strategy that addresses business demands.
  • Build an inventory of data required to complete the architecture.

  • Analyze new opportunities for data acquisition

  • Develop data models for database structures
  • Formulate an end-to-end vision of how data will flow through the organization
  • Design and support database development standards and models
  • Identify and evaluate current data management technologies
  • Address technical requirements (e.g., scalability, security, performance, data retrieval, reliability)
  • Design, construct, document and deploy database structures and applications (e.g., large relational databases)
  • Maintain a corporate repository of all data architecture artifacts and procedures
  • Integrate new systems with existing warehouse structures
  • Implement measures to ensure data accuracy and accessibility
  • Continually monitor, refine and report on the performance of data management systems

Abilities required to become a Data Architect:

Data architects are highly trained professionals who are fluent in a broad range of programming languages and other technologies, and who must be skilled communicators with keen business insight. Data architects must also have strong attention to detail, as mistakes in their designs or code can cost a business millions to fix.

Technical skills involved with being a data architect include strength in:

  • Data visualization and data migration
  • Applied math and statistics
  • RDBMSs (relational database management systems) and foundational database skills
  • Machine learning
  • Operating systems, including Linux, UNIX, Solaris, and MS-Windows
  • Database management system software, especially Microsoft SQL Server
  • Programming languages, especially Python and Java, as well as C/C++ and Perl
  • Backup/archival software

Successful data architects also have broader business skills. Though they must have depth and breadth of technical knowledge, data architects must also be inventive problem-solvers who can devise new solutions and adapt as technology evolves.

Because data architects are often senior members of a project, they must be able to effectively lead team members such as data engineers, data modelers and database administrators. They must also be able to communicate solutions to colleagues with non-technical backgrounds.

How to Become a Data Architect

1. Explore additional certifications and further learning.

  • There are many opportunities to develop your expertise and knowledge as a data architect through organizations such as IBM and Salesforce.
  • IBM Certified Data Architect – Big Data
  • This IBM professional certification program requires applicants to demonstrate a wide range of skills, from cluster management and data replication to data lineage.

2. Develop and improve your professional and business abilities, from data scoping to analytical problem-solving.

Technical Skills for Data Architects:

  • Application server software
  • Development environment software
  • Data mining
  • Database management system software
  • User interface and query software
  • Backup/archival software
  • UNIX, Linux, Solaris, and MS-Windows
  • Python, C/C++, Java, Perl
  • Data visualization
  • Machine learning

Business Skills for Data Architects:

Analytical Problem-Solving: Approaching high-level data problems with a clear eye on what is necessary; employing the right tools and techniques to make the best use of time and human resources.

Industry Knowledge: Knowing how your chosen industry functions and how data is collected, analyzed and used; staying adaptable in the face of evolving data requirements.

Effective Communication: Listening carefully to management, data analysts and the wider team to arrive at the best data design; explaining complex ideas to non-technical colleagues.

Team Leadership: Effectively directing and mentoring a team of data modelers, data engineers, database administrators, and junior architects.

To become a data architect, you should begin with a bachelor’s degree in computer science, computer engineering or a related field. Coursework should include data programming and management, big data technologies, technology architectures and systems analysis. For senior positions, a master’s degree is usually preferred.

Conclusion:

Data architects are usually skilled at logical data modeling, physical data modeling, data policy development, data strategy, data warehousing and data query languages, and at identifying and selecting the systems best suited for data storage, retrieval and management.

Tuesday, 2 April 2019

Automate disaster recovery using IBM VM Recovery Manager

Business continuity is a top priority for every enterprise. And, at the foundation, it’s all about having a solid plan in place to deal with disruptions and potential threats.

If you’re an IT planner, you know that data protection and disaster recovery (DR) — the aspects of business continuity that are most relevant to IT professionals — can be major concerns. An outage of one or more of your mission-critical systems could cause significant business downtime, including loss of revenue and irreparable damage to long-term customer relationships.

So how can you stay ahead of the curve?

Planning ahead for disaster recovery: Five key questions


There are five key questions on the mind of technology planners when considering a disaster recovery solution:

1. Are there any tools that can automate disaster recovery?
2. How can we keep the remote location updated with the current state of the primary location?
3. Which methodology should we use?
4. How can we reduce the costs of redundant systems and licenses?
5. What’s the best way to test and verify disaster recovery site readiness?

IBM VM Recovery Manager can answer these questions. It’s an easy-to-use, automated and cost-effective disaster recovery solution for applications hosted on IBM Power Systems. In this blog post, I’ll illustrate how IBM Systems Lab Services helped an enterprise client address DR concerns for its SAP HANA landscape using IBM VM Recovery Manager.

How IBM VM Recovery Manager can help


IBM Systems Lab Services recently worked with a client that’s a multinational SAP consulting and hosting service provider, hosting SAP HANA and other SAP landscapes on IBM Power Systems servers for its customers. The client was looking for an easy-to-use, automated solution for disaster recovery.

Initially, the client considered HANA System Replication for its database, but with this option, similar replication would not be available for the SAP Central Services (SCS) and SAP Application Server (AS) components. Other solutions would therefore have to be used, creating a more complex environment to manage.

Storage replication was another option considered for replicating all critical data to a remote site. However, managing storage replication during a DR operation is highly complex, involves manual processes, is more time consuming and requires additional specialist training for staff.

Both of these solutions require duplicate instances in the DR site, demand advanced skills, require further resources and come with higher license costs. On top of that, neither provides a DR rehearsal capability while production is running.

IBM Systems Lab Services conducted a proof of concept to demonstrate the benefits of IBM VM Recovery Manager for disaster recovery, including how it would address the client’s specific concerns.

VM Recovery Manager:

◈ Moves SAP HANA databases and other SAP instances to the disaster recovery site with a single command for planned or unplanned site movement. This relieves the client of the complexities of server and storage management during a DR operation and spares them the need for additional staff training.
◈ Validates DR site readiness with its DR rehearsal capability, thereby giving the client confidence about its DR readiness.
◈ Offers other features such as monitoring and automatic updating of VM configuration changes (processor, memory, disk additions and so forth), capabilities that remove administrative overhead from the staff.

In the end, the client was very happy with the product demonstration and decided to implement VM Recovery Manager as its DR solution — with IBM Systems Lab Services there every step of the way from design to implementation. From there, the Lab Services team provided knowledge transfer and enablement for the client’s team to manage the solution covering all of its SAP systems, including databases and applications.

The following pictures illustrate VM Recovery Manager in normal operation and after a site recovery.

Picture 1: Production VMs running on primary site

Picture 2: Production VMs moved to secondary site and running after a DR move