Tuesday 31 December 2019

Guard your virtual environment with IBM Spectrum Protect Plus

The server virtualization market is expected to grow to approximately 8 billion dollars by 2023, a compound annual growth rate (CAGR) of 7 percent between 2017 and 2023. Protecting these virtual environments is therefore a critical component of an organization’s data protection strategy.

This growth in virtual environments necessitates that data protection be managed centrally, with better policies and reporting systems. However, in an environment consisting of heterogeneous applications, databases and operating systems, overall data protection can become a complex task.

In my day-to-day meetings with clients from the automobile, retail, financial and other industries, the most common requirements for data protection include simplified management, better recovery time, data re-use and the ability to efficiently protect data for varied requirements such as testing and development, DevOps, reporting, analytics and so on.

Data protection challenges for a major auto company


I recently worked with a major automobile company in India that has a large virtual machine estate. Management was looking for disk-based, agentless data protection software that could protect its virtual and physical environments, was easy to deploy, could be managed from a single console with faster recovery times, and provided reporting and analytics.

In its current environment, the company has three data centers and was facing challenges with data protection management and disaster recovery. Backup agents had to be installed individually, which required expertise in each operating system, database and application. A dedicated infrastructure was required at each site to manage the virtual environment, along with several other tools to perform granular recovery as needed. The company also had virtual tape libraries replicating data to other sites.

The benefits of data protection and availability for virtual environments


For the automobile company in India, IBM Systems Lab Services proposed a solution that would replace its siloed and complex backup architecture with a simple IBM Spectrum Protect Plus deployment. The proposed strategy included storing short-term data on vSnap storage, which comes with IBM Spectrum Protect Plus, and storing long-term data on IBM Cloud Object Storage to reduce backup storage costs. The multisite, geo-dispersed configuration of IBM Cloud Object Storage could also help the auto company reduce its dependence on the replication it had previously performed with virtual tape libraries.

Integrating IBM Spectrum Protect Plus with IBM Spectrum Protect and offloading data to IBM Cloud Object Storage were also proposed so the client could retire more expensive backup hardware, such as virtual tape libraries, from its data centers. This was all part of a roadmap to transform its backup environment.

IBM Spectrum Protect Plus is easy to install, eliminating the need for advanced technical expertise to deploy backup software. Its backup policy creation and scheduling features helped the auto company reduce its dependency on backup administrators. Its role-based access control (RBAC) feature enabled a clearer division of backup and restore responsibilities among VM admins, backup admins, database administrators and others. The ability to manage the data protection environment from a single place also let the company oversee all three of its data centers from one location.


The company can now use Spectrum Protect Plus to recover data quickly during disaster scenarios and shorten its development cycle by creating clones from backup data in minutes or even seconds. Spectrum Protect Plus’s single-file search and recovery functionality made the company’s need for granular file recovery easy to address.

One of the company’s major challenges was improving performance and managing the protection of databases and applications. Spectrum Protect Plus resolved this with its agentless backup of databases and applications running in both physical and virtual environments. Spectrum Protect Plus also offers strong reporting and analytics functionality, which allowed the client to send intuitive reports to top management.

Sunday 29 December 2019

Exploring AI Fairness for People with Disabilities

This year’s International Day of Persons with Disabilities emphasizes participation and leadership. In today’s fast-paced world, more and more decisions affecting participation and opportunities for leadership are automated. This includes selecting candidates for a job interview, approving loan applicants, or admitting students into college. There is a trend towards using artificial intelligence (AI) methods such as machine learning models to inform or even make these decisions. This raises important questions around how such systems can be designed to treat all people fairly, especially people who already face barriers to participation in society.


AI Fairness Silhouette

A Diverse Abilities Lens on AI Fairness


Machine learning finds patterns in data, and compares a new input against these learned patterns.  The potential of these models to encode bias is well-known. In response, researchers are beginning to explore what this means in the context of disability and neurodiversity. Mathematical methods for identifying and addressing bias are effective when a disadvantaged group can be clearly identified. However, in some contexts it is illegal to gather data relating to disabilities. Adding to the challenge, individuals may choose not to disclose a disability or other difference, but their data may still reflect their status. This can lead to biased treatment that is difficult to detect. We need new methods for handling potential hidden biases.
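
As a concrete illustration of why these methods depend on knowing group membership, here is a minimal Python sketch that computes one simple group-fairness measure, the disparate impact ratio, for a toy hiring dataset. The column names, the data and the roughly 0.8 rule-of-thumb threshold are assumptions for illustration only; the key point is that the calculation needs a column identifying the disadvantaged group, which disclosure and privacy constraints may make unavailable.

    # Illustrative sketch only: measuring disparate impact when group
    # membership IS recorded. Column names and data are made up.
    import pandas as pd

    def disparate_impact(df, group_col, outcome_col, protected_value):
        """Ratio of favorable-outcome rates: protected group vs. everyone else."""
        protected = df[df[group_col] == protected_value]
        others = df[df[group_col] != protected_value]
        return protected[outcome_col].mean() / others[outcome_col].mean()

    # Toy data: 1 = invited to interview, 0 = not invited.
    applicants = pd.DataFrame({
        "disability_disclosed": [True, True, False, False, False, True, False, False],
        "interview":            [0,    1,    1,     1,     0,     0,    1,     1],
    })

    ratio = disparate_impact(applicants, "disability_disclosed", "interview", True)
    print(f"Disparate impact ratio: {ratio:.2f}")
    # A common rule of thumb flags ratios well below 0.8 as worth investigating.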

Our diversity of abilities, and combinations of abilities, pose a challenge to machine learning solutions that depend on recognizing common patterns. It’s important to consider small groups that are not strongly represented in training data. Even more challenging, some individuals are so unique that their data does not look like anyone else’s.

First Workshop on AI Fairness


To stimulate progress on this important topic, IBM sponsored two workshops on AI Fairness for People with Disabilities. The first workshop, held in 2018, gathered individuals with lived experience of disability, advocates and researchers. Participants identified important areas of opportunity and risk, such as employment, education, public safety and healthcare. That workshop resulted in a recently published report outlining practical steps toward accommodating people with diverse abilities throughout the AI development lifecycle: for example, reviewing proposed AI systems for potential impact, and designing in ways to correct errors and raise fairness concerns. Perhaps the most important step is to include diverse communities in both development and testing. This should improve robustness and help develop algorithms that support inclusion.

ASSETS 2019 Workshop on AI Fairness


The second workshop was held at this year’s ACM SIGACCESS ASSETS Conference on Computers and Accessibility, and brought together thinkers from academia, industry, government, and non-profit groups.  The organizing team of accessibility researchers from industry and academia selected seventeen papers and posters. These represent the latest research on AI methods and fair treatment of people with disabilities in society. Alexandra Givens of Georgetown University kicked off the program with a keynote talk outlining the legal tools currently available in the United States to address algorithmic fairness for people with disabilities. Next, the speakers explored topics including: fairness in AI models as applied to disability groups, reflections on definitions of fairness and justice, and research directions to pursue.  Going forward, key topics in continuing these discussions are:

◉ The complex interplay between diversity, disclosure and bias.

◉ Approaches to gathering datasets that represent people with diverse abilities while protecting privacy.

◉ The intersection of ableism with racism and other forms of discrimination.

◉ Oversight of AI applications.

Saturday 28 December 2019

AI, machine learning and deep learning: What’s the difference?


It’s not unusual today to see people talking about artificial intelligence (AI). It’s in the media, popular culture, advertising and more. When I was a kid in the 1980s, AI was depicted in Hollywood movies, but its real-world use was unimaginable given the state of technology at that time. While we don’t have robots or androids that can think like a person or are likely to take over the world, AI is a reality now, and to understand what we mean when we talk about AI today we have to go through a — quick, I promise — introduction to some important terms.

AI is…


Simply put, AI is anything capable of mimicking human behavior. From the simplest application — say, a talking doll or an automated telemarketing call — to more robust algorithms like the deep neural networks in IBM Watson, they’re all trying to mimic human behavior.

Today, AI is a term being applied broadly in the technology world to describe solutions that can learn on their own. These algorithms are capable of looking at vast amounts of data and finding trends in it, trends that unveil insights, insights that would be extremely hard for a human to find. However, AI algorithms can’t think like you and me. They are trained to perform very specialized tasks, whereas the human brain is a pretty generic thinking system.


Fig 1: Specialization of AI algorithms

Machine learning


Now we know that anything capable of mimicking human behavior is called AI. If we narrow down to the algorithms that can “think” and provide an answer or decision, we’re talking about a subset of AI called “machine learning.” Machine learning algorithms apply statistical methodologies to identify patterns in past human behavior and make decisions. They’re good at predicting: predicting whether someone will default on a requested loan, predicting your next online purchase and offering multiple products as a bundle, or predicting fraudulent behavior. They get better at their predictions every time they acquire new data. However, even though they can get better and better at predicting, they only explore data based on programmed feature extraction; that is, they only look at data in the way we programmed them to. They don’t adapt on their own to look at data in a different way.
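
To make “programmed feature extraction” concrete, here is a minimal sketch (scikit-learn, with made-up data and feature names) of a classic machine learning setup: we decide in advance that income, debt ratio and years employed are the features, and the model only ever looks at the data through those features.

    # Minimal illustration of classic machine learning: the features are
    # chosen by us, and the model predicts loan default from them only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hand-picked features: [annual_income_k, debt_ratio, years_employed]
    X = np.array([
        [35, 0.60, 1],
        [85, 0.20, 8],
        [45, 0.55, 2],
        [120, 0.10, 12],
        [28, 0.70, 0],
        [64, 0.30, 5],
    ])
    y = np.array([1, 0, 1, 0, 1, 0])  # 1 = defaulted, 0 = repaid

    model = LogisticRegression().fit(X, y)
    new_applicant = np.array([[50, 0.45, 3]])
    print("Probability of default:", model.predict_proba(new_applicant)[0, 1])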

Deep learning


Going a step narrower, we can look at the class of algorithms that can learn on their own — the “deep learning” algorithms. Deep learning essentially means that, when exposed to different situations or patterns of data, these algorithms adapt. That’s right, they can adapt on their own, uncovering features in data that we never specifically programmed them to find, and therefore we say they learn on their own. This behavior is what people are often describing when they talk about AI these days.
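
By contrast, a deep learning model is handed raw data and builds its own intermediate representations in its hidden layers. The sketch below (PyTorch, purely illustrative) defines a small multi-layer network that takes raw 28x28 pixel images and outputs class scores; nowhere do we tell it which pixel patterns to look for.

    # Illustrative sketch: a small deep network that learns its own features
    # from raw pixels during training, rather than using ones we programmed.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Flatten(),                 # raw 28x28 pixels in...
        nn.Linear(28 * 28, 128),      # ...hidden layers learn their own
        nn.ReLU(),                    #    internal features during training...
        nn.Linear(128, 64),
        nn.ReLU(),
        nn.Linear(64, 10),            # ...10 class scores out
    )

    images = torch.rand(32, 1, 28, 28)    # a fake batch of 32 images
    print(model(images).shape)            # torch.Size([32, 10])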

Is deep learning a new capability?


Deep learning algorithms are not new. They use techniques developed decades ago. I’m a computer engineer, and I recall programming deep learning algorithms in one of my university classes. Back then, my AI programs had to run for days to give me an answer, and most of the time the answer wasn’t very precise. There are a few reasons why:

◉ Deep learning algorithms are based on neural networks, which require a lot of processing power to get trained — processing power that didn’t exist back when I was in school.
◉ Deep learning algorithms require lots of data to get trained, and I didn’t have that much data back then.

So, even though the concepts have been around, it wasn’t until recently that we could really put deep learning to good use.

What has changed since then? We now have the computing power to process neural networks much faster, and we have tons of data to use as training data to feed these neural networks.

Figure 2 depicts a little bit of history of the excitement around AI.


Fig 2: The excitement around AI began a long time ago

Hopefully now you have a clear understanding of some of the key terms circulating in discussions of AI and a good sense of how AI, machine learning and deep learning relate and differ. In my next post, I’ll do a deep dive into a framework you can follow for your AI efforts — called the data, training and inferencing (DTI) AI model. So please stay tuned.

Friday 27 December 2019

Three ways to intelligently scale your IT systems

Digital transformation, with its new paradigms, presents new challenges to your IT infrastructure. To stay ahead of competitors, your organization must launch innovations and release upgrades in cycles measured in weeks if not days, while boldly embracing efficiency practices such as DevOps. Success in this environment requires an enterprise IT environment that is scalable and agile – able to swiftly accommodate growing and shifting business needs.

As an IT manager, you constantly face the challenge of scaling your systems. How can you quickly meet growing and fluctuating resource demands while remaining efficient, cost-effective and secure? Here are three ways to meet the systems scalability challenge head-on.

1. Know when to scale up instead of out

If you’re like many IT managers, this scenario may sound familiar. You run your systems of record on a distributed x86 architecture. You’re adding servers at a steady rate as usage of applications, storage and bandwidth steadily increases. As your data center fills up with servers, you contemplate how to accommodate further expansion. As your IT footprint grows and you can’t avoid increasing server sprawl despite using virtualization, you wonder if there’s a more efficient solution.

There often is. IBM studies have found that once workloads reach a certain threshold of processing cores in a data center, it often becomes more cost-efficient to scale up instead of out. Hundreds of workloads, each running on 50 or so processing cores, require significant hardware, storage space and staffing resources; these requirements create draining costs and inefficiencies. Such a configuration causes not only server sprawl across the data center, but also sprawl at the rack level as racks fill with servers.

By consolidating your sprawling x86 architecture into a single server or a handful of scale-up servers, you can reduce total cost of ownership, simplify operations and free up valuable data center space. A mix of virtual scale-up and scale-out capabilities, in effect allowing for diagonal scaling, will enable your workloads to flexibly respond to business demand.

IBM studies have found, and clients who have embraced these studies have proven, that moving from a scale-out to a scale-up architecture can save enterprises up to 50 percent on server costs over a three-year period. As your system needs continue to grow, a scale-up enterprise server will expand with them while running at extremely high utilization rates and meeting all service-level agreements. Eventually a scale-up server will need to scale out too, but not before providing ample extra capacity for growth.

2. Think strategically about scalability

In the IT systems world, scalability is often thought of primarily in terms of processing power. As enterprise resource demands increase, you scale your processing capabilities, typically through either the scale-out or scale-up model.

Scalability for enterprises, however, is much broader than adding servers. For instance, your operations and IT leaders are always driving toward increased efficiency by scaling processes and workloads across the enterprise. And your CSO may want to extend the encryption used to protect the most sensitive data to all systems-of-record data. As your systems scale, every aspect must scale with them, from service-level agreements to networking resources to data replication and management to the IT staff required to operate and administer those additional systems. Don’t forget the need to decommission resources, whether physical or virtual, as you outgrow them. Your IT systems should enable these aspects of scalability as well. Considering scalability in the strategic business sense will help you determine IT solutions that meet the needs of all enterprise stakeholders.

3. Meet scale demands with agile enterprise computing

An enterprise computing platform can drive efficiency and cost savings by helping you scale up instead of out. Yet the platform’s scalability and agility benefits go well beyond this. Research by Solitaire Interglobal Limited (SIL) has found that enterprise computing platforms provide superior agility to distributed architectures. New applications or systems can be deployed nearly three times faster on enterprise computing platforms than on competing platforms. This nimbleness allows you to stay ahead of competitors by more quickly launching innovations and upgrades. Also notably, enterprise platforms are 7.41 times more resilient than competing platforms. This means that these platforms can more effectively cope when resource demands drastically change.

Techcombank, a leading bank in Vietnam, has used an enterprise computing platform to meet scalability needs. As the Vietnamese banking industry grows rapidly, Techcombank is growing with it – with 30 percent more customers and 70 percent more online traffic annually. To support rapid business growth, Techcombank migrated its systems to an enterprise computing platform. The platform enables Techcombank to scale as demand grows while experiencing enhanced performance, reliability and security.

Thursday 26 December 2019

Your onramp to IBM PowerAI

As offering managers on IBM’s PowerAI team, my colleagues and I assist worldwide clients, Business Partners, sellers and systems integrators daily to accelerate their journey to adopt machine learning (ML), deep learning (DL) and other AI applications, specifically based on IBM PowerAI and PowerAI Vision.

Thanks to our collective efforts, we have been able to establish an adoption roadmap for our PowerAI offering, which we’d like to pass along to our future clients and partners. This fine-tuned roadmap is based on years of experience and has proven useful in hundreds of client engagements. An additional benefit is that it summarizes the processes we recommend into a small number of high-value steps. These five steps, once adapted to each client’s needs, will accelerate your journey to trying AI and assessing its benefits in your environment, with actions in place to reduce risk and costs:

1. Provide important background info: First, make sure all members of your staff involved in future AI projects are offered the means to learn more about AI, ML and DL. We recommend starting here; however, feel free to leverage the vast amount of additional information that is publicly available.

2. Identify priorities: Next, working closely across a team of both business and technical leaders, identify the top-priority business problems or growth opportunities that your client or company views as candidates for AI, machine learning or deep learning solutions. During this phase, it may be very beneficial to augment your current staff or in-house skill base by bringing in additional subject matter experts in AI/ML/DL, sourcing them from hardware and software vendors that offer AI solutions, consulting firms with AI practices, or systems integrators.

3. Select data sources: Identify the wide variety of data sources, including “Big Data”, that you can and will leverage when applying ML/DL algorithms to the business problems your team has identified.

4. Find the right software/hardware: Select and gain access to machine learning or deep learning software along with accelerated hardware so that you can start your AI project development, tests and proof of concept (POC) process. An effective POC journey will leverage your data along with your success criteria to help ensure that AI projects can move forward at full speed and expand rapidly at scale. Build and train the ML/DL models and then demonstrate where the AI POC effort was successful. Other areas may either need newer approaches or more work to yield the required incremental business benefits from AI.

5. Create the production environment: Plan out your production machine learning and/or deep learning deployment environment. Now is a good time to establish a complete reference architecture for your production AI systems, and to start identifying and planning changes to in-house business processes or workflows that will result from newer, better-leveraged AI algorithms, big data and the next generation of accelerated IT infrastructure.

Finally, as part of your onramp process, we recommend you learn more about IBM’s PowerAI. A good starting point to begin your journey is here. Feel free to reach out to us for more information and more detailed guidance. As a reference point, you may also want to take a moment to review the top reasons that hundreds of clients have already chosen to run IBM’s PowerAI.


Wednesday 25 December 2019

A future of powerful clouds


In a very good way, the future is filled with clouds.

In the realm of information technology, this statement is especially true. Already, the majority of organizations worldwide are taking advantage of more than one cloud provider. IBM calls this a “hybrid multicloud” environment – “hybrid” meaning both on- and off-premises resources are involved, and “multicloud” denoting more than one cloud provider.

Businesses, research facilities and government entities are rapidly moving to hybrid multicloud environments for some very compelling reasons. Customers and constituents are online and mobile. Substantial CapEx can be saved by leveraging cloud-based infrastructure. Driven by new microservices-based architectures, application development can be faster and less complex. Research datasets may be shared more easily. Many business applications are now only available from the cloud.

Container technologies are the foundation of microservices-based architectures and a key enabler of hybrid multicloud environments. Microservices are a development approach where large applications are built as a suite of modular components or services. Over 90 percent of surveyed enterprises are using or have plans to use microservices.



Containers enable applications to be packaged with everything needed to run identically in any environment. Designed to be very flexible, lightweight and portable, containers will be used to run applications in everything from traditional and cloud data centers, to cars, cruise ships, airport terminals and even gateways to the Internet of Things (IoT).

Container technologies offer many benefits. Because of their lower overhead, containers offer better application start-up performance. They provide near bare metal speeds so management operations (boot, reboot, stop, etc.) can be done in seconds — or even milliseconds — while typical virtual machine (VM) operations may take minutes to complete. And the benefits don’t stop there. Applications typically depend on numerous libraries for correct execution. Seemingly minor changes in library versions can result in applications failing, or even worse, providing inconsistent results. This can make moving applications from one system to another — or out on to the cloud — problematic.

Containers, on the other hand, can make it very easy to package and move an application from one system to another. Users can run the applications they need, where they need them, while administrators can stop worrying about library clashes or helping users get their applications working in specific environments.

Nearly half of all enterprises are planning to start utilizing containers as soon as practical. In terms of use cases, the majority of these IT leaders say they will employ containers to build cloud-native applications. Nearly a third of surveyed organizations plan to use containers for cloud migrations and modernizing legacy applications. That suggests that beyond using them to build new microservices-based applications, containers are starting to play a critical role in migrating applications to the cloud.

But no one will be leveraging containers to build and manage enterprise hybrid multicloud environments without a powerful, purpose-built infrastructure.

IBM and Red Hat are two industry giants that have recognized the crucial role that IT infrastructure will play in enabling the container-driven multicloud architectures needed to support the ERP, database, big data and artificial intelligence (AI)-based applications that will power business and research far into the future.

Along with proven reliability and leading-edge functionality, two key ingredients of any effective infrastructure supporting and enabling multicloud environments are simplicity and automation. Red Hat OpenShift Container Platform and the many IBM Storage Solutions designed to support the Platform are purpose-engineered to automate and simplify the majority of management, monitoring and configuration tasks associated with the new multicloud environments. Thus, IT operations and application development staff can spend less time keeping the lights on and more time innovating.


Red Hat is the market leader in providing enterprise container platform software. The Red Hat OpenShift Container Platform is an enterprise-ready Kubernetes container platform with full-stack automated operations to manage hybrid cloud and multicloud deployments. The Platform includes an enterprise-grade Linux operating system plus container runtime, networking, monitoring, container registry, authentication and authorization solutions. These components are tested together for unified operations on a complete Kubernetes platform spanning virtually any cloud.  

IBM and Red Hat have been working together to develop and offer storage solutions that support and enhance OpenShift functionality. In fact, IBM was one of the first enterprise storage vendors on Red Hat’s OperatorHub. IBM Storage for Red Hat OpenShift solutions provide a comprehensive, validated set of tools, integrated systems and flexible architectures that enable enterprises to implement modern container-driven hybrid multicloud environments that can reduce IT costs and enhance business agility.

IBM Storage solutions are designed to address modern IT infrastructure requirements. They incorporate the latest technologies, including NVMe, high performance scalable file systems and intelligent volume mapping for container deployments. These solutions provide pre-tested and validated deployment and configuration blueprints designed to facilitate implementation and reduce deployment risks and costs.

Everything from best practices to configuration and deployment guidance is available to make IBM Storage solutions easier and faster to deploy. IBM Storage provides solutions for a very wide range of container-based IT environments, including Kubernetes, Red Hat OpenShift, and the new IBM Cloud Paks. IBM is continually designing, testing, and adding to the performance, functionality, and cost-efficiency of solutions such as IBM Spectrum Virtualize and IBM Spectrum Scale software, IBM FlashSystem and Elastic Storage Server data systems and IBM Cloud Object Storage.

To accelerate business agility and gain more value from the full spectrum of ERP, database, AI, and big data applications, organizations of all types and sizes are rapidly moving to hybrid multicloud environments. Container technologies are helping to drive this transformation. IBM Storage for Red Hat OpenShift automates and simplifies container-driven hybrid multicloud environments.

Tuesday 24 December 2019

AI today: Data, training and inferencing


Previously, I discussed artificial intelligence, machine learning and deep learning and some of the terms used when discussing them. Today, I’ll focus on how data, training and inferencing are key aspects of those solutions.

The large amounts of data available to organizations today have made possible many AI capabilities that once seemed like science fiction. In the IT industry, we’ve been talking for years about “big data” and the challenges businesses face in figuring out how to process and use all of their data. Most of it — around 80 percent — is unstructured, so traditional algorithms are not capable of analyzing it.

A few decades ago, researchers came up with neural networks, the deep learning algorithms that can unveil insights from data, sometimes insights we could never imagine. If we can run those algorithms in a feasible time frame, they can be used to analyze our data and uncover patterns in it, which might in turn aid in business decisions. These algorithms, however, are compute intensive.

Training neural networks


A deep learning algorithm is one that uses a neural network to solve a particular problem. A neural network is a type of AI algorithm that takes an input, passes it through its network of neurons — organized in layers — and produces an output. The more layers of neurons it has, the deeper the network is. If the output is right, great. If the output is wrong, the algorithm learns it was wrong and “adapts” its neuron connections in such a way that, hopefully, the next time you provide that particular input it gives you the right answer.


Fig 1: Illustration of computer neural networks

This ability to retrain a neural network until it learns how to give you the right answer is an important aspect of cognitive computing. Neural networks learn from the data they’re exposed to and rearrange the connections between their neurons.

The connections between the neurons are another important aspect, and the strength of the connection between neurons can vary (that is, their bond can be strong, weak or anywhere in between). So, when a neural network adapts itself, it’s really adjusting the strength of the connections among its neurons so that next time it can provide a more accurate answer. To get a neural network to provide a good answer to a problem, these connections need to be adjusted through repeated, exhaustive training of the network — that is, by exposing it to data. There can be zillions of neurons involved, and adjusting their connections is a compute-intensive, matrix-based mathematical procedure.
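
For readers who like to see the mechanics, here is a toy sketch in NumPy (not a production framework) of that adjustment process for a tiny two-layer network: the “connections” are weight matrices, and every training pass uses matrix math to nudge them so the next answer is a little less wrong. The data and network size are made up for illustration.

    # Toy sketch: adjusting connection strengths (weights) with matrix math.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((100, 4))                              # 100 examples, 4 inputs
    y = (X.sum(axis=1) > 2).astype(float).reshape(-1, 1)  # a made-up target

    W1 = rng.standard_normal((4, 8)) * 0.1    # input -> hidden connections
    W2 = rng.standard_normal((8, 1)) * 0.1    # hidden -> output connections
    lr = 0.5

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(1000):
        hidden = sigmoid(X @ W1)              # forward pass: matrix multiplies
        out = sigmoid(hidden @ W2)
        error = out - y                       # how wrong was each answer?
        # Backward pass: work out how each connection contributed to the error,
        # then adjust its strength a little in the direction that reduces it.
        delta_out = error * out * (1 - out)
        delta_hidden = (delta_out @ W2.T) * hidden * (1 - hidden)
        W2 -= lr * (hidden.T @ delta_out) / len(X)
        W1 -= lr * (X.T @ delta_hidden) / len(X)

    print("Training accuracy:", ((out > 0.5) == y).mean())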

We need data and compute power


Most organizations today, as we discussed, have tons of data that can be used to train these neural networks. But there’s still the problem of all of the massive and intensive math required to calculate the neuron connections during training. As powerful as today’s processors are, they can only perform so many math operations per second. A neural network with a zillion neurons trained over thousands of training iterations will still require a zillion thousand operations to be calculated. So now what?

Thanks to the advancements in industry (and I personally like to think that the gaming industry played a major role here), there’s a piece of hardware that’s excellent at handling matrix-based operations called the Graphics Processing Unit (GPU). GPUs can calculate virtually zillions of pixels in matrix-like operations in order to show high-quality graphics on a screen. And, as it turns out, the GPU can work on neural network math operations in the same way.

Please, allow me to introduce our top math student in the class: the GPU!


Fig 2: An NVIDIA SMX2 GPU module

A GPU is a piece of hardware capable of performing math computations over a huge amount of data at the same time. It’s not as fast as a central processing unit (CPU) at any single operation, but give it a ton of data and it processes it massively in parallel. Even though each operation runs more slowly, applying math operations to far more data at once beats CPU performance by a wide margin, allowing you to get your answers faster.
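
Here is a rough illustration of that trade-off (PyTorch; the exact numbers depend entirely on your hardware, and the GPU timing only runs if a CUDA-capable GPU is present): the same large matrix multiplication timed on the CPU and, when available, on a GPU.

    # Same matrix multiplication on CPU and, if available, on a GPU.
    import time
    import torch

    a = torch.rand(4096, 4096)
    b = torch.rand(4096, 4096)

    start = time.time()
    _ = a @ b
    print(f"CPU: {time.time() - start:.3f} s")

    if torch.cuda.is_available():
        a_gpu, b_gpu = a.cuda(), b.cuda()
        torch.cuda.synchronize()              # wait for the copies to finish
        start = time.time()
        _ = a_gpu @ b_gpu
        torch.cuda.synchronize()              # wait for the kernel to finish
        print(f"GPU: {time.time() - start:.3f} s")
    else:
        print("No GPU detected; CPU timing only.")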

Big data and the GPU have provided the breakthroughs we needed to put neural networks to good use. And that brings us to where we are with AI today: organizations can now apply this combination to their business and uncover insights from their vast universe of data by training neural networks to do so.

To successfully apply AI in your business, the first step is to make sure you have lots of data. A neural network performs poorly if trained with little data or with inadequate data. The second step is to prepare the data. If you’re creating a model capable of detecting malfunctioning insulators in power lines, you must provide it data about working ones and all types of malfunctioning ones. The third step is to train a neural network, which requires lots of computation power. Then after you train a neural network and it performs satisfactorily, it can be put to production to do inferencing.
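
Those steps can be compressed into a short sketch (scikit-learn, with synthetic data and an assumed 90 percent acceptance threshold): gather and prepare the data, train a small network, and only promote the model to production inferencing once it performs well enough on data it has never seen.

    # Sketch of the data -> training -> evaluation flow with synthetic data.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    # Steps 1-2: obtain and prepare the data (here: synthesize and split it)
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # Step 3: train a small neural network (the compute-heavy part)
    model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                          random_state=0).fit(X_train, y_train)

    # Step 4: check results on unseen data before moving to production
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"Held-out accuracy: {accuracy:.2%}")
    print("Deploy for inferencing" if accuracy >= 0.90 else "Keep training")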

Inferencing


Inferencing is the term that describes the act of using a neural network to provide insights after it has been trained. Think of it like someone who studies something (being trained) and then, after graduation, goes to work in a real-world scenario (inferencing). It takes years of study to become a doctor, just as it takes lots of processing power to train a neural network. But doctors don’t take years to perform surgery on a patient, and, likewise, a trained neural network takes sub-seconds to provide an answer given real-world data. This is because the inferencing phase of a neural network-based solution doesn’t require much processing power; it requires only a fraction of the processing power needed for training. As a consequence, you don’t need a powerful piece of hardware to put a trained neural network into production; you can use a more modest server, called an inference server, whose only purpose is to execute a trained AI model.
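
Here is a small illustrative sketch of that inferencing side (PyTorch; the file name and layer sizes are assumptions): a model trained elsewhere is loaded onto a modest CPU-only inference server, gradients are switched off, and a single request is answered in milliseconds.

    # Illustrative inference-server sketch: no training, just fast answers.
    import time
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    # In production you would load the weights produced by the training phase,
    # for example (hypothetical file name):
    # model.load_state_dict(torch.load("trained_model.pt", map_location="cpu"))
    model.eval()

    request = torch.rand(1, 20)                # one incoming data point
    with torch.no_grad():                      # inferencing needs no gradients
        start = time.time()
        prediction = model(request).argmax(dim=1)
    print(f"Prediction {prediction.item()} in {(time.time() - start) * 1000:.2f} ms")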

What the AI lifecycle looks like:


Deep learning projects have a peculiar lifecycle because of the way the training process works.



Fig 3: A deep learning project’s lifecycle

Organizations these days face the challenge of applying deep learning to analyze their data and obtain insights from it. They need enough data to train a neural network model, and that data has to be representative of the problem they’re trying to solve; otherwise the results won’t be accurate. They also need a robust IT infrastructure made up of GPU-rich clusters of servers to train their AI models on. The training phase may go on for several iterations until the results are satisfactory and accurate. Once that happens, the trained neural network is put into production on much less powerful hardware. The data processed during the inferencing phase can be fed back into the neural network model to correct or enhance it according to the latest trends in newly acquired data. Therefore, this process of training and retraining happens iteratively over time. A neural network that’s never retrained will age and potentially become inaccurate as new data arrives.

Sunday 22 December 2019

Making the move to value-based health


A few years ago, the IBM Institute for Business Value (IBV) published a report that predicted the convergence of population health management and precision medicine into a new healthcare model we called Precision Health and Wellness. We believed a key component of that model would be a continued transition to outcome-based results and lower costs.

Fast forward to 2019, and our latest research found not only that those predictions had become real but that the speed of change had increased. Globally, healthcare systems are looking at how to maintain access, quality, and efficiency. Emphasis has shifted from volume of services toward patient outcomes, efficiency, wellness, and cost savings. And there is a recognition that a focus on “care” alone will not deliver the degree of outcome improvement and cost reduction needed by providers or payers. Instead, they will need to engage individuals, employers, communities, and social organizations as key partners in the process.

By using collaboration models, shared information, and innovative technology solutions across these stakeholders, better outcomes can be achieved across the whole lifespan of the individual—not just in doctors’ offices and hospitals, but in their daily lives, homes, and communities. It is this extension of health and wellness beyond the traditional clinical environment that takes care to the next level – value-based health.


Value-based health entails keeping individuals healthy and well even when they are not receiving healthcare services. Engaging people and communities in health, identifying and addressing social determinants of health, and making sure community resources are available and accessible are cornerstones of value-based health.

In order to determine what is needed to transition from traditional value-based care towards value-based health, we spoke with a thousand healthcare executives in payer and provider organizations around the world.