Tuesday, 29 August 2023

MDM vs. MAM: Top 5 differences


It looks like an easy day for James, an IT Administrator. It is vacation time and most of his end users are out of the office, so he thinks it is time to have a look at some of the backlog tasks—maybe even procrastinate a bit. But then, the phone rings.

It’s Robert, one of the end users in his company. Robert is very nervous—he’s calling from the hotel because he has lost his iOS smartphone on the beach. Their company has both corporate devices and a BYOD (bring your own device) policy. Robert is enrolled in the BYOD program, so it is his personal device, but it has corporate data stored on it, including the latest financial projections he shared with his team for a presentation.

James opens the mobile device management software that his company is using, immediately finds Robert’s iOS smartphone in the tool, and does a remote wipe. He wants to get back to the backlog tasks.

But it’s not over. He sees a real-time notification that a user has tried to download a gaming app on a corporate device, which is not in policy; an automatic notification was sent to the end user. It is his friend Mary: her flight was delayed, and her bored kid asked for her Android smartphone to watch YouTube.

What James has done with Robert’s lost iOS smartphone is part of mobile device management (MDM). In Mary’s case, the access settings for apps that are not in policy are part of mobile application management (MAM). Both MDM and MAM are part of unified endpoint management (UEM) solutions. Whether a company has BYOD policies, uses only corporate-owned devices or both, and whether the users have iOS smartphones, Android smartphones or tablets, all devices and apps need to be managed and protected. Mobile security strategies need to be put in place; otherwise, corporate, personal and sensitive data can be lost.

What is mobile device management (MDM)?


Mobile device management (MDM) is a solution that manages smartphones and tablets—no matter the operating system—and protects them against cyber threats and data loss. MDM became a very popular technology after Apple launched the first iPhone. As the technology has evolved, MDM has transformed into enterprise mobility management (EMM) and is now part of unified endpoint management (UEM).

MDM software is used to manage both BYOD devices and corporate-owned devices that run on any mobile operating system (iOS, Android, iPadOS, Windows or purpose-built devices). MDM solutions use containerization—which separates the corporate apps and data from the personal ones—to maintain device security and the security of mobile apps.

What is mobile application management (MAM)?


Mobile application management (MAM) has emerged with the rise of mobile app usage. It is software used to manage and protect the mobile apps available on users’ devices. It is usually part of MDM software and UEM (unified endpoint management) solutions.

When using MAM software to protect company data on either BYOD or company-owned devices, James and other IT admins use containerization features and security policies to make sure that the right users have the right access to the right enterprise apps—usually delivered through an app store built into the MAM solution. This comes with features like access management, multi-factor authentication and granular permissions to protect users and ensure data security and control.
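
To make the idea of app-level access control more concrete, here is a minimal Python sketch of the kind of policy evaluation a MAM tool performs before letting a user open a managed app. The policy structure, field names and rules are invented for illustration and are not the API of any particular MAM product.

```python
# Hypothetical app-access policy check (illustrative only, not a real MAM API).
from dataclasses import dataclass

@dataclass
class Device:
    enrolled: bool           # device is enrolled in the MDM/MAM program
    compliant: bool          # passcode set, not jailbroken/rooted, OS up to date
    container_present: bool  # corporate container or work profile is installed

@dataclass
class User:
    groups: set              # e.g. {"finance", "sales"}
    mfa_passed: bool         # completed multi-factor authentication

APP_POLICY = {
    "finance-reports": {"allowed_groups": {"finance"}, "require_mfa": True},
    "sales-crm": {"allowed_groups": {"sales", "finance"}, "require_mfa": False},
}

def can_launch(app_id: str, user: User, device: Device) -> bool:
    """Return True only if user, device and app policy all allow access."""
    policy = APP_POLICY.get(app_id)
    if policy is None:
        return False  # unknown apps are blocked by default
    if not (device.enrolled and device.compliant and device.container_present):
        return False  # app-level control still depends on a healthy container
    if policy["require_mfa"] and not user.mfa_passed:
        return False
    return bool(policy["allowed_groups"] & user.groups)

# Example: a finance user on a compliant BYOD phone may open the reports app.
print(can_launch("finance-reports",
                 User(groups={"finance"}, mfa_passed=True),
                 Device(enrolled=True, compliant=True, container_present=True)))
```

In practice such checks are typically enforced by the MAM agent or the managed app container itself, but the decision inputs are similar: user identity, group membership, MFA status and device compliance.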

James had MDM and MAM software at hand, which ensured that the data on Robert’s and Mary’s smartphones stayed safe. When weighing MDM vs. MAM, IT admins need to think about their objectives: both offer granular control, both use containerization and both rely on access management and identity management technologies.

So what sets them apart?

Top 5 differences between mobile device management (MDM) and mobile application management (MAM)


1. What they manage:

◉ MDM is performed at the device level for enrolled devices and users, including device settings, security policies and apps.

◉ MAM focuses on managing and protecting mobile enterprise applications and the business data available to them.

2. What they control:

◉ MDM controls the entire device, allowing actions like wipe, selective wipe, lock, locate, enforce passwords and more.

◉ MAM has control over the apps themselves. While it also enforces security policies, it does so at the application level.

3. What they secure:

◉ MDM focuses on device security, user security, encryption, VPN and app security. MDM solutions use functions like wipe, remote wipe and geo-location, and may have threat management features against SMS and email phishing, jailbroken and rooted devices, and many more.

◉ MAM focuses on app security, including functions like setting up automatic app removal conditions to prevent unauthorized access. Some MAM software has app wrappers or software development kits (SDK) as security add-ons.

4. How they handle app deployment:

◉ MDM technologies usually allow IT teams to push and install apps.

◉ MAM technologies allow IT teams to push and install apps from an app catalog, but also allow end users to install the approved enterprise apps.

5. How they manage:

◉ MDM has standard app management capabilities related to installation and updates. There are also UEM solutions that have MDM and mobile application management capabilities included.

◉ MAM offers granular and advanced app management spanning the entire application lifecycle. For example, it enables actions like installation, deployment, patching and integration with public app stores (like the iOS App Store and Google Play Store). IT admins can also distribute apps and track app installations remotely, over-the-air (OTA), to all users, groups of users or personal devices.

Source: ibm.com

Thursday, 24 August 2023

Will generative AI make the digital twin promise real in the energy and utilities industry?


A digital twin is the digital representation of a physical asset. It uses real-world data (both real time and historical) combined with engineering, simulation or machine learning (ML) models to enhance operations and support human decision-making.
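
As a rough illustration of that definition, the toy Python sketch below pairs real-time sensor readings with a very simple model to estimate an asset’s health. The asset type, thresholds and scoring formula are invented for the example; real digital twins use engineering, simulation or ML models in their place.

```python
# A minimal, illustrative digital twin: it pairs incoming sensor readings with a
# simple model to estimate the asset's current state. Names and thresholds are
# made up for illustration.

class TransformerTwin:
    """Toy digital twin of a distribution transformer."""

    def __init__(self, rated_load_kw: float):
        self.rated_load_kw = rated_load_kw
        self.history = []  # historical real-world data

    def ingest(self, load_kw: float, oil_temp_c: float) -> None:
        """Store a new real-time reading from the physical asset."""
        self.history.append((load_kw, oil_temp_c))

    def health_score(self) -> float:
        """Very rough health estimate from utilization and oil temperature."""
        if not self.history:
            return 1.0
        load_kw, oil_temp_c = self.history[-1]
        utilization = min(load_kw / self.rated_load_kw, 1.5)
        thermal_penalty = max(0.0, (oil_temp_c - 90.0) / 60.0)
        return max(0.0, 1.0 - 0.5 * utilization - thermal_penalty)

twin = TransformerTwin(rated_load_kw=500)
twin.ingest(load_kw=430, oil_temp_c=95)
print(round(twin.health_score(), 2))  # a single score to support human decisions
```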

Overcome hurdles to optimize digital twin benefits


To realize the benefits of a digital twin, you need a data and logic integration layer, as well as role-based presentation. As illustrated in Figure 1, in any asset-intensive industry, such as energy and utilities, you must integrate various data sets, such as:

◉ OT (real-time equipment, sensor and IoT data)
◉ IT systems such as enterprise asset management (for example, Maximo or SAP)
◉ Plant lifecycle management systems
◉ ERP and various unstructured data sets, such as P&ID, visual images and acoustic data

Figure 1. Digital twins and integrated data

For the presentation layer, you can leverage various capabilities, such as 3D modeling, augmented reality and various predictive model-based health scores and criticality indices. At IBM, we strongly believe that open technologies are the required foundation of the digital twin.

When leveraging traditional ML and AI modeling technologies, you must carry out focused training for each siloed AI model, which requires extensive human-supervised training. This has been a major hurdle in leveraging data—historical, current and predictive—that is generated and maintained in siloed processes and technologies.

As illustrated in Figure 2, the use of generative AI increases the power of the digital twin by simulating any number of physically possible and simultaneously reasonable object states and feeding them into the networks of the digital twin.

Figure 2. Traditional AI models versus foundation models

These capabilities can help to continuously determine the state of the physical object. For example, heat maps can show where in the electricity network bottlenecks may occur due to an expected heat wave caused by intensive air conditioning usage (and how these could be addressed by intelligent switching). Along with the open technology foundation, it is important that the models are trusted and targeted to the business domain.

Generative AI and digital twin use cases in asset-intensive industries


Various use cases come into reality when you leverage generative AI for digital twin technologies in an asset-intensive industry such as energy and utilities. Consider some of the examples of use cases from our clients in the industry:

1. Visual insights. By creating a foundational model of various utility asset classes—such as towers, transformers and lines—leveraging large-scale visual imagery and adapting it to the client’s setup, we can use neural network architectures to scale AI-based identification of anomalies and damage on utility assets, rather than reviewing images manually.

2. Asset performance management. We create large-scale foundational models based on time-series data and its correlation with work orders, event predictions, health scores, criticality indices, user manuals and other unstructured data for anomaly detection. We use these models to create digital twins of individual assets that keep all historical information accessible for current and future operations.

3. Field services. We leverage retrieval-augmented generation to create a question-answering feature or multilingual conversational chatbot (based on documents or dynamic content from a broad knowledge base) that provides field service assistance in real time. This functionality can dramatically improve field services crew performance and increase the reliability of energy services by answering asset-specific questions in real time, without redirecting the end user to documentation, links or a human operator.
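
To show the shape of the retrieval-augmented generation pattern mentioned in the field services use case, here is a minimal Python sketch: retrieve the most relevant documents, then ask a language model to answer only from that context. The document store, keyword scoring and generate() placeholder are simplifications for illustration, not the watsonx APIs.

```python
# Minimal retrieval-augmented generation (RAG) sketch for field-service Q&A.
KNOWLEDGE_BASE = {
    "tx-104-manual": "Transformer TX-104: de-energize before opening panel B. "
                     "Oil temperature alarm threshold is 100 C.",
    "line-7-history": "Line 7 had two insulator replacements in 2022 after storm damage.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(KNOWLEDGE_BASE.values(),
                    key=lambda doc: len(words & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def generate(prompt: str) -> str:
    """Placeholder for a grounded LLM call (e.g., a hosted foundation model)."""
    return f"[LLM answer grounded in]: {prompt[:120]}..."

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

print(answer("What is the oil temperature alarm threshold for TX-104?"))
```

A production assistant would replace the keyword overlap with vector search over a governed document store and the placeholder with a real model endpoint, but the retrieve-then-generate flow stays the same.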

Generative AI and large language models (LLMs) introduce new hazards to the field of AI, and we do not claim to have all the answers to the questions that these new solutions introduce. IBM understands that driving trust and transparency in artificial intelligence is not a technological challenge, but a socio-technological challenge.

We see a large percentage of AI projects get stuck at the proof-of-concept stage, for reasons ranging from misalignment with business strategy to mistrust in the model’s results. IBM brings together vast transformation experience, industry expertise and proprietary and partner technologies. With this combination of skills and partnerships, IBM Consulting™ is uniquely suited to help businesses build the strategy and capabilities to operationalize and scale trusted AI to achieve their goals.

Currently, IBM is one of few in the market that both provides AI solutions and has a consulting practice dedicated to helping clients with the safe and responsible use of AI. IBM’s Center of Excellence for Generative AI helps clients operationalize the full AI lifecycle and develop ethically responsible generative AI solutions.

The journey of leveraging generative AI should: a) be driven by open technologies; b) ensure AI is responsible and governed to create trust in the model; and c) empower those who use your platform. We believe that generative AI can make the digital twin promise real for energy and utilities companies as they modernize their digital infrastructure for the clean energy transition. By engaging with IBM Consulting, you can become an AI value creator, which allows you to train, deploy and govern data and AI models.


Source: ibm.com

Tuesday, 22 August 2023

Unlock true Kubernetes cost savings without losing precious sleep over performance risks


The race to innovate has likely left you (and many, many others) with unexpectedly high cloud bills and/or underutilized resources. In fact, according to Flexera’s 2023 State of the Cloud report, for the first time in a decade, “managing cloud spend” (82%) surpassed “security” (79%) to become the number one challenge facing organizations across the board.


We get it. Overprovisioning is the go-to strategy for avoiding performance risks.

Trying to find the balance between performance and efficiency is anything but a walk in the park. Sure, there are endless Kubernetes cost monitoring tools available that allow you to keep tabs on various aspects of your cluster’s resource usage, like CPU, memory, storage and network. Tracking these metrics can help identify resource-intensive workloads, inefficient resource allocation or unnecessary resource consumption that may lead to increased costs.

All this time-consuming monitoring is closely followed by the labor-intensive work of rightsizing containers and setting auto-scaling policies and thresholds.   
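
For a sense of the arithmetic behind container rightsizing, here is a small Python sketch that turns a window of observed usage samples into suggested CPU and memory requests. The percentile choice and headroom factor are assumptions for illustration, not how any specific tool computes its recommendations.

```python
# Illustrative rightsizing arithmetic: derive resource requests from observed
# usage percentiles plus headroom. Thresholds and headroom are assumptions.
import statistics

def recommend_requests(cpu_millicores: list, memory_mib: list,
                       headroom: float = 1.2) -> dict:
    """Suggest container resource requests from a window of usage samples."""
    cpu_p95 = statistics.quantiles(cpu_millicores, n=20)[18]  # ~95th percentile
    mem_peak = max(memory_mib)                                # memory is not compressible
    return {
        "cpu_request_m": int(cpu_p95 * headroom),
        "memory_request_mib": int(mem_peak * headroom),
    }

# Example: a workload that mostly idles but bursts occasionally.
cpu_samples = [120, 150, 140, 135, 600, 160, 155, 145, 150, 700,
               130, 140, 150, 160, 170, 165, 158, 149, 152, 610]
mem_samples = [300, 310, 305, 320, 315, 330, 340, 335, 325, 318,
               322, 328, 331, 336, 329, 327, 333, 338, 341, 339]
print(recommend_requests(cpu_samples, mem_samples))
```

Doing this once is easy; doing it continuously, for every container, as demand shifts is the part that calls for automation.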

Hello, automation


IBM Turbonomic optimizes your Kubernetes environment through container rightsizing, pod suspension and provisioning, pod moves and cluster scaling actions. Every layer of the stack is analyzed and resourced based on real-time demand—from pods and services to containers to nodes, as well as the underlying cloud infrastructure. It’s purpose-built to help your teams automate and quickly achieve significant and continuous results.

Turbonomic supports all upstream versions of Kubernetes—Red Hat OpenShift, EKS, AKS, GKE and more—on any cloud, in any data center and with any hybrid or multicloud combination. It understands the resource needs of your applications and continuously determines the actions that ensure the apps get exactly what they need to perform.

Let’s begin by looking at your container clusters.


Here you see your top clusters sorted by health, followed by top node pools sorted by potential savings. This dashboard provides a great overview of what you want to keep an eye on, but let’s take a look at what really matters—the actions.


In this example, we see an action to resize a workload controller (a container). As the action shows, resizing here will improve performance. With Turbonomic, every action includes the data to back it up, as well as details around the action’s impact.


In this next example, we see an action to suspend a node, which will improve efficiency. By how much, you ask?


Look at how much is saved just by suspending this one unused node.

Still, it can be unnerving for application owners and development teams to scale back resources. We get it. Performance is paramount.

Turbonomic is all about performance


Turbonomic makes sure your apps get exactly what they need when they need it. The efficiency gains are a byproduct of that.

Have your app owner take the action. It’s a low-risk way to get comfortable with automation. In fact, some of these actions are non-disruptive and reversible.

Again, because every action also comes with metrics and the reasoning behind it, teams have an easier time trusting the decision to act. You need that trust in order to move from human decision-making to operationalizing automation.

An observability platform’s best friend


If you have application data from critical tools like IBM Instana Observability or any other application performance monitoring (APM) solution, Turbonomic can understand the response time and transactions of the application, stitching this application data to the Kubernetes platform and the infrastructure on which it runs.


You and everyone else see exactly how dynamic resourcing improves application performance while minimizing cost.


See here—even as demand fluctuates, response times are kept low.

If you have predefined service level objectives (SLOs), Turbonomic can ingest that data to dynamically scale microservice applications out and back based on demand to ensure those SLOs are always met. SLO policies can also be configured directly on the platform.
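
As a rough illustration of SLO-driven scaling, the Python sketch below picks a replica count that keeps a modeled response time under the objective as demand changes. The latency model and numbers are simplified assumptions, not the product’s actual analytics.

```python
# Toy SLO-driven horizontal scaling: scale out until modeled latency meets the SLO.
def replicas_for_slo(request_rate_rps: float,
                     per_replica_capacity_rps: float,
                     slo_ms: float,
                     base_latency_ms: float = 80.0,
                     min_replicas: int = 1,
                     max_replicas: int = 50) -> int:
    """Pick the smallest replica count whose modeled latency meets the SLO."""
    for replicas in range(min_replicas, max_replicas + 1):
        utilization = request_rate_rps / (replicas * per_replica_capacity_rps)
        if utilization >= 1.0:
            continue  # saturated; latency is unbounded in this simple model
        modeled_latency = base_latency_ms / (1.0 - utilization)  # grows with load
        if modeled_latency <= slo_ms:
            return replicas
    return max_replicas

# 900 req/s, each replica handles ~200 req/s, SLO of 200 ms response time.
print(replicas_for_slo(900, 200, slo_ms=200))
```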

You can gradually take more and more actions, then integrate them with your pipelines and processes. Whether it’s Slack, GitOps, Terraform, Ansible, ServiceNow or others, Turbonomic’s got you covered.

Feel free to start with small steps, but unlocking Kubernetes elasticity for continuous performance at the lowest cost requires automation.

Let IBM Turbonomic handle it


With Turbonomic, you can automate these micro-improvements at a rate that exceeds human scale. Remove the labor-intensive work of rightsizing containers and setting auto-scaling policies and thresholds and let the software do it for you based on real-time application demand. The cumulative effect of these micro-improvements is Kubernetes applications that perform exactly like they should at the lowest cost possible.

In other words, put those performance-risk nightmares to bed.

Source: ibm.com

Saturday, 19 August 2023

US Open heralds new era of fan engagement with watsonx and generative AI


As the tournament’s official digital innovation partner, IBM has helped the US Open attract and engage viewers for more than three decades. Year after year, IBM Consulting works with the United States Tennis Association (USTA) to transform massive amounts of data into meaningful insight for tennis fans.


This year, the USTA is using watsonx, IBM’s new AI and data platform for business. Bringing together traditional machine learning and generative AI with a family of enterprise-grade, IBM-trained foundation models, watsonx allows the USTA to deliver fan-pleasing, AI-driven features much more quickly. With watsonx, users can collaborate to specialize and deploy models for a wide variety of use cases, or build their own—making massive AI scalability possible.

Watsonx powers AI-generated tennis commentary


This year the US Open is using the generative AI capabilities of watsonx to deliver audio commentary and text captions on video highlight reels of every men’s and women’s singles match. Fans can hear play-by-play narration at the start and end of each reel, and for key points within. The AI commentary feature will be available through the US Open app and the US Open website.

The process to create the commentary began by populating a data store on watsonx.data, which connects and governs trusted data from disparate sources (such as player rankings going into the match, head-to-head records, match details and statistics).

Next, the teams trained a foundation model using watsonx.ai, a powerful studio for training, validating, tuning and deploying generative AI models for business. The US Open’s model was trained on the unique language of tennis, incorporating a wide variety of contextual description (such as adjectives like brilliant, dominant or impressive) based on lengths of rallies, number of aces, first-serve percentages, relative rankings and other key stats.

Beyond helping enterprise clients embed AI in their daily workflows, watsonx helps them manage the entire AI lifecycle. That’s why the US Open will also use watsonx.governance to direct, manage and monitor its AI activities. It will help them operationalize and automate governance of their models to ensure responsible, transparent and explainable AI workflows, identify and mitigate bias and drift, capture and document model metadata and foster a collaborative environment.

Using watsonx to provide wide-ranging Match Insights


The US Open also relies on watsonx to provide Match Insights, an engaging variety of tennis statistics and predictions delivered through the US Open app and website.

For example, the IBM Power Index is a measure of momentum that melds performance and punditry. Structured historical data about every player is combined with an analysis of unstructured data (language and sentiment derived from millions of news articles about athletes in the tournament), using watsonx.data and watsonx.ai. As play progresses, a further 2.7 million data points are captured, drawn from every shot of every match. This creates a rich, up-to-the-minute data set on which to run predictive AI, project winners and identify keys to match success. The Power Index provides nuanced and timely predictions that spark lively engagement and debate.
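
Purely as an illustration of blending structured performance data with unstructured media sentiment, here is a toy Python scoring function. The features, weights and scaling are invented; the actual Power Index methodology is IBM’s own and is not shown here.

```python
# Toy momentum score combining structured stats with media sentiment.
def power_index(win_rate: float, recent_form: float,
                media_sentiment: float, media_volume: int,
                w_perf: float = 0.7, w_buzz: float = 0.3) -> float:
    """Combine on-court performance (0-1 features) with news sentiment (-1..1)."""
    performance = 0.6 * win_rate + 0.4 * recent_form
    # Dampen sentiment when there is little coverage to back it up.
    buzz = media_sentiment * min(media_volume / 1000.0, 1.0)
    return round(100 * (w_perf * performance + w_buzz * (buzz + 1) / 2), 1)

print(power_index(win_rate=0.78, recent_form=0.85,
                  media_sentiment=0.4, media_volume=2500))
```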

When a tournament draw is released, pundits and fans often assess each player’s luck and path through the field: do they have a “good draw” or a “bad draw”? This year, IBM AI Draw Analysis helps them make more data-informed predictions by providing a statistical factor (a draw ranking) for each player in the men’s and women’s singles events. The analysis, derived from structured and unstructured data using watsonx, determines the level of advantage or disadvantage for each player and is updated throughout the day as the tournament progresses and players are eliminated. Every player has their draw ranked from 1 (most favorable) to 128 (most difficult). Fans can also click on individual matches to see a projected difficulty for that round.

Based on the AI Draw Analysis, users of the US Open app can explore a player’s road to the final and the difficulty of a player’s draw. The AI Draw Analysis feature shows potential matchups, informed by player data derived from the Power Index. For matches in progress, fans can also follow the live scores and stats provided by IBM SlamTracker.

A new era of scalable enterprise AI


Through their longstanding partnership, IBM and the USTA collaborate to explore new ways to use automation and AI to deliver compelling fan experiences at the US Open. This year’s innovations demonstrate how watsonx can help organizations quickly and effectively implement both predictive and generative AI technologies. Through a collaborative, centrally governed environment that empowers non-technical users to make the most of their organization’s high-quality data and leverage foundation models trained on IBM-curated datasets, watsonx opens the door to true AI scalability for the enterprise.

Source: ibm.com

Friday, 18 August 2023

IBM’s dedication to responsible computing


In response to concerns faced by corporations about the impact of technology on our environment, IBM founded the Responsible.Computing() movement, creating a membership consortium with Dell that is managed by the Object Management Group.

Customers and partners expect responsible corporate policies and practices, which also build loyalty with employees. Responsible computing establishes a cohesive, interconnected framework across six critical domains, giving every organization the ability to understand its responsibilities, define goals and measure its progress against those aspirations:


The Responsible.Computing() framework’s six domains of focus


The framework consists of six domains of focus that demonstrate how IBM reflects responsibility:

◉ Data center: Reducing the environmental impact of building and maintaining data centers.

◉ Infrastructure: An end-to-end approach to the lifecycle of computing units, such as sourcing precious and rare metals and reducing their usage, reducing energy and computing units, and ultimately recycling the units.

◉ Code: Writing efficient, open and secure code. 

◉ Data usage: Implementing data privacy and transparency practices and requesting consent for use and authorization for any data acquisition.

◉ Systems: Producing ethical and unbiased systems, including explainable AI.

◉ Impact: Addressing societal issues, such as social mobility.

In an era where technology is pivotal in shaping politics, digital economies and transitional industries (e.g., the automotive industry), the Responsible.Computing() framework expands beyond just sustainability and climate change and adds other domains, such as ethics, security and openness.

Download the white paper and learn more


IBM Cloud is a major contributor to the advancement of technology, but beyond providing technical breakthroughs, we believe in preserving data privacy and security and promoting trust and ethics in our products, right from design. IBM Cloud collaborates with industry, governments and regulators to protect the environment, reduce consumption, preserve data privacy, write efficient code and develop trusted systems, all while tackling modern issues in society.

Source: ibm.com

Thursday, 17 August 2023

How continuous automated red teaming (CART) can help improve your cybersecurity posture


It is not a matter of if an organization will be compromised, but when. An adept, well-resourced and experienced attacker could very well be your worst cyberthreat nightmare. Fortunately, if your organization engages a red team, an ethical hacker could also be your best friend. 

Conducting red team testing is the most realistic way to validate your defenses, find vulnerabilities and improve your organization’s cybersecurity posture. A red team engagement gives your blue team a chance to more accurately assess your security program’s effectiveness and make improvements. It’s also how more organizations bring a resilience-first mindset into their cybersecurity posture.  

Why red teams are important in cybersecurity 


As part of security testing, red teams are security professionals who play the “bad guys” to test the organization’s defenses against blue team defenders.  

Every bit as skilled as real threat actors, red teams probe an attack surface for ways to gain access, get a foothold, move laterally and exfiltrate data. This approach contrasts with the methodology behind penetration testing (or pen testing), where the focus is on finding sensitive information or exploitable security vulnerabilities and on testing whether cybersecurity defenses and security controls can be bypassed.

Unlike cybercriminals, red teamers do not intend to cause actual damage. Instead, their goal is to expose gaps in cybersecurity defenses, helping security teams learn and adjust their program before an actual attack happens.  

How red teaming builds resilience  


A famous quote states: “In theory, theory and practice are the same. In practice, they are not.” The best way to learn how to prevent and recover from cyberattacks is to practice by conducting red team activities. Otherwise, without proof of which security tactics are working, resources can easily be wasted on ineffective technologies and programs. 

It’s hard to tell what really works, what doesn’t, where you need to make additional investments and which investments weren’t worth it until you have the opportunity to engage with an adversary who is trying to beat you. 

During red team exercises, organizations pit their security controls, defenses, practices and internal stakeholders against a dedicated adversary that mounts an attack simulation. This is the real value of red team assessments. They give security leaders a true-to-life appraisal of their organization’s cybersecurity and insight into how hackers might exploit different security vulnerabilities. After all, you don’t get to ask a nation-state attacker what you missed or what they did that worked really well, so it’s hard for you to get the feedback you need to actually assess the program. 

Moreover, every red team operation creates an opportunity for measurement and improvement. It’s possible to gain a high-level picture of whether an investment—such as security tools, testers or awareness training—is helping in the mitigation of various security threats.  

Red team members also help companies evolve beyond a find-and-fix mentality to a categorical defense mentality. Turning attackers loose on your network security can be scary — but the hackers are already trying every door handle in your security infrastructure. Your best bet is to find the unlocked doors before they do.  

When to engage a red team  


It’s said that there are only two types of companies—those that have been hacked and those that will be hacked. Regrettably, it might not be far from the truth. Every company, no matter its size, can benefit from conducting a red teaming assessment. But for a red team engagement to provide the most benefit, an organization must have two things:  

- Something to practice (a security program in place)  
- Someone to practice it with (defenders)  

The best time for your organization to engage red team services is when you want to understand program-level questions. For example, how far would an attacker who wants to exfiltrate sensitive data get within my network before they trigger an alert?  

Red teaming is also a good option when your security team wants to test their incident response plan or train team members.  

When red teaming alone is not enough 


Red teaming is one of the best ways to test your organization’s security and its ability to withstand a potential attack. So, why don’t more companies opt for it?  

As beneficial as red teaming is, in today’s fast-paced, ever-changing environments, red team engagements can fall short of detecting breaking changes as they happen. A security program is only as effective as the last time it was validated, leading to gaps in visibility and a weakened risk posture.

Building an internal red team capability is expensive, and few organizations are able to dedicate the necessary resources. To be truly impactful, a red team needs enough personnel to mimic the persistent and well-resourced threat posed by modern cybercrime gangs and nation-state actors. A red team should include dedicated security operations members (or ethical hacking sub-teams) for targeting, research and attack exercises.

A variety of third-party vendors exist to give organizations the option of contracting red team services. They range from large firms to boutique operators that specialize in particular industries or IT environments. While it is easier to contract red team services than to employ full-time staff, doing so can actually be more expensive, particularly if you do so regularly. As a result, only a small number of organizations use red teaming frequently enough to gain real insight. 

Benefits of continuous automated red teaming (CART) in cybersecurity 


Continuous automated red teaming (CART) utilizes automation to discover assets, prioritize discoveries and (once authorized) conduct real-world attacks utilizing tools and exploits developed and maintained by industry experts. 
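
The Python sketch below shows the shape of that loop: discover assets, rank them by how attractive they look to an attacker, and run validation only when authorized. The asset attributes and scoring heuristic are placeholders for illustration; no real attack tooling is involved.

```python
# Schematic CART cycle: discover, prioritize, and (only with authorization) test.
discovered_assets = [
    {"host": "vpn.example.com", "service": "VPN", "exposed": True, "patched": False},
    {"host": "intranet.example.com", "service": "Wiki", "exposed": False, "patched": True},
    {"host": "mail.example.com", "service": "SMTP", "exposed": True, "patched": True},
]

def priority(asset: dict) -> int:
    """Higher score means more tempting to an attacker (toy heuristic)."""
    return (2 if asset["exposed"] else 0) + (3 if not asset["patched"] else 0)

def cart_cycle(assets: list, authorized: bool) -> list:
    """One continuous-testing cycle: rank targets, then test only if authorized."""
    ranked = sorted(assets, key=priority, reverse=True)
    findings = []
    for asset in ranked:
        if authorized and priority(asset) > 0:
            findings.append(f"validated exposure on {asset['host']} ({asset['service']})")
    return findings

print(cart_cycle(discovered_assets, authorized=True))
```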

With its focus on automation, CART allows you to focus on interesting and novel testing, freeing your teams from the repetitive and error-prone work that leads to frustration and ultimately burnout. 

CART provides you with the ability to proactively and continually assess your overall security posture at a fraction of the cost. It makes red teaming more accessible and provides you with up-to-the-minute visibility into your defense performance. 

Elevate your cybersecurity resilience with IBM Security Randori


IBM Security Randori offers a CART solution called IBM Security Randori Attack Targeted, which helps you clarify your cyber risk by proactively testing and validating your overall security program on an ongoing basis. 

The Total Economic Impact™ of IBM Security Randori study, which IBM commissioned Forrester Consulting to conduct in 2023, found a 75% labor savings from augmented red team activities.

The solution’s functionality seamlessly integrates with or without an existing internal red team. Randori Attack Targeted also offers insights into the effectiveness of your defenses, making advanced security accessible even for mid-sized organizations. 

Source: ibm.com

Wednesday, 16 August 2023

Take advantage of AI and use it to make your business better


Artificial intelligence (AI) adoption is here. Organizations are no longer asking whether to add AI capabilities, but how they plan to use this quickly emerging technology. In fact, the use of artificial intelligence in business is developing beyond small, use-case specific applications into a paradigm that places AI at the strategic core of business operations. By offering deeper insights and eliminating repetitive tasks, workers will have more time to fulfill uniquely human roles, such as collaborating on projects, developing innovative solutions and creating better experiences.

This advancement does not come without its challenges. While 42% of companies say they are exploring AI technology, the failure rate is high: on average, only 54% of AI projects make it from pilot to production. Overcoming these challenges will require a shift in many of the processes and models that businesses use today: changes in IT architecture, data management and culture. Here are some of the ways organizations today are making that shift and reaping the benefits of AI in a practical and ethical way.

How companies use artificial intelligence in business


Artificial intelligence in business leverages data from across the company as well as outside sources to gain insights and develop new business processes through the development of AI models. These models aim to reduce rote work and complicated, time-consuming tasks, as well as help companies make strategic changes to the way they do business for greater efficiency, improved decision-making and better business outcomes.

A common phrase you’ll hear around AI is that artificial intelligence is only as good as the data foundation that shapes it. Therefore, a well-built AI for business program must also have a good data governance framework, which ensures not only that the data and AI models are accurate, providing higher-quality outcomes, but also that the data is used in a safe and ethical way.

Why we’re all talking about AI for business


It’s hard to avoid conversations about artificial intelligence in business today. Healthcare, retail, financial services, manufacturing—whatever the industry, business leaders want to know how using data can give them a competitive advantage and help address the post-COVID challenges they face each day.

Much of the conversation has been focused on generative AI capabilities and for good reason. But while this groundbreaking AI technology has been the focus of media attention, it only tells part of the story. Diving deeper, the potential of AI systems is also challenging us to go beyond these tools and think bigger: How will the application of AI and machine learning models advance big-picture, strategic business goals?

Artificial intelligence in business is already driving organizational changes in how companies approach data analytics and cybersecurity threat detection. AI is being implemented in key workflows like talent acquisition and retention, customer service, and application modernization, especially paired with other technologies like virtual agents or chatbots.

Recent AI developments are also helping businesses automate and optimize HR recruiting and professional development, DevOps and cloud management, and biotech research and manufacturing. As these organizational changes develop, businesses will begin to switch from using AI to assist in existing business processes to one where AI is driving new process automation, reducing human error, and providing deeper insights. It’s an approach known as AI first or AI+.

Building blocks of AI first


What does building a process with an AI first approach look like? Like all systemic change, it is a step-by-step process—a ladder to AI—that lets companies create a clear business strategy and build out AI capabilities in a thoughtful, fully integrated way with three clear steps.  

Configuring data storage specifically for AI

The first step toward AI first is modernizing your data in a hybrid multicloud environment. AI capabilities require a highly elastic infrastructure to bring together various capabilities and workflows in a team platform. A hybrid multicloud environment offers this, giving you choice and flexibility across your enterprise.

Building and training foundation models

Creating foundation models starts with clean data. This includes building a process to integrate, cleanse and catalog the full lifecycle of your AI data. Doing so gives your organization the ability to scale with trust and transparency.

Adopting a governance framework to ensure safe, ethical use

Proper data governance helps organizations build trust and transparency, strengthening bias detection and decision-making. When data is accessible, trustworthy and accurate, it also enables companies to better implement AI throughout the organization.

What are foundation models and how are they changing the game for AI?


Foundation models are AI models trained with machine learning algorithms on a broad set of unlabeled data that can be used for different tasks with minimal fine-tuning. The model can apply information it’s learned about one situation to another using self-supervised learning and transfer learning. For example, ChatGPT is built upon the GPT-3.5 and GPT-4 foundation models created by OpenAI.
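
A small example of the “pretrain broadly, adapt cheaply” idea: the sketch below reuses a publicly available pretrained model for a downstream task with no task-specific training at all. It assumes the open-source Hugging Face transformers package is installed (and that the default model can be downloaded on first run); it is not a watsonx example.

```python
# Reusing a pretrained model for a downstream task (transfer without training).
from transformers import pipeline

# The heavy lifting (self-supervised pretraining on broad data) already happened;
# here we only apply the resulting model to our task.
classifier = pipeline("sentiment-analysis")

reviews = [
    "The onboarding flow was fast and the support team was helpful.",
    "Checkout kept failing and nobody responded to my ticket.",
]
for review, result in zip(reviews, classifier(reviews)):
    print(result["label"], round(result["score"], 3), "-", review)
```

The same reuse pattern, with fine-tuning on a modest amount of labeled business data, is what makes foundation models economical compared with building task-specific models from scratch.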

Well-built foundation models offer significant benefits; reusing them can save businesses the countless hours they would otherwise spend building their own models. These time-saving advantages are what’s attracting many businesses to wider adoption. IBM expects that in two years, foundation models will power about a third of AI within enterprise environments.

From a cost perspective, foundation models require significant upfront investment; however, they allow companies to save on the initial cost of model building since they are easily scaled to other uses, delivering higher ROI and faster speed to market for AI investments.


To that end, IBM is building a set of domain-specific foundation models that go beyond natural language learning models and are trained on multiple types of business data, including code, time-series data, tabular data, geospatial data, semi-structured data, and mixed-modality data such as text combined with images. The first of these, Slate, was recently released.

AI starts with data

To launch a truly effective AI program for your business, you must have clean, high-quality datasets and an adequate data architecture for storing and accessing them. The digital transformation of your organization must be mature enough to ensure data is collected at the needed touchpoints across the organization, and the data must be accessible to whoever is doing the analysis.

Building an effective hybrid multicloud model is essential for AI to manage the massive amounts of data that must be stored, processed and analyzed. Modern data architectures often employ a data fabric architectural approach, which simplifies data access and makes self-service data consumption easier. Adopting a data fabric architecture also creates an AI-ready composable architecture that offers consistent capabilities across hybrid cloud environments.

Governance and knowing where your data comes from

The importance of accuracy and the ethical use of data makes data governance an important piece in any organization’s AI strategy. This includes adopting governance tools and incorporating governance into workflows to maintain consistent standards. A data management platform also enables organizations to properly document the data used to build or fine-tune models, providing users insight into what data was used to shape outputs and regulatory oversight teams the information they need to ensure safety and privacy.

Key considerations when building an AI strategy


Companies that adopt AI first to effectively and ethically use AI to drive revenue and improve operations will have a competitive advantage over companies that fail to fully integrate AI into their processes. As you build your AI first strategy, here are some critical considerations:

How will AI deliver business value?

The first step when integrating AI into your organization is to identify the ways various AI platforms and types of AI align with key goals. Companies should not only discuss how AI will be implemented to achieve these goals, but also the desired outcomes.

For example, data opens opportunities for more personalized customer experiences and, in turn, a competitive edge. Companies can create automated customer service workflows with customized AI models built on customer data. More authentic chatbot interactions, product recommendations, personalized content and other AI functionality have the potential to give customers more of what they want. In addition, deeper insights on market and consumer trends can help teams develop new products.

For a better customer experience—and operational efficiency—focus on how AI can optimize critical workflows and systems, such as customer service, supply chain management and cybersecurity.

How will you empower teams to make use of your data?

One of the key elements in data democratization is the concept of data as a product. Your company data is spread across on-premises data centers, mainframes, private clouds, public clouds and edge infrastructure. To successfully scale your AI efforts, you will need to successfully use your data “product.”

A hybrid cloud architecture enables you to use data from disparate sources seamlessly and scale effectively throughout the business. Once you have a grasp on all your data and where it resides, decide which data is the most critical and which offers the strongest competitive advantage.

How will you ensure AI is trustworthy?

With the rapid acceleration of AI technology, many have begun to ask questions about ethics, privacy and bias. To ensure AI solutions are accurate, fair, transparent and protect customer privacy, companies must have well-structured data management and AI lifecycle systems in place.

Regulations to protect consumers are ever-expanding. In July 2023, the EU Commission proposed new standards of GDPR enforcement and a data policy that would go into effect in September. Without proper governance and transparency, companies risk reputational damage, economic loss and regulatory violations.

Examples of AI being used in the workplace


Whether using AI technology to power chatbots or write code, there are countless ways deep learning, generative AI, natural language processing and other AI tools are being deployed to optimize business operations and customer experience. Here are some examples of business applications of artificial intelligence:

Coding and application modernization

Companies are using AI for application modernization and enterprise IT operations, putting AI to work automating coding, deploying and scaling. For example, Project Wisdom lets developers using Red Hat Ansible input a coding command as a straightforward English sentence through a natural-language interface and get automatically generated code. The project is the result of an IBM initiative called AI for Code and the release of IBM Project CodeNet, the largest dataset of its kind aimed at teaching AI to code.

Customer service

AI is effective for creating personalized experiences at scale through chatbots, digital assistants and other customer interfaces. McDonald’s, the world’s largest restaurant company, is building customer care solutions with IBM Watson AI technology and natural language processing (NLP) to accelerate the development of its automated order taking (AOT) technology. Not only will this help scale the AOT tech across markets, but it will also help tackle integrations including additional languages, dialects and menu variations.

Optimizing HR operations

When IBM implemented IBM watsonx Orchestrate as part of a pilot program for IBM Consulting in North America, the company saved 12,000 hours in one quarter on manual promotion assessment tasks, reducing a process that once took 10 weeks down to five. The pilot also made it easier to gain important HR insights. Using its digital worker tool, HiRo, IBM’s HR team now has a clearer view of each employee up for promotion and can more quickly assess whether key benchmarks have been met.

The future of AI in business


AI in business holds the potential to improve a wide range of business processes and domains, especially when the organization takes an AI first approach.

In the next five years, we will likely see businesses scale AI programs more quickly by looking to areas where AI has begun to make recent advancements, such as digital labor, IT automation, security, sustainability and application modernization.

Ultimately, success with new technologies in AI will rely on the quality of data, data management architecture, emerging foundation models and good governance. With these elements—and with business-driven, practical objectives—businesses can make the most out of AI opportunities.

Source: ibm.com

Monday, 14 August 2023

How generative AI correlates IT and business objectives to maximize business outcomes


The effective use of IT resources to support business goals can be a game changer for any organization. But significant challenges delay the integration of transformative technology into business processes. Business owners often grapple with the frustrating reality of discovering IT issues impacting their operations only after customer complaints have arisen, leaving them with little opportunity to mitigate problems proactively. The lack of timely awareness hinders swift issue resolution and leads to a disconnect between the IT team’s efforts and the overall organizational business objectives. This disconnect is exacerbated further by the necessity of using multiple vendor support teams for problem resolution, siphoning time and resources away from core business functions.

The transformative potential of generative AI technology, along with strategic implementation and collaboration, can bridge the gap between IT and business objectives to drive continued success and ensure your organization delivers targeted business outcomes.

Breakthroughs in generative AI powered by large language models (LLMs) continue to inspire new solutions that help companies overcome these longstanding organizational challenges. These breakthroughs come hot on the heels of evolutionary leaps in IT and cloud technologies that enable enterprise businesses across industries to grow at scale, expand into new markets and find new pathways to success. Chief among these advancements is improvement in hybrid cloud technology, which makes it easier to deploy, manage and secure applications across multiple cloud environments. 

However, an extensive hybrid cloud estate can quickly become a complicated one that IT teams must spend significant time observing to ensure security and operationality. Many organizational IT networks host tens of thousands of applications operating within their hybrid cloud network. With this many applications, it becomes a significant challenge for IT operations to focus on achieving desired business outcomes. Every application creates a signal that IT professionals need to observe and understand quickly to determine application and network health, so they can react if something negatively impacts business performance. In a complex hybrid cloud IT landscape, it is difficult to correlate IT operations to business outcomes and take proactive actions.

The gap between IT observability and stakeholder communication


IT teams observe and make decisions by using various application performance monitoring tools to determine the health of the many applications running throughout their IT and hybrid cloud ecosystem. Business leaders don’t have easy access to this crucial information (or the technical training needed to understand it), which often leaves them in the dark about IT complications and how they may impact day-to-day work and business goals. This communication disparity can lead to confusion and inefficiency in addressing critical issues.

Effectively conveying the impact of technical issues to relevant business stakeholders is a big challenge. Organizations struggle with tailoring communication to different business personas, as various stakeholders have varying technical expertise.

IT operations must be sure that different integrated systems and platforms remain comprehensively observable, which requires considerable effort and coordination. Establishing the appropriate key performance indicators (KPIs) to measure the effectiveness of observability efforts can also be challenging, as relevant metrics must demonstrate the value and impact of observability on business operations (which isn’t always clear from an IT context). IT operations must show how observability directly contributes to business success and outcomes.

Unlocking the potential of generative AI for IT solutions and business impact


Standard observability tools allow IT experts to monitor and analyze IT alerts to determine their relevance to the business. However, this process often lacks alignment with business priorities, leading to inefficiencies and miscommunication. Communicating the business impact of IT issues to the right stakeholders is a complex task, as business leaders require contextualized information to make informed decisions.

Despite these challenges, the application of generative AI offers a promising solution to help organizations maximize business value while minimizing negative IT impacts. IT operations can put generative AI’s flexibility (in terms of multi-domain and broader functionality around content generation, summarization, code generation and entity extraction) to the task of observing the network to inform IT experts about possible issues and IT events. Meanwhile, large language models can provide detailed, contextual insights to articulate and specify IT impacts on different segments of the business.

Generative AI helps bridge the gap by conveying IT alert information to the right business stakeholders in language they can understand, with relevant details. It can deliver personalized information based on the business persona, enabling stakeholders to understand how the issue will impact them specifically.

The generative AI solution uses LLMs to inform business users about the impact on their processes, pointing out what specific aspect of their process is affected. It can provide information such as the point of impact, the implications for their division or profit center, and the overall effect on the organization.

For example, suppose an interface between Salesforce and SAP goes down. In that case, generative AI can provide details on how the IT event occurred (such as an interface or data load issue) and identify every downstream process that could affect business outcomes. IT ops can then inform stakeholders of the problem using AI-generated, domain-specific language to help leaders on the organization’s business side comprehend the event’s context and potential impacts. Additionally, generative AI can offer workarounds or alternative steps for business users to continue operations if their standard processes are affected. This level of contextualized information allows business leaders to continue their operations smoothly, even in the face of IT challenges.
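
To make that scenario concrete, here is a hypothetical Python sketch that turns a single raw IT alert into persona-specific prompts for an LLM. The alert fields, personas and llm() placeholder are invented for illustration and do not represent a specific product integration.

```python
# Sketch: one IT alert, persona-specific generative notifications (placeholders only).
ALERT = {
    "event": "Salesforce-to-SAP order interface down",
    "detected": "2023-08-14T09:12Z",
    "downstream": ["order booking", "invoice creation", "revenue reporting"],
    "workaround": "export confirmed orders to CSV and load them manually into SAP",
}

PERSONAS = {
    "sales_director": "non-technical; cares about order booking and customer commitments",
    "finance_lead": "non-technical; cares about invoicing and revenue reporting",
}

def llm(prompt: str) -> str:
    """Placeholder for a generative AI call (e.g., a hosted LLM endpoint)."""
    return f"[generated message based on a prompt of {len(prompt)} characters]"

def notify(alert: dict, persona: str) -> str:
    """Build a persona-aware prompt from the alert and ask the model for an update."""
    prompt = (
        f"You are notifying a {persona}: {PERSONAS[persona]}.\n"
        f"Incident: {alert['event']} at {alert['detected']}.\n"
        f"Affected processes: {', '.join(alert['downstream'])}.\n"
        f"Suggested workaround: {alert['workaround']}.\n"
        "Write a short, plain-language update describing only the impact relevant "
        "to this person and the recommended next step."
    )
    return llm(prompt)

for persona in PERSONAS:
    print(persona, "->", notify(ALERT, persona))
```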

Leveraging generative AI for business-driven decision making


Generative AI using LLMs provides faster and more precise analysis. This allows organizations to transform IT operations by prioritizing business-driven decision making, which leads to more effective and efficient operations. Using generative AI to validate and prioritize IT issues based on their relevance to the business and providing personalized communication of IT issues to appropriate stakeholders further empowers business leaders in making informed decisions.

While a fully integrated solution is still under development, generative AI using LLMs facilitates a more feasible way of notifying business leaders with contextual information and providing possible resolutions beyond basic event notifications. Organizations can begin incorporating various tools and systems to harness these benefits today. Integration efforts can focus on incorporating generative AI into existing technologies (such as SAP, CPI interfaces, Signavio and Salesforce) to achieve targeted outcomes. 


These integrations allow for a holistic view and effective handling of IT alerts across different systems. IBM Consulting offers integrations across various tools, and we can ensure an enterprise-wide solution beyond specific proprietary platforms. 

Generative AI presents a transformative opportunity for organizations to maximize business value while minimizing negative IT impacts. Generative AI empowers organizations to make informed decisions and maintain smooth operations by aligning IT operations with business priorities, leveraging contextualized information and providing targeted workarounds.

Source: ibm.com

Saturday, 12 August 2023

Optimizing clinical trial site performance: A focus on three AI capabilities


This article, part of the IBM and Pfizer series on applying AI techniques to improve clinical trial performance, focuses on enrollment and real-time forecasting. Additionally, we are looking to explore ways to increase patient volume and diversity in clinical trial recruitment, and the potential to apply generative AI and quantum computing. More than ever, companies are finding that managing these interdependent journeys in a holistic and integrated way is essential to their success in achieving change.

Despite advancements in the pharmaceutical industry and biomedical research, delivering drugs to market is still a complex process with tremendous opportunity for improvement. Clinical trials are time-consuming, costly and largely inefficient for reasons that are out of companies’ control. Efficient clinical trial site selection continues to be a prominent industry-wide challenge. Research conducted by the Tufts Center for the Study of Drug Development and presented in 2020 found that 23% of trials fail to achieve planned recruitment timelines; four years later, many of IBM’s clients still share the same struggle. The inability to meet planned recruitment timelines and the failure of certain sites to enroll participants contribute to a substantial monetary impact for pharmaceutical companies, which may be relayed to providers and patients in the form of higher costs for medicines and healthcare services. Site selection and recruitment challenges are key cost drivers for IBM’s biopharma clients, with estimates between $15 million and $25 million annually, depending on the size of the company and its pipeline. This is in line with existing sector benchmarks.

When clinical trials are prematurely discontinued due to trial site underperformance, the research questions remain unanswered and the findings go unpublished. Failure to share data and results from randomized clinical trials means a missed opportunity to contribute to systematic reviews and meta-analyses, as well as a lack of lesson-sharing with the biopharma community.

As artificial intelligence (AI) establishes its presence in biopharma, integrating it into the clinical trial site selection process and ongoing performance management can help empower companies with invaluable insights into site performance, which may result in accelerated recruitment times, reduced global site footprint, and significant cost savings (Exhibit 1). AI can also empower trial managers and executives with the data to make strategic decisions. In this article, we outline how biopharma companies can potentially harness an AI-driven approach to make informed decisions based on evidence and increase the likelihood of success of a clinical trial site.

Exhibit 1

Tackling complexities in clinical trial site selection: A playground for a new technology and AI operating model


Enrollment strategists and site performance analysts are responsible for constructing and prioritizing robust, end-to-end enrollment strategies tailored to specific trials. To do so they need data, and data is in no short supply. The challenge they encounter is understanding which data is indicative of site performance: specifically, how to derive insights on site performance that allow them to factor non-performing sites into enrollment planning and real-time execution strategies.

In an ideal scenario, they would be able to predict, with reasonable and consistent accuracy, which clinical trial sites are at risk of not meeting recruitment expectations. Real-time monitoring of site activities and enrollment progress could then prompt timely mitigation actions. This ability would support initial clinical trial planning, resource allocation and feasibility assessments, prevent financial losses, and enable better decision-making for successful clinical trial enrollment.

Additionally, biopharma companies often build out AI capabilities in-house sporadically and without overarching governance. Assembling multidisciplinary teams across functions to support the clinical trial process is challenging, and many biopharma companies do it in an isolated fashion. The result is many groups using a wide range of AI-based tools that are not integrated into a cohesive system and platform. IBM therefore observes that more clients are consulting AI leaders to help establish governance and enhance AI and data science capabilities, an operating model that typically takes the form of co-delivery partnerships.

Embracing AI for clinical trials: The elements of success


By embracing three AI-enabled capabilities, biopharma companies can significantly optimize the clinical trial site selection process while developing core AI competencies that can be scaled out, and save financial resources that can be reinvested or redirected. The ability to seize these advantages is one way that pharmaceutical companies can gain a sizable competitive edge.

AI-driven enrollment rate prediction

Enrollment prediction is typically conducted before the trial begins and helps enrollment strategists and feasibility analysts with initial trial planning, resource allocation and feasibility assessment. Accurate enrollment rate prediction prevents financial losses, helps teams factor non-performance into enrollment plans, and enables effective budget planning to avoid shortfalls and delays.

  • It can identify non-performing clinical trial sites based on historical performance before the trial starts, helping teams factor site non-performance into a comprehensive enrollment strategy.
  • It can assist in budget planning by estimating early the financial resources required and securing adequate funding, preventing budget shortfalls and the need to request additional funding later, which can slow down the enrollment process.

AI algorithms have the potential to surpass traditional statistical approaches for analyzing comprehensive recruitment data and accurately forecasting enrollment rates.  

  • AI offers enhanced capabilities to analyze large volumes of complex recruitment data and accurately forecast enrollment rates at the study, indication and country levels.
  • AI algorithms can help identify underlying patterns and trends across the vast amounts of data collected during feasibility, as well as previous experience with clinical trial sites. Blending historical performance data with real-world data (RWD) can reveal hidden patterns that bolster enrollment rate predictions with higher accuracy than traditional statistical approaches. Enhancing current approaches with AI algorithms improves power, adaptability and scalability, making them valuable tools for predicting complex clinical trial outcomes such as enrollment rates. Larger or established teams often shy away from integrating AI because of complexities in rollout and validation; however, we have observed that greater value comes from employing ensemble methods to achieve more accurate and robust predictions (a minimal sketch follows this list).
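
As a purely illustrative example, the sketch below shows one way an ensemble of regressors could be trained on historical, site-level features to forecast enrollment rates. The feature names, synthetic data and model choices are hypothetical assumptions for illustration, not the approach described by IBM or Pfizer.

```python
# Minimal sketch: ensemble-based enrollment rate prediction.
# Feature names and synthetic data are hypothetical; a real model would be
# trained on historical site-, indication- and country-level data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n_sites = 500

# Hypothetical historical features for each candidate site.
X = np.column_stack([
    rng.integers(1, 30, n_sites),       # past trials run at the site
    rng.uniform(0.1, 5.0, n_sites),     # historical patients enrolled per month
    rng.uniform(0.0, 0.4, n_sites),     # historical dropout rate
    rng.uniform(0.2, 1.0, n_sites),     # eligible-population density index
])
# Hypothetical target: observed enrollment rate (patients per site per month).
y = 0.6 * X[:, 1] + 2.0 * X[:, 3] - 1.5 * X[:, 2] + rng.normal(0, 0.3, n_sites)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simple ensemble: average the predictions of two different model families.
models = [
    GradientBoostingRegressor(random_state=0),
    RandomForestRegressor(n_estimators=200, random_state=0),
]
for m in models:
    m.fit(X_train, y_train)

pred = np.mean([m.predict(X_test) for m in models], axis=0)
print("MAE (patients/site/month):", round(mean_absolute_error(y_test, pred), 3))
```

Averaging models from different families is only one simple form of ensembling; in practice, teams would validate against held-out trials and calibrate predictions at the study, indication and country levels.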

Real-time monitoring and forecasting of site performance

Real-time monitoring of site performance offers up-to-date insight into enrollment progress, facilitates early detection of performance issues, and enables proactive decision-making and course corrections that support clinical trial success.

  • Provides up-to-date insight into enrollment progress and completion timelines by continuously capturing and analyzing enrollment data from various sources throughout the trial.
  • Simulating enrollment scenarios on the fly from real-time monitoring data can help teams enhance enrollment forecasting, facilitating early detection of site-level issues such as slow recruitment, patient eligibility challenges, lack of patient engagement, site performance discrepancies, insufficient resources and regulatory compliance gaps (see the simulation sketch after this list).
  • Provides timely information that enables proactive, evidence-based decision-making, allowing minor course corrections with outsized impact, such as adjusting strategies or reallocating resources, to keep a clinical trial on track and maximize its chance of success.
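
To make the idea of on-the-fly scenario simulation concrete, here is a minimal sketch that uses a simple Monte Carlo model to estimate the probability of reaching an enrollment target within the remaining window. The site rates, target and horizon are hypothetical assumptions.

```python
# Minimal sketch: Monte Carlo simulation of enrollment scenarios.
# Site-level rates, the enrollment target and the horizon are hypothetical.
import numpy as np

rng = np.random.default_rng(7)

site_rates = np.array([1.2, 0.8, 2.5, 0.4, 1.9])  # expected patients/site/month
target = 60            # patients still needed
months_left = 8        # remaining enrollment window
n_simulations = 10_000

# Each month, each site enrolls a Poisson-distributed number of patients.
monthly = rng.poisson(site_rates, size=(n_simulations, months_left, site_rates.size))
total_enrolled = monthly.sum(axis=(1, 2))

p_on_time = (total_enrolled >= target).mean()
print(f"Probability of reaching {target} patients in {months_left} months: {p_on_time:.1%}")
```

In a live setting, the per-site rates would be refreshed from real-time monitoring data, so the probability estimate updates as new enrollment figures arrive.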

AI empowers real-time site performance monitoring and forecasting by automating data analysis, providing timely alerts and insights, and enabling predictive analytics. 

  • AI models can be designed to detect anomalies in real-time site performance data. By learning from historical patterns and using advanced algorithms, models can identify deviations from expected site performance levels and trigger alerts. This allows for prompt investigation and intervention when site performance discrepancies occur, enabling timely resolution and minimizing negative impact (a minimal sketch follows this list).
  • AI enables efficient and accurate tracking and reporting of key site performance metrics, such as enrollment rate, dropout rate, enrollment target achievement and participant diversity. These metrics can feed real-time dashboards, visualizations and reports that give stakeholders comprehensive, up-to-date insight into site performance.
  • AI algorithms can provide a significant advantage in real-time forecasting because they can infer complex patterns in the data and support reinforcement-driven continuous learning and improvement, which helps produce more accurate and better-informed forecasts.
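
The following is a minimal sketch of the kind of deviation alerting described above, using a simple rolling z-score on weekly enrollment counts. The counts, window and threshold are hypothetical; a production system would learn expected performance from historical patterns across many trials.

```python
# Minimal sketch: flag sites whose recent enrollment deviates from expectation.
# The weekly counts and the z-score threshold are hypothetical.
import numpy as np

def flag_underperforming(weekly_counts, window=8, z_threshold=-2.0):
    """Return True if the latest week is unusually low versus the trailing window."""
    history = np.asarray(weekly_counts[-(window + 1):-1], dtype=float)
    latest = weekly_counts[-1]
    mean, std = history.mean(), history.std(ddof=1)
    if std == 0:
        return latest < mean
    z = (latest - mean) / std
    return z < z_threshold

site_weekly_enrollment = {
    "site_001": [4, 5, 3, 6, 4, 5, 4, 5, 1],   # sudden drop -> alert
    "site_002": [2, 3, 2, 2, 3, 2, 3, 2, 2],   # stable -> no alert
}

for site, counts in site_weekly_enrollment.items():
    if flag_underperforming(counts):
        print(f"ALERT: {site} enrollment is below expected range")
```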

Leveraging Next Best Action (NBA) engine for mitigation plan execution

Having a well-defined and executed mitigation plan in place during trial conduct is essential to the success of the trial.

  • A mitigation plan facilitates trial continuity by providing contingency measures and alternative strategies. By having a plan in place to address unexpected events or challenges, sponsors can minimize disruptions and keep the trial on track. This can help prevent the financial burden of trial interruptions if the trial cannot proceed as planned.
  • Executing the mitigation plan during trial conduct can be challenging because of the complex trial environment, unforeseen circumstances, the need for timeliness and responsiveness, and compliance and regulatory considerations. Effectively addressing these challenges is crucial to the success of the trial and its mitigation efforts.

A Next Best Action (NBA) engine is an AI-powered system or algorithm that recommends the most effective mitigation actions or interventions to optimize site performance in real time.

  • The NBA engine uses AI algorithms to analyze real-time site performance data from various sources, identify patterns, predict future events or outcomes, and anticipate potential issues that require mitigation actions before they occur.
  • Given the specific circumstances of the trial, the engine employs optimization techniques to search for the combination of actions that best aligns with pre-defined key trial conduct metrics. It explores the impact of different scenarios, evaluates trade-offs, and determines the optimal actions to take (a simplified scoring sketch follows this list).
  • The next best actions are then recommended to stakeholders, such as sponsors, investigators or site coordinators. Recommendations can be presented through an interactive dashboard to facilitate understanding and enable stakeholders to make informed decisions.
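
As a simplified illustration of the optimization step, the sketch below scores a handful of candidate mitigation actions against weighted trial-conduct metrics and recommends the highest-scoring one. The actions, impact estimates, costs and weights are hypothetical; a real NBA engine would draw these from predictive models and real-time trial data.

```python
# Minimal sketch: rank candidate mitigation actions against weighted trial metrics.
# Actions, estimated impacts, costs and weights are all hypothetical.

CANDIDATE_ACTIONS = {
    "add_backup_site":        {"enrollment_gain": 8.0, "timeline_risk": -0.10, "cost": 120_000},
    "extend_outreach_budget": {"enrollment_gain": 5.0, "timeline_risk": -0.05, "cost": 40_000},
    "retrain_site_staff":     {"enrollment_gain": 3.0, "timeline_risk": -0.08, "cost": 15_000},
}

# Pre-defined key trial-conduct metrics and their relative weights
# (negative weights penalize added risk and cost).
WEIGHTS = {"enrollment_gain": 1.0, "timeline_risk": -50.0, "cost": -0.00005}

def score(action):
    """Weighted sum of an action's estimated impact on the key metrics."""
    impact = CANDIDATE_ACTIONS[action]
    return sum(WEIGHTS[k] * impact[k] for k in WEIGHTS)

ranked = sorted(CANDIDATE_ACTIONS, key=score, reverse=True)
print("Recommended next best action:", ranked[0])
for action in ranked:
    print(f"  {action}: score={score(action):.2f}")
```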

Shattering the status quo


Clinical trials are the bread and butter of the pharmaceutical industry; however, trials often experience delays that can significantly extend the duration of a study. Fortunately, there are straightforward ways to address some trial management challenges: understand the process and the people involved, adopt a long-term AI strategy while building AI capabilities within this use case, and invest in new machine learning models that enable enrollment forecasting, real-time site monitoring and a data-driven recommendation engine. These steps can not only generate sizable savings but also give biopharma companies greater confidence that their investments in artificial intelligence will deliver impact.

IBM Consulting and Pfizer are working together to revolutionize the pharmaceutical industry by reducing the time and cost associated with failed clinical trials so that medicines can reach patients in need faster and more efficiently.

Combining IBM's technology, data strategy and computing prowess with Pfizer's extensive clinical experience, we have also established a collaboration to explore quantum computing in conjunction with classical machine learning to more accurately predict which clinical trial sites are at risk of recruitment failure. Quantum computing is a rapidly emerging and transformative technology that applies the principles of quantum mechanics to solve industry-critical problems too complex for classical computers.

Source: ibm.com