Saturday 28 January 2023

Understanding Data Governance


If you’re in charge of managing data at your organization, you know how important it is to have a system in place for ensuring that your data is accurate, up-to-date, and secure. That’s where data governance comes in.

What exactly is data governance and why is it so important?


Simply put, data governance is the process of establishing policies, procedures, and standards for managing data within an organization. It involves defining roles and responsibilities, setting standards for data quality, and ensuring that data is being used in a way that is consistent with the organization’s goals and values.

But don’t let the dry language fool you – data governance is crucial for the success of any organization. Without it, you might as well be throwing your data to the wolves (or the intern who just started yesterday and has no idea what they’re doing). Poor data governance can lead to all sorts of problems, including:

Inconsistent or conflicting data

Imagine trying to make important business decisions based on data that’s all over the place. Not only is it frustrating, but it can also lead to costly mistakes.

Data security breaches

If your data isn’t properly secured, you’re leaving yourself open to all sorts of nasty surprises. Hackers and cyber-criminals are always looking for ways to get their hands on sensitive data, and without proper data governance, you’re making it way too easy for them.

Loss of credibility

If your data is unreliable or incorrect, it can seriously damage your organization’s reputation. No one is going to trust you if they can’t trust your data.

As you can see, data governance is no joke. But that doesn’t mean it can’t be fun! Okay, maybe “fun” is a stretch, but there are definitely ways to make data governance less of a chore. Here are a few best practices to keep in mind:

Establish clear roles and responsibilities

Make sure everyone knows who is responsible for what. Provide the necessary training and resources to help people do their jobs effectively.

Define policies and procedures

Set clear guidelines for how data is collected, stored, and used within your organization. This will help ensure that everyone is on the same page and that your data is being managed consistently.

Ensure data quality

Regularly check your data for accuracy and completeness. Put processes in place to fix any issues that you find. Remember: garbage in, garbage out.
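
As a minimal illustration of such checks (assuming tabular data in a pandas DataFrame; the dataset and column names below are hypothetical), a few automated tests for completeness, uniqueness and validity might look like this:

```python
import pandas as pd

# Hypothetical customer extract; in practice this comes from your source systems.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "email": ["a@x.com", None, "b@y.com", "not-an-email"],
    "signup_date": ["2023-01-05", "2023-01-07", "2023-01-07", "2023-02-30"],
})

# Completeness: count missing values per column.
missing = df.isna().sum()

# Uniqueness: primary keys should not repeat.
duplicate_ids = df[df["customer_id"].duplicated(keep=False)]

# Validity: emails should match a basic pattern; dates should parse.
bad_emails = df[~df["email"].str.contains(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", na=True)]
bad_dates = df[pd.to_datetime(df["signup_date"], errors="coerce").isna()]

print("Missing values per column:", missing.to_dict())
print(f"{len(duplicate_ids)} rows with duplicated IDs")
print(f"{len(bad_emails)} invalid emails, {len(bad_dates)} invalid dates")
```

Checks like these are cheap to run on every load, which is what turns “regularly check your data” from a good intention into a repeatable process.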

Break down data silos

Data silos are the bane of any data governance program. By breaking down these silos and encouraging data sharing and collaboration, you’ll be able to get a more complete picture of what’s going on within your organization.

Of course, implementing a successful data governance program isn’t always easy. You may face challenges like getting buy-in from stakeholders, dealing with resistance to change, and managing data quality. But with the right approach and a little bit of persistence, you can overcome these challenges and create a data governance program that works for you.

So don’t be afraid to roll up your sleeves and get your hands dirty with data governance. Your data – and your organization – will thank you for it.

In future posts, my Data Elite team and I will help guide you in this journey with our point of view and insights on how IBM can help accelerate your organization’s data readiness with our solutions.

Source: ibm.com

Tuesday 24 January 2023

The people and operations challenge: How to enable an evolved, single hybrid cloud operating model


Within a year, the average enterprise will have more than 10 clouds, but limited architectural guardrails and implementation pressures will make the IT landscape more complex, more costly and less likely to deliver better business outcomes. As businesses adopt a hybrid cloud approach to drive digital transformation, leaders recognize the siloed, suboptimal workflows across their public cloud, private cloud and on-premises estates. In fact, 71% of executives see integration across the cloud estate as a problem.

These leaders must overcome many challenges as they work to simplify and bring order to their hybrid IT estate, including talent shortages, operating model confusion and managing the journey from the current operating model to the target operating model.

By taking three practical steps, leaders can empower their teams and design workflows that break down siloed working practices into an evolved, single hybrid cloud operating model.

Three steps on the journey to hybrid cloud mastery


1. Empower a Cloud Center of Excellence (CCoE) to bring the hybrid cloud operating model to life and to accelerate execution

In a talent-constrained environment, the CCoE houses cross-disciplinary subject-matter experts who will define and lead the transition to a new operating model and new working practices. These experts must be empowered to work across all of the cloud silos, as the goal is to dissolve silos into an integrated, common way of working that serves customers and employees better than a fragmented approach.

This might be uncomfortable, especially in hardened silos. We recommend that you treat developers and delivery teams as customers on this journey. Help them answer the question of how this new way of working is better than the old way of doing things. Seeing around corners requires investing in a small team of scouts (“Look Ahead Squads”) that stays one or two steps ahead of current implementations. These scouts should be flexible, as implementing this change is a learning experience.

2. Empower your people with the skills and experience they’ll need to thrive in a hybrid cloud operating model

69% of business leaders lack teams with the right cloud skills. There aren’t enough cloud architects, microservice developers or data engineers, especially if the pool of specialists is spread across cloud silos. With hybrid cloud, a consistent DevSecOps toolchain and a coherent operating model, you don’t need to train everyone on every silo of technology and practice.

Address the skill gap by prioritizing the specializations required. Make learning experiential, so people get coaching on how to use new skills in the context of their roles in the new hybrid cloud operating model, and shape new ways of working by conducting training more efficiently and at scale within a garage environment. Drive toward true DevSecOps practices by emphasizing how the skillsets and practices involved need to be applied in an integrated, cross-disciplinary operating model. As a hybrid cloud operating model evolves, it becomes clear that cloud-native teams don’t work in isolation. Organizations must spend more time defining and evolving proficiency frameworks that were previously developed in silos.

3. Reframe the talent problem as an operating model design opportunity

Operating model problems are often misread as talent problems. As W. Edwards Deming said, “A bad system will beat a good person every time.” So, design the work required for hybrid cloud operations first, and adjust your organization second.

Be aware that operating models and organization charts are different animals. An operating model is primarily concerned with how the work of service delivery flows from customer request to fulfillment. In contrast, the primary concern of an organization chart is the hierarchy through which power and control are distributed, which should be designed to make the very best use of the people you have now.

As leaders navigate the transition to a hybrid cloud environment, well-designed solutions that span business and IT become more valuable than ever. These steps ensure that the IT roadmap moves in lockstep with the business roadmap and enable leaders to consider how each interim state contributes to the evolution from the current operating model to the target operating model. This awareness can be an organization’s superpower for incorporating cloud-native, efficient and connected working practices across the hybrid environment to deliver innovation at speed, while alleviating issues with skill, talent and experience.

Source: ibm.com

Sunday 22 January 2023

Make data protection a 2023 competitive differentiator


Data privacy regulations, such as the General Data Protection Regulation (GDPR) in the European Union or the California Consumer Privacy Act (CCPA) in the state of California, are inescapable. By 2024, for instance, 75% of the world’s population will have its personal data protected by encryption, multifactor authentication, masking and erasure, as well as data resilience. It’s clear that data protection and its three pillars (data security, data ethics and data privacy) are the baseline expectation for organizations and stakeholders, both now and into the future.
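
To make those mechanisms slightly more concrete, here is a minimal sketch in plain Python of two of the techniques named above: masking a value for display and pseudonymizing it for analytics. The record and salt are invented for illustration; a production system would use a vetted privacy toolkit and proper key management.

```python
import hashlib

def mask_email(email: str) -> str:
    """Mask an email for display, keeping just enough to stay recognizable."""
    local, _, domain = email.partition("@")
    return f"{local[0]}***@{domain}"

def pseudonymize(value: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash.

    The same input always yields the same token, so datasets can still be
    joined on it, but the original value cannot be read back."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane.doe@example.com"}
print(mask_email(record["email"]))                           # j***@example.com
print(pseudonymize(record["email"], salt="per-dataset-secret"))
```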

While this trend is about protecting customer data and maintaining trust amid shifting regulations, it can also be a game-changing competitive advantage. By implementing cohesive data protection initiatives, organizations that secure their users’ data see big wins in brand image and customer loyalty and stand out in the marketplace.

The key to differentiation comes in getting data protection right, as part of an overall data strategy. Keep reading to learn how investing in the right data protection supports and optimizes your brand image.

Use a data protection strategy to maintain your brand image and customer trust


How a company collects, stores and protects consumer data goes beyond cutting data storage costs—it is a central driving force of its reputation and brand image. As a baseline, consumers expect organizations to adhere to data privacy regulations and compliance requirements; they also expect the data and AI lifecycle to be fair, explainable, robust, equitable and transparent.

Operating with a ‘data protection first’ point of view forces organizations to ask the hard-hitting, moral questions that matter to clients and prospects: Is it ethical to collect this person’s data in the first place? As an organization, what are we doing with this information? Have we shared our intentions with respondents from whom we’ve collected this data? How long and where will this data be retained? Are we going to harm anybody by doing what we do with data?

Differentiate your brand image with privacy practices rooted in data ethics


When integrated appropriately, data protection and the surrounding data ethics create deep trust with clients and the market overall. Take Apple, for example. The company has been exceedingly clear in communicating with consumers what data is collected, why it’s being collected, and whether any revenue is made from it. Apple goes to great lengths to integrate trust, transparency and risk management into the DNA of the company culture and the customer experience. Many organizations aren’t as mature in this area of data ethics.

One of the key ingredients to optimizing your brand image through data protection and trust is active communication, both internally and externally. This requires organizations to rethink the way they do business in the broadest sense. To do this, organizations must lean into data privacy programs that build transparency and risk management into everyday workflows. It goes beyond preventing data breaches or having secure means for data collection and storage. These efforts must be supported by integrating data privacy and data ethics into an organization’s culture and customer experiences.

When cultivating a culture rooted in data ethics, keep these three things in mind:


◉ Regulatory compliance is a worthwhile investment, as it mitigates risk and helps generate revenue and growth.
◉ The need for compliance is not disruptive; it’s an opportunity to differentiate your brand and earn consumer trust.
◉ Laying the foundation for data privacy allows your organization to manage its data ethics better.

Embrace the potential of data protection at the core of your competitive advantage


Ultimately, data protection fosters ongoing trust. It isn’t a one-and-done deal. It’s a continuous, iterative journey that evolves with changing privacy laws and regulations, business needs and customer expectations. Your ongoing efforts to differentiate your organization from the competition should include strategically adopting and integrating data protection as a cultural foundation of how work gets done.

By enabling an ethical, sustainable and adaptive data protection strategy that ensures compliance and security in an ever-evolving data landscape, you are building your organization into a market leader.

Source: ibm.com

Saturday 21 January 2023

Four starting points to transform your organization into a data-driven enterprise


Due to the convergence of events in the data analytics and AI landscape, many organizations are at an inflection point. Regardless of size, industry or geographical location, the sprawl of data across disparate environments, the increasing velocity of data and the explosion of data volumes have resulted in complex data infrastructures for most enterprises. Furthermore, a global effort to create new data privacy laws and increased attention on bias in AI models have resulted in convoluted business processes for getting data to users. How do business leaders navigate this new data and AI ecosystem and make their company a data-driven organization? The solution is a data fabric.

A data fabric architecture elevates the value of enterprise data by providing the right data, at the right time, regardless of where it resides. To simplify the process of becoming data-driven with a data fabric, in 2023 we are focusing on four entry points aligned to the most common data and AI stakeholder challenges.

We are also introducing IBM Cloud Pak for Data Express, a set of solutions aligned to the data fabric entry points. These solutions provide new clients with affordable, high-impact capabilities to quickly explore and validate the path to becoming a data-driven enterprise, offering a simple on-ramp to realizing the business value of a modern architecture.

Data governance


The data governance capability of a data fabric focuses on the collection, management and automation of an organization’s data. Automated metadata generation is essential to turning a manual process into one that is better controlled: it helps avoid human error and tags data so that policies can be enforced at the point of access rather than in each individual repository. This approach makes it easier for business users to find the data that best fits their needs. More importantly, it enables them to quickly and easily find quality data that conforms to regulatory requirements. IBM’s data governance capability enables the enforcement of policies at runtime anywhere, in essence “policies that move with the data”. This capability gives data users visibility into the origin, transformations and destination of data as it is used to build products. The result is more useful data for decision-making, less hassle and better compliance.
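
The product’s internals aren’t shown here, but the idea of “policies that move with the data” can be sketched as metadata-driven checks evaluated at access time rather than per repository. In this toy sketch (the tags, roles and rules are all invented), the policy travels with the dataset’s tags, so it applies wherever the data lives:

```python
from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str
    tags: set = field(default_factory=set)   # metadata travels with the data

# Hypothetical policy: which roles may read data carrying a given tag.
POLICIES = {
    "pii":       {"data_steward", "compliance"},
    "financial": {"data_steward", "finance_analyst"},
}

def can_access(user_roles: set, dataset: Dataset) -> bool:
    """Enforce tag-based policies at the point of access,
    regardless of which repository the dataset lives in."""
    for tag in dataset.tags:
        allowed = POLICIES.get(tag)
        if allowed is not None and not (user_roles & allowed):
            return False
    return True

claims = Dataset("insurance_claims", tags={"pii"})
print(can_access({"finance_analyst"}, claims))  # False: pii tag blocks this role
print(can_access({"data_steward"}, claims))     # True
```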

Data integration


Data growth continues unabated and is now accompanied not only by siloed data but also by a plethora of repositories spread across numerous clouds. The reasoning behind this sprawl is simple and well-justified, with the exception of the silos: more data creates the opportunity for more accurate data-driven insights, while using multiple clouds helps avoid vendor lock-in and allows data to be stored where it best fits. The challenge, of course, is the added complexity of data management, which hinders the actual use of that data for better decisions, analysis and AI.

As part of a data fabric, IBM’s data integration capability creates a roadmap that helps organizations connect data from disparate sources, build data pipelines, remediate data issues, enrich data quality and deliver integrated data to multicloud platforms. From there, the data can be easily accessed via dashboards by data consumers or by those building data products. This kind of integration ensures that the right data is delivered to the right person at the right time. With IBM’s data integration portfolio, you are not locked into a single integration style: you can select a hybrid integration strategy that aligns with your organization’s business strategy and meets the needs of the data consumers who want to access and utilize the data.
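
As a toy illustration of that pattern (not IBM’s tooling; the sources, schemas and fix-ups below are invented), a pipeline that connects two disparate sources, remediates a quality issue and delivers one integrated view can be reduced to a few steps:

```python
import pandas as pd

# Two disparate sources with inconsistent schemas (hypothetical extracts).
crm = pd.DataFrame({"CustomerID": [1, 2], "Email": ["A@X.COM", "b@y.com"]})
orders = pd.DataFrame({"cust_id": [1, 1, 2], "amount": [120.0, 80.0, None]})

# Standardize: align column names and normalize values.
crm = crm.rename(columns={"CustomerID": "customer_id", "Email": "email"})
crm["email"] = crm["email"].str.lower()
orders = orders.rename(columns={"cust_id": "customer_id"})

# Remediate: a simple data quality fix before delivery.
orders["amount"] = orders["amount"].fillna(0.0)

# Integrate and deliver: one view a dashboard or data product could consume.
spend = orders.groupby("customer_id", as_index=False)["amount"].sum()
integrated = crm.merge(spend, on="customer_id", how="left")
print(integrated)
```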

Data science and MLOps


AI is no longer experimental. These technologies are becoming mainstream across industries and are proving to be key drivers of enterprise innovation and growth, leading to quicker, more accurate strategic decisions. When AI is done right, enterprises see increased revenues, improved customer experiences and faster time to market, all of which strengthens their competitive positioning.

The data science and MLOps capability provides data science tools and solutions that enable enterprises to accelerate AI-driven innovation, simplify the MLOps lifecycle and run any AI model with flexible deployment. With this capability, data-driven companies can operationalize data science models on any cloud while instilling trust in AI outcomes, and they are better positioned to manage and govern the AI lifecycle and to optimize business decisions with prescriptive analytics.

AI governance


Artificial intelligence (AI) is no longer a choice. Adoption is imperative to beat the competition, release innovative products and services, better meet customer expectations, reduce risk and fraud, and drive profitability. However, successful AI is not guaranteed and does not always come easily. AI initiatives require governance and compliance with corporate and ethical principles, laws and regulations.

A data fabric addresses the need for AI governance by providing capabilities to direct, manage and monitor the AI activities of an organization. AI governance is not just a “nice to have”. It is an integral part of adopting a data-driven culture, and it is critical for avoiding failed audits, hefty fines or damage to the organization’s reputation. The IBM AI governance solution provides automated tools and processes that enable an organization to direct, manage and monitor AI across its lifecycle.
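
The “monitor” part of that sentence can be pictured as threshold checks over a deployed model’s live metrics. The sketch below is illustrative only (the metric names and thresholds are invented, not IBM’s): a governance job compares each metric against the limit the organization has set and raises alerts for review.

```python
# Hypothetical live metrics for a deployed model, gathered by a monitoring job.
metrics = {"accuracy": 0.87, "disparate_impact": 0.72, "drift_score": 0.18}

# Governance thresholds an organization might set for this model.
THRESHOLDS = {
    "accuracy":         ("min", 0.85),  # quality floor
    "disparate_impact": ("min", 0.80),  # fairness floor (1.0 = parity)
    "drift_score":      ("max", 0.15),  # input drift ceiling
}

def evaluate(metrics: dict) -> list:
    """Return governance alerts for any metric outside its threshold."""
    alerts = []
    for name, (kind, limit) in THRESHOLDS.items():
        value = metrics[name]
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            alerts.append(f"ALERT: {name}={value} violates {kind} threshold {limit}")
    return alerts

for alert in evaluate(metrics):
    print(alert)  # flags disparate_impact and drift_score for review
```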

IBM Cloud Pak for Data Express solutions


As previously mentioned, we now provide a simple, lightweight, and fast means of validating the value of a data fabric. Through the IBM Cloud Pak for Data Express solutions, you can leverage data governance, ELT Pushdown, or data science and MLOps capabilities to quickly evaluate the ability to better utilize data by simplifying data access and facilitating self-service data consumption. In addition, our comprehensive AI Governance solution complements the data science & MLOps express offering. Rapidly experience the benefits of a data fabric architecture in a platform solution that makes all data available to drive business outcomes.

Source: ibm.com

Friday 20 January 2023

It’s 2023… are you still planning and reporting from spreadsheets?


I have worked with IBM Planning Analytics with Watson, the platform formerly known as TM1, for over 20 years. I have never been more excited to share in our customers’ enthusiasm for this solution and how it is revolutionising the manual, disconnected and convoluted processes that support an organisation’s planning and decision-making activities.

Over the last few months, we have been collecting stories from our customers and projects delivered in 2022 and hearing the feedback on how IBM Planning Analytics has helped support organisations across not only the office of finance – but all departments in their organisation.

The need for a better planning and management system


More and more we are seeing demand for solutions that bring together a truly holistic view of both operational and financial business planning data and models across the entire width and breadth of the enterprise. Having such a platform that also allows planning team members to leverage predictive forecasting and integrate decision management and optimisation models puts organisations at a significant advantage over those that continue to rely on manual worksheet or spreadsheet-based planning processes.

A better planning solution in action


One such example comes from a recent project with Oceania Dairy. Oceania Dairy operates a substantial plant in Glenavy, a small South Island town in New Zealand on the banks of the Waitaki River. The plant can convert 65,000 litres of milk per hour into 10 tons of powder (up to 47,000 tons of powder per year), from standard whole milk powders through to specialty powders including infant formula. The site runs infant formula blending and canning lines, UHT production lines, and produces Anhydrous Milk Fat. In total, the site handles more than 250 million litres of milk per year and generates export revenue of close to NZD$500 million.

Oceania Supply Chain Manager Leo Zhang shares in our recently published case study: “Connectivity has two perspectives: people to people, facilitating information flows between our 400 employees on site. Prior to CorPlan’s work to implement the IBM Planning Analytics product, there was low information efficiency, with people working on legacy systems or individual spreadsheets. The second perspective is integration. While data is supposed to be logically connected, decision makers were collating Excel sheets, resulting in poor decision efficiency.

“CorPlan,” adds Zhang, “has fulfilled this aspect by delivering a common platform which creates a single version of the truth, and a central system where data updates uniformly.” In terms of collaboration, he says teams working throughout the supply chain managing physical stock flows are being connected from the start to the finish of product delivery. “It’s hard for people to work towards a common goal in the absence of a bigger picture. Collaboration brings that bigger picture to every individual while CorPlan provides that single, common version of the truth,” Zhang comments.

The merits of a holistic planning platform


While the approach of selecting a platform to address a single piece of the planning puzzle – such as Merchandise Planning, S&OP (Sales and Operational Planning), Workforce Planning or even FP&A (Financial Planning and Analysis) – may be an organisation’s desired strategy, selecting a platform that can grow and support all planning elements across the organisation has significant merits. Customers such as Oceania Dairy are realising true ROI by having:
 
◉ All an organisation’s stakeholders operating from a single set of agreed planning assumptions, permissions, variables, and results

◉ A platform that can run any number of live forecast models to drive the data analysis and what-if scenarios needed for stakeholder decision-making

◉ An integrated consolidation of the various data sources capturing the actual transactional data sets, such as ERP, Payroll, CRM, Data Marts/ Warehouses, external data stores and more

◉ Enterprise-level security

◉ Delivery in the cloud, as a service

My team and I get a big kick out of delivering that first real-time demonstration to a soon-to-be customer, showing them what the IBM Planning Analytics platform can do. It is not just the extensive features and workflow functionality that generates excitement. It is that moment when they experience a sudden and striking realisation – an epiphany – that this product is going to revolutionise the painful, challenging, and time-consuming process of pulling together a plan.

What is even better is then delivering a project successfully, on time and within budget, that achieves the desired results and exceeds expectations.

If you want to experience what “good planning” looks like, feel free to reach out. The CorPlan team would love to help you start your performance management journey. We can help with product trials, proof of concept or simply supply more information about the solution to support your internal business case. It’s time to address the challenges that a disconnected, manual planning and reporting process brings.

Source: ibm.com

Thursday 19 January 2023

Leveraging machine learning and AI to improve diversity in clinical trials


The modern medical system does not serve all its patients equally—not even nearly so. Significant disparities in health outcomes have been recognized for decades and persist today. The causes are complex, and solutions will involve political, social and educational changes, but some factors can be addressed immediately by applying artificial intelligence to ensure diversity in clinical trials.

A lack of diversity in clinical trial patients has contributed to gaps in our understanding of diseases, preventive factors and treatment effectiveness. Diversity factors include gender, age group, race, ethnicity, genetic profile, disability, socioeconomic background and lifestyle conditions. As the Action Plan of the FDA Safety and Innovation Act succinctly states, “Medical products are safer and more effective for everyone when clinical research includes diverse populations.” But certain demographic groups are underrepresented in clinical trials due to financial barriers, lack of awareness, and lack of access to trial sites. Beyond these factors, trust, transparency and consent are ongoing challenges when recruiting trial participants from disadvantaged or minority groups.

There are also ethical, sociological and economic consequences to this disparity. An August 2022 report by the National Academies of Sciences, Engineering, and Medicine projected that hundreds of billions of dollars will be lost over the next 25 years due to reduced life expectancy, shortened disability-free lives, and fewer years working among populations that are underrepresented in clinical trials.

In the US, diversity in trials is a legal imperative. The FDA Office of Minority Health and Health Equity provides extensive guidelines and resources for trials and recently released guidance to improve participation from underrepresented populations.

From moral, scientific, and financial perspectives, designing more diverse and inclusive clinical trials is an increasingly prominent goal for the life science industry. A data-driven approach, aided by machine learning and artificial intelligence (AI), can aid these efforts.

The opportunity


Life science companies have been required by FDA regulations to present the effectiveness of new drugs by demographic characteristics such as age group, gender, race and ethnicity. In the coming decades, the FDA will also increasingly focus on genetic and biological influences that affect disease and response to treatment. As summarized in a 2013 FDA report, “Scientific advances in understanding the specific genetic variables underlying disease and response to treatment are increasingly becoming the focus of modern medical product development as we move toward the ultimate goal of tailoring treatments to the individual, or class of individuals, through personalized medicine.”

Beyond demographic and genetic data, there is a trove of other data to analyze, including electronic medical records (EMR) data, claims data, scientific literature and historical clinical trial data.

Using advanced analytics, machine learning and AI on the cloud, organizations now have powerful ways to:

◉ Form a large, complicated, diverse set of patient demographics, genetic profiles and other patient data
◉ Understand the underrepresented subgroups
◉ Build models that encompass diverse populations
◉ Close the diversity gap in the clinical trial recruitment process
◉ Ensure that data traceability and transparency align with FDA guidance and regulations

Initiating a clinical trial consists of four steps:

1. Understanding the nature of the disease
2. Gathering and analyzing the existing patient data
3. Creating a patient selection model
4. Recruiting participants

Addressing diversity disparity during steps two and three will help researchers better understand how drugs or biologics work, shorten clinical trial approval time, increase trial acceptability amongst patients and achieve medical product and business goals.

A data-driven framework for diversity


Here are some examples to help us understand the diversity gaps. Hispanic/Latinx patients make up 18.5% of the population but only 1% of typical trial participants; African-American/Black patients make up 13.4% of the population but only 5% of typical trial participants. Between 2011 and 2020, 60% of vaccine trials did not include any patients over 65—even though 16% of the U.S. population is over 65. To fill diversity gaps like these, the key is to include the underrepresented populations in the clinical trial recruitment process.
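
With figures like these, a first-pass gap analysis is simple arithmetic. The sketch below (pandas, using the numbers cited above; the 65+ trial share is set to zero to reflect the trials that enrolled no one over 65) computes a representation ratio, where 1.0 means a group is enrolled in proportion to its population share, plus a parity headcount for a hypothetical 1,000-person trial:

```python
import pandas as pd

# Population share vs. typical trial share, from the figures cited above (%).
groups = pd.DataFrame({
    "group":          ["Hispanic/Latinx", "African-American/Black", "Age 65+"],
    "population_pct": [18.5, 13.4, 16.0],
    "trial_pct":      [1.0, 5.0, 0.0],  # 0.0: many vaccine trials enrolled no 65+
})

# Representation ratio: 1.0 = proportional; below 1.0 = underrepresented.
groups["representation_ratio"] = (
    groups["trial_pct"] / groups["population_pct"]
).round(2)

# Recruitment needed to reach parity in a hypothetical 1,000-person trial.
groups["target_n_of_1000"] = (groups["population_pct"] * 10).astype(int)

print(groups)
```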

For the steps leading up to recruitment, we can evaluate the full range of data sources listed above. Depending on the disease or condition, we can evaluate which diversity parameters are applicable and what data sources are relevant. From there, clinical trial design teams can define patient eligibility criteria, or expand trials to additional sites to ensure all populations are properly represented in the trial design and planning phase.

How IBM can help


To effectively enable diversity in clinical trials, IBM offers various solutions, including data management, AI and advanced analytics on the cloud, and MLOps frameworks. This combination helps trial designers provision and prepare data, merge various aspects of patient data, identify diversity parameters and eliminate bias in modeling. It does so using an AI-assisted process that optimizes patient selection and recruitment by better defining clinical trial inclusion and exclusion criteria.

Because the process is traceable and equitable, it provides a robust selection process for trial participant recruitment. As life sciences companies adopt such frameworks, they can build trust that clinical trials have diverse populations and thus build trust in their products. Such processes also help healthcare practitioners better understand and anticipate possible impacts products may have on specific populations, rather than responding ad hoc, where it may be too late to treat conditions.

Summary


IBM’s solutions and consulting services can help you leverage additional data sources and identify more relevant diversity parameters so that trial inclusion and exclusion criteria can be re-examined and optimized. These solutions can also help you determine whether your patient selection process accurately represents disease prevalence and improve clinical trial recruitment. Using machine learning and AI, these processes can easily be scaled across a range of trials and populations as part of a streamlined, automated workflow.

These solutions can help life sciences companies build trust with communities that have been historically underrepresented in clinical trials and improve health outcomes.

Source: ibm.com

Tuesday 17 January 2023

Experiential Shopping: Why retailers need to double down on hybrid retail


Shopping can no longer be divided into online or offline experiences. Most consumers now engage in a hybrid approach, where a single shopping experience involves both in-store and digital touchpoints. In fact, this hybrid retail journey is the primary buying method for 27% of all consumers, and the specific retail category and shopper age can significantly increase this number. According to Consumers want it all, a 2022 study from the IBM Institute for Business Value (IBV), “today’s consumers no longer see online and offline shopping as distinct experiences. They expect everything to be connected all the time.”

Experiential hybrid retail is a robust omnichannel approach to strategically blending physical, digital, and virtual channels. It empowers customers with the freedom to engage with brands on whichever shopping channel is most convenient, valued or preferred at any given time. For example, consumers may engage in product discovery on social platforms, purchase online and pick up items at a store. They may also be in a store using digital tools to locate or research products. The possibilities are endless.

While hybrid retail is now an imperative for brands, it has created new complexities for retailers. “Channel explosion is a reality and retailers are challenged to scale their operations across what is essentially a moving target,” says Richard Berkman, VP & Sr. Partner, Digital Commerce, IBM iX. The result is often disconnected shopping journeys that fail consumers. Imagine selecting the “in-store pickup” option for an online purchase, only to discover that fulfillment of the order was impossible since the store was out of stock.

According to Shantha Farris, Global Digital Commerce Strategy and Offerings Leader at IBM iX, the real cost of an unsuccessful approach to hybrid retail is losing customers—potentially forever. There are still a lot of pandemic-weary consumers whose patience and tolerance for shopping-related friction are at an all-time low. Additionally, people remain desperate to feel connected. Retailers must be totally on point, pleasing customers with friction-free and highly experiential omnichannel commerce journeys. When this doesn’t happen, customers can react harshly. Farris refers to this phenomenon as “rage shopping” and observes that consumers will choose to shop elsewhere based on one disappointing experience. “End customers demand frictionless experiences,” she says. “They’re empowered. They have choices. They want to trust that their brand experience will be trusted, relevant, and convenient—and that this will remain true every time they shop with you.” Retailers must modernize their technology ecosystem for omnichannel and cross-channel consistency.


Websites. Mobile apps. Social, live streaming and metaverse platforms. Determining which channels to strategically activate is tricky, but it’s not impossible. Commerce begins with brand engagement and product discovery, so it is critical to leverage data-driven insights to understand customers: everything from who they are to how they prefer to progress through the end-to-end shopping journey and how compelling they rate the experience. Then, Berkman says, “retailers need an experience-led vision of the future of their commerce initiatives across channels, with an ability to activate data and dynamically manage those channels.”

Which channels offer the best chance for positive consumer engagement? It depends on the brand. Additionally, measuring the success of each individual channel cannot be assessed using only conversion metrics. Farris comments, “You might discover a product on TikTok, but conversion will probably take place elsewhere.”


The reality of rage shopping is a useful premise for retailers re-examining the current efficacy of every interaction along the purchase journey. Each step, from product discovery to last-mile fulfillment and delivery, needs to “meet customers where they are and evolve into one connected experience,” Berkman says.

Here are three ways to approach hybrid retail using technology along the customer journey. “Whether the transaction itself occurs digitally or physically is beside the point. It’s got to be experiential,” Farris says. “And to provide that experience, you need technology.”

Enhance product discovery with AR


A primary benefit of augmented reality (AR) is increased consumer engagement and confidence at the earliest stage of a purchase. Farris points to work done for a paint company in which IBM designed and deployed a color selection tool, which allows consumers to virtually test different paints on their walls. “There’s a huge fear factor in committing to a paint color for a room,” she says, but with virtual testing, “all of a sudden, your confidence in this purchase goes through the roof.”


Similar AR tools have been a hit for retailers like Ikea and Wayfair, allowing consumers to see how furniture will look in their actual homes. Smart mirrors provide another example: This interactive AR tool enables a quicker try-it-on experience, creating an expanded range of omnichannel buying opportunities for in-store shoppers. Effective AR use is also shown to have a measurable impact on reducing returns, which can cost retailers up to 66% of a product’s sale price, according to 2021 data from Optoro, a reverse logistics company. And a 2022 report from IDC noted: “AR/VR—over 70% view this technology as important, but less than 30% are using it.” That said, a study shared by Snap Inc found that by 2025, nearly 75% of the global population—and almost all smartphone users—will be frequent AR users.

Empower decision-making with 3D modeling


“Digital asset management is a fundamental part of commerce; 3D assets are just the next generation of it,” Farris says. 3D coupled with AR allows consumers to manipulate products in space. “It’s about making product information really convenient and relevant for consumers,” she says. In 2021, Shopify reported an average 94% increase in the conversion rate for merchants who featured 3D content. One example is being able to virtually “try on” a shoe and see every angle of it by rotating your foot. The technology is useful for B2B too. “Instead of reading 50 pages of requirements and specs for some widget, buyers can actually turn the part in space and see if it’ll fit on an assembly line,” Farris says.

3D assets coupled with AR go beyond providing retailers with today’s tools. It’s a measure of futureproofing. “Some of these technologies will give you immediate returns today,” Farris says, “but they will also help retailers build capabilities that will be applicable to deploying a full metaverse commerce experience.”

Digitize how consumers interact with physical stores


In-store customer experiences can be significantly enriched with the use of digital tools and seamless omnichannel integration. Farris points to a major home improvement retailer that does this well. “If you go into one of these stores and can’t find an associate to help you, you can whip out your phone, go to the store’s website, and it’ll tell you what bin a product is in, down to the height of the shelf it’s on. Your phone becomes your in-store guide.”

The employee experience is also dramatically improved with the right digital technologies and omnichannel access. “Store associates need to have real-time data and insights relative to anyone who might walk in the door,” Berkman says, noting that associates, service agents and salespeople should act more like “a community of hosts.” Armed with the right information and access to technology like predictive analytics and automation, Berkman says, “those employees would have the insights to effectively engage customers and create more immersive and personalized experiences to drive brand growth.”

Source: ibm.com

Thursday 12 January 2023

How to unlock a scientific approach to change management with powerful data insights


Without a doubt, there is exponential growth in the access to and volume of process data we all, as individuals, have at our fingertips. Coupled with a climate that is proving increasingly ambiguous and complex, there is a huge opportunity to leverage data insights to drive a more robust, evidence-based methodology for the way we work and manage change. Not only can data support a more compelling change management strategy, but it can also help identify, accelerate and embed change faster, which is critical in our continuously changing world. Grasping these opportunities at IBM, we’re increasingly building our specialism in process mining and data analysis tools and techniques we believe to be true ‘game changers’ when it comes to building cultures of continuous change and innovation.

For us, one stand-out opportunity presented by process data analytics is process mining: analysing event data from business systems to reveal how processes actually run and to find the anomalies, patterns and correlations that predict process outcomes. Process mining presents the ‘art of the possible’ when it comes to investing time and energy into organizational change initiatives that can make change stick. We see four key areas where this capability can be applied to truly transform the value derived from a change program:

1. Starting from the top: Focusing on the right change and making it feel right


We see time and time again the detrimental impact of misaligned or disengaged leaders, especially when it comes to driving a case for effective change and inspiring their teams to take action. A recent Leadership IQ survey in 2021 reported by Forbes states that ‘only 29% of employees say that their leader’s vision for the future always seems to be aligned with the organization’s’. Ultimately, if leaders at every level are not on the same page, and if people are not fully bought into the change, results become vulnerable. Having a brilliant vision and strategy doesn’t make a difference if you can’t get your leaders and employees to buy into that vision.

With the Business Case and Case for Change often being the first and potentially the most critical ‘must get right’ factor of any change program, the need for real-time data is clear. How much easier would it be to base a business transformation on data-driven insights that could demonstrate and quantify the opportunity based on the facts?

Leveraging data to replace the ‘gut feel’ on which too many business decisions are made enables change practitioners to separate perceptions from reality and decide which processes need the most focus. In other words, the data science of change management focuses you on that which will make the greatest impact on your business objectives, KPIs and strategic outcomes. It can show whether perceptions are real, as well as unearthing unexpected insights. Ultimately, insights based on hard data can bring transparency to your decision making and provide a more compelling story that everyone can agree on.

2. Picking up the pace: Accelerating the change journey


Do you know how your business really operates on the ground? Change programs can spend many weeks conducting interviews and workshops to identify ‘As Is’ pictures. However, we often find a disconnect between what users say is happening and what the data shows is actually happening. Process mining tools analyse data from thousands of transactions across all locations to paint the true story, providing faster and more reliable insights into the root causes of issues and identifying the strongest opportunities for improvement.

Likewise, change agents spend a lot of time and effort on ‘To Be’ design and identifying the impact on those affected by the potential ‘new normal’. Process mining tools can perform a fit-gap analysis on new processes to rapidly and more accurately identify the greatest change impacts. This can, in turn, accelerate the creation of your communications and training programmes to support the introduction of the change with more exact and bespoke change journeys. Detailed insights about levels of process compliance can be brought to design teams to inform and focus the design of future processes and systems.
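
IBM’s own tooling isn’t shown here, but the mechanics can be illustrated with the open-source pm4py library (assuming `pip install pm4py`; the event log below is a toy): discover the ‘As Is’ process from an event log, then replay the log against the discovered model (or a designed ‘To Be’ model) to get a fitness score, a simple fit-gap signal.

```python
import pandas as pd
import pm4py

# Toy event log: each row is one activity in one case (e.g., an invoice).
df = pd.DataFrame({
    "case_id":   ["c1", "c1", "c1", "c2", "c2", "c2", "c2"],
    "activity":  ["receive", "approve", "pay", "receive", "approve", "rework", "pay"],
    "timestamp": pd.date_range("2023-01-01", periods=7, freq="h"),
})
df = pm4py.format_dataframe(df, case_id="case_id",
                            activity_key="activity", timestamp_key="timestamp")
log = pm4py.convert_to_event_log(df)

# Discover the 'As Is' process model from the data.
net, initial, final = pm4py.discover_petri_net_inductive(log)

# Replay the log against the model: fitness near 1.0 means behaviour conforms.
fitness = pm4py.fitness_token_based_replay(log, net, initial, final)
print(fitness)
```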

A center of excellence (CoE) – a dedicated team mandated to provide leadership, best practices, technical deployment, support and training for a process mining tool such as Celonis – can act as an accelerator for this change.

3. Making it happen: Driving faster adoption


There is a common conception that introducing new technology will make things faster and easier, but this isn’t always the case. Forbes states that ‘despite significant investments in new technologies over the past decade, many organisations are actually watching their operations slow down due to underutilization of technology’. Quite clearly, we can’t just ‘plug in a system’ and expect it to be adopted. No matter how fit for purpose its design, and how well it is adopted when first introduced, results will undoubtedly shift over time due to a variety of factors within and outside of our control.

So how do you spot this early, and react or even prevent this in a timely and effective manner?

Through advanced analytics, process mining solutions can highlight gaps and opportunities in a digital transformation. This means you can intervene with targeted change interventions to course-correct more quickly – for example, by pinpointing a user group or process showing lower levels of activity or task accuracy. You can then design training and engagement interventions to further upskill or educate team members, or identify process improvement opportunities to better support them in the activities they undertake.

Furthermore, this capability can also tackle another stumbling block we face: once you have introduced a change, how do you easily measure its success and adoption?

By applying process mining, you can quickly see levels of process compliance right across the organisation and right across the value chain. Process mining tools generate dashboards that give an ‘at a glance’ view of adoption rates. They also allow you to quantify business value based on improvements and to assign and track key metrics against business objectives.

4. Making it stick: Driving continuous change


Embedding a culture of continuous change and improvement into the DNA of an organisation is a significant undertaking. The McKinsey Organizational Health Index shows that the most common type of culture in high-performing companies is one of continuous improvement, and across almost 2,000 companies it found that organizational health is closely linked to performance. By their very nature, process mining tools target this continuous improvement and equip users with the data to continuously identify new opportunities and relentlessly reinvent the way things get done.

Due to their simplicity and intuitive qualities, process mining tools make it easy for non-data specialists to find and use data-driven insights, making it much easier to build a culture of continuous improvement and innovation across the organisation. However, data-based approaches such as process mining provide the most value when organisations commit to them for the long term. Thus, adoption of these tools is a mini change project in itself. Embedding the use of data and metrics in everyday work requires not only an understanding of the tool but also a change of mindset towards using data insights as part of continuous improvement efforts. Just as with any IT implementation, spending time taking users and stakeholders through the change curve from awareness to adoption will pay dividends in the long run.


Final thoughts on data insights for change management


To conclude, through the power of data, change management will move from a project-based discipline struggling to justify adequate investment to one that advises on business outcomes and how to successfully deliver them. It presents opportunities not only to super-charge and accelerate typical change management interventions – such as the development of change journeys or tracking adoption – but also to influence and change the mindsets of those working with it. Process mining equips and empowers people to think, challenge and reinvent what we do. This goes hand in hand with our appreciation that ‘change is always changing’ and that we need to keep pace or risk falling behind.

Source: ibm.com

Tuesday 10 January 2023

Using a digital self-serve experience to accelerate and scale partner innovation with IBM embeddable AI


IBM has invested $1 billion into our partner ecosystem. We want to ensure that partners like you have the resources to build your business and develop software for your customers using IBM’s industry-defining hybrid cloud and AI platform. Together, we build and sell powerful solutions that elevate our clients’ businesses through digital transformation.

To that end, IBM recently announced a set of embeddable AI libraries that empower partners to create new AI solutions. In fact, IBM supports an easy and fast way to embed and adopt IBM AI technologies through the new Digital Self-Serve Co-Create Experience (DSCE).

The Build Lab team created the DSCE to complement its high-touch engagement process and provide a digital self-service experience that scales to tens of thousands of Independent Software Vendors (ISVs) adopting IBM’s embeddable AI. Using the DSCE self-serve portal, partners can discover and try the recently launched IBM embeddable AI portfolio of IBM Watson Libraries, IBM Watson APIs, and IBM applications at their own pace and on their schedule. In addition, DSCE’s digitally guided experience enables partners to effortlessly package and deploy their software at scale.


Your on-ramp to embeddable AI from IBM


The IBM Build Lab team collaborates with qualified ISVs to build Proofs of Experience (PoX) demonstrating the value of combining the best of IBM Hybrid Cloud and AI technology to create innovative solutions and deliver unique market value.

DSCE is a wizard-driven experience. Users respond to contextual questions and get suggested prescriptive assets, education, and trials while rapidly integrating IBM technology into products. Rather than manually searching IBM websites and repositories for potentially relevant information and resources, DSCE does the legwork for you, providing assets, education, and trial resources based on your development intent. The DSCE guided path directs you to reference architectures, tutorials, best practices, boilerplate code, and interactive sandboxes for a customized roadmap with assets and education to speed your adoption of IBM AI.

Embark on a task-based journey


DSCE works seamlessly for both data scientist and machine learning operations (MLOps) engineer personas.

For example, data scientist Miles wants to customize an emotion classification model to discover what makes customers happiest. His startup provides analysis of customer feedback for the retail e-commerce customers it serves. He wants to provide high-quality analysis of the most satisfied customers, so he chooses a Watson NLP emotion classification model that he can fine-tune using an algorithm that predicts ‘happiness’ with greater confidence than pre-trained models. This type of modeling can all be done in just a few simple clicks:

◉ Find and try AI ->
◉ Build with AI Libraries ->
◉ Build with Watson NLP ->
◉ Emotion classification ->
◉ Library and container ->
◉ Custom train the model ->
◉ Results Page


The bookmarkable Results Page gives a comprehensive set of assets for both training and deploying a model. For accomplishing the task of “Training the Model,” Miles can explore interactive demos, reserve a Watson Studio environment, copy a snippet from a Jupyter notebook, and much more.
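
The real notebook snippets come from DSCE and use the Watson NLP library; as a stand-in, here is what “custom-training” an emotion classifier looks like in plain scikit-learn, with an invented, toy labeled-feedback dataset. This is illustrative of the technique only and is not the Watson NLP API.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled customer feedback; a real run would use thousands of examples.
texts = [
    "Absolutely love this product, made my day!",
    "Delivery was late and the box was damaged.",
    "Great quality, I am so happy with my purchase.",
    "This is frustrating, the app keeps crashing.",
    "Wonderful support team, very pleased.",
    "Terrible experience, I want a refund.",
]
labels = ["joy", "anger", "joy", "anger", "joy", "anger"]

# Train a simple emotion classifier: TF-IDF features + logistic regression.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score new feedback; class probabilities give a confidence per emotion.
new = ["The checkout was smooth and the gift arrived early!"]
print(model.predict(new), model.predict_proba(new).round(2))
```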

If Miles, or his MLOps counterpart, Leena, wants to “Deploy the Model,” they can get access to the trial license and container of the new Watson NLP Library for 180 days. From there it’s easy to package and deploy the solution on Kubernetes, Red Hat OpenShift, AWS Fargate, or IBM Code Engine. It’s that simple!

Try embeddable AI now


Try the experience here: https://dsce.ibm.com/ and accelerate your AI-enabled innovation now. DSCE will be extended to include more IBM embeddable offerings, satisfying modern developer preferences for digital and self-serve experiences, while helping thousands of ISVs innovate rapidly and concurrently. If you want to provide any feedback on the experience, get in touch through the “Contact us” link on your customized results page.

Source: ibm.com

Saturday 7 January 2023

Data architecture strategy for data quality


Poor data quality is one of the top barriers faced by organizations aspiring to be more data-driven. Ill-timed business decisions, misinformed business processes, missed revenue opportunities, failed business initiatives and complex data systems can all stem from data quality issues. Just one of these problems can prove costly to an organization. Having to deal with all of them can be devastating.

Several factors determine the quality of your enterprise data: accuracy, completeness and consistency, to name a few. But there’s another factor that doesn’t get the recognition it deserves: your data architecture.

How the right data architecture improves data quality


The right data architecture can help your organization improve data quality because it provides the framework that determines how data is collected, transported, stored, secured, used and shared for business intelligence and data science use cases.

The first generation of data architectures, represented by enterprise data warehouse and business intelligence platforms, was characterized by thousands of ETL jobs, tables and reports that only a small group of specialized data engineers understood, resulting in an under-realized positive impact on the business. The next generation of big data platforms, with long-running batch jobs operated by a central team of data engineers, has often turned data lakes into swamps.

Both approaches were typically monolithic, centralized architectures organized around the mechanical functions of data ingestion, processing, cleansing, aggregation and serving. This created a number of organizational and technological bottlenecks prohibiting data integration and scale along several dimensions: the constantly changing data landscape, the proliferation of data sources and data consumers, the diversity of transformation and data processing that use cases require, and the speed of response to change.

What does a modern data architecture do for your business?


A modern data architecture, such as a data mesh or a data fabric, aims to easily connect new data sources and accelerate the development of use-case-specific data pipelines across on-premises, hybrid and multicloud environments. Combined with effective data lifecycle management, which evolves into data-as-product management, a modern data architecture can enable your organization to:
 
◉ Allow data stewards to ensure data compliance, protection and security
◉ Enhance trust in data by getting visibility into where data came from, how it has changed, and who is using it
◉ Monitor and identify data quality issues closer to the source to mitigate the potential impact on downstream processes or workloads
◉ Efficiently adopt data platforms and new technologies for effective data management
◉ Apply metadata to contextualize existing and new data to make it searchable and discoverable
◉ Perform data profiling (the process of examining, analyzing and creating summaries of datasets; a minimal profiling sketch follows this list)
◉ Reduce data duplication and fragmentation
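
To make the profiling item concrete, here is a minimal sketch in Python using pandas. The dataset and column names are invented, and a data fabric would automate this kind of summary across many sources.

import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    # Summarize each column: type, null rate, and distinct-value count.
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "null_rate": df.isna().mean().round(3),
        "distinct": df.nunique(),
    })

customers = pd.DataFrame({
    "customer_id": [1, 2, 2, None],
    "email": ["a@x.com", None, "b@x.com", "c@x.com"],
})
print(profile(customers))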

Because your data architecture dictates how your data assets and data management resources are structured, it plays a critical role in how effective your organization is at performing these tasks. In other words, data architecture is a foundational element of your business strategy for higher data quality. A modern data quality management solution requires an organization to:

◉ Perform data quality monitoring based on pre-configured rules (see the rule-check sketch after this list)
◉ Build data lineage to perform root cause analysis of data quality issues
◉ Make a dataset’s value immediately understandable
◉ Practice proper data hygiene across interfaces
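
As referenced above, here is a minimal sketch of rule-based data quality monitoring in Python; the rules and sample data are invented for illustration, and production platforms apply such pre-configured rules continuously and at scale.

import pandas as pd

# Pre-configured rules: each maps a name to a per-row boolean check.
RULES = {
    "customer_id not null": lambda df: df["customer_id"].notna(),
    "email contains @": lambda df: df["email"].str.contains("@", na=False),
    "age within 0-120": lambda df: df["age"].between(0, 120),
}

def run_checks(df: pd.DataFrame) -> None:
    for name, rule in RULES.items():
        passed = rule(df)
        print(f"{name}: {passed.mean():.0%} pass, {(~passed).sum()} violations")

data = pd.DataFrame({
    "customer_id": [1, None, 3],
    "email": ["a@x.com", "bad-email", None],
    "age": [34, 250, 41],
})
run_checks(data)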

How to build a data architecture that improves data quality


A data strategy can help data architects create and implement a data architecture that improves data quality. Steps for developing an effective data strategy include:

1. Outlining business objectives you want your data to help you accomplish

For example, a financial institution may look to improve regulatory compliance, lower costs and increase revenue. Stakeholders can then identify business use cases for certain data types, such as running analytics on real-time data as it is ingested to automate decision-making and drive down costs.

2. Taking an inventory of existing data assets and mapping current data flows

This step includes identifying and cataloging all data throughout the organization into a centralized or federated inventory list, thereby removing data silos. The list should detail where each dataset resides and which applications and use cases rely on it. Next, select the data needed for your key use cases and prioritize the data domains that include it.
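
As a sketch of what one inventory entry might capture, consider the following Python fragment; the field names and datasets are illustrative, not the schema of any particular catalog product.

from dataclasses import dataclass, field

@dataclass
class DatasetEntry:
    name: str
    location: str                 # where the dataset resides
    owner: str                    # accountable steward or team
    consumers: list = field(default_factory=list)  # apps and use cases relying on it

inventory = [
    DatasetEntry("customer_master", "warehouse.sales.customers",
                 "sales-data-team", ["churn-model", "quarterly-reporting"]),
    DatasetEntry("web_clickstream", "s3://datalake/raw/clicks/",
                 "digital-team", ["personalization"]),
]

# Prioritize the domains your key use cases depend on most.
prioritized = sorted(inventory, key=lambda d: len(d.consumers), reverse=True)
print([d.name for d in prioritized])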

3. Developing a standardized nomenclature

A naming convention and aligned data formats (data classes) for data used throughout the organization help to ensure data consistency and interoperability across departments (domains) and use cases.
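
The following minimal Python sketch shows one way such a convention could be enforced mechanically; the legacy names and standard data classes are invented for illustration.

import pandas as pd

# Agreed mapping from legacy column names to standard data classes.
STANDARD_NAMES = {
    "cust_no": "customer_id",
    "e_mail": "email_address",
    "dob": "date_of_birth",
}

def standardize(df: pd.DataFrame) -> pd.DataFrame:
    # Fail loudly on columns nobody has mapped yet.
    unknown = set(df.columns) - set(STANDARD_NAMES) - set(STANDARD_NAMES.values())
    if unknown:
        raise ValueError(f"Columns without a standard mapping: {unknown}")
    return df.rename(columns=STANDARD_NAMES)

legacy = pd.DataFrame({"cust_no": [1], "e_mail": ["a@x.com"]})
print(standardize(legacy).columns.tolist())  # ['customer_id', 'email_address']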

4. Determining what changes must be made to the existing architecture

Decide on the changes that will optimize your data for achieving your business objectives. Researching the different types of modern data architectures, such as a data fabric or a data mesh, can help you decide on the data structure most suitable to your business requirements.

5. Deciding on KPIs to gauge a data architecture’s effectiveness

Create KPIs that link the measure of your architecture’s success to how well it supports data quality, and use analytics to track them.
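
As a simple example, one core KPI could be the share of records passing every quality rule, tracked per dataset over time; the numbers below are invented.

def quality_kpi(total_records: int, failing_records: int) -> float:
    # Percentage of records that pass every data quality rule.
    return 100 * (total_records - failing_records) / total_records

print(f"customer_master quality: {quality_kpi(120_000, 1_800):.1f}%")  # 98.5%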

6. Creating a data architecture roadmap

Companies can develop a rollout plan for implementing data architecture and governance in three to four data domains per quarter.

Data architecture and IBM


A well-designed data architecture creates a foundation for data quality through transparency and standardization that frames how your organization views, uses and talks about data.

As previously mentioned, a data fabric is one such architecture. A data fabric automates data discovery, governance and data quality management, and simplifies self-service access to data distributed across a hybrid cloud landscape. It can encompass the applications that generate and use data, as well as any number of data storage repositories such as data warehouses, data lakes (which store vast amounts of big data), NoSQL databases (which store unstructured data) and relational databases that use SQL.

Source: ibm.com

Thursday 5 January 2023

Do you need coding skills to build a chatbot?

IBM, IBM Exam Study, IBM Exam Prep, IBM Preparation, IBM Tutorial and Materials, IBM Certification, IBM Guides, IBM Skills, IBM Jobs

Virtual assistants are valuable, transformative business tools that offer a compelling ROI while improving the customer experience. By 2031, conversational AI chatbots and virtual assistants will handle 30% of interactions that would otherwise have been handled by a human agent, up from 2% in 2022. So why haven’t more companies adopted this solution?

Due to the maturity and complexity of the technology, it can take years to reap the full benefits of developing a conversational AI platform. One challenge of building conversational AI from core natural language AI technology is that it requires expensive technical specialists. These specialists, in data analytics, AI and graph technologies, are often in short supply and typically command annual salaries of $175,000 or more. But for most companies there is a simpler, cost-effective way to create a chatbot for the ideal support experience.

A multitude of solutions for creating and maintaining virtual assistants that improve customer service


Keep in mind that not all pre-built tools are the same. Historically, conversational AI solutions have fallen into one of two categories:  

Simple-to-use solutions


These tools make it easy to start working with conversational AI, but in the end they offer sub-par customer experiences. Typically, the answers are hardcoded or use minimal AI or machine learning. These chatbots are programmed to give exact answers to specific questions: if the client says this, then say this; if the client says that, then say that. Ten times out of ten they will give the same answer, regardless of whether it is right or wrong.
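
Here is a minimal Python sketch of such a hardcoded bot; the questions and answers are invented, but the failure mode is real: rephrase the question and the bot is lost.

# Canned question-answer pairs; nothing is learned or inferred.
CANNED = {
    "what are your hours?": "We're open 9am to 5pm, Monday to Friday.",
    "do you ship internationally?": "Yes, we ship to over 40 countries.",
}

def reply(message: str) -> str:
    # Same input, same output, ten times out of ten, right or wrong.
    return CANNED.get(message.strip().lower(), "Sorry, I didn't understand that.")

print(reply("What are your hours?"))  # matches the script
print(reply("When are you open?"))    # same question, different words: fails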

Robust solutions that can create powerful experiences


On the other end of the spectrum are tools that let you create the kinds of experiences your customers expect, but they’re often very complex (and expensive) to use. For instance, they may have robust natural language processing powering their AI, but building a customer-facing solution requires a computer-science background and a front-end developer to create the experience and embed it in your website.

Watson Assistant changes the game with actions and the low-code/no-code interface


You may feel limited by expensive AI solutions that require a wide variety of tools and technologies to build, integrate with existing systems, and operate at scale to meet peak customer demand. If your organization is not equipped to develop conversational AI applications that handle dialog and process and understand natural language, consider a conversational AI platform with low-code and no-code interfaces that line-of-business (LOB) users can use to quickly develop enterprise-ready conversational AI applications.

Breaking the code: Get started fast with a Conversational AI platform 


Watson Assistant’s new no-code visual chatbot builder focuses on using actions to build customer conversations. It’s simple enough for anyone to build a virtual assistant, with a build guide that is tailored to the people who interact with customers daily.  

Pre-built conversation flows


First, we focus on the logical steps to complete the action, with AI that is powerful enough to understand the intent, recognize specific pieces of information (entities) in a message and keep the user on track, all without being redundant. For example, a customer may write, “I want to buy one large cheese pizza.” Should the assistant then ask “What size pizza?”, “What toppings?” and “How many?” No, it should skip straight to payment. Watson Assistant understands that when the size, topping and quantity have already been supplied, those steps should be skipped.
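
Here is a minimal Python sketch of that slot-filling idea, with entity recognition reduced to keyword matching purely for illustration; Watson Assistant’s actual extraction is far more capable.

# Slot values and prompts for the pizza-ordering action.
SLOTS = {
    "size": ["small", "medium", "large"],
    "topping": ["cheese", "pepperoni", "veggie"],
    "quantity": ["one", "two", "three"],
}
PROMPTS = {
    "size": "What size pizza?",
    "topping": "What toppings?",
    "quantity": "How many?",
}

def next_prompt(message: str) -> str:
    words = message.lower().split()
    for slot, values in SLOTS.items():
        if not any(v in words for v in values):
            return PROMPTS[slot]  # ask only for what's still missing
    return "Great, let's take payment."

print(next_prompt("I want to buy one large cheese pizza"))  # straight to payment
print(next_prompt("I want to buy a cheese pizza"))          # -> "What size pizza?"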

Disambiguation


It’s simple to design a virtual assistant that asks clarifying questions. If a user says something the assistant doesn’t understand, it automatically asks a clarifying question to get the conversation back on track, which means your bot doesn’t have to be perfect right out of the box. You can also define predetermined responses: the chatbot could ask, “What is your account number?” and if it doesn’t receive a number, it will automatically clarify with, “I’m looking for a number only, please.”
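
A minimal Python sketch of that clarify-and-reprompt loop follows, with canned replies standing in for a live chat; the account-number format check is simplified for illustration.

def ask_account_number(get_reply) -> str:
    # Re-prompt until the reply is digits only.
    reply = get_reply("What is your account number?")
    while not reply.strip().isdigit():
        reply = get_reply("I'm looking for a number only, please.")
    return reply

# Canned replies stand in for a live conversation.
replies = iter(["it's my savings account", "8675309"])

def demo_reply(prompt: str) -> str:
    print("bot:", prompt)
    return next(replies)

print("captured:", ask_account_number(demo_reply))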

Maintenance


With Watson Assistant, you can identify and address any chatbot problems in a matter of minutes, as opposed to the traditional development cycle.  This allows you to avoid the wasted time and resources associated with going to IT, assigning a developer, and waiting weeks or even months before the changes are actually made. 

Low-code and no-code interfaces like Watson Assistant’s visual chatbot builder go beyond professional developers and seasoned technologists to open up a whole new class of citizen developers. Companies can take back control to quickly and easily build chatbots for customer service, no code needed. 

Source: ibm.com

Tuesday 3 January 2023

Call Center Modernization with AI

IBM, IBM Exam, IBM Exam Study, IBM Exam Prep, IBM Tutorial and Material, IBM Certification, IBM Learning, IBM Study, IBM

Picture this: A traveler sets off on a camping trip. She decides to extend her RV rental halfway through her trip, so she calls customer service for assistance, but finds herself waiting minutes, then what feels like hours. When she finally does get a hold of somebody, her call is redirected. More waiting follows. Suddenly her new plan doesn’t seem worth the aggravation. Now, imagine the same scenario from the agent’s perspective, dealing with a dissatisfied customer, scrambling for information that takes time to collect. Instances like these are far too common—the debacle ends up being costly for the company, and frustrating for both customer and agent.

Conversational AI solutions for customer service have come a long way, helping organizations meet customer expectations while increasing containment rates and reducing complexity and costs. It starts with bringing AI into the mix and ends with more cost-efficient operations and more satisfied customers.

So how can conversational AI help fulfill customer expectations in today’s ever-demanding landscape?


When you deploy conversational AI in your call center, you get:

1. Increased customer and agent satisfaction. Think of the example above: long wait times and unanswered questions only lead to frustrated customers and agents and slower business. With best-in-class natural language understanding (NLU) and automation driving faster resolution, everybody wins.
2. Improved call resolution rates. AI and machine learning enable more self-service answers and actions, help route customers who need live agent support to the right place, and continuously analyze customer interactions to improve responses. Agents benefit from this assistance too, empowering them to perform at their best when call traffic is high. Ultimately, improved resolution rates mean better customer experiences and improved brand reputation.
3. Reduced operational costs. With the capabilities of AI-powered virtual agents, you can contain up to 70% of calls without any human interaction and save an estimated USD 5.50 per contained call (see the quick calculation below). This is money saved for your business and time saved for your customers.
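
As referenced in point 3, a quick back-of-the-envelope calculation shows how those figures add up; the monthly call volume is an invented example.

calls_per_month = 50_000           # hypothetical volume
containment_rate = 0.70            # up to 70% contained, per the text
savings_per_contained_call = 5.50  # USD, per the text

monthly_savings = calls_per_month * containment_rate * savings_per_contained_call
print(f"Estimated savings: ${monthly_savings:,.0f} per month")  # $192,500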

Not all AI platforms are built the same


On the lowest rung of the AI ladder, you have rules-based bots with limited response functionality. For example, you want to know if your telecom provider offers an unlimited data plan, so you call customer service and are walked through a set of basic questions following strict if-then scenarios: “…say yes if you want to review service plans; say yes if you want unlimited data.”

Climb up one rung, and there’s level-two AI with machine learning and intent detection. You accidentally type “speal to an agenr”, but the virtual assistant understands your intention and responds properly: “I will connect you with an agent who can assist you.”
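
Here is a minimal Python sketch of that idea, with a generic fuzzy matcher standing in for a trained intent classifier; the intents and responses are invented.

from difflib import SequenceMatcher

INTENTS = {
    "speak to an agent": "I will connect you with an agent who can assist you.",
    "review service plans": "Here are the service plans available to you.",
}

def detect_intent(message: str) -> str:
    # Pick the intent whose phrasing is closest to the (possibly typo-ridden) message.
    best = max(INTENTS, key=lambda i: SequenceMatcher(None, message.lower(), i).ratio())
    return INTENTS[best]

print(detect_intent("speal to an agenr"))  # still routes to an agent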

Then there’s IBM Watson® Assistant, the always-learning, highly resourceful virtual agent. Watson Assistant sits at the top, on level three, which offers powerful AI with unparalleled data and research capabilities.

The Watson Assistant deployed at Vodafone, the second-largest telecommunications company in Germany, exhibits level-three capabilities. In addition to answering questions across a variety of platforms, such as WhatsApp, Facebook and RCS, Watson Assistant answers requests by pulling from databases and can converse in multiple languages. It mines data, customizes interactions and is continuously learning. “*Insert Name*, transferring you to one of our agents who can answer your question about coverage abroad.”

With Watson AI, you can expect more for your call center: 24/7 support, speedy response times and higher resolution rates. Seamlessly integrate your virtual agent with your existing back-end systems and processes, with every customer channel and touchpoint, without migrating your tech stack—IBM can meet you wherever you are in your customer service journey. Watson AI offers:

◉ Best-in-class NLU
◉ Intent detection
◉ Large language models
◉ Unsupervised learning
◉ Advanced analytics
◉ AI-powered agent assist
◉ Easy integration with existing systems
◉ Consulting services

All these features work in concert to redefine customer care at the speed of your business.

Why add complexity when you can simplify with AI? 


According to a Gartner® report, by 2031 conversational AI chatbots and virtual assistants will handle 30% of interactions that would otherwise have been handled by a human agent, up from 2% in 2022. To remain among the leaders, modern contact centers will need to keep up with AI innovations. Of course, like Watson, leading businesses are constantly learning, analyzing and striving to become better.

Watson Assistant plugs into your company’s infrastructure, is reliable, easy to use, and always there to provide answers and self-service actions. Take Arvee, for example, an IBM Watson AI-powered virtual assistant for Camping World, the number one retailer of RVs. When customer demand surged early in the global pandemic, Camping World deployed Arvee in its call center: agent efficiency increased 33% and customer engagement increased 40%.

Similarly, IBM is working with CCaaS providers like NICE to make it even simpler to build, deploy and scale AI-powered virtual voice agents.

Watson Assistant helps streamline processes and create agent efficiency—and when calls go to human agents, they can deliver higher quality personal service. Remember that aggravated customer from earlier? With the power and capabilities of Watson Assistant, she can enjoy her time camping—goodbye hold music, hello sounds of nature.

Source: ibm.com