Thursday, 2 February 2023

Data platform trinity: Competitive or complementary?


Data platform architecture has an interesting history. Toward the turn of the millennium, enterprises began to realize that reporting and business intelligence workloads required a different solution from their transactional applications. A read-optimized platform that could integrate data from multiple applications emerged: the data warehouse.

A decade later, the internet and mobile devices started to generate data of unforeseen volume, variety and velocity, which called for a different data platform solution. Hence the data lake emerged, capable of handling both structured and unstructured data at huge volume.

Yet another decade passed, and it became clear that the data lake and the data warehouse were no longer enough to handle the business complexity and new workloads of the enterprise. They are too expensive. The value of data projects is difficult to realize. Data platforms are difficult to change. Time demanded a new solution, again.

Guess what? This time, at least three different data platform solutions are emerging: data lakehouse, data fabric and data mesh. While this is encouraging, it is also creating confusion in the market. The concepts and their value propositions overlap, and different interpretations emerge depending on who is asked.

This article endeavors to alleviate that confusion. It explains the three concepts and then introduces a framework showing how they may lead to one another or be used together.

Data lakehouse: A mostly new platform


The lakehouse concept was popularized by Databricks, which defines it as: “A data lakehouse is a new, open data management architecture that combines the flexibility, cost-efficiency, and scale of data lakes with the data management and ACID transactions of data warehouses, enabling business intelligence (BI) and machine learning (ML) on all data.”

While traditional data warehouses made use of an Extract-Transform-Load (ETL) process to ingest data, data lakes instead rely on an Extract-Load-Transform (ELT) process. Extracted data from multiple sources is loaded into cheap BLOB storage, then transformed and persisted into a data warehouse, which uses expensive block storage.

This storage architecture is inflexible and inefficient. Transformation must be performed continuously to keep the BLOB and data warehouse storage in sync, adding costs. And continuous transformation is still time-consuming. By the time the data is ready for analysis, the insights it can yield will be stale relative to the current state of transactional systems.

Furthermore, data warehouse storage cannot support workloads like Artificial Intelligence (AI) or Machine Learning (ML), which require huge amounts of data for model training. For these workloads, data warehouse vendors usually recommend extracting data into flat files to be used solely for model training and testing purposes. This adds an additional ETL step, making the data even more stale.

The data lakehouse was created to solve these problems. The data warehouse storage layer is removed from lakehouse architectures; instead, continuous data transformation is performed within the BLOB storage itself. Multiple APIs are added so that different types of workloads can use the same storage buckets. This architecture is well suited to the cloud, since object stores such as Amazon S3 or Azure Data Lake Storage Gen2 can provide the requisite storage.
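
To make this concrete, here is a minimal sketch of lakehouse-style ELT using PySpark: raw files are landed in object storage, curated in place, and the same curated files then serve both a BI-style SQL query and an ML-ready DataFrame. The bucket names and columns are hypothetical, and the sketch assumes a Spark environment with object storage (s3a) credentials already configured; it illustrates the pattern, not any particular product.

# Minimal lakehouse-style ELT sketch (hypothetical buckets and schema)
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lakehouse-sketch").getOrCreate()

# Extract + Load: land raw files in cheap object storage as-is
raw = spark.read.csv("s3a://example-raw-zone/orders/*.csv", header=True, inferSchema=True)

# Transform: curate the data in place, still on object storage, as columnar Parquet
curated = (raw.dropDuplicates(["order_id"])
              .withColumn("order_ts", F.to_timestamp("order_ts"))
              .withColumn("revenue", F.col("quantity") * F.col("unit_price")))
curated.write.mode("overwrite").parquet("s3a://example-curated-zone/orders/")

# BI workload: SQL over the same storage, with no separate warehouse copy
curated.createOrReplaceTempView("orders")
spark.sql("""
    SELECT date(order_ts) AS order_date, SUM(revenue) AS daily_revenue
    FROM orders
    GROUP BY date(order_ts)
    ORDER BY order_date
""").show()

# ML workload: the same curated files feed model training, with no flat-file export
training_df = spark.read.parquet("s3a://example-curated-zone/orders/")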

Data fabric: A mostly new architecture


The data fabric represents a new generation of data platform architecture. It can be defined as: A loosely coupled collection of distributed services, which enables the right data to be made available in the right shape, at the right time and place, from heterogeneous sources of transactional and analytical natures, across any cloud and on-premises platforms, usually via self-service, while meeting non-functional requirements including cost effectiveness, performance, governance, security and compliance.

The purpose of the data fabric is to make data available wherever and whenever it is needed, abstracting away the technological complexities involved in data movement, transformation and integration, so that anyone can use the data. Some key characteristics of data fabric are:

A network of data nodes

A data fabric is composed of a network of data nodes (e.g., data platforms and databases), all interacting with one another to provide greater value. The data nodes are spread across the enterprise’s hybrid and multicloud computing ecosystem.

Each node can be different from the others

A data fabric can consist of multiple data warehouses, data lakes, IoT/edge devices and transactional databases. It can include technologies that range from Oracle, Teradata and Apache Hadoop to Snowflake on Azure, Amazon Redshift on AWS or Microsoft SQL Server in the on-premises data center, to name just a few.

All phases of the data-information-insight lifecycle

The data fabric embraces all phases of the data-information-insight lifecycle. One node of the fabric may provide raw data to another that, in turn, performs analytics. These analytics can be exposed as REST APIs within the fabric, so that they can be consumed by transactional systems of record for decision-making.
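
As a simple illustration of that last point, the sketch below uses Flask to expose a precomputed analytic (a hypothetical churn score) as a REST endpoint that a transactional system could call at decision time. The endpoint path, customer IDs and scores are invented for illustration and are not tied to any specific fabric product.

# Minimal sketch: a fabric node exposing an analytic result over REST
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Stand-in for scores produced by an analytics node elsewhere in the fabric
CHURN_SCORES = {"C-1001": 0.82, "C-1002": 0.11, "C-1003": 0.47}

@app.route("/analytics/churn/<customer_id>", methods=["GET"])
def churn_score(customer_id):
    """Return a churn score so a system of record can use it when making a decision."""
    if customer_id not in CHURN_SCORES:
        abort(404, description="Unknown customer")
    return jsonify({"customer_id": customer_id, "churn_score": CHURN_SCORES[customer_id]})

if __name__ == "__main__":
    app.run(port=8080)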

Analytical and transactional worlds come together

Data fabric is designed to bring together the analytical and transactional worlds. Here, everything is a node, and the nodes interact with one another through a variety of mechanisms. Some of these require data movement, while others enable data access without movement. The underlying idea is that data silos (and differentiation) will eventually disappear in this architecture.

Security and governance are enforced throughout

Security and governance policies are enforced whenever data travels or is accessed throughout the data fabric. Just as Istio applies security and policy controls to service-to-service traffic in Kubernetes, the data fabric applies policies to data according to similar principles, in real time.
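
To make the idea concrete, here is a small, purely illustrative sketch of policy enforcement at the point of access: a masking rule is applied to a sensitive column based on the requester's role before any rows are returned. Real data fabrics drive this from centrally managed policies and metadata; the policy, roles and data below are invented.

# Illustrative sketch: apply a masking policy at data-access time, based on role
import pandas as pd

CUSTOMERS = pd.DataFrame({
    "customer_id": ["C-1001", "C-1002"],
    "email": ["ada@example.com", "grace@example.com"],
    "balance": [1250.00, 310.50],
})

# Hypothetical policy: only the "analyst_pii" role may see unmasked email addresses
POLICY = {"email": {"allowed_roles": {"analyst_pii"}, "mask": "***@***"}}

def read_with_policy(df: pd.DataFrame, role: str) -> pd.DataFrame:
    """Return a copy of df with protected columns masked for unauthorized roles."""
    out = df.copy()
    for column, rule in POLICY.items():
        if column in out.columns and role not in rule["allowed_roles"]:
            out[column] = rule["mask"]
    return out

print(read_with_policy(CUSTOMERS, role="marketing"))    # email column is masked
print(read_with_policy(CUSTOMERS, role="analyst_pii"))  # email column is visible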

Data discoverability

Data fabric promotes data discoverability. Here, data assets can be published into categories, creating an enterprise-wide data marketplace. This marketplace provides a search mechanism, utilizing metadata and a knowledge graph to enable asset discovery. This enables access to data at all stages of its value lifecycle.
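
The sketch below shows the flavor of such metadata-driven discovery in miniature: a tiny in-memory marketplace of published assets searched by name and tag. Production implementations back this with a governed catalog and a knowledge graph; the assets and tags here are invented for illustration.

# Toy metadata search over a published marketplace of data assets
CATALOG = [
    {"name": "orders_curated", "owner": "sales-domain", "tags": ["orders", "revenue", "daily"]},
    {"name": "customer_churn_scores", "owner": "analytics", "tags": ["churn", "ml", "customer"]},
    {"name": "iot_sensor_readings", "owner": "operations", "tags": ["iot", "edge", "telemetry"]},
]

def discover(keyword: str) -> list:
    """Return assets whose name or tags mention the keyword (case-insensitive)."""
    keyword = keyword.lower()
    return [asset for asset in CATALOG
            if keyword in asset["name"].lower() or any(keyword in tag for tag in asset["tags"])]

print(discover("churn"))   # finds customer_churn_scores
print(discover("orders"))  # finds orders_curated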

The advent of the data fabric opens new opportunities to transform enterprise cultures and operating models. Because data fabrics are distributed but inclusive, their use promotes federated but unified governance. This will make the data more trustworthy and reliable. The marketplace will make it easier for stakeholders across the business to discover and use data to innovate. Diverse teams will find it easier to collaborate, and to manage shared data assets with a sense of common purpose.

Data fabric is an inclusive architecture in which some new technologies (e.g., data virtualization) play a key role, but existing databases and data platforms can also participate in the network. A metadata-driven data catalogue or data marketplace helps users discover the assets available across it.

Data mesh: A mostly new culture


Data mesh as a concept was introduced by Thoughtworks, which defines it as: “…An analytical data architecture and operating model where data is treated as a product and owned by teams that most intimately know and consume the data.” The concept stands on four principles: domain ownership, data as a product, self-serve data platforms, and federated computational governance.

Data fabric and data mesh overlap as concepts. For example, both recommend a distributed architecture, unlike centralized platforms such as the data warehouse, data lake and data lakehouse. Both promote the idea of a data product offered through a marketplace.

Differences also exist. As is clear from the definition above, unlike data fabric, data mesh is concerned with analytical data, making it narrower in focus. Secondly, it emphasizes operating model and culture, which takes it beyond architecture alone. And while the nature of a data product can be generic in a data fabric, data mesh clearly prescribes domain-driven ownership of data products.

The relationship between data lakehouse, data fabric and data mesh


Clearly, these three concepts have their own focus and strength. Yet, the overlap is evident.

Lakehouse stands apart from the other two. It is a new technology, like its predecessors. It can be codified. Multiple products exist in the market, including Databricks, Azure Synapse and Amazon Athena.

Data mesh requires a new operating model and cultural change. Such cultural changes often require a shift in the collective mindset of the enterprise, so data mesh can be revolutionary in nature. It can be built from the ground up in a smaller part of the organization before spreading to the rest of it.

Data fabric has no such prerequisites; it does not demand a cultural shift. It can be built using existing assets in which the enterprise has invested over the years. Its approach is thus evolutionary.

So how can an enterprise embrace all these concepts?

Address old data platforms by adopting a data lakehouse

An enterprise can adopt a lakehouse as part of its own data platform evolution journey. For example, a bank may retire its decade-old data warehouse and deliver all BI and AI use cases from a single data platform by implementing a lakehouse.

Address data complexity with a data fabric architecture

If the enterprise is complex and has multiple data platforms, if data discovery is a challenge, or if data delivery to different parts of the organization is difficult, data fabric may be a good architecture to adopt. Alongside existing data platform nodes, one or more lakehouse nodes can also participate, and even transactional databases can join the fabric network as nodes that offer or consume data assets.

Address business complexity with a data mesh journey

If, to address business complexity, the enterprise embarks on a cultural shift toward domain-driven data ownership, promotes self-service in data discovery and delivery, and adopts federated governance, it is on a data mesh journey. If a data fabric architecture is already in place, the enterprise can use it as a key enabler of that journey. For example, the data fabric marketplace can offer domain-centric data products, a key data mesh outcome, and the metadata-driven discovery already established through the data fabric can help users find the new data products coming out of the mesh.

Every enterprise can look at its respective business goals and decide which entry point suits it best. But even though entry points and motivations differ, an enterprise may easily use all three concepts together in its quest for data-centricity.

Source: ibm.com

Saturday, 28 January 2023

Understanding Data Governance


If you’re in charge of managing data at your organization, you know how important it is to have a system in place for ensuring that your data is accurate, up-to-date, and secure. That’s where data governance comes in.

What exactly is data governance and why is it so important?


Simply put, data governance is the process of establishing policies, procedures, and standards for managing data within an organization. It involves defining roles and responsibilities, setting standards for data quality, and ensuring that data is being used in a way that is consistent with the organization’s goals and values.

But don’t let the dry language fool you – data governance is crucial for the success of any organization. Without it, you might as well be throwing your data to the wolves (or the intern who just started yesterday and has no idea what they’re doing). Poor data governance can lead to all sorts of problems, including:

Inconsistent or conflicting data

Imagine trying to make important business decisions based on data that’s all over the place. Not only is it frustrating, but it can also lead to costly mistakes.

Data security breaches

If your data isn’t properly secured, you’re leaving yourself open to all sorts of nasty surprises. Hackers and cyber-criminals are always looking for ways to get their hands on sensitive data, and without proper data governance, you’re making it way too easy for them.

Loss of credibility

If your data is unreliable or incorrect, it can seriously damage your organization’s reputation. No one is going to trust you if they can’t trust your data.

As you can see, data governance is no joke. But that doesn’t mean it can’t be fun! Okay, maybe “fun” is a stretch, but there are definitely ways to make data governance less of a chore. Here are a few best practices to keep in mind:

Establish clear roles and responsibilities

Make sure everyone knows who is responsible for what. Provide the necessary training and resources to help people do their jobs effectively.

Define policies and procedures

Set clear guidelines for how data is collected, stored, and used within your organization. This will help ensure that everyone is on the same page and that your data is being managed consistently.

Ensure data quality

Regularly check your data for accuracy and completeness. Put processes in place to fix any issues that you find. Remember: garbage in, garbage out.
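
As a small, illustrative example of such a check, the pandas sketch below profiles a table for missing values and duplicate keys; the thresholds and columns are hypothetical and would come from your own data quality standards.

# Simple recurring data quality check (illustrative thresholds and columns)
import pandas as pd

def quality_report(df: pd.DataFrame, key_column: str, max_null_rate: float = 0.02) -> dict:
    """Report completeness and uniqueness issues for a table."""
    null_rates = df.isna().mean()  # fraction of missing values per column
    return {
        "rows": len(df),
        "duplicate_keys": int(df.duplicated(subset=[key_column]).sum()),
        "columns_over_null_threshold": sorted(null_rates[null_rates > max_null_rate].index),
    }

orders = pd.DataFrame({
    "order_id": [1, 2, 2, 4],
    "amount": [100.0, None, 80.0, 55.0],
})
print(quality_report(orders, key_column="order_id"))
# {'rows': 4, 'duplicate_keys': 1, 'columns_over_null_threshold': ['amount']}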

Break down data silos

Data silos are the bane of any data governance program. By breaking down these silos and encouraging data sharing and collaboration, you’ll be able to get a more complete picture of what’s going on within your organization.

Of course, implementing a successful data governance program isn’t always easy. You may face challenges like getting buy-in from stakeholders, dealing with resistance to change, and managing data quality. But with the right approach and a little bit of persistence, you can overcome these challenges and create a data governance program that works for you.

So don’t be afraid to roll up your sleeves and get your hands dirty with data governance. Your data – and your organization – will thank you for it.

In future posts, my Data Elite team and I will help guide you in this journey with our point of view and insights on how IBM can help accelerate your organization’s data readiness with our solutions.

Source: ibm.com

Tuesday, 24 January 2023

The people and operations challenge: How to enable an evolved, single hybrid cloud operating model


Within a year, the average enterprise will have more than 10 clouds, but limited architectural guardrails and implementation pressures will make the IT landscape more complex, costlier and less likely to deliver better business outcomes. As businesses adopt a hybrid cloud approach to help drive digital transformation, leaders recognize the siloed, suboptimal workflows across their public cloud, private cloud and on-premises estates. In fact, 71% of executives see integration across the cloud estate as a problem.

These leaders must overcome many challenges as they work to simplify and bring order to their hybrid IT estate, including talent shortages, operating model confusion and managing the journey from the current operating model to the target operating model.

By taking three practical steps, leaders can empower their teams and design workflows that break down siloed working practices into an evolved, single hybrid cloud operating model.

Three steps on the journey to hybrid cloud mastery


1. Empower a Cloud Center of Excellence (CCoE) to bring the hybrid cloud operating model to life and to accelerate execution

In a talent-constrained environment, the CCoE houses cross-disciplinary subject-matter experts who will define and lead the transition to a new operating model and new working practices. These experts must be empowered to work across all of the cloud silos, as the goal is to dissolve silos into an integrated, common way of working that serves customers and employees better than a fragmented approach.

This might be uncomfortable, especially in hardened silos. We recommend that you treat developers and delivery teams as customers on this journey. Help them answer the question of how this new way of working is better than the old way of doing things. Seeing around corners requires investing in a small team of scouts (“Look Ahead Squads”) that stays one or two steps ahead of current implementations. These scouts should be flexible, as implementing this change is a learning experience.

2. Empower your people with the skills and experience they’ll need to thrive in a hybrid cloud operating model

69% of business leaders lack teams with the right cloud skills. There aren’t enough cloud architects, microservice developers or data engineers, especially if the pool of specialists is spread across cloud silos. With hybrid cloud, a consistent DevSecOps toolchain and a coherent operating model, you don’t need to train everyone on every silo of technology and practice.

Address the skill gap by prioritizing the required specializations. Make learning experiential, so people get coaching on how to apply new skills in the context of their roles in the new hybrid cloud operating model, and shape new ways of working by conducting training more efficiently and at scale within a garage environment. Drive toward true DevSecOps practices by emphasizing how the skillsets and practices involved need to be applied in an integrated, cross-disciplinary operating model. As a hybrid cloud operating model evolves, it becomes clear that cloud-native teams don’t work in isolation, so organizations must spend more time defining and evolving the proficiency framework that was previously built in silos.

3. Reframe the talent problem as an operating model design opportunity

Operating model problems are often misread as talent problems. As W. Edwards Deming says, “A bad system will beat a good person every time.” So, design the work required for hybrid cloud operations first, and adjust your organization second.

Be aware that operating models and organization charts are different animals. An operating model is primarily concerned with how the work of service delivery flows from customer request to fulfillment. In contrast, the primary concern of an organization chart is the hierarchy through which power and control are distributed, which should be designed to make the very best use of the people you have now.

As leaders navigate the transition to a hybrid cloud environment, well-designed solutions that span business and IT become more valuable than ever. These steps ensure that the IT roadmap moves in lockstep with the business roadmap and enable leaders to consider how each interim state contributes to the evolution from the current operating model to the target operating model. This awareness can be an organization’s superpower for incorporating cloud-native, efficient and connected working practices across the hybrid environment to deliver innovation at speed (as well as alleviating issues with skill, talent and experience).

Source: ibm.com

Sunday, 22 January 2023

Make data protection a 2023 competitive differentiator


Data privacy regulations, such as the General Data Protection Regulation (GDPR) in the European Union or the California Consumer Privacy Act (CCPA) in the state of California, are inescapable. By 2024, for instance, 75% of the world’s population will have its personal data protected by encryption, multifactor authentication, masking and erasure, as well as data resilience. It’s clear that data protection and its three pillars—data security, data ethics and data privacy—are the baseline expectation for organizations and stakeholders, both now and into the future.

While this trend is about protecting customer data and maintaining trust amid shifting regulations, it can also be a game-changing competitive advantage. By implementing cohesive data protection initiatives, organizations that secure their users’ data see huge wins in brand image and customer loyalty and stand out in the marketplace.

The key to differentiation comes in getting data protection right, as part of an overall data strategy. Keep reading to learn how investing in the right data protection supports and optimizes your brand image.

Use a data protection strategy to maintain your brand image and customer trust


How a company collects, stores and protects consumer data goes beyond cutting data storage costs—it is a central driving force of its reputation and brand image. As a baseline, consumers expect organizations to adhere to data privacy regulations and compliance requirements; they also expect the data and AI lifecycle to be fair, explainable, robust, equitable and transparent.

Operating with a ‘data protection first’ point of view forces organizations to ask the hard-hitting, moral questions that matter to clients and prospects: Is it ethical to collect this person’s data in the first place? As an organization, what are we doing with this information? Have we shared our intentions with the people from whom we’ve collected this data? How long and where will this data be retained? Are we going to harm anybody by doing what we do with data?

Differentiate your brand image with privacy practices rooted in data ethics


When integrated appropriately, data protection and the surrounding data ethics create a deep trust with clients and the market overall. Take Apple, for example. They have been exceedingly clear in communicating with consumers what data is collected, why they’re collecting that data, and whether they’re making any revenue from it. They go to great lengths to integrate trust, transparency and risk management into the DNA of the company culture and the customer experience. A lot of organizations aren’t as mature in this area of data ethics.

One of the key ingredients to optimizing your brand image through data protection and trust is active communication, both internally and externally. This requires organizations to rethink the way they do business in the broadest sense. To do this, organizations must lean into data privacy programs that build transparency and risk management into everyday workflows. It goes beyond preventing data breaches or having secure means for data collection and storage. These efforts must be supported by integrating data privacy and data ethics into an organization’s culture and customer experiences.

When cultivating a culture rooted in data ethics, keep these three things in mind:


◉ Regulatory compliance is a worthwhile investment, as it mitigates risk and helps generate revenue and growth.
◉ The need for compliance is not disruptive; it’s an opportunity to differentiate your brand and earn consumer trust.
◉ Laying the foundation for data privacy allows your organization to manage its data ethics better.

Embrace the potential of data protection at the core of your competitive advantage


Ultimately, data protection fosters ongoing trust. It isn’t a one-and-done deal. It’s a continuous, iterative journey that evolves with changing privacy laws and regulations, business needs and customer expectations. Your ongoing efforts to differentiate your organization from the competition should include strategically adopting and integrating data protection as a cultural foundation of how work gets done.

By enabling an ethical, sustainable and adaptive data protection strategy that ensures compliance and security in an ever-evolving data landscape, you are building your organization into a market leader.

Source: ibm.com

Saturday, 21 January 2023

Four starting points to transform your organization into a data-driven enterprise


Due to the convergence of events in the data analytics and AI landscape, many organizations are at an inflection point. Regardless of size, industry or geographical location, the sprawl of data across disparate environments, the increase in the velocity of data and the explosion of data volumes have resulted in complex data infrastructures for most enterprises. Furthermore, a global effort to create new data privacy laws, and the increased attention on biases in AI models, have resulted in convoluted business processes for getting data to users. How do business leaders navigate this new data and AI ecosystem and make their company a data-driven organization? The solution is a data fabric.

A data fabric architecture elevates the value of enterprise data by providing the right data, at the right time, regardless of where it resides.  To simplify the process of becoming data-driven with a data fabric, we are focusing on the four most common entry points we see with data fabric journeys. In 2023, we have four entry points aligned to common data & AI stakeholder challenges.

We are also introducing IBM Cloud Pak for Data Express, a set of solutions aligned to the data fabric entry points. IBM Cloud Pak for Data Express solutions provide new clients with affordable, high-impact capabilities to expeditiously explore and validate the path to becoming a data-driven enterprise. They offer clients a simple on-ramp to start realizing the business value of a modern architecture.

Data governance


The data governance capability of a data fabric focuses on the collection, management and automation of an organization’s data. Automated metadata generation is essential to turn a manual process into one that is better controlled: it helps avoid human error and tags data so that policy enforcement can be achieved at the point of access rather than in individual repositories. This data-driven approach makes it easier for business users to find the data that best fits their needs. More importantly, this capability enables business users to quickly and easily find quality data that conforms to regulatory requirements. IBM’s data governance capability enables the enforcement of policies at runtime anywhere, in essence “policies that move with the data”. This capability provides data users with visibility into the origin, transformations and destination of data as it is used to build products. The result is more useful data for decision-making, less hassle and better compliance.

Data integration


The rapid growth of data continues unabated and is now accompanied not only by the issue of siloed data but by a plethora of different repositories across numerous clouds. The reasoning is simple and well justified (with the exception of data silos): more data provides the opportunity for more accurate data-driven insights, while using multiple clouds helps avoid vendor lock-in and allows data to be stored where it fits best. The challenge, of course, is the added complexity of data management that hinders the actual use of that data for better decisions, analysis and AI.

As part of a data fabric, IBM’s data integration capability creates a roadmap that helps organizations connect data from disparate sources, build data pipelines, remediate data issues, enrich data quality, and deliver integrated data to multicloud platforms. From there, it can be easily accessed via dashboards by data consumers or by those building a data product. This kind of digital transformation ensures that the right data can be delivered to the right person at the right time. With IBM’s data integration portfolio, you are not locked into a single integration style; you can select a hybrid integration strategy that aligns with your organization’s business strategy and meets the needs of the data consumers who want to access and utilize the data.
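
The sketch below shows the shape of such a pipeline in miniature: extract from two hypothetical sources, remediate an obvious quality issue, integrate, and deliver the result to a queryable target (SQLite stands in for whichever platform your strategy dictates). It illustrates the pattern only and is not tied to any IBM tool.

# Miniature integration pipeline: extract, remediate, integrate, deliver
import sqlite3
import pandas as pd

# Extract: two hypothetical source extracts (in practice: databases, SaaS APIs, files)
crm = pd.DataFrame({"customer_id": [1, 2, 3], "segment": ["SMB", "ENT", "SMB"]})
billing = pd.DataFrame({"customer_id": [1, 2, 2, 3], "amount": [120.0, 450.0, 450.0, None]})

# Remediate: drop duplicate invoices and fill missing amounts before delivery
billing = billing.drop_duplicates().fillna({"amount": 0.0})

# Integrate: join the sources into one consumer-ready table
integrated = billing.merge(crm, on="customer_id", how="left")

# Deliver: load into a target store where consumers and dashboards can query it
with sqlite3.connect("integrated.db") as conn:
    integrated.to_sql("customer_billing", conn, if_exists="replace", index=False)
    print(pd.read_sql("SELECT segment, SUM(amount) AS total FROM customer_billing GROUP BY segment", conn))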

Data science and MLOps


AI is no longer experimental. These technologies are becoming mainstream across industries and are proving to be key drivers of enterprise innovation and growth, leading to more accurate, quicker strategic decisions. When AI is done right, enterprises see increased revenue, improved customer experiences and faster time-to-market, all of which strengthens their competitive positioning.

The data science and MLOps capability provides data science tools and solutions that enable enterprises to accelerate AI-driven innovation, simplify the MLOps lifecycle, and run any AI model with a flexible deployment. With this capability, not only can data-driven companies operationalize data science models on any cloud while instilling trust in AI outcomes, but they are also in a position to improve the ability to manage and govern the AI lifecycle to optimize business decisions with prescriptive analytics.
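
As a tiny illustration of the kind of step such tooling automates and governs, the sketch below trains a model, evaluates it on a holdout set and saves a versioned artifact that a deployment pipeline could promote. It uses scikit-learn and joblib purely as generic stand-ins, not as a depiction of the IBM capability itself.

# Train, evaluate and persist a model artifact (generic stand-in for an MLOps step)
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"holdout AUC: {auc:.3f}")

# Persist a versioned artifact for a deployment and monitoring pipeline to pick up
joblib.dump({"model": model, "auc": auc, "version": "v1"}, "model_v1.joblib")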

AI governance


Artificial intelligence (AI) is no longer a choice. Adoption is imperative to beat the competition, release innovative products and services, better meet customer expectations, reduce risk and fraud, and drive profitability. However, successful AI is not guaranteed and does not always come easy. AI initiatives require governance, compliance with corporate and ethical principles, laws and regulations.

A data fabric addresses the need for AI governance by providing capabilities to direct, manage and monitor the AI activities of an organization. AI governance is not just a “nice to have”. It is an integral part of an organization adopting a data-driven culture. It is critical to avoid audits, hefty fines or damage to the organization’s reputation. The IBM AI governance solution provides automated tools and processes enabling an organization to direct, manage and monitor across the AI lifecycle.

IBM Cloud Pak for Data Express solutions


As previously mentioned, we now provide a simple, lightweight, and fast means of validating the value of a data fabric. Through the IBM Cloud Pak for Data Express solutions, you can leverage data governance, ELT Pushdown, or data science and MLOps capabilities to quickly evaluate the ability to better utilize data by simplifying data access and facilitating self-service data consumption. In addition, our comprehensive AI Governance solution complements the data science & MLOps express offering. Rapidly experience the benefits of a data fabric architecture in a platform solution that makes all data available to drive business outcomes.

Source: ibm.com

Friday, 20 January 2023

It’s 2023… are you still planning and reporting from spreadsheets?


I have worked with IBM Planning Analytics with Watson, the platform formerly known as TM1, for over 20 years. I have never been more excited to share in our customers’ enthusiasm for this solution and how it is revolutionising many manual, disconnected, and convoluted processes to support an organisation’s planning and decision-making activities.

Over the last few months, we have been collecting stories from our customers and projects delivered in 2022 and hearing the feedback on how IBM Planning Analytics has helped support organisations across not only the office of finance – but all departments in their organisation.

The need for a better planning and management system


More and more we are seeing demand for solutions that bring together a truly holistic view of both operational and financial business planning data and models across the entire width and breadth of the enterprise. Having such a platform that also allows planning team members to leverage predictive forecasting and integrate decision management and optimisation models puts organisations at a significant advantage over those that continue to rely on manual worksheet or spreadsheet-based planning processes.

A better planning solution in action


One such example comes from a recent project with Oceania Dairy. Oceania Dairy operates a substantial plant in the small South Island town of Glenavy, New Zealand, on the banks of the Waitaki River. The plant can convert 65,000 litres of milk per hour into 10 tons of powder, or 47,000 tons of powder per year, from standard whole milk powders through to specialty powders including infant formula. The site runs infant formula blending and canning lines and UHT production lines, and produces anhydrous milk fat. In total, the site handles more than 250 million litres of milk per year and generates export revenue of close to NZD$500 million.

Oceania Dairy Supply Chain Manager Leo Zhang shares in our recently published case study: “Connectivity has two perspectives: people to people, facilitating information flows between our 400 employees on site. Prior to CorPlan’s work to implement the IBM Planning Analytics product, there was low information efficiency, with people working on legacy systems or individual spreadsheets. The second perspective is integration. While data is supposed to be logically connected, decision makers were collating Excel sheets, resulting in poor decision efficiency.”

“CorPlan,” adds Zhang, “has fulfilled this aspect by delivering a common platform which creates a single version of the truth, and a central system where data updates uniformly.” In terms of Collaboration, he says teams working throughout the supply chain managing physical stock flows are being connected from the start to the finish of product delivery. “It’s hard for people to work towards a common goal in the absence of a bigger picture. Collaboration brings that bigger picture to every individual while CorPlan provides that single, common version of the truth,” Zhang comments.

The merits of a holistic planning platform


While the approach of selecting a platform to address a single piece of the planning puzzle – such as Merchandise Planning, S&OP (Sales and Operations Planning), Workforce Planning or even FP&A (Financial Planning and Analysis) – may be an organisation’s desired strategy, selecting a platform that can grow and support all planning elements across the organisation has significant merits. Customers such as Oceania Dairy are realising true ROI by having:
 
◉ All an organisation’s stakeholders operating from a single set of agreed planning assumptions, permissions, variables, and results

◉ A platform that supports the ability to run any number of live forecast models to support the data analysis and what-if scenarios that are needed to support stakeholder decision-making

◉ An integrated consolidation of the various data sources capturing the actual transactional data sets, such as ERP, Payroll, CRM, Data Marts/ Warehouses, external data stores and more

◉ Enterprise-level security

◉ Delivery in the cloud, as a service

My team and I get a big kick out of delivering that first real-time demonstration to a soon-to-be customer, showing them what the IBM Planning Analytics platform can do. It is not just the extensive features and workflow functionality that generates excitement. It is that moment when they experience a sudden and striking realisation – an epiphany – that this product is going to revolutionise the painful, challenging, and time-consuming process of pulling together a plan.

What is even better is then delivering a project successfully, on time and within budget, that delivers the desired results and exceeds expectations.

If you want to experience what “good planning” looks like, feel free to reach out. The CorPlan team would love to help you start your performance management journey. We can help with product trials, proof of concept or simply supply more information about the solution to support your internal business case. It’s time to address the challenges that a disconnected, manual planning and reporting process brings.

Source: ibm.com

Thursday, 19 January 2023

Leveraging machine learning and AI to improve diversity in clinical trials


The modern medical system does not serve all its patients equally—not even nearly so. Significant disparities in health outcomes have been recognized for decades and still persist. The causes are complex, and solutions will involve political, social and educational changes, but some factors can be addressed immediately by applying artificial intelligence to ensure diversity in clinical trials.

A lack of diversity in clinical trial patients has contributed to gaps in our understanding of diseases, preventive factors and treatment effectiveness. Diversity factors include gender, age group, race, ethnicity, genetic profile, disability, socioeconomic background and lifestyle conditions. As the Action Plan of the FDA Safety and Innovation Act succinctly states, “Medical products are safer and more effective for everyone when clinical research includes diverse populations.” But certain demographic groups are underrepresented in clinical trials due to financial barriers, lack of awareness, and lack of access to trial sites. Beyond these factors, trust, transparency and consent are ongoing challenges when recruiting trial participants from disadvantaged or minority groups.

There are also ethical, sociological and economic consequences to this disparity. An August 2022 report by the National Academies of Sciences, Engineering, and Medicine projected that hundreds of billions of dollars will be lost over the next 25 years due to reduced life expectancy, shortened disability-free lives, and fewer years working among populations that are underrepresented in clinical trials.

In the US, diversity in trials is a legal imperative. The FDA Office of Minority Health and Health Equity provides extensive guidelines and resources for trials and recently released guidance to improve participation from underrepresented populations.

From moral, scientific, and financial perspectives, designing more diverse and inclusive clinical trials is an increasingly prominent goal for the life science industry. A data-driven approach, aided by machine learning and artificial intelligence (AI), can aid these efforts.

The opportunity


Life science companies have been required by FDA regulations to present the effectiveness of new drugs by demographic characteristics such as age group, gender, race and ethnicity. In the coming decades, the FDA will also increasingly focus on genetic and biological influences that affect disease and response to treatment. As summarized in a 2013 FDA report, “Scientific advances in understanding the specific genetic variables underlying disease and response to treatment are increasingly becoming the focus of modern medical product development as we move toward the ultimate goal of tailoring treatments to the individual, or class of individuals, through personalized medicine.”

Beyond demographic and genetic data, there is a trove of other data to analyze, including electronic medical records (EMR) data, claims data, scientific literature and historical clinical trial data.

Using advanced analytics, machine learning and AI on the cloud, organizations now have powerful ways to:

◉ Form a large, complicated, diverse set of patient demographics, genetic profiles and other patient data
◉ Understand the underrepresented subgroups
◉ Build models that encompass diverse populations
◉ Close the diversity gap in the clinical trial recruitment process
◉ Ensure that data traceability and transparency align with FDA guidance and regulations

Initiating a clinical trial consists of four steps:

1. Understanding the nature of the disease
2. Gathering and analyzing the existing patient data
3. Creating a patient selection model
4. Recruiting participants

Addressing diversity disparity during steps two and three will help researchers better understand how drugs or biologics work, shorten clinical trial approval time, increase trial acceptability amongst patients and achieve medical product and business goals.

A data-driven framework for diversity


Here are some examples to help us understand the diversity gaps. Hispanic/Latinx patients make up 18.5% of the population but only 1% of typical trial participants; African-American/Black patients make up 13.4% of the population but only 5% of typical trial participants. Between 2011 and 2020, 60% of vaccine trials did not include any patients over 65—even though 16% of the U.S. population is over 65. To fill diversity gaps like these, the key is to include the underrepresented populations in the clinical trial recruitment process.
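
A first pass at quantifying gaps like these can be as simple as comparing enrollment shares with population shares, as in the sketch below, which reuses the two figures quoted above. The structure is illustrative; real analyses would draw on demographic, EMR, claims and census data.

# Compare trial enrollment shares with population shares to flag underrepresentation
population_share = {"Hispanic/Latinx": 0.185, "African-American/Black": 0.134}
trial_share = {"Hispanic/Latinx": 0.01, "African-American/Black": 0.05}

for group, pop in population_share.items():
    enrolled = trial_share[group]
    ratio = enrolled / pop
    print(f"{group}: population {pop:.1%}, trial {enrolled:.1%}, representation ratio {ratio:.2f}")
# A ratio well below 1.0 flags a group to prioritize in site selection and recruitment.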

For the steps leading up to recruitment, we can evaluate the full range of data sources listed above. Depending on the disease or condition, we can evaluate which diversity parameters are applicable and what data sources are relevant. From there, clinical trial design teams can define patient eligibility criteria, or expand trials to additional sites to ensure all populations are properly represented in the trial design and planning phase.

How IBM can help


To effectively enable diversity in clinical trials, IBM offers various solutions, including data management, AI and advanced analytics on the cloud, and setting up an MLOps framework. These help trial designers provision and prepare data, merge various aspects of patient data, identify diversity parameters and eliminate bias in modeling. They do this using an AI-assisted process that optimizes patient selection and recruitment by better defining clinical trial inclusion and exclusion criteria.

Because the process is traceable and equitable, it provides a robust selection process for trial participant recruitment. As life sciences companies adopt such frameworks, they can build trust that clinical trials have diverse populations and thus build trust in their products. Such processes also help healthcare practitioners better understand and anticipate possible impacts products may have on specific populations, rather than responding ad hoc, where it may be too late to treat conditions.

Summary


IBM’s solutions and consulting services can help you leverage additional data sources and identify more relevant diversity parameters so that trial inclusion and exclusion criteria can be re-examined and optimized. These solutions can also help you determine whether your patient selection process accurately represents disease prevalence and improve clinical trial recruitment. Using machine learning and AI, these processes can easily be scaled across a range of trials and populations as part of a streamlined, automated workflow.

These solutions can help life sciences companies build trust with communities that have been historically underrepresented in clinical trials and improve health outcomes.

Source: ibm.com

Tuesday, 17 January 2023

Experiential Shopping: Why retailers need to double down on hybrid retail


Shopping can no longer be divided into online or offline experiences. Most consumers now engage in a hybrid approach, where a single shopping experience involves both in-store and digital touchpoints. In fact, this hybrid retail journey is the primary buying method for 27% of all consumers, and the specific retail category and shopper age can significantly increase this number. According to Consumers want it all, a 2022 study from the IBM Institute for Business Value (IBV), “today’s consumers no longer see online and offline shopping as distinct experiences. They expect everything to be connected all the time.”

Experiential hybrid retail is a robust omnichannel approach to strategically blending physical, digital, and virtual channels. It empowers customers with the freedom to engage with brands on whichever shopping channel is most convenient, valued or preferred at any given time. For example, consumers may engage in product discovery on social platforms, purchase online and pick up items at a store. They may also be in a store using digital tools to locate or research products. The possibilities are endless.

While hybrid retail is now an imperative for brands, it has created new complexities for retailers. “Channel explosion is a reality and retailers are challenged to scale their operations across what is essentially a moving target,” says Richard Berkman, VP & Sr. Partner, Digital Commerce, IBM iX. The result is often disconnected shopping journeys that fail consumers. Imagine selecting the “in-store pickup” option for an online purchase, only to discover that fulfillment of the order was impossible since the store was out of stock.

According to Shantha Farris, Global Digital Commerce Strategy and Offerings Leader at IBM iX, the real cost of an unsuccessful approach to hybrid retail is losing customers—potentially forever. There are still a lot of pandemic-weary consumers for whom patience and tolerance for shopping-related friction is at an all-time low. Additionally, people remain desperate to feel connected. Retailers must be totally on point, pleasing customers with friction-free and highly experiential omnichannel commerce journeys. When this doesn’t happen, customers can react harshly. Farris refers to this phenomenon as “rage shopping” and observes that consumers will choose to shop elsewhere based on one disappointing experience. “End customers demand frictionless experiences,” she says. “They’re empowered. They have choices. They want to trust that their brand experience will be trusted, relevant, and convenient—and that this will remain true every time they shop with you.” Retailers must modernize their technology ecosystem for omnichannel and cross-channel consistency.


Websites. Mobile apps. Social, live streaming and metaverse platforms. Determining which channels to strategically activate is tricky, but it’s not impossible. Commerce begins with brand engagement and product discovery, so it is critical to leverage data-driven insights to understand customers: everything from who they are to how they prefer to progress through the end-to-end shopping journey and how compelling they rate the experience. Then, Berkman says, “retailers need an experience-led vision of the future of their commerce initiatives across channels, with an ability to activate data and dynamically manage those channels.”

Which channels offer the best chance for positive consumer engagement? It depends on the brand. Additionally, measuring the success of each individual channel cannot be assessed using only conversion metrics. Farris comments, “You might discover a product on TikTok, but conversion will probably take place elsewhere.”


The reality of rage shopping is a useful premise for retailers re-examining the current efficacy of every interaction along the purchase journey. Each step, from product discovery to last-mile fulfillment and delivery, needs to “meet customers where they are and evolve into one connected experience,” Berkman says.

Here are three ways to approach hybrid retail using technology along the customer journey. “Whether the transaction itself occurs digitally or physically is beside the point. It’s got to be experiential,” Farris says. “And to provide that experience, you need technology.”

Enhance product discovery with AR


A primary benefit of augmented reality (AR) is increased consumer engagement and confidence at the earliest stage of a purchase. Farris points to work done for a paint company in which IBM designed and deployed a color selection tool, which allows consumers to virtually test different paints on their walls. “There’s a huge fear factor in committing to a paint color for a room,” she says, but with virtual testing, “all of a sudden, your confidence in this purchase goes through the roof.”


Similar AR tools have been a hit for retailers like Ikea and Wayfair, allowing consumers to see how furniture will look in their actual homes. Smart mirrors provide another example: This interactive AR tool enables a quicker try-it-on experience, creating an expanded range of omnichannel buying opportunities for in-store shoppers. Effective AR use is also shown to have a measurable impact on reducing returns, which can cost retailers up to 66% of a product’s sale price, according to 2021 data from Optoro, a reverse logistics company. And a 2022 report from IDC noted: “AR/VR—over 70% view this technology as important, but less than 30% are using it.” That said, a study shared by Snap Inc found that by 2025, nearly 75% of the global population—and almost all smartphone users—will be frequent AR users.

Empower decision-making with 3D modeling


“Digital asset management is a fundamental part of commerce; 3D assets are just the next generation of it,” Farris says. 3D coupled with AR allows consumers to manipulate products in space. “It’s about making product information really convenient and relevant for consumers,” she says. In 2021, Shopify reported on average a 94% increase in the conversion rate for merchants who featured 3D content. One example is being able to virtually “try on” a shoe and see every angle of it by rotating your foot. The technology is useful for B2B too. “Instead of reading 50 pages of requirements and specs for some widget, buyers can actually turn the part in space and see if it’ll fit on an assembly line,” Farris says.

3D assets coupled with AR go beyond providing retailers with today’s tools. It’s a measure of futureproofing. “Some of these technologies will give you immediate returns today,” Farris says, “but they will also help retailers build capabilities that will be applicable to deploying a full metaverse commerce experience.”

Digitize how consumers interact with physical stores


In-store customer experiences can be significantly enriched with the use of digital tools and seamless omnichannel integration. Farris points to a major home improvement retailer that does this well. “If you go into one of these stores and can’t find an associate to help you, you can whip out your phone, go to the store’s website, and it’ll tell you what bin a product is in, down to the height of the shelf it’s on. Your phone becomes your in-store guide.”

The employee experience is also dramatically improved with the right digital technologies and omnichannel access. “Store associates need to have real-time data and insights relative to anyone who might walk in the door,” Berkman says, noting that associates, service agents and salespeople should act more like “a community of hosts.” Armed with the right information and access to technology like predictive analytics and automation, Berkman says, “those employees would have the insights to effectively engage customers and create more immersive and personalized experiences to drive brand growth.”

Source: ibm.com

Thursday, 12 January 2023

How to unlock a scientific approach to change management with powerful data insights


Without a doubt, the volume of process data at our fingertips, and our access to it, are growing exponentially. Coupled with a climate that is increasingly ambiguous and complex, this presents a huge opportunity to leverage data insights to drive a more robust, evidence-based methodology for the way we work and manage change. Not only can data support a more compelling change management strategy, it can also identify, accelerate and embed change faster, all of which is critical in our continuously changing world. Grasping these opportunities at IBM, we’re increasingly building our specialism in process mining and data analysis tools and techniques that we believe to be true ‘game changers’ when it comes to building cultures of continuous change and innovation.

For us, one stand-out opportunity presented by this capability is process mining: analyzing the event data that IT systems record as work gets done in order to reconstruct how processes actually run, surface anomalies, patterns and correlations, and predict process outcomes. Process mining presents the ‘art of the possible’ when it comes to investing time and energy into organizational change initiatives that can make change stick. We see four key areas where this capability can be applied to truly transform the value derived by a change program:

1. Starting from the top: Focusing on the right change and making it feel right


We see time and time again the detrimental impact of misaligned or disengaged leaders, especially when it comes to driving a case for effective change and inspiring their teams to take action. A recent Leadership IQ survey in 2021 reported by Forbes states that ‘only 29% of employees say that their leader’s vision for the future always seems to be aligned with the organization’s’. Ultimately, if leaders at every level are not on the same page, and if people are not fully bought into the change, results become vulnerable. Having a brilliant vision and strategy doesn’t make a difference if you can’t get your leaders and employees to buy into that vision.

With the Business Case and Case for Change often being the first and potentially the most critical ‘must get right’ factor of any change program, the need for real-time data is clear. How much easier would it be to base a business transformation on data-driven insights that demonstrate and quantify the opportunity based on the facts?

Leveraging data to replace the ‘gut feel’ on which too many business decisions are made enables change practitioners to separate perceptions from reality and decide which processes need the most focus. In other words, the data science of change management focuses you on that which will make the greatest impact on your business objectives, KPIs and strategic outcomes. It can show whether perceptions are real, as well as unearthing unexpected insights. Ultimately, insights based on hard data can bring transparency to your decision making and provide a more compelling story that everyone can agree on.

2. Picking up the pace: Accelerating the change journey


Do you know how your business really operates on the ground? Change programs can spend many weeks conducting interviews and workshops to build an ‘As Is’ picture. However, we often find a disconnect between what users say is happening and what the data shows is actually happening. Process mining tools analyse data from thousands of transactions across all locations to paint a true story, providing faster and more reliable insights into the root causes of issues and identifying the strongest opportunities for improvement.
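
For a feel of how this works, the sketch below reconstructs a basic ‘As Is’ picture from a small, made-up event log: it derives the process variants (the orderings of activities that actually occurred) with their frequencies, and measures case durations, which is the raw material for root cause analysis. Dedicated process mining tools do this at enterprise scale across millions of events.

# Reconstruct process variants and case durations from a made-up event log
import pandas as pd

events = pd.DataFrame({
    "case_id":  ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "activity": ["Create PO", "Approve", "Pay",
                 "Create PO", "Change Price", "Approve", "Pay",
                 "Create PO", "Approve", "Pay"],
    "timestamp": pd.to_datetime([
        "2023-01-02 09:00", "2023-01-02 11:00", "2023-01-04 10:00",
        "2023-01-03 08:00", "2023-01-05 09:30", "2023-01-06 16:00", "2023-01-10 12:00",
        "2023-01-04 10:00", "2023-01-04 15:00", "2023-01-05 09:00",
    ]),
}).sort_values(["case_id", "timestamp"])

# Variant = the actual sequence of activities each case followed
variants = events.groupby("case_id")["activity"].apply(" -> ".join)
print(variants.value_counts())  # how often each real-world path occurs

# Throughput time per case highlights where the slow, costly paths are
durations = events.groupby("case_id")["timestamp"].agg(lambda s: s.max() - s.min())
print(durations.sort_values(ascending=False))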

Likewise, change agents spend a lot of time and effort on ‘To Be’ design and identifying the impact on those affected by the potential ‘new normal’. Process mining tools can perform a fit-gap analysis on new processes to rapidly and more accurately identify the greatest change impacts. This can, in turn, accelerate the creation of your communications and training programmes to support the introduction of the change with more exact and bespoke change journeys. Detailed insights about levels of process compliance can be brought to design teams to inform and focus the design of future processes and systems.

CoE = Center of Excellence = Accelerator for Change
A center of excellence (CoE) is a dedicated team that has been mandated to provide leadership, best practices, technical deployment, support and training for Celonis in your organization.

3. Making it happen: Driving faster adoption


There is a common conception that introducing new technology will make things faster and easier, but this isn’t always the case. Forbes states that ‘despite significant investments in new technologies over the past decade, many organisations are actually watching their operations slow down due to underutilization of technology’. Quite clearly, we can’t just ‘plug in a system’ and expect it to be adopted. No matter how ‘fit for purpose’ it is in its design, and how well adopted it is when introduced, these results will undoubtedly change due to a variety of factors within and outside of our control.

So how do you spot this early, and react or even prevent this in a timely and effective manner?

Through advanced analytics, process mining solutions can highlight gaps and opportunities in a digital transformation, so you can intervene with targeted change interventions and course correct more quickly. For example, you can pinpoint a user group or process showing lower levels of activity or less accurate task execution, then design training and engagement interventions to upskill or educate team members, or identify process improvement opportunities to better support them in the activities they undertake.
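As a simple illustration using the same hypothetical event log as the earlier sketches, one such early-warning signal is a user group with an unusually high rework rate (the same activity executed more than once within a case), which often points to where extra training or process support is needed.

```python
# Rework: the same activity executed more than once within a single case.
rework = (
    log.groupby(["user", "case_id", "activity"])
       .size()
       .rename("executions")
       .reset_index()
)
rework["is_rework"] = rework["executions"] > 1

# Rework rate per user (or user group); outliers are candidates for targeted
# training, engagement, or process improvement interventions.
rework_rate = rework.groupby("user")["is_rework"].mean().sort_values(ascending=False)
print(rework_rate.head(10))
```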

Furthermore, this capability can also tackle another stumbling block we face: once you have introduced a change, how do you easily measure its success and adoption?

By applying process mining, you can quickly see levels of process compliance right across the organisation and the value chain. Process mining tools automatically generate dashboards that give an ‘at a glance’ view of adoption rates. They also allow you to quantify business value from the improvements made and to assign and track key metrics against business objectives.
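The aggregation behind such a dashboard can be sketched with the same hypothetical log and fit-gap results from above: roll per-case conformance up by location and by month to see where the new process is being followed and whether adoption is trending in the right direction. A process mining tool renders this automatically and continuously; the snippet only shows the shape of the calculation.

```python
# Attach per-case conformance back onto the event log so it can be sliced.
case_conforms = gaps.apply(lambda g: g["conforms"]).rename("conforms")
enriched = log.join(case_conforms, on="case_id")

# Compliance by location: where is the new process actually being followed?
print(enriched.groupby("location")["conforms"].mean().sort_values())

# Adoption trend: is compliance improving month by month after go-live?
enriched["month"] = enriched["timestamp"].dt.to_period("M")
print(enriched.groupby("month")["conforms"].mean())
```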

4. Making it stick: Driving continuous change


Embedding a culture of continuous change and improvement into the DNA of an organisation is a significant undertaking. The McKinsey Organizational Health Index shows that the most common type of culture among high-performing companies is one of continuous improvement, and across almost 2,000 companies the index found that organizational health is closely linked to performance. By their very nature, process mining tools target this continuous improvement and equip their users with the data to continuously identify new opportunities and relentlessly reinvent the way things get done.

Because they are simple and intuitive, process mining tools make it easy for non-specialists to find and use data-driven insights, which makes it much easier to build a culture of continuous improvement and innovation across the organisation. However, data-based approaches such as process mining provide the most value when organisations commit to them for the long term, so adopting these tools is a mini change project in itself. Embedding the use of data and metrics in everyday work requires not only an understanding of the tool but also a shift in mindset towards using data insights as part of continuous improvement efforts. Just as with any IT implementation, spending time taking users and stakeholders through the change curve, from awareness to adoption, will pay dividends in the long run.

Operationalize change through execution management: execute at the shop-floor level based on facts and figures, using machine learning and AI.

Final thoughts on data insights for change management


To conclude, through the power of data, change management will move from a project-based discipline struggling to justify adequate investment to one that advises on business outcomes and how to deliver them successfully. Data presents opportunities not only to super-charge and accelerate typical change management interventions, such as developing change journeys or tracking adoption, but also to influence and change the mindsets of those working with it. Process mining equips and empowers people to think, challenge and reinvent what we do. This goes hand in hand with our appreciation that ‘change is always changing’, and that we need to keep pace or risk falling behind.

Source: ibm.com

Tuesday, 10 January 2023

Using a digital self-serve experience to accelerate and scale partner innovation with IBM embeddable AI


IBM has invested $1 billion into our partner ecosystem. We want to ensure that partners like you have the resources to build your business and develop software for your customers using IBM’s industry-defining hybrid cloud and AI platform. Together, we build and sell powerful solutions that elevate our clients’ businesses through digital transformation.

To that end, IBM recently announced a set of embeddable AI libraries that empower partners to create new AI solutions. IBM now offers an easy, fast way to embed and adopt these IBM AI technologies through the new Digital Self-Serve Co-Create Experience (DSCE).

The Build Lab team created the DSCE to complement its high-touch engagement process and provide a digital self-service experience that scales to tens of thousands of Independent Software Vendors (ISVs) adopting IBM’s embeddable AI. Using the DSCE self-serve portal, partners can discover and try the recently launched IBM embeddable AI portfolio of IBM Watson Libraries, IBM Watson APIs, and IBM applications at their own pace and on their schedule. In addition, DSCE’s digitally guided experience enables partners to effortlessly package and deploy their software at scale.


Your on-ramp to embeddable AI from IBM


The IBM Build Lab team collaborates with qualified ISVs to build Proofs of Experience (PoX) demonstrating the value of combining the best of IBM Hybrid Cloud and AI technology to create innovative solutions and deliver unique market value.

DSCE is a wizard-driven experience: users answer contextual questions and receive prescriptive assets, education, and trials suited to rapidly integrating IBM technology into their products. Rather than manually searching IBM websites and repositories for potentially relevant information, DSCE does the legwork for you based on your development intent, directing you to reference architectures, tutorials, best practices, boilerplate code, and interactive sandboxes: a customized roadmap to speed your adoption of IBM AI.

Embark on a task-based journey


DSCE works seamlessly for both data scientist and machine learning operations (ML-Ops) engineer personas.

For example, data scientist Miles wants to customize an emotion classification model to discover what makes customers happiest. His startup analyses customer feedback for the retail e-commerce customers it serves. He wants to provide high-quality analysis of the most satisfied customers, so he chooses a Watson NLP emotion classification model that he can fine-tune to predict ‘happiness’ with greater confidence than the pre-trained models. This type of modeling can all be done in just a few simple clicks:

◉ Find and try AI ->
◉ Build with AI Libraries ->
◉ Build with Watson NLP ->
◉ Emotion classification ->
◉ Library and container ->
◉ Custom train the model ->
◉ Results Page


The bookmarkable Results Page provides a comprehensive set of assets for both training and deploying a model. To accomplish the “Training the Model” task, Miles can explore interactive demos, reserve a Watson Studio environment, copy a snippet from a Jupyter notebook, and much more.
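As a flavour of what such a notebook snippet might look like, here is a minimal, hypothetical sketch of scoring a piece of customer feedback with the Watson NLP library. It assumes the watson_nlp Python package is available in the environment (for example, via the trial container), and the model identifier shown is an assumption for illustration rather than a confirmed name; the custom-training workflow itself is covered by the notebooks DSCE links to.

```python
# Minimal sketch only. Assumes the watson_nlp library is installed in the
# environment (e.g. the trial container) and that a pretrained emotion
# classification workflow is available to load.
import watson_nlp

# The model identifier below is an assumption for illustration; check the
# library's model catalog for the exact name of the stock emotion workflow.
emotion_model = watson_nlp.load("classification_ensemble_workflow_en_emotion-stock")

feedback = "The delivery was late, but the support team fixed it quickly - thank you!"
prediction = emotion_model.run(feedback)

# Inspect the predicted emotions and their confidence scores.
print(prediction)
```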

If Miles, or his ML-Ops counterpart Leena, wants to “Deploy the Model,” they can access the trial license and container of the new Watson NLP Library for 180 days. From there it’s easy to package and deploy the solution on Kubernetes, Red Hat OpenShift, AWS Fargate, or IBM Code Engine. It’s that simple!

Try embeddable AI now


Try the experience here: https://dsce.ibm.com/ and accelerate your AI-enabled innovation now. DSCE will be extended to include more IBM embeddable offerings, satisfying modern developer preferences for digital and self-serve experiences, while helping thousands of ISVs innovate rapidly and concurrently. If you want to provide any feedback on the experience, get in touch through the “Contact us” link on your customized results page.

Source: ibm.com