Thursday 25 April 2024

5 steps for implementing change management in your organization

Change is inevitable in an organization; especially in the age of digital transformation and emerging technologies, businesses and employees need to adapt. Change management (CM) is a methodology that ensures both leaders and employees are equipped and supported when implementing changes to an organization.

The goal of a change management plan, or more accurately an organizational change plan, is to embed processes that have stakeholder buy-in and support the success of both the business and the people involved. In practice, the most important aspect of organizational change is stakeholder alignment. This blog outlines five steps to support the seamless integration of organizational change management.

Steps to support organizational change management


1. Determine your audience

Who is impacted by the proposed change? It is crucial to determine the audience for your change management process.

Start by identifying key leaders and determining both their influence and their involvement in the history of organizational change. Your key leaders can provide helpful context and influence employee buy-in. Interview these leaders to better understand why the change is being implemented in the first place. Ask questions such as:

◉ What are the benefits of this change?
◉ What are the reasons for this change?
◉ What does the history of change in the organization look like?

Next, identify the other groups impacted by the change, otherwise known as personas. Personas are the drivers of a successful change management strategy. It is important to understand what the current day-to-day looks like for each persona, and then what tomorrow will look like once the change is implemented.

A good example of change that an organization might implement is a new technology, like generative AI (gen AI). Businesses are implementing this technology to augment work and make their processes more efficient. Throughout this blog, we use this example to better explain each step of implementing change management.

Who is impacted by the implementation of gen AI? The key leaders might be the vice president of the department adopting the technology, along with the chief technology officer and team managers. The personas are those whose work is being augmented by the technology.

2. Align the key stakeholders

What are the messages that we will deliver to the personas? When key leaders come together to determine champion roles and behaviors for instituting change, it is important to remember that everyone will have a different perspective.

To best align leadership, take an iterative approach. Through a stakeholder alignment session, teams can co-create with key leaders, change management professionals, and personas to best determine a change management strategy that will support the business and employees.

Think back to the example of gen AI as the change implemented in the organization. Proper alignment of stakeholders would be bringing together the executives deciding to implement the technology, the technical experts on gen AI, the team managers implementing gen AI into their workflows, and even trusted personas—the personas might have experienced past changes in the organization.

3. Define the initiatives and scope

Why are you implementing the change? What are the main drivers of change? How large is the change to the current structure of the organization? Without a clear vision for change initiatives, there will be even more confusion from stakeholders. The scope of change should be easily communicated; it needs to make sense to your personas to earn their buy-in.

Generative AI augments workflows, making businesses more efficient. However, one obstacle to this technology is the psychological concern that it takes power away from the individuals who currently run the administrative tasks. Clearly defining the benefits of gen AI and the goals of implementing the technology can help employees better understand the need.

Along with clear initiatives and communication, including a plan to skill employees in understanding and using the technology also helps promote buy-in. Drive home the point that, through the stakeholders, the change team members become evangelists pioneering a new way of working. Show your personas how to prompt the tool, how to apply the technology and other use cases to grow their excitement about and support for the change.

4. Implement the change management plan

After the preparation of understanding the personas, aligning the stakeholders and defining the scope, it is time to ‘go live’ with the change management plan. Remember to be patient with employees and keep communication clear. How are employees handling the process? Are more resources needed? This is the stage where you weigh the feedback that is given and assess whether it helps achieve the shared goals of the organization.

Implementing any new technology invites the potential for bugs, lags or errors in usage. For our example with gen AI, a good implementation practice might be piloting the technology with a small team of expert users, who underwent training on the tool. After collecting feedback from their ‘go live’ date, the change management team can continue to phase the technology implementation across the organization. Remember to be mindful of employee feedback and keep an open line of communication.

5. Adapt to improve

The process can be adapted throughout any stage of implementation, but time to analyze the return on investment (ROI) should be allocated starting at the ‘go live’ date of the change. Reviews can be run via the “sense and respond” approach.

Sense how the personas are reacting to the change. This can be done via sentiment analysis, surveys and information sessions. Then, analyze the data. Finally, based on the analysis, respond appropriately to the personas’ reactions.

Depending on how the business and personas are responding to change, determine whether the outlined vision and benefits of the change are being achieved. If not, identify the gaps and troubleshoot how to better support where you might be missing the mark. It is important to both communicate with the stakeholders and listen to the feedback from the personas.

To close out our example, gen AI is a tool that thrives on continuous usage and practices like fine-tuning. The organization can measure both the growth and success of the implemented technology and the efficiency of the personas who have adopted the tool into their workflows. Leaders can share surveys to pressure-test how the change is resonating. Any roadblocks, pain points or concerns should be addressed directly by the change management team to ensure a continued smooth implementation of gen AI.

How to ensure success when implementing organizational change


The success formula for implementing organizational change management includes the next generation of leadership, an accelerator culture that is adaptive to change, and a workforce that is both inspired and engaged.

Understanding the people involved in the process is important to prepare for a successful approach to change management. Everyone comes to the table with their own view of how to implement change. It is important to remain aligned on why the change is happening. The people are the drivers of change. Keep clear, open and consistent communication with your stakeholders and empathize with your personas to ensure that the change will resonate with their needs.

As you craft your change management plan, remember that change does not stop at the implementation date of the plan. It is crucial to continue to sense and respond.

Source: ibm.com

Tuesday 23 April 2024

Deployable architecture on IBM Cloud: Simplifying system deployment

Deployable architecture (DA) refers to a specific design pattern or approach that allows an application or system to be easily deployed and managed across various environments. A deployable architecture organizes components, modules and dependencies in a way that allows for seamless deployment, making it easy for developers and operations teams to quickly deploy new features and updates to the system without extensive manual intervention.

There are several key characteristics of a deployable architecture, which include:

  1. Automation: Deployable architecture often relies on automation tools and processes to manage the deployment process. This can involve using tools like continuous integration and continuous deployment (CI/CD) pipelines, configuration management tools and others.
  2. Scalability: The architecture is designed to scale horizontally or vertically to accommodate changes in workload or user demand without requiring significant changes to the underlying infrastructure.
  3. Modularity: Deployable architecture follows a modular design pattern, where different components or services are isolated and can be developed, tested and deployed independently. This allows for easier management and reduces the risk of dependencies causing deployment issues.
  4. Resilience: Deployable architecture is designed to be resilient, with built-in redundancy and failover mechanisms that ensure the system remains available even in the event of a failure or outage.
  5. Portability: Deployable architecture is designed to be portable across different cloud environments or deployment platforms, making it easy to move the system from one environment to another as needed.
  6. Customizable: Deployable architecture is designed to be customizable and can be configured according to need, which helps with deployment in diverse environments with varying requirements.
  7. Monitoring and logging: Robust monitoring and logging capabilities are built into the architecture to provide visibility into the system’s behavior and performance.
  8. Secure and compliant: Deployable architectures on IBM Cloud® are secure and compliant by default for hosting your regulated workloads in the cloud. They follow security standards and guidelines, such as IBM Cloud for Financial Services® and SOC 2 Type 2, to meet stringent security and compliance requirements.

Overall, deployable architecture aims to make it easier for organizations to achieve faster, more reliable deployments, while also  making sure that the underlying infrastructure is scalable and resilient.

Deployable architectures on IBM Cloud


Deploying an enterprise workload with a few clicks can be challenging due to various factors, such as the complexity of the architecture and the specific tools and technologies used for deployment. Creating a secure, compliant and tailored application infrastructure is often more challenging still and requires expertise. However, with careful planning and appropriate resources, it is feasible to automate most aspects of the deployment process. IBM Cloud provides well-architected patterns that are secure by default for regulated industries like financial services. These patterns can be consumed as-is, or you can add more resources to them as your requirements dictate. Check out the deployable architectures that are available in the IBM Cloud catalog.

Deployment strategies for deployable architecture


Deployable architectures provided on IBM Cloud can be deployed in multiple ways: through IBM Cloud projects, through Schematics, directly via the CLI, or by downloading the code and deploying it on your own.
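
If you choose the last option and download the automation code yourself, note that IBM Cloud deployable architectures are typically delivered as Terraform-based automation. As a minimal sketch, assuming a Terraform-based deployable architecture and a variables file you create yourself (hypothetically named myvalues.tfvars here), the manual path looks roughly like this:

    # Authenticate and expose an API key for the IBM Cloud Terraform provider
    ibmcloud login
    export IC_API_KEY="<your IBM Cloud API key>"

    # From the directory containing the downloaded deployable architecture code:
    terraform init                               # download providers and modules
    terraform plan -var-file=myvalues.tfvars     # preview the resources to be created
    terraform apply -var-file=myvalues.tfvars    # provision the architecture

Deploying through IBM Cloud projects or Schematics instead keeps this same automation under IBM-managed execution, so the downloaded-code route is mainly useful when you need full control over how and where the code runs.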

Use-cases of deployable architecture


Deployable architecture is commonly used in industries such as finance, healthcare, retail, manufacturing and government, where compliance, security and scalability are critical factors. Deployable architecture can be utilized by a wide range of stakeholders, including:

  1. Software developers, IT professionals, system administrators and business stakeholders who need to ensure that their systems and applications are deployed efficiently, securely and cost-effectively. It helps in reducing time to market, minimizing manual intervention and decreasing deployment-related errors.
  2. Cloud service providers, managed service providers and infrastructure as a service (IaaS) providers to offer their clients a streamlined, reliable and automated deployment process for their applications and services.
  3. ISVs and enterprises to enhance the deployment experience for their customers, providing them with easy-to-install, customizable and scalable software solutions that help drive business value and competitive advantage.

Source: ibm.com

Saturday 20 April 2024

The journey to a mature asset management system

This blog series discusses the complex tasks energy utility companies face as they shift to holistic grid asset management to manage through the energy transition. Earlier posts in this series addressed the challenges of the energy transition with holistic grid asset management, the integrated asset management platform and data exchange, and merging traditional top-down and bottom-up planning processes.

Asset management and technological innovation


Advancements in technology underpin the need for holistic grid asset management, making the assets in the grid smarter and equipping the workforce with smart tools.

Robots and drones perform inspections by using AI-based visual recognition techniques. Health monitoring technologies enable asset performance management (APM) processes, such as risk-based and predictive maintenance and asset investment planning (AIP).

Technicians connect to the internet through devices such as tablets, watches or VR glasses, providing fast access to relevant information or expert support from any place in the world. Technicians can resolve technical issues faster, improving asset usage and reducing asset downtime.

Mobile-connected technicians experience improved safety through measures such as access control, gas detection, warning messages or fall recognition, which reduces risk exposure and enhances operational risk management (ORM) during work execution. Cybersecurity reduces the exposure of digitally connected assets to cyberattacks.

Sensors and monitoring also contribute to the direct measurement of environmental, social and governance (ESG) sustainability metrics such as energy efficiency, greenhouse gas emissions or wastewater flows. This approach provides real, measured data points for ESG reporting instead of model-based assumptions, which helps reduce carbon footprint and achieve sustainability goals.

The asset management maturity journey


Utility companies can view the evolution of asset management as a journey to a level of asset management excellence. The following figure shows the stages from a reactive to a proactive asset management culture, along with the various methods and approaches that companies might apply:

[Figure: the asset management maturity journey, showing the stages from a reactive to a proactive asset management culture]

In the holistic asset management view, a scalable platform offers functionalities to build capabilities along the way. Each step in the journey demands adopting new processes and ways of working, which dedicated best practice tools and optimization models support.

The enterprise asset management (EAM) system fundamentally becomes a preventive maintenance program in the early stages of the maturity journey, from “Innocence” through to “Understanding”. This transition drives down the cost of unplanned repairs.

To proceed to the next level of “Competence”, APM capabilities take the lead. The focus of the asset management organization shifts toward uptime and business value by preventing failures. This also prevents expensive machine downtime, production deferment and potential safety or environmental risks. Machine connectivity through Internet of Things (IoT) data exchange enables condition-based maintenance and health monitoring. Risk-based asset strategies align maintenance efforts to balance costs and risks.

Predictive maintenance applies machine learning models to predict imminent failures early in the potential failure curve, with sufficient warning time to allow for planned intervention. The final step at this stage is the optimization of the maintenance and replacement program based on asset criticality and available resources.

APM and AIP combine in the “Excellence” stage, and predictive generative AI creates intelligent processes. At this stage, the asset management process becomes self-learning and prescriptive in making the best decision for overall business value.

New technology catalyzes the asset maturity journey, digital solutions connect the asset management systems, and smart connected tools improve quality of work and productivity. The introduction of (generative) AI models in the asset management domain has brought a full toolbox of new optimization tools. Gen AI use cases have been developed in each step of the journey to help companies develop more capabilities and become more efficient, safe, reliable and sustainable. As the maturity of the assets and asset managers grows, current and future grid assets generate more value.

Holistic asset management aligns with business goals, integrates operational domains of previously siloed disciplines, deploys digital innovative technology and enables excellence in asset management maturity. This approach allows utility companies to maximize their value and thrive as they manage through the energy transition.

Source: ibm.com

Friday 19 April 2024

Using dig +trace to understand DNS resolution from start to finish

The dig command is a powerful tool for troubleshooting queries and responses received from the Domain Name System (DNS). It is installed by default on many operating systems, including Linux® and Mac OS X. It can be installed on Microsoft Windows as part of Cygwin.

One of the many things dig can do is perform recursive DNS resolution and display all of the steps that it took in your terminal. This is extremely useful not only for understanding how DNS works, but also for determining whether there is an issue somewhere within the resolution chain that causes resolution failures for your zones or domains.
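
For example, to watch the full resolution path for a hostname, run dig with the +trace option (www.example.com below is just a placeholder; substitute whatever record you are debugging):

    # Follow the delegation chain from the root servers down to the
    # authoritative nameservers, printing each referral along the way
    dig +trace www.example.com

    # Optionally suppress DNSSEC records (RRSIG and friends) to declutter the output
    dig +trace +nodnssec www.example.com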

First, let’s briefly review how a query receives a response in a typical recursive DNS resolution scenario:


  1. You as the DNS client (or stub resolver) query your recursive resolver for www.example.com. 
  2. Your recursive resolver queries the root nameserver for NS records for “com.” 
  3. The root nameserver refers your recursive resolver to the .com Top-Level Domain (TLD) authoritative nameserver. 
  4. Your recursive resolver queries the .com TLD authoritative server for NS records of “example.com.” 
  5. The .com TLD authoritative nameserver refers your recursive server to the authoritative servers for example.com. 
  6. Your recursive resolver queries the authoritative nameservers for example.com for the A record for “www.example.com” and receives 1.2.3.4 as the answer. 
  7. Your recursive resolver caches the answer for the duration of the time-to-live (TTL) specified on the record and returns it to you.
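
You can also reproduce these referrals by hand, one query at a time. The sketch below uses a.root-servers.net (a root server) and a.gtld-servers.net (a .com TLD server); the final query goes to whichever authoritative nameserver the previous referral returned:

    # Steps 2-3: ask a root server which nameservers are authoritative for .com
    dig @a.root-servers.net com. NS

    # Steps 4-5: ask a .com TLD server which nameservers are authoritative for example.com
    dig @a.gtld-servers.net example.com. NS

    # Step 6: ask one of the returned authoritative servers for the A record itself
    dig @<authoritative-nameserver-for-example.com> www.example.com. A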


This process occurs every time you type a URL into your web browser or fire up your email client. This illustrates why DNS answer speed and accuracy are so important: if the answer is inaccurate, you may need to repeat this process several times; and if the speed with which you receive an answer is slow, then it will make everything you do online seem to take longer than it should.  

Driving both DNS answer speed and accuracy is at the core of the IBM NS1 Connect value proposition.

Source: ibm.com

Thursday 18 April 2024

Understanding glue records and Dedicated DNS

Domain name system (DNS) resolution is an iterative process where a recursive resolver attempts to look up a domain name using a hierarchical resolution chain. First, the recursive resolver queries the root (.), which provides the nameservers for the top-level domain (TLD), e.g., .com. Next, it queries the TLD nameservers, which provide the domain’s authoritative nameservers. Finally, the recursive resolver queries those authoritative nameservers.

In many cases, we see domains delegated to nameservers inside their own domain, for instance, “example.com.” is delegated to “ns01.example.com.” In these cases, we need glue records at the parent nameservers, usually the domain registrar, to continue the resolution chain.  

What is a glue record? 

Glue records are DNS records created at the domain’s registrar. They provide a complete answer when a nameserver returns a referral to an authoritative nameserver for a domain. For example, the domain name “example.com” has nameservers “ns01.example.com” and “ns02.example.com”. To resolve the domain name, a resolver would query, in order: the root, the TLD nameservers and the authoritative nameservers.

When nameservers for a domain are within the domain itself, a circular reference is created. Having glue records in the parent zone avoids the circular reference and allows DNS resolution to occur.  
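
Conceptually, the parent .com zone carries both the delegation NS records and the glue A records for such a domain. A minimal sketch (with addresses drawn from the reserved documentation range, purely for illustration) looks like this:

    ; Delegation records in the .com zone for example.com
    example.com.        IN  NS  ns01.example.com.
    example.com.        IN  NS  ns02.example.com.

    ; Glue records: the addresses of the in-domain nameservers
    ns01.example.com.   IN  A   192.0.2.1
    ns02.example.com.   IN  A   192.0.2.2

Without the two A records, a resolver following the referral would need to resolve example.com before it could resolve example.com, which is exactly the circular reference that glue breaks.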

Glue records can be created at the TLD via the domain registrar or at the parent zone’s nameservers if a subdomain is being delegated away.  

When are glue records required?

Glue records are needed for any nameserver that is authoritative for itself. If a third party, such as a managed DNS provider, hosts the DNS for a zone, no glue records are needed.

IBM NS1 Connect Dedicated DNS nameservers require glue records 

IBM NS1 Connect requires that customers use a separate domain for their Dedicated DNS nameservers. As such, the nameservers within this domain will require glue records. For example, glue records for exampledns.net can be configured in a registrar interface such as Google Domains by entering each Dedicated DNS nameserver’s hostname and IP address.

Once the glue records have been added at the registrar, the Dedicated DNS domain should be delegated to the IBM NS1 Connect Managed nameservers and the Dedicated DNS nameservers. For most customers, there will be a total of 8 NS records in the domain’s delegation. 

What do glue records look like in the dig tool? 

Glue records appear in the ADDITIONAL SECTION of the response. To see a domain’s glue records using the dig tool, directly query a TLD nameserver for the domain’s NS records:

[Figure: dig response from a TLD nameserver, with the domain’s glue records highlighted in the ADDITIONAL SECTION]

How do I know my glue records are correct? 


To verify that glue records are correctly listed at the TLD nameservers, directly query the TLD nameservers for the domain’s NS records using the dig tool as shown above. Compare the ADDITIONAL SECTION contents of the response to the expected values entered as NS records in IBM NS1 Connect. 
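
As a sketch, for a .net domain such as exampledns.net you can point dig directly at one of the gTLD servers (a.gtld-servers.net serves both .com and .net) and limit the output to the sections of interest:

    # Ask a TLD nameserver for the delegation of exampledns.net without recursion,
    # printing only the AUTHORITY (NS) and ADDITIONAL (glue) sections
    dig @a.gtld-servers.net exampledns.net. NS +norecurse +noall +authority +additional

The addresses listed in the ADDITIONAL SECTION should match the glue values entered at the registrar.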

Source: ibm.com

Saturday 13 April 2024

Merging top-down and bottom-up planning approaches

This blog series discusses the complex tasks energy utility companies face as they shift to holistic grid asset management to manage through the energy transition. The first post of this series addressed the challenges of the energy transition with holistic grid asset management. The second post in this series addressed the integrated asset management platform and data exchange that unite business disciplines in different domains in one network.

Breaking down traditional silos


Many utility asset management organizations work in silos. A holistic approach that combines the siloed processes and integrates various planning management systems provides optimization opportunities on three levels:

1. Asset portfolio (AIP) level: Optimum project execution schedule
2. Asset (APMO) level: Optimum maintenance and replacement timing
3. Spare part (MRO) level: Optimum spare parts holding level

The combined planning exercises produce budgets for capital expenditures (CapEx) and operating expenses (OpEx), and set minimum requirements for grid outages for the upcoming planning period, as shown in the following figure:

[Figure: combined planning across the asset portfolio (AIP), asset (APMO) and spare part (MRO) levels, producing CapEx and OpEx budgets and grid outage requirements]

Asset investments are typically part of a grid planning department, which considers expansions, load studies, new customers and long-term grid requirements. Asset investment planning (AIP) tools bring value in optimizing various, sometimes conflicting, value drivers, and they combine new asset investments with existing asset replacements. However, they take a top-down approach to risk management, using a risk matrix to assess risk at the start of an optimization cycle. This top-down process is effective for new assets, since no information about those assets is available yet. For existing assets, a more accurate bottom-up risk approach is available from the continuous health monitoring process, which calculates the health index and the effective age based on the asset’s specific degradation curves. Dynamic health monitoring provides up-to-date risk data and accurate replacement timing, as opposed to the static approach used for AIP. Combining the asset performance management and optimization (APMO) and AIP processes uses this enhanced estimation data to optimize in real time.

Maintenance and project planning take place in operations departments. The APMO process generates an optimized work schedule for maintenance tasks over a project period and calculates the optimum replacement moment for an existing asset at the end of its lifetime. The maintenance management and project planning systems load these tasks for execution by field service departments.

On the maintenance repair and overhaul (MRO) side, spare part optimization is linked to asset criticality. Failure mode and effect analysis (FMEA) defines maintenance strategies and associated spare holding strategies. The main optimization parameters are stock value, asset criticality and spare part ordering lead times.

Traditional planning processes focus on disparate planning cycles for new and existing assets in a top-down versus bottom-up asset planning approach. This approach leads to suboptimization. An integrated planning process breaks down the departmental silos with optimization engines at three levels. Optimized planning results in lower outages and system downtime, and it increases the efficient use of scarce resources and budget.

Source: ibm.com

Friday 12 April 2024

IBM researchers to publish FHE challenges on the FHERMA platform

To foster innovation in fully homomorphic encryption (FHE), IBM researchers have begun publishing challenges on the FHERMA platform for FHE challenges launched in late 2023 by Fair Math and the OpenFHE community.

FHE: A new frontier in technology


Fully homomorphic encryption is a groundbreaking technology with immense potential. One of its notable applications lies in enhancing medical AI models. By enabling various research institutes to collaborate seamlessly in the training process, FHE opens doors to a new era of possibilities. The ability to process encrypted data without decryption marks a pivotal advancement, promising to revolutionize diverse fields.

IBM has been working to advance the domain of FHE for 15 years, since IBM Research scientist Craig Gentry introduced the first plausible fully homomorphic scheme in 2009. The “bootstrapping” mechanism he developed reduces the amount of “noise” in encrypted information, which made widespread commercial use of FHE possible.

Progress in FHE


FHE has experienced significant progress since the introduction of its first scheme. The transition from theoretical frameworks to practical implementations has been marked by countless issues that need to be addressed. While there are already applications that are using FHE, the community is constantly improving and innovating the algorithms to make FHE more popular and applicable to new domains.

Fostering innovation through challenges


The FHERMA platform was built to incentivize innovation in the FHE domain. Various challenges can be seen on the FHERMA site. The challenges are motivated by problems encountered by real-world machine learning and blockchain applications.

Solutions to challenges must be written by using known cryptographic libraries such as OpenFHE. Developers can also use higher-level libraries such as IBM’s HElayers to speed up their development and easily write robust and generic code.

The best solutions to the various challenges will win cash prizes from Fair Math, alongside contributing to the FHE community. Winners will also be offered the opportunity to present their solutions in a special workshop currently being planned.

The goal of the challenges is to foster research, popularize FHE, and develop cryptographic primitives that are efficient, generic, and support different hyperparameters (for example, writing matrix multiplication that is efficient for matrices of dimensions 1000×1000 and 10×10). This aligns with IBM’s vision for privacy-preserving computation by using FHE.

Driving progress and adoption


Introducing and participating in challenges that are listed on the FHERMA site is an exciting and rewarding way to advance the extended adoption of FHE, while helping to move development and research in the domain forward. We hope you join us in this exciting endeavor on the FHERMA challenges platform.

Teams and individuals who successfully solve the challenges will receive cash prizes from Fair Math. More importantly, the innovative solutions to the published challenges will help move the FHE community forward—a longstanding goal for IBM.

Source: ibm.com

Thursday 11 April 2024

Why CHROs are the key to unlocking the potential of AI for the workforce

It’s no longer a question of whether AI will transform business and the workforce, but how it will happen. A study by the IBM Institute for Business Value revealed that up to three-quarters of CEOs believe that competitive advantage will depend on who has the most advanced generative AI.

With so many leaders now embracing the technology for business transformation, some wonder which C-Suite leader will be in the driver’s seat to orchestrate and accelerate that change.

CHROs today are perfectly positioned to take the lead on both people skills and AI skills, ushering the workforce into the future. Here’s how top CHROs are already seizing the opportunity. 

Orchestrating the new human + AI workforce 


Today, businesses are no longer only focused on finding the human talent they need to execute their business strategy. They’re thinking more broadly about how to build, buy, borrow or “bot” the skills needed for the present and future.  

The CHRO’s primary challenge is to orchestrate the new human plus AI workforce. Top CHROs are already at work on this challenge, using their comprehensive understanding of the workforce and how to design roles and skills within an operating model to best leverage the strengths of both humans and AI.  

In the past, that meant analyzing the roles that the business needs to execute its strategy, breaking those roles down into their component skills and tasks and creating the skilling and hiring strategy to fill gaps. Going forward, that means assessing job descriptions, identifying the tasks best suited to technology and the tasks best suited to people and redesigning the roles and the work itself.  

Training the AI as well as the people 


As top CHROs partner with their C-Suite peers to reinvent roles and change how tasks get done with AI and automation, they are also thinking about the technology roadmap for skills. With the skills roadmap established, they can play a key role in building AI-powered solutions that fit the business’ needs.  

HR leaders have the deep expertise in training best practices that can inform not only how people are trained for skills, but how the AI solutions themselves are trained.  

To train a generative AI assistant to learn project management, for example, you need a strong set of unstructured data about the work and tasks required. HR leaders know the right steps to take around sourcing and evaluating content for training, collaborating with the functional subject matter experts for that area.  

That’s only the beginning. Going forward, business leaders will also need to consider how to validate, test and certify these AI skills.  

Imagine an AI solution trained to support accountants with key accounting tasks. How will businesses test and certify those skills and maintain compliance, as rigorously as is done for a human accountant getting an accounting license? What about certifications like CPP or Six Sigma? HR leaders have the experience and knowledge of leading practices around training, certification and more that businesses will need to answer these questions and truly implement this new operating model.  

Creating a culture focused on growth mindset and learning 


Successfully implementing technology depends on having the right operating model and talent to power it. Employees need to understand how to use the technology and buy in to adopting it. It is fundamentally a leadership and change journey, not a technology journey.  

Every organization will need to increase the overall technical acumen of their workforce and make sure that they have a basic understanding of AI so they can be both critical thinkers and users of the technology. Here, CHROs will lean into their expertise and play a critical role moving forward—up-skilling people, creating cultures of growth mindset and learning and driving sustained organizational change.  

For employees to get the most out of AI, they need to understand how to prompt it, evaluate its outputs and then refine and modify. For example, when you engage with a generative AI-powered assistant, you will get very different responses if you ask it to “describe it to an executive” versus “describe it to a fifth-grader.” Employees also need to be educated and empowered to ask the right questions about AI’s outputs and source data and analyze them for accuracy, bias and more.  

While we’re still in the early phases of the age of AI, leading CHROs have a pulse on the anticipated impact of these powerful technologies. Those who can seize the moment to build a workforce and skills strategy that makes the most of human talent plus responsibly trained AI will be poised to succeed.

Source: ibm.com

Tuesday 9 April 2024

Product lifecycle management for data-driven organizations

In a world where every company is now a technology company, all enterprises must become well-versed in managing their digital products to remain competitive. In other words, they need a robust digital product lifecycle management (PLM) strategy. PLM delivers value by standardizing product-related processes, from ideation to product development to go-to-market to enhancements and maintenance. This ensures a modern customer experience. The key foundation of a strong PLM strategy is healthy and orderly product data, but data management is where enterprises struggle the most. To take advantage of new technologies such as AI for product innovation, it is crucial that enterprises have well-organized and managed data assets.

Gartner has estimated that 80% of organizations fail to scale digital businesses because of outdated governance processes. Data is an asset, but to provide value, it must be organized, standardized and governed. Enterprises must invest in data governance upfront, as it is challenging, time-consuming and computationally expensive to remedy vast amounts of unorganized and disparate data assets. In addition to providing data security, governance programs must focus on organizing data, identifying non-compliance and preventing data leaks or losses.  

In product-centric organizations, a lack of governance can lead to exacerbated downstream effects in two key scenarios:  


1. Acquisitions and mergers

Consider this fictional example: A company that sells three-wheeled cars has created a robust data model where it is easy to get to any piece of data and the format is understood across the business. This company is so successful that it acquired another company that also makes three-wheeled cars. The new company’s data model is completely different from the original company’s. Companies commonly ignore this issue and allow the two models to operate separately. Eventually, the enterprise will have woven a web of misaligned data that requires manual remediation.

2. Siloed business units

Now, imagine a company where the order management team owns order data and the sales team owns sales data. In addition, there is a downstream team that owns product transactional data. When each business unit or product team manages their own data, product data can overlap with the other unit’s data causing several issues, such as duplication, manual remediation, inconsistent pricing, unnecessary data storage and an inability to use data insights. It becomes increasingly difficult to get information in a timely fashion and inaccuracies are bound to occur. Siloed business units hamper the leadership’s ability to make data-driven decisions. In a well-run enterprise, each team would connect their data across systems to enable unified product management and data-informed business strategy.  

How to thrive in today’s digital landscape


In order to thrive in today’s data-driven landscape, organizations must proactively implement PLM processes, embrace a unified data approach and fortify their data governance structures. These strategic initiatives not only mitigate risks but also serve as catalysts for unleashing the full potential of AI technologies. By prioritizing these solutions, organizations can equip themselves to harness data as the fuel for innovation and competitive advantage. In essence, PLM processes, a unified data approach and robust data governance emerge as the cornerstone of a forward-thinking strategy, empowering organizations to navigate the complexities of the AI-driven world with confidence and success.

Source: ibm.com

Friday 5 April 2024

Accelerate hybrid cloud transformation through the IBM Cloud for Financial Services Validation Program

The cloud represents a strategic tool to enable digital transformation for financial institutions


As banking and other regulated industries continue to shift toward a digital-first approach, financial entities are eager to capture the benefits of digital disruption. A great deal of innovation is happening, with new technologies emerging in areas such as data and AI, payments, cybersecurity and risk management, to name a few. Most of these new technologies are born in the cloud, and banks want to tap into these innovations. This shift is a significant change in their business models, moving from a capital expenditure approach to an operational expenditure approach and allowing financial organizations to focus on their primary business. However, the transformation from traditional on-prem environments to a public cloud PaaS or SaaS model presents significant cybersecurity, risk and regulatory concerns that continue to impede progress.

Balancing innovation, compliance, risk and market dynamics is a challenge 


While many organizations recognize the vast pool of innovations that public cloud platforms offer, financially regulated clients remain accustomed to the level of control and visibility provided by on-prem environments. Despite the potential benefits, cybersecurity remains the primary concern with public cloud adoption. The average cost of any mega-breach is an astonishing $400 plus million, with misconfigured cloud as a leading attack vector. This leaves many organizations hesitant to make the transition, fearing they will lose the control and security they have with their on-prem environments. The banking industry’s continued shift toward a digital-first approach is encouraging. However, financial organizations must carefully consider the risks that are associated with public cloud adoption and ensure that they have the proper security measures in place before making the transition. 

The traditional approach for banks to onboard ISV applications involves a review process, which consists of several key items, such as the following:

  • A third-party architecture review, where the ISV needs to provide an architecture document describing how it deploys into the cloud and how that deployment is secured.
  • A third-party risk management review, where the ISV needs to describe how it complies with the required controls.
  • A third-party investment review, where the ISV provides a bill of materials showing which services are used and how they meet compliance requirements, along with price points.

The ISV is expected to be prepared for all these reviews, and the overall onboarding lifecycle through this process takes more than 24 months today.

Why an FS Cloud and FS Validation Program?


IBM has created a solution to this problem with its Financial Services Cloud offering and its ISV Financial Services Validation program, which is designed to de-risk the partner ecosystem for clients. This helps accelerate continuous integration and continuous delivery on the cloud. The program ensures that the new innovations coming out of these ISVs are validated, tested and ready to be deployed in a secure and compliant manner. With IBM’s ISV Validation program, banks can confidently adopt innovative new offerings on the cloud and stay ahead in the innovation race.

Ensuring the success of a cloud transformation journey requires a combination of modern governance, a standard control framework and automation. Different industry frameworks are available to secure workloads and establish a compliance posture. Continuous compliance that is aligned to an industry framework, informed by an industry coalition composed of representatives from key banks worldwide and other compliance bodies, is essential. The IBM Cloud Framework for Financial Services is uniquely positioned to meet all these requirements.

IBM Cloud for Financial Services® is a secure cloud platform that is designed to reduce risk for clients by providing a high level of visibility, control, regulatory compliance and best-of-breed security. It allows financial institutions to accelerate innovation, unlock new revenue opportunities and reduce compliance costs by providing access to pre-validated partners and solutions that conform to financial services security and controls. The platform also offers risk management and compliance automation, continuous monitoring and audit reporting capabilities, as well as on-demand visibility for clients, auditors and regulators.

Our mission is to help ISVs adapt to the cloud and SaaS models and prepare ISVs to meet the security standards and compliance requirements necessary to do business with financial institutions on cloud. Our process brings the compliance and onboarding cycle time down to less than 6 months, a significant improvement. Through this process, we are creating an ecosystem of ISVs that are validated by IBM Cloud for Financial Services, providing customers with a trusted and reliable network of vendors. 

Streamlined process and tooling


IBM® has created a well-defined process and various tools, technologies and automation to assist ISVs as part of the validation program. We offer an integrated onboarding platform that ensures a smooth and uninterrupted experience. This platform serves as a centralized hub, guiding ISVs throughout the entire program, from initial engagements to the validation of final controls. The onboarding platform navigates the ISV through the following steps:

Orientation and education

The platform provides a catalog of self-paced courses that help you become familiar with the processes and tools that are used during the IBM Cloud for Financial Services onboarding and validation. The self-paced format allows you to learn at your own pace and on your own schedule. 

ISV Controls analysis

The ISV Controls Analysis serves as an initial assessment of an organization’s security and risk posture, laying the groundwork for IBM to plan the necessary onboarding activities.

Architecture assessment

An architecture assessment evaluates the architecture of an ISV’s cloud environment. The assessment is designed to help ISVs identify gaps in their cloud architecture and recommend best practices to enhance the compliance and governance of their cloud environment.

Deployment planning

This step covers deploying the ISV application in a secure environment and managing its workloads on IBM Cloud®. It is designed to meet organizations’ security and compliance requirements by providing a comprehensive set of security controls and services that help protect customer data and applications while meeting the applicable secure architecture requirements.

Security Assessment

The security assessment is a process of evaluating the security controls of the proposed business processes against a set of enhanced, industry-specific, control requirements in the IBM Cloud for Financial Services Framework. The process helps to identify vulnerabilities, threats, and risks that might compromise the security of a system and allows for the implementation of appropriate security measures to address those issues. 

Professional guidance by IBM and KPMG teams


The IBM team provides guidance and assets to help accelerate the onboarding process in a shared, trusted model. We also assist ISVs with deploying and testing their applications on the IBM Cloud for Financial Services approved architecture. We work with ISVs throughout the controls assessment process to help their applications achieve IBM Cloud for Financial Services validated status. Our goal is to ensure that ISVs meet our rigorous standards and comply with industry regulations. We are also partnering with KPMG, an industry leader in the security and regulatory compliance domain, to add value for ISVs and clients.

Time to revenue and cost savings


This process enables the ISV to be ready to go to market in less than eight weeks, reducing the overall time to market and the overall cost of onboarding for end clients.

Benefits of partnering with IBM


As an ISV, you have access to our extensive base of financial institution clients. Our cloud is trusted by 92 of the top 100 banks, giving you a significant advantage in the industry.

Co-create with IBM’s team of expert architects and developers to take your solutions to the next level with leading-edge capabilities.

Partnering with us means you can elevate your Go-To-Market strategy through co-selling. We can help you tap into our vast sales channels, incentive programs, client relationships, and industry expertise. 

You have access to our technical services and cloud credits as an investment in your innovation.

Our marketplaces, like the IBM Cloud® Catalog and Red Hat Marketplace, offer you an excellent opportunity to sell your products and services to a wider audience. 

Finally, our marketing efforts and direct investments in your marketing can generate demand and help you reach your target audience effectively.

Source: ibm.com