Tuesday, 27 June 2023

Enhancing the Wimbledon fan experience with AI from watsonx

IBM’s partnership with the All England Lawn Tennis Club (AELTC) has driven digital transformation at Wimbledon for more than 30 years. And this year, Wimbledon is tapping into the power of generative AI, producing new digital experiences on the Wimbledon app and website using IBM’s new trusted AI and data platform, watsonx.

Automated AI commentary built from foundation models


IBM first pioneered the use of AI to curate video highlight reels in 2017, work that earned the IBM Consulting team a 2023 Emmy® Award. The solution uses gesture recognition (such as fist pumps and players’ reactions), crowd noise and game analytics (such as break points) to identify highlight-worthy videos in both golf and tennis.

This year, fans can add AI-generated spoken commentary to Wimbledon highlight reels, hearing play-by-play narration for the start and end of each reel, along with key points. Fans can also turn on closed captions to further enhance accessibility, a key consideration for AELTC.

The solution is built from a foundation model developed using watsonx, IBM’s enterprise-grade AI platform designed to manage the entire lifecycle of AI models, from curating trusted data sources to governing responsible, trusted AI. Work began with watsonx.data, a data store that connects disparate data sources and allows developers to filter the data for things like profanity, hate speech or personally identifiable information. For AI Commentary, the team drew source material from nearly 130 million documents.
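As a rough illustration of that curation step, a document filter of this kind can be sketched in a few lines. Everything below is hypothetical: the patterns, blocklist and function names are stand-ins for the kinds of checks the article describes, not watsonx.data’s actual rules.

```python
import re

# Hypothetical screens -- illustrative patterns, not watsonx.data's real filters.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]
BLOCKLIST = {"badword1", "badword2"}              # placeholder profanity list

def is_clean(doc: str) -> bool:
    """Return True if the document passes the profanity and PII screens."""
    if any(p.search(doc) for p in PII_PATTERNS):
        return False
    words = set(re.findall(r"[a-z']+", doc.lower()))
    return not (words & BLOCKLIST)

def filter_corpus(docs):
    """Keep only documents that pass every screen."""
    return [d for d in docs if is_clean(d)]

corpus = [
    "Federer breaks serve in the third set.",
    "Contact me at fan@example.com for tickets.",  # contains PII -> dropped
]
print(filter_corpus(corpus))
```

At the scale the article mentions (nearly 130 million documents), a real pipeline would of course run such screens in a distributed data store rather than in-memory lists, but the per-document logic is the same idea.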

The data was then used to train a large language model chosen from watsonx.ai, a next-generation studio for building and training generative AI models for business use cases. The IBM team then fine-tuned the model, adding the specific domain expertise of Wimbledon, including the use of unique Wimbledon nomenclature, such as “gentlemen’s draw” rather than “men’s draw.” The final model boasts 3 billion parameters, and the team will continue to monitor its performance using governance tools, ensuring the model performs as expected.
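To make the nomenclature point concrete: the behavior that fine-tuning teaches the model could be approximated, for illustration only, as a simple substitution table applied to generated text. The mappings beyond the article’s one example are assumptions.

```python
import re

# Illustrative only: the article describes fine-tuning the model itself;
# this sketch just shows the nomenclature mapping as a post-processing rule.
WIMBLEDON_TERMS = {
    r"\bmen's draw\b": "gentlemen's draw",      # the article's example
    r"\bmen's singles\b": "gentlemen's singles",  # assumed additional mapping
    r"\bwomen's draw\b": "ladies' draw",          # assumed additional mapping
}

def apply_house_style(text: str) -> str:
    """Rewrite generic tennis terms into Wimbledon nomenclature."""
    for pattern, replacement in WIMBLEDON_TERMS.items():
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text

print(apply_house_style("A tough opener in the men's draw."))
```

Note the word boundaries: `\bmen's` does not match inside "women's", so each term maps independently.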

“The watsonx platform has allowed us to quickly leverage the power of generative AI without sacrificing trust or transparency,” says Aaron Baughman, a Distinguished Engineer and Master Inventor at IBM. “This is what we mean when we say ‘AI for Business.’ To use AI in a commercial setting, you need to have confidence that a model is scalable, reliable and trusted.”   

Co-creation with IBM iX


IBM iX, the experience design arm of IBM Consulting, works with the Club year-round to design, develop, maintain and secure the tournament’s website and mobile apps. The goal of this work is to continually enhance the digital experience with new, innovative features, while maintaining the tradition, beauty and design simplicity of Wimbledon itself.

Twice a year, the IBM iX and Wimbledon teams meet for workshops guided by the IBM Garage™ methodology, an enterprise design thinking collaboration that sets a roadmap for building and iterating the next generation of features that drive fan engagement. They use personas and journey maps to guide the design process, and agile development techniques to quickly iterate and build new features. In addition to the AI Commentary feature, the team is introducing several other enhancements, including:

AI draw analysis


As soon as any tournament draw is released, players and fans alike intuitively assess each player’s luck and path through the field: do they have a “good draw” or a “bad draw”? This year, IBM AI Draw Analysis helps them make more data-informed predictions by providing a statistical factor (a draw ranking) for each player in the singles draw.

The analysis leverages two previous innovations IBM built for the Club: the IBM Power Index, an AI-powered analysis of recent player performance and momentum (plus sentiment gleaned from natural language processing of media discussion by IBM Watson Discovery), and Likelihood to Win, a prediction of who will win a singles match.

The draw analysis, derived from structured and unstructured data, determines the level of advantage or disadvantage for each player and is updated throughout the day as the tournament progresses and players are eliminated. Every player has their draw ranked from 1 (most favorable) to 128 (most difficult). Fans can also click on individual matches to see a projected difficulty for that round.
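IBM hasn’t published the exact methodology, but a toy version of a path-difficulty score conveys the idea. Assume each player has a single strength number (something the IBM Power Index might supply); a player’s draw score is then the summed average strength of possible opponents in each round, and players are ranked from 1 (most favorable) upward.

```python
# Toy draw-difficulty ranking. The real IBM analysis blends structured and
# unstructured data; here "strength" is a single made-up number per player.

def potential_opponents(i, n):
    """Yield index ranges of possible opponents for bracket slot i, per round."""
    block = 1
    while block < n:
        start = (i // (2 * block)) * (2 * block)  # start of this round's block
        mid = start + block
        # The opposite half of the block holds this round's possible opponents.
        yield range(mid, start + 2 * block) if i < mid else range(start, mid)
        block *= 2

def draw_difficulty(draw, strength):
    """Rank players 1 (easiest path) to len(draw) (hardest path)."""
    n = len(draw)
    score = {
        player: sum(
            sum(strength[draw[j]] for j in opps) / len(opps)
            for opps in potential_opponents(i, n)
        )
        for i, player in enumerate(draw)
    }
    ranked = sorted(draw, key=lambda p: score[p])  # low score = easy path
    return {p: rank + 1 for rank, p in enumerate(ranked)}

draw = ["A", "B", "C", "D"]
strength = {"A": 90, "B": 60, "C": 80, "D": 40}
print(draw_difficulty(draw, strength))
```

In this 4-player example, D draws the strong C in round one and then the A/B winner, so D gets the most difficult ranking; re-running the function after each elimination mirrors how the live analysis updates through the day.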

Path to the final


Based on the AI Draw Analysis, users of the Wimbledon app can also see which opponent each player is most likely to face. It lists all potential matchups in the draw, ranked by the IBM Likely to Play statistic. For each match in progress, fans can also follow the live scores and stats provided by IBM SlamTracker.

Grounded in a proven consultative process that taps into enterprise-scale automation and AI toolsets, the partnership between IBM and Wimbledon continues to deliver an innovative fan experience to millions around the world.

Source: ibm.com

Saturday, 24 June 2023

How data, automation and AI are transforming Business Process Outsourcing into a competitive advantage

When IBM Consulting’s Neeraj Manik spoke recently with a large pharmaceutical client about how to streamline and improve its front-office and back-office financial processes, he pointed to a web of interconnected business challenges the organization was facing: “too many invoices, too many suppliers, too much money being paid to suppliers,” as Manik put it.

Manik, VP and senior partner for IBM Consulting, outlined a massive opportunity to strategically redesign the client’s finance operations and payment processing by leveraging AI, data analytics, metrics and automation. Ultimately, modernizing these processes could save hundreds of millions of dollars, improve the employee experience and make the company more agile and competitive, he says. Manik sees leveraging this technology as a fundamental change from years past, when a company might outsource business processes to save as little as 30% without considering how outsourcing might affect organizational efficiencies, job accuracy, and employee and client experience.

Labor arbitrage, or outsourcing labor to the lowest-cost workforce, has been the central strategy associated with business process outsourcing (BPO) for years. It often meant sourcing customer support, information technology and other office operations from countries with lower costs of labor. Today, though, technologies such as AI and automation have transformed the outsourcing market and BPO services, giving companies the ability to create efficiencies while also modernizing processes rather than relying on offshore outsourcing.

Technology-enabled business process operations, the new BPO, can create significant new value, improve data quality, free precious employee resources and deliver higher customer satisfaction, but it requires a holistic approach. Tapping into AI and automation helps businesses streamline and strengthen their operations, while providing rich information that helps enterprises quickly predict and respond to trends and threats alike.

Not only do companies that work with IBM Consulting get IBM’s experience in process design and business strategy; they also get the added bonus of IBM’s deep partnerships with companies like ServiceNow, Celonis and Salesforce. Ultimately, instead of being forced to focus on a single solution or technology, organizations can partner with IBM Consulting to invest in broad, transformational business initiatives and outcomes.

The new BPO is no longer just about cutting operational costs. When done right, it can make a business flexible, smarter and able to quickly scale to meet shifting market conditions. “Modern BPO is a creator of growth, differentiation and competitive advantage,” Manik says.

Spotting hidden opportunities


At a time of rising costs, talent constraints and economic uncertainty, technologically enabled BPO offers an opportunity for companies to build intelligent workflows and leaner processes across finance, human resources, procurement, supply chain and customer operations. According to organizational consulting firm Korn Ferry, more than 85 million jobs could go unfilled by 2030 because there aren’t enough skilled workers to take them. The new BPO enables companies to quickly access more expert, technical, functional and industry-specific talent than they can assemble in-house, driving new levels of efficiency across their business functions.

When working with clients, Manik looks for business opportunities that might be hidden under the surface: How can an organization’s BPO capabilities and methods enable a larger business transformation?

“It is our role as IBM Consulting to say, ‘how do we help you connect the dots?’” Manik says. “‘What we can see is sometimes just the tip of the iceberg. There’s so much underneath this that can be unlocked in terms of business value, that can improve how you go to market, how efficiently you run your supply chains, and how you can raise your margin profile.’”

For IBM Consulting, it’s not only about producing a list of recommendations for action, Manik says, but about following through and helping companies implement process automation and manage change, ensure adoption and get results.

The results can be apparent quickly. In the case of insurance giant Generali, for example, IBM Consulting rolled out two new AI assistants in France—one that helped upskill employees and another that interfaced directly with customers. Generali also became one of the first insurance companies to use AI to tackle the complex task of escheatment, or returning unclaimed assets and property. The new tools augmented the work of thousands of insurance agents, saving $1 million in the first year of deployment and increasing productivity by 5%. The program’s success in France led Generali to scale AI solutions internationally.

Seeing the bigger picture


As companies plot their investments in various transformation projects, Manik has one central piece of advice: “Make sure every decision you make about technology starts with and has a clear and direct link to business outcomes,” he says. “It sounds obvious, but it’s something that many C-suite leaders tend to forget as they get excited about new technology or a specific upgrade.” It’s his role to help leaders take a step back and look at the big picture: “Don’t focus solely on what to adopt next,” he says, “but ask yourself why you need it in your operating model.”

One car manufacturer, for example, opened up a conversation by asking about an upgrade to its data servers. Manik reframed the question. “Hang on — we recognize your need to modernize, but to what end?” he told them. “How will this technology decision deliver the business impact you need?”

That question sparked a conversation about the carmaker’s larger goals, including its push to produce more autonomous vehicles. “Once we really understood that they are trying to change how quickly they can produce cars and different types of vehicles, we realized they needed a different supply chain design,” Manik says. “We are now on a path with them around supply chain transformation.”

“Many times the conversation starts with technology, but migrates somewhere else,” Manik says. “Ultimately, it’s not about adopting new technology for technology’s sake, it’s about rethinking business processes and core competencies to uncover new business opportunities and areas to optimize — sometimes in ways that customers aren’t expecting.”

Source: ibm.com

Thursday, 22 June 2023

What is data center management?

To provide stakeholders with vital IT services, organizations need to keep their private data centers operational, secure and compliant. Data center management encompasses the tasks and management tools necessary for doing so. A person responsible for carrying out these tasks is known as a data center manager.

What is the role of a data center manager?


Either physically onsite or remotely, a data center manager performs general maintenance—such as software and hardware upgrades, general cleaning or deciding the physical arrangement of servers—and takes proactive or reactive measures against any threat or event that harms data center performance, security and compliance.

The typical responsibilities of a data center manager include the following:

  • Performing lifecycle tasks like installing and decommissioning equipment
  • Maintaining service level agreements (SLAs)
  • Ensuring licensing and contractual obligations are met
  • Identifying and resolving IT problems like connection issues between edge computing devices and the data center
  • Securing data center networks and ensuring backup systems and processes are in place for disaster recovery
  • Monitoring the data center environment’s energy efficiency (e.g., lighting and cooling)
  • Managing and allocating resources to maximize budgetary spending and performance
  • Determining optimal server arrangement and cabling organization
  • Planning emergency contingencies in case of natural disaster or other unplanned downtime
  • Making necessary updates and repairs to systems while minimizing downtime and impact to IT operations and business functions (also known as change management)

Certification programs exist for IT students and professionals who want to acquire or enhance the skills and knowledge necessary to succeed in data center management.

Common challenges of data center management


Navigating complexity

By nature, asset management within an enterprise data center is complex. A data center often comprises hardware and software from multiple vendors, including numerous applications and tools. A data center environment can also co-exist and interact with private cloud environments from multiple cloud service providers. Each hardware component, software instance and cloud-based environment can have its own contractual terms, warranty, user interface or licensing permissions. Every element of a data center also has unique processes and procedures to follow when implementing patches or upgrades. While a challenge in its own right, complexity is also a contributing factor (if not a direct cause) of many other challenges faced when managing a data center.

Meeting SLAs

Because of a data center’s complex multi-vendor environment, it can be difficult for data center managers to ensure all SLAs are being upheld. These SLAs can span:

  • Application availability
  • Data retention
  • Recovery speed
  • Network uptime and availability
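Availability SLAs like these translate directly into a downtime budget. A quick sketch of the arithmetic (the SLA tiers shown are generic examples, not figures from the article):

```python
# How much downtime an availability SLA permits over a 30-day month.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

def allowed_downtime_minutes(sla_percent: float) -> float:
    """Downtime budget, in minutes per month, for a given uptime percentage."""
    return MINUTES_PER_MONTH * (1 - sla_percent / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime -> {allowed_downtime_minutes(sla):.2f} min/month")
```

At 99.9% uptime, for instance, the budget is about 43 minutes of downtime a month, which is why a single prolonged outage can consume an entire SLA period.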

Tracking warranties

Data center managers can struggle in a complex environment to track which warranties have expired or what each warranty covers. Without visibility into warranty information, money may be needlessly spent on components that would otherwise have been covered.

Costs

For private data centers, IT staff, energy and cooling costs can consume much of the limited budget allocated to what’s typically deemed a non-value-added cost to the organization.

Monitoring

Data center managers may be forced to use insufficient or outdated equipment to monitor their complex data center operations. This can result in gaps in performance visibility and inefficient workload distribution. Capacity planning is also negatively impacted, since data center managers reliant on disparate or outdated equipment may not have the accurate metrics needed to assess how well a data center is meeting current demands.

Limited resources

Data center managers often work with limited staff, power and space due to budgetary constraints. In many cases, they also lack the proper tools needed to manage these limited resources effectively. Limited resources can hinder service management, resulting in the delivery of delayed or inadequate IT resources to business users and other stakeholders across an organization.

Meeting sustainability goals

Many organizations are working to reduce their carbon footprint, which means finding ways to reduce the energy consumption of their data centers and transition to green energy sources. Data center managers are tasked with implementing the hardware and procedures that reduce their environment’s carbon footprint while simultaneously dealing with existing data center complexity and limited resources.

How to overcome the challenges of data center management


DCIM software

Data center managers can use a data center infrastructure management (DCIM) solution to simplify their management tasks and achieve IT performance optimization. DCIM software provides a centralized platform where data center managers can monitor, measure, manage and control all elements of a data center in real-time—from on-premises IT components to data center facilities like heating, cooling and lighting.

With a DCIM solution, data center managers gain a single, streamlined view over their data center and can better understand what’s happening throughout the IT infrastructure.

A DCIM solution provides visibility into the following:

  • Power and cooling status
  • Which IT equipment and software components are ready for upgrading
  • Licensing/contractual terms and SLAs for all components
  • Device health and security status
  • Energy consumption and power management
  • Network bandwidth and server capacity
  • Use of floor space
  • Location of all physical data center assets

A DCIM solution can also help data center managers adopt virtualization to combine and better manage their data center’s IT resources. More advanced DCIM solutions can even automate tasks and eliminate manual steps, freeing up the data center manager’s time and reducing costs.
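The “single, streamlined view” can be pictured as a roll-up over per-device records. The sketch below is illustrative only; the field names and thresholds are invented, not taken from any DCIM product.

```python
from datetime import date

# Invented device records -- a real DCIM tool would poll these live.
devices = [
    {"name": "rack1-srv01", "temp_c": 41, "warranty_end": date(2022, 5, 1)},
    {"name": "rack1-srv02", "temp_c": 35, "warranty_end": date(2025, 1, 1)},
]

def dcim_report(devices, today, temp_limit_c=40):
    """Roll per-device records up into one report, flagging problems."""
    report = []
    for d in devices:
        alerts = []
        if d["temp_c"] > temp_limit_c:
            alerts.append("over temperature")
        if d["warranty_end"] < today:
            alerts.append("warranty expired")
        report.append({"name": d["name"], "alerts": alerts})
    return report

for row in dcim_report(devices, today=date(2023, 6, 22)):
    print(row["name"], "->", row["alerts"] or "OK")
```

The same roll-up pattern extends naturally to the other items in the list above (licensing terms, power draw, floor space): each is just another field on the device record and another rule in the report.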

Colocation data centers

A colocation data center is a third-party service that provides an organization with physical space and facility management for their private servers and associated IT assets. While the organization is still responsible for staffing and for managing their data center components, a colocation service offloads the burden and costs associated with building, running and maintaining a physical space.

Hardware, hybrid cloud and AI solutions for sustainability

There are hardware, hybrid cloud and AI solutions available to help data center managers reach their organization’s sustainability goals while maximizing data center performance. For example, the right server can greatly reduce energy consumption and free up physical space—in some cases, up to 75% and 67%, respectively.

Data center management and IBM


With electricity, you want enough capacity to get the job done, but not so much that it goes to waste when idle. Use hybrid cloud and AI to streamline operations, save energy and increase performance, making sustainability a true business driver while delivering a return on your investment.

Reduce your footprint: IBM LinuxONE Rockhopper 4 servers can reduce energy consumption by 75% and space by 67% (when compared to the same workloads on x86 servers under similar conditions and location). With energy-efficient data centers, consolidated workloads and improved infrastructure, you can save money and lessen your footprint.

Automate your energy use: With IBM Turbonomic, you can automate resource management to improve energy efficiency, measuring, analyzing and intelligently managing resources to ensure applications always consume exactly what they need.

Simplify data management: Get market-leading performance and efficiency from the unified IBM Storage FlashSystem platform family, allowing you to streamline administration and operational complexity across on-premises, hybrid cloud and containerized environments.

Source: ibm.com

Tuesday, 20 June 2023

Reshoring: The risks of swinging the pendulum too far

From the decades before the turn of the century until the global pandemic, great economic growth spread across the world, driving historic demand in commodities and consumer goods. But this economic growth, coupled with stringent labor laws, drove up labor costs.

By sourcing materials and labor from countries with lower labor and manufacturing costs, businesses were able to capitalize on the economic boom, produce more goods and services, and minimize their costs. Today, businesses continue to look for ways to reduce costs and increase efficiency, and offshore suppliers still minimize costs in most instances for North American and European companies.

However, now we see how brittle these global supply chains are. A perfect storm of disease, war, technological innovation, overspecialization, unchecked climate change and geopolitical tensions has shattered global supply chains and had a significant impact on the global economy.

The total cost of the current supply chain issues caused by the pandemic, labor shortages and the war in Ukraine is difficult to estimate, as the situation is constantly evolving. However, a 2022 report by the World Bank estimated that the global economy could lose up to USD 1.2 trillion in 2023 because of these disruptions. The report also found that disruptions to the global supply chain are likely to have a significant impact on developing countries, as these countries are more reliant on imported goods and services. The report estimates that developing countries could lose up to USD 426 billion in 2023 because of these disruptions.

Deglobalization can build a more resilient supply chain


Deglobalization is an idea gaining traction among organizations worldwide as they cope with disruption. A deglobalized supply chain relies on manufacturing, labor and industries that are either local to the business or in a neighboring state or country.

With a local supply chain, organizations have better control and shorter lead times. Companies can manufacture products closer to the consumer, reducing the risk of disruption caused by natural disasters and geopolitical instability. Deglobalization also offers better transparency into where and how goods are being made and expedites the transportation of goods to customers.

Investment into local infrastructure strengthens national economies, and when everything is done within the same legal jurisdiction, it reduces the risk of legal disputes and improves regulatory compliance.

Organizations like Apple, Nike and Tesla have been working to deglobalize their supply chains to gain more control and transparency and to reduce reliance on distant suppliers. Governments are passing legislation to incentivize local production as well.

In 2022, the United States Congress passed the CHIPS Act, which provides roughly USD 280 billion in new funding to boost domestic research and manufacturing of semiconductors in the United States. The European Union and China are investing trillions in their economies to rebuild local industries and create a less risk-prone supply chain.

A new model embraces local and global suppliers


Many industries have almost disappeared from North America and Europe, due to the inability to compete with the low cost of offshore suppliers. As companies look to source their product locally, they are finding that many products are not available or cannot be made without significant capital investments. In general, deglobalization will lead to higher costs for businesses in these geographies because it requires them or their suppliers to invest in processing and manufacturing facilities and pay higher wages to local workers. That cost passes to the consumer and will be reflected in a higher price of goods, so it is likely that only products with low price elasticity will be able to sustain local supply chains. And the quality of goods might suffer as local businesses learn what distant counterparts learned through trial and error long ago.

It’s likely the model that will win out will be a supply chain that contains built-in redundancies, using both local and global suppliers in concert with one another. In this model, if there’s danger of global goods being delayed or unavailable, businesses can reach out to their local suppliers for product. A hybrid supply chain provides flexibility and agility, allowing businesses to quickly adapt to changing market conditions and customer demands. By striking the balance between local and global suppliers, companies can achieve a renewed resilience, effective cost optimization and enhanced customer satisfaction, which ensures the stability and sustainability of their supply chain in the long run.

Source: ibm.com

Saturday, 17 June 2023

The real secret to a successful digital transformation? Human empathy


According to Debbie Vavangas, IBM Consulting VP, one of the main reasons digital transformation efforts fail is that organizations don’t fully account for the humans involved. They don’t fully consider the various people working throughout the organization, and how changes affect their daily lives. When it comes to things like automation, AI and intelligent workflows, it may seem as if taking people out of the process is the whole point, and in certain circumstances it’s tempting to remove the human element from a digital transformation.

But no matter how technical a transformation project might be, Vavangas says, “Digital transformation works at the people level. It’s how you design experiences that are adopted; it’s how people learn to love things. ​​If you’re not thinking about your people, your innovation is doomed.” 

It’s not that organizations don’t recognize that people matter; rather, they often get caught up in the more tangible elements of a digital transformation—app modernization, AI, automation or operational efficiency. If investments in digital technology are not grounded in stakeholder needs and preferences, they will not drive organizational value. And if new ways of working prove difficult for employees to embrace, a company’s transformation ambitions will unravel.

​​Vavangas is no stranger to digital transformation as the global lead for IBM Garage, a unique end-to-end model for accelerating digital business transformation that puts innovation at the heart of enterprise strategy. She’s one of IBM’s thought leaders on innovative ways of working and change management. In her observation, the importance of how people experience transformation is “almost always woefully underestimated.” 

​Human-centered transformation can be achieved through a combination of user research, breaking down organizational barriers, and ensuring that your organization’s culture is eager to adapt to change. 

​​“Transformation is pointless when we do it without purpose,” Vavangas says. If you transform an organization into something that doesn’t serve those responsible for its success, you will only waste time and money. 

Human-centered digital transformations begin with understanding what’s inside the hearts and minds of the people your organization depends on, then using those insights to inform how you embark on new initiatives and include everyone in the journey. To plan for the real-world human factors that can make or break a digital transformation, consider these three underused best practices for analyzing human experience, overcoming challenges and driving successful digital transformation: 

​​​​​1. User research: “I believe in my bones in the power of user research to make sure that you get to the crucial secret sauce, which is adoption,” Vavangas says. ​​Conducting comprehensive user research—from specific qualitative interviews to extensive data analysis—is key to determining the right success factors for a digital transformation, as well as to ensure employees are prepared to deliver. By incorporating metrics and user feedback early and often, companies can manage risk and ask, “Is this working?” and “What can we do better?” If you don’t have the data you need for user research, synthetic data can help. 

2. Breaking down human barriers: Vavangas is adamant about clearly defining the human pain points that can derail your digital transformation and calculating the cost of those roadblocks down to the dollar. Reluctant leadership, the culture shock of organizational change, bringing siloed teams together, rigid rules and new technology systems—all of these can be obstacles unless managed effectively. Think about the actual costs of your sticking points so you can push for workarounds. “When you know how much an impediment is costing you each day,” Vavangas says, “it creates a very different lens to problem-solving.”

​​​​​3. Cultural transformation: “If you don’t change the culture, transformation doesn’t get adopted,” Vavangas says. Yet lasting cultural change is one of the most difficult things for an organization to achieve. It requires buy-in across your organization, and that won’t happen unless leadership teams understand employees’ experiences and respond to their needs. “Measure how people are feeling as you’re rolling out programs,” Vavangas says. “What does it feel like to work in this different way? Are they feeling supported? Do they feel like they’re growing? Have we made things easier?” 

If your organization is looking for a way to accelerate digital transformation while keeping the human in mind, learn more about ​​IBM Garage and how it helps enterprises boost innovation and achieve lasting cultural transformation. 

Source: ibm.com

Thursday, 15 June 2023

Is there a “right” cloud strategy for banking?

As public cloud technology and hybrid multicloud architectures are being adopted by financial institutions at an increasing rate, we’re observing that their counterparts in the public sector—central banks—are a long way behind, due at least in part to a profoundly risk-averse approach.

While central banks have a very different mission from commercial banks, what they do have in common is the need to modernize their IT operations to support digital transformation, contain costs, source key skills and mitigate operational and cybersecurity risks.

New report: Central Banking and Cloud Services: The New Frontier


To get a better understanding of how central banks compare to private sector organizations in how they think about public cloud and approach their cloud migration strategy, we partnered with the team at OMFIF to produce a first-of-its-kind report that looks at the opportunities and challenges of public cloud services seen through the lens of central banking officials.

Central Banking and Cloud Services: The New Frontier is based on a series of interviews with executives, building an informed picture of what cloud technology can offer central banks and the challenges they face in adopting it. In exploring the benefits that cloud migration gives central banks, we also examined the ways in which financial institutions in the private sector have addressed the hurdles that are hampering central bank cloud adoption. Some of these challenges are technical, some are legal, some are cultural, and different solutions are necessary to address each.

What does it mean to “do cloud right?”


In IBM Cloud, we are working to drive new levels of ecosystem collaboration and knowledge-sharing that support cloud adoption by de-risking the journey and reducing the time-to-value. We see public cloud as an enabler of a better future for financial services, not as a destination.

The following five takeaways are insights that we have gleaned from our extensive customer engagements, global regulatory outreach program and the industry contributions of the IBM Financial Services Cloud Council:

1. Allow your digital transformation ambitions (not your fears) to drive your cloud strategy.

2. Know yourself, your stakeholders and your data.

3. Be solution-oriented and nimble in combining technical, operational and legal capabilities to deliver outcomes that matter to your organization.

4. Understand the immediate power and sustained value of great architecture with embedded security controls and continuous monitoring.

5. Maximize the optionality that hybrid multicloud brings to mitigate regulatory and technological uncertainty in your environment.

IBM believes that dependability, reliability and trust need to be at the heart of every cloud strategy. Done right, our take is that the cloud can deliver unparalleled benefits in performance and total cost of ownership without compromising resiliency, security and compliance. And that conviction holds true across both public-sector institutions and private-sector enterprises.

The good news is that for every cloud transformation challenge, there is a solution—this includes privacy-enhancing technologies like confidential computing and Keep-Your-Own-Key (KYOK) cryptography, architectural patterns that enable intentional and optimized choices about workload placement, pre-defined security controls with continuous posture management, and specific deployment measures that address data localization requirements.

We’re confident that by learning from the successes and challenges experienced by the private sector, the central banking community will unlock the benefits of public cloud faster than ever before by addressing inhibitors that may currently be holding them back and better informing their perspective on how best to manage the evolving risk dynamics in the financial services sector.

The future for financial services is bright. And we’re very proud to be at the heart of it.

Source: ibm.com

Tuesday, 13 June 2023

5G network rollout using DevOps: Myth or reality?


The deployment of Telecommunication Network Functions had always been a largely manual process until the advent of 5th Generation Technology (5G). 5G requires that network functions be moved from a monolithic architecture toward modularized and containerized patterns. This opened up the possibility of introducing DevOps-based deployment principles (which are well-established and adopted in the IT world) to the network domain.

Even after the containerization of 5G network functions, they are still quite different from traditional IT applications because of strict requirements on the underlying infrastructure. This includes specialized accelerators (SRIOV/DPDK) and network plugins (Multus) to provide the required performance to handle mission-critical, real-time traffic. This requires a careful, segregated network deployment process into various “functional layers” of DevOps functionality that, when executed in the correct order, provides a complete automated deployment that aligns closely with the IT DevOps capabilities.

This post provides a view of how these layers should be managed and implemented across different teams.

The need for DevOps-based 5G network rollout


5G rollout comes with the following requirements, which make it mandatory to aggressively automate the deployment and management process (as opposed to the traditional manual processes of earlier technologies such as 4G):

◉ Pace of rollout: 5G networks are deployed at record speeds to achieve coverage and market share.

◉ Public cloud support: Many CSPs use hyperscalers like AWS to host their 5G network functions, which requires automated deployment and lifecycle management.

◉ Hybrid cloud support: Some network functions must be hosted in a private data center, which also requires the ability to place network functions dynamically and automatically.

◉ Multicloud support: In some cases, multiple hyperscalers are necessary to distribute the network.

◉ Evolving standards: New and evolving standards like Open RAN require continuous updates and automated testing.

◉ Growing vendor ecosystems: Open standards and APIs mean many new vendors are developing network functions that require continuous interoperability testing support.

All the above factors require a highly automated process that can deploy, re-deploy, place, terminate and test 5G network functions on demand. This cannot be achieved with the traditional way of manually deploying and managing network functions.

Four layers to design with DevOps principles


There are four “layers” that must be designed with DevOps processes in mind:


1. Infrastructure: This layer is responsible for the deployment of cloud (private/public) infrastructure to host network functions. It automates the deployment of virtual private clouds, clusters, node groups, security policies and other resources required by the network function. It also ensures the correct infrastructure type is selected, with the CNIs required by the network function (e.g., SRIOV and Multus).

2. Application/network function: This layer is responsible for installing network functions on the infrastructure by running helm-type commands and post-install validation scripts. It also takes care of the major upgrades on the network function.

3. Configuration: This layer takes care of any new Day 2 metadata/configuration that must be loaded onto the network function. For example, new metadata may be loaded to support slice templates in the Policy Control Function (PCF).

4. Testing: This layer is responsible for running automated tests against the various functionalities supported by network functions.

Each of the above layers has its own implementation of DevOps toolchains. Layers 1 and 2 can be further enhanced with a GitOps-based architecture for lights-out management of the application.
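As a rough illustration, the four layers can be modeled as ordered stages of a single deployment pipeline. The following is a minimal Python sketch; the function names and bodies are illustrative placeholders, not a real toolchain:

```python
from typing import Callable, List

def deploy_infrastructure() -> None:
    # Layer 1: provision cloud infrastructure (VPC, clusters, node groups,
    # security policies) with the CNIs the network function needs
    pass

def install_network_function() -> None:
    # Layer 2: helm-style install plus post-install validation scripts
    pass

def apply_configuration() -> None:
    # Layer 3: load Day 2 metadata/configuration (e.g., slice templates)
    pass

def run_tests() -> None:
    # Layer 4: automated functional and regression tests
    pass

# A complete automated deployment requires executing the layers in order.
PIPELINE: List[Callable[[], None]] = [
    deploy_infrastructure,
    install_network_function,
    apply_configuration,
    run_tests,
]

def run_pipeline() -> List[str]:
    completed = []
    for stage in PIPELINE:
        stage()  # a real orchestrator would gate each stage on success
        completed.append(stage.__name__)
    return completed

print(run_pipeline())
```

In practice each stage would be a separate CI/CD pipeline (or GitOps reconciliation loop) rather than a function call, but the ordering constraint is the same.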

Best practices


It is very important to have a well-defined framework covering the scope, dependencies and ownership of each layer. The following is our view on how each pipeline should be managed:

As shown below, there are dependencies between these pipelines. To make this end-to-end process work efficiently across multiple layers, you need an intent-based orchestration solution that can manage dependencies between the various pipelines and perform supported activities in the surrounding CSP ecosystem, such as Slice Inventory and Catalog.

Infrastructure pipeline

◉ Scope (functionality to automate): VPC, subnets, EKS cluster, security groups, routes
◉ Phase (applicable network function lifecycle phase): Day 1 (infrastructure setup)
◉ Owner (development and maintenance of pipeline): IBM/cloud vendor
◉ Source control (where source artifacts are stored; any change triggers the pipeline, depending on the use case): Vendor detailed design
◉ Target integration (how the pipeline interacts during execution): IaC (e.g., Terraform), AWS APIs
◉ Dependency between pipelines: None
◉ Promotion across environments: Dev, Test/Pre-prod, Prod

Application pipeline

◉ Scope: CNF installation, CNF upgrades
◉ Phase: Day 0 (CNF installation), Day 1 (CNF setup)
◉ Owner: IBM/SI
◉ Source control: ECR repo (images), Helm package
◉ Target integration: Helm-based
◉ Dependency between pipelines: Infrastructure pipeline completed
◉ Promotion across environments: Dev, Test/Pre-prod, Prod

Configuration pipeline

◉ Scope: CSP slice templates, CSP RFS templates, releases and bug fixes
◉ Phase: Day 2+, on demand
◉ Owner: IBM/SI
◉ Source control: Code commit (custom code)
◉ Target integration: RestConf/APIs
◉ Dependency between pipelines: Base CNF installed
◉ Promotion across environments: Dev, Test/Pre-prod, Prod

Testing pipeline

◉ Scope: Release testing, regression testing
◉ Phase: Day 2+, on demand
◉ Owner: IBM/SI
◉ Source control: Code commit (test data)
◉ Target integration: RestConf/APIs
◉ Dependency between pipelines: Base CNF installed, release deployed
◉ Promotion across environments: Dev, Test/Pre-prod, Prod
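Because the dependencies between pipelines form a directed acyclic graph, a valid execution order can be derived mechanically. A minimal Python sketch, using the dependency relationships described above (the pipeline names and dictionary structure are illustrative):

```python
# Derive a valid pipeline execution order from declared dependencies.
from graphlib import TopologicalSorter

# Each pipeline maps to the set of pipelines it depends on.
DEPENDENCIES = {
    "infrastructure": set(),
    "application": {"infrastructure"},            # infra must be completed
    "configuration": {"application"},             # base CNF must be installed
    "testing": {"application", "configuration"},  # CNF installed, release deployed
}

order = list(TopologicalSorter(DEPENDENCIES).static_order())
print(order)  # → ['infrastructure', 'application', 'configuration', 'testing']
```

An intent-based orchestrator does essentially this, plus gating each pipeline on the success of its predecessors and triggering surrounding CSP-ecosystem activities.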

Telecommunications solutions from IBM


This post provides a framework and approach that, when orchestrated correctly, enables a completely automated DevOps-/GitOps-style deployment of 5G network functions.

In our experience, the deciding factor in the success of such a framework is the selection of a partner with experience and a proven orchestration solution.


Source: ibm.com

Monday, 12 June 2023

How Krista Software helped Zimperium speed development and reduce costs with IBM Watson


Successful businesses are embracing the power of AI to help streamline operations, generate insights, boost productivity and drive more value for clients. However, for many enterprises, the barrier to entry for integrating trustworthy, scalable and transparent AI remains high. In fact, 80% of enterprise AI projects never make it out of the lab.

So how do businesses that want to incorporate AI move forward when there is such a high level of difficulty? Many have turned to IBM’s portfolio of AI offerings, which provides pre-trained AI models that can be integrated into existing applications to improve process efficiency, enabling organizations to direct their resources to more valuable tasks.

Krista Software is an example of how IBM enables business partners to integrate IBM's embeddable AI software portfolio into their offerings, giving clients a cost-effective, lower-risk way to benefit from AI technology without needing to build the infrastructure from the ground up.

The challenge of staying one step ahead in mobile security 


Dallas-based Zimperium provides a mobile-first security platform purpose-built for enterprise environments. With machine learning-based protection through a single platform, Zimperium offers customers mobile threat defense and in-app protection. To provide continuous and persistent security for customers, Zimperium relies upon timely software releases to remain one step ahead of emerging threats on corporate and user-owned mobile devices. However, until recently, their software deployment process was time-consuming, requiring a lot of human interaction.

For example, Zimperium maintains hundreds of software environments on any given day, and their engineers must run multiple releases (patches, updates and hotfixes) through an entire deployment cycle for each of those environments. Every release undergoes a rigorous approval cycle that involves high-touch coordination between customer success and pre-sales teams. Once the software is approved, engineers deploy and apply each release to all environments. With thousands of deployments each year, many taking up to 3 weeks, Zimperium turned to Krista Software to help streamline its process.

Krista Software helps Zimperium automate operations with IBM Watson 


Vamsi Kurukuri, VP of Site Reliability at Zimperium, developed a strategy to remove roadblocks and pain points in Zimperium's deployment process. He then selected Krista's AI-powered intelligent automation platform to optimize Zimperium's project management suite, messaging solutions, and development and operations (DevOps). Krista's platform uses machine learning and IBM Watson NLP, allowing Zimperium engineers to "Ask Krista" for a business outcome. Krista then owns the outcome for each deployment cycle, streamlining tasks such as creating IT tickets, notifying eligible team members and sending reminders when approvals are needed.

The Krista platform follows each ticket throughout the development cycle, ensures every step is adhered to, and that the right software is ready to be deployed to the right servers at the right time. Once all parties approve the release, Krista then deploys it. With Krista, Zimperium automated its software deployment process, reducing a 4+ hour manual process to mere minutes, across hundreds of environments. This improvement led to over USD 200,000 in savings and empowered Zimperium’s engineers and developers to focus on what they do best: developing secure software for their clients.

Powering change: IBM’s embeddable AI software portfolio 


With the support of Krista Software, Zimperium automated its entire scheduling and deployment process in less than two months, helping them release updates faster, address human error and regulatory requirements, improve efficiency and reduce risk with no data science and coding requirements. Zimperium saw significant cost savings and increased efficiency as it helped protect its clients against both known and unknown cybersecurity threats.

Building on the success delivered in less than 60 days, Krista and Zimperium are entering the next phase of the relationship, in which Krista will use IBM Watson to help optimize Zimperium’s order-to-cash process and automate its international customer support. Krista also plans to continue to deepen its work with IBM, including exploring the upcoming IBM watsonx AI and data platform, to help clients like Zimperium unlock AI’s true potential.

Source: ibm.com

Thursday, 8 June 2023

How Red Hat OpenShift on AWS (ROSA) accelerates enterprise modernization initiatives on cloud, delivering business application innovation


When it comes to driving large technology transformations on cloud, leveraging existing investments, and optimizing open innovation within the larger ecosystem with a hybrid cloud platform, IBM Consulting™ offers several learnings to help organizations address the architecture and technology challenges.

Consider a large financial services organization going through core banking modernization. The core banking application landscape involves multiple applications (both legacy and commercial off-the-shelf) that are integrated and surfaced across multiple customer experiences, including mobile. The goal of modernizing such a large application landscape is to deliver a nimble microservices architecture while also providing a robust application delivery mechanism that leverages continuous integration/continuous delivery (CI/CD) processes. 

As referenced in the architecture challenge section of the Mastering hybrid cloud IBV report, a single integrated hybrid cloud platform and application architecture is the chassis on which the applications can be mounted and managed, leading to dramatic improvement in software application development and production.  

One proven approach is adopting Red Hat OpenShift on AWS (ROSA) as a turnkey application development platform managed by both Red Hat and AWS. This platform manages underlying application resources and other functions (such as scalability) so that enterprises can focus on cloud-native application development or application modernization—instead of managing the complexity of the underlying application platform. A Telco customer reported that “developers can now concentrate more on their application logic and their business logic, and they can just develop applications. Our prime focus really is to develop software quickly.” 

The advantages of leveraging ROSA reach across industries. In the Travel and Transportation industry, unprecedented demand for post-pandemic leisure travel resulted in huge growth in the number of airline passengers, hotel reservations and related services such as car rentals. Similar use cases in other industries include integrated member experience in Healthcare, smart asset performance and security in Energy & Utilities, connected vehicle services in Automotive, operations and process optimization as part of Industry 4.0 in Manufacturing, and customer relationship management and customer service automation in Financial Services. A financial services customer reported "uptime and performance increased 25–30% with Red Hat OpenShift Dedicated versus a self-managed and self-supported Kubernetes application platform." 

For these customers, digital transformation tends to drive cultural change across application lifecycle management, skills and operating models, change management and centers of excellence, which in turn drives cloud-ready architecture patterns and education. A sustainable transformation aligns with a consistent application platform that drives productivity benefits within an elastic, agile and resilient application environment. These tenets are the foundation for ROSA's target operating model, which IBM Consulting optimizes across customer engagements.  

A Total Economic Impact™ report found that this approach led to a 50% improvement in operational efficiency, a 20% increase in recaptured developer time and a 70% reduction in the development cycle. An IDC report also found that with Red Hat OpenShift cloud services, it is possible to develop features 30-40% more quickly and with a 25% reduction in costs, compared with a public cloud provider container offering. 

Industry use cases and core application modernization engagements demonstrate the benefits of adopting the ROSA application platform, including the ability to: 

◉ Optimize operational costs given the underlying application platform management is fully managed by Red Hat SREs and AWS 

◉ Increase revenue-generating capabilities with new business application features that can be delivered in an accelerated release cycle, improving time to value 

◉ Lower risks of application delivery with compliance, upgrades, security and availability needs, given the platform stability and fewer vulnerabilities of the application platform 

◉ Innovate continuously with faster dev cycles and lower validation costs, with the consistency of the application platform 

◉ Manage talent focused on higher value activities of application development, skills portability, and employee retention (driving cloud native innovation, reduced development environment standup cycle, platform optimization and standardized toolsets) 

Source: ibm.com

Tuesday, 6 June 2023

7 steps for managing the work order process


Work orders are the driving force behind any organization’s asset management apparatus. Whenever a person or entity submits a service request, the maintenance team that receives it must create a formal paper and/or digital document that includes all the details of maintenance tasks and outlines a process for completing the tasks. That document is called a work order.


The primary purpose of a work order is to keep everyone within the maintenance operation abreast of the workflow, which ultimately helps the organization organize, communicate and track maintenance work more efficiently.

Managing the work order process


The work order management process describes how a work order will move through the maintenance process, starting with maintenance task identification and wrapping up with post-completion analysis.

Phase 1: Task identification

In the first phase of the process, a person or organization identifies the tasks that the maintenance staff needs to complete. These details also help the recipient determine whether the work qualifies as planned maintenance (wherein the jobs are easily identifiable ahead of time) or unplanned maintenance (where the scope and specifics of the job require an initial assessment).  

Phase 2: Work request submission

Once the initiating party identifies the maintenance issues, they should lay out the details in a maintenance request form and submit it to maintenance for review and approval. Work requests can arise from any number of circumstances—from tenant requests to preventative maintenance audits.  

Phase 3: Work request evaluation

The maintenance department (or maintenance team) is responsible for evaluating work requests once they are submitted. Ideally, the department will review the details of the work request to determine the feasibility of the work and then determine personnel and resource needs. If approved, the work order request is converted to a work order.

Phase 4: Work order creation

Once the maintenance team or supervisor approves the work request and allocates the materials, equipment and staff they need to complete the jobs, they will create a new work order. The work order should include all the necessary details of the job, as well as the company contact information and an indication of the priority level and completion date. To streamline this process, organizations can standardize the work order format using a template.

In this stage, maintenance will also identify which type of work order they will need. If, for instance, a company relies on a proactive maintenance approach to anticipate and reduce equipment downtime, they will likely utilize a preventative maintenance work order. On the other hand, if a piece of equipment has already failed or the organization uses a more reactive maintenance program, the maintenance team will probably create a corrective maintenance work order or an emergency work order.

Phase 5: Work order distribution and completion

At this point, the team/supervisor will assign the necessary maintenance activities to a qualified maintenance technician who will complete the checklist of tasks on the proposed timeline. If the organization uses computerized maintenance management system (CMMS) software, the job will be automatically assigned to a technician.

Phase 6: Work order documentation and closure

Maintenance technicians are responsible for documenting and closing a work order once they complete all the assigned tasks. Technicians will need to indicate the time spent on each task, list any materials/equipment they used, provide images of their work and include notes and observations about the job. A manager may or may not need to sign off on the completed work order and provide guidance about next steps and follow-ups before moving on to the final phase.

Phase 7. Work order review/analysis

Reviewing closed work orders can provide valuable insights about maintenance operations, so organizations should do so as frequently as possible. Analyzing closed work orders helps organizations identify opportunities for improvement in the work order process, and post-completion analysis helps team members spot any tasks they missed or need to revisit.
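To make the flow concrete, the seven phases can be sketched as a simple state machine. This is a hypothetical Python sketch; the phase names and class are illustrative and not drawn from any specific CMMS:

```python
# The seven work order phases, in order.
PHASES = [
    "identified",  # 1. task identification
    "requested",   # 2. work request submitted
    "evaluated",   # 3. request reviewed and approved
    "created",     # 4. work order created
    "assigned",    # 5. distributed to a technician and completed
    "closed",      # 6. documented and closed
    "reviewed",    # 7. post-completion analysis
]

class WorkOrder:
    def __init__(self, description: str):
        self.description = description
        self.phase = PHASES[0]

    def advance(self) -> str:
        # Move to the next phase; a real system would enforce approvals,
        # assignments and sign-offs before each transition.
        i = PHASES.index(self.phase)
        if i < len(PHASES) - 1:
            self.phase = PHASES[i + 1]
        return self.phase

wo = WorkOrder("Replace HVAC filter")
while wo.phase != "reviewed":
    wo.advance()
```

A CMMS automates exactly these transitions, attaching approvals, assignments and documentation at the appropriate steps.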

Optimizing your work order management process


As an organization grows, it can become untenable to rely on paper work order management systems (or even Excel spreadsheets) to manage ever-evolving data needs. Larger organizations and those with more complex needs should consider investing in computerized maintenance management system (CMMS) software, a type of work order management software.

A high-quality CMMS will automatically plan, create, track and organize service requests, work orders and routine maintenance, eliminating excessive task planning duties for maintenance managers and supervisors.

Using CMMS software also allows your organization to store large amounts of data electronically, in a centralized location. With all your work order data living in one place, your management team can get real-time access to work orders as they move through the work order lifecycle. CMMS platforms with accompanying mobile apps take access a step further, allowing users to track work orders and access maintenance activities remotely. 

Furthermore, a good CMMS can aggregate and display work order data according to the department’s specific needs. Maintenance teams can build and view customizable reports, visualize trend data and metrics/KPIs, and monitor asset functionality to make troubleshooting and inventory management simpler.
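As a rough illustration of this kind of reporting, a few closed work orders can be aggregated into simple KPIs. The records and field names below are made up for the sketch:

```python
# Aggregate closed work orders into simple maintenance KPIs,
# as a CMMS report might.
from collections import Counter
from statistics import mean

closed_orders = [
    {"type": "preventive", "hours": 1.5},
    {"type": "corrective", "hours": 4.0},
    {"type": "corrective", "hours": 6.5},
    {"type": "preventive", "hours": 2.0},
]

# Count of closed work orders per type.
by_type = Counter(o["type"] for o in closed_orders)

# Mean labor hours for corrective (repair) work, a rough MTTR-style metric.
mean_repair_hours = mean(
    o["hours"] for o in closed_orders if o["type"] == "corrective"
)

print(by_type)
print(mean_repair_hours)  # → 5.25
```

A real CMMS computes such metrics continuously over the full work order history and surfaces them in customizable dashboards.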

While adopting a CMMS can be a complex process, integrating CMMS software into your maintenance operations can help your organization reduce costs, increase data access and visibility, reduce backlog and human error, and streamline your facilities management systems. 

IBM Maximo Application Suite


Get the most value from your enterprise assets with the IBM Maximo Application Suite, a comprehensive enterprise asset management system that helps organizations optimize asset performance, extend asset lifespan and reduce unplanned downtime. IBM Maximo provides users an integrated, AI-powered, cloud-based platform with comprehensive CMMS capabilities that produce advanced data analytics and help maintenance managers make smarter, more data-driven decisions.

Source: ibm.com

Saturday, 3 June 2023

Accelerating AI & Innovation: the future of banking depends on core modernization


In the rapidly evolving landscape of financial services, embracing AI and digital innovation at scale has become imperative for banks to stay competitive. With the power of AI and machine learning, financial institutions can leverage predictive analytics, anomaly detection and shared learning models to enhance system stability, detect fraud and drive superior customer-centric experiences. As we step into 2023, the focus has shifted to digital financial services, encompassing embedded finance, generative AI and the migration of super apps from China into a global phenomenon. And all this while balancing the adoption of a hybrid multicloud strategy. For banks to stay relevant and competitive in this new world, it is imperative for them to adjust to new trends, understand the importance of open finance and transform their core systems. Ultimately, banks must start with modernizing their core through technologies like hybrid multicloud and AI.

Generative AI: unleashing new opportunities 


Generative AI, exemplified by the explosion of advanced large language model solutions on the market and seen most recently in the launch of IBM watsonx, offers exciting possibilities in financial advisory and data analysis. While generative AI remains largely unexplored in deterministic financial environments, properly configured models can simplify complex financial concepts and make them easier for customers to understand. Financial institutions must carefully leverage generative AI to strike the right balance between innovation and ethical usage. This is why IBM puts all of its AI technologies through rigorous processes and protocols to offer trustworthy solutions.


In such a highly regulated industry as banking, it is that much more important for clients to have access to the toolset, technology, infrastructure and consulting expertise to build their own AI models, or fine-tune and adapt available ones, on their own data and deploy them at scale in a trustworthy and open environment to drive business success. Competitive differentiation and unique business value will increasingly derive from how adaptable an AI model is to an enterprise's unique data and domain knowledge. 

Embedded finance: redefining customer experiences 


Embedded finance has emerged as a rapidly growing trend, revolutionizing the way customers interact with financial products and services. Banks now have the opportunity to seamlessly integrate financial capabilities into various contexts, such as online commerce or car buying and emerging digital ecosystems, without disrupting customer workflows. By embedding financial services into everyday activities, banks can deliver hyper-personalized and convenient experiences, enhancing customer satisfaction and loyalty. 

The rise of super apps: transforming digital ecosystems 


Super apps, popular in China, have the potential to reshape the financial services landscape globally. By consolidating multiple applications and services under a single entity, super apps offer customers a comprehensive ecosystem that seamlessly integrates digital identity, instant payment, and data-driven capabilities. As embedded finance gains traction and open banking APIs become more prevalent, the vision of super apps is becoming a reality. Financial institutions need to adapt to this emerging trend and actively participate in the evolving digital ecosystems to deliver enhanced value and cater to evolving customer expectations. 

Open finance: accelerating the API-driven economy 


Open banking has been a topic of discussion for several years, with PSD2 regulations driving initial progress. Now open finance, an extension of PSD2, is set to open up even more services and foster an API-driven economy. With open finance, banks are compelled to open up additional APIs beyond payment accounts, enabling greater innovation and competition in the financial sector. This shift toward data-driven economies places embedded finance at the core of financial services. Forward-thinking banks are not only complying with regulatory requirements but also proactively leveraging open finance to distribute their services efficiently and reach customers wherever they are. 

The critical need for modernizing core systems and the role of hybrid cloud


In this new paradigm of AI-powered digital finance, modernizing core systems becomes imperative for banks to deliver seamless experiences, leverage emerging technologies, and remain competitive. Traditional legacy systems often lack the flexibility, scalability and agility required to support the integration of embedded finance, generative AI and open finance. By transforming core systems, banks can create a solid foundation that enables the seamless integration of new technologies, facilitates efficient API-driven ecosystems and enhances the overall customer experience. 

Hybrid multicloud plays a crucial role in facilitating the shift. It allows banks to leverage the scalability and flexibility of public cloud services while maintaining control over sensitive data through private cloud and on-premises infrastructure. By adopting a hybrid multicloud approach, banks can transform their core systems, leverage AI and machine learning capabilities, ensure data security and compliance and seamlessly integrate with third-party services and APIs. The hybrid cloud provides the agility and scalability necessary to support the rapid deployment of new digital services, while also offering the control and customization required by financial institutions. 

Modernization starts at the core


However, transforming core systems and transitioning to a hybrid cloud infrastructure is not a one-size-fits-all solution. Each bank has unique requirements, existing technology landscapes and strategic goals. It is crucial to align the technology roadmap of fintech solutions with the overall bank strategy, including the digital strategy. This alignment ensures a competitive advantage, sustainability and a seamless convergence between the two roadmaps. Collaboration between banks, fintech providers and IBM can facilitate this alignment and help banks navigate the complexities of digital transformation. 

The financial services industry is undergoing a profound transformation driven by AI, digital innovation and the shift toward digital financial services. Embedded finance, generative AI, the rise of super apps, and open finance are reshaping customer experiences and creating new opportunities for financial institutions. To fully leverage these transformative trends, banks must transform their core systems and adopt a hybrid multicloud infrastructure. This transformation not only enables seamless integration of new technologies but also enhances operational efficiency, agility and data security. As banks embark on this journey, strategic alignment between the technology roadmap and the overall bank strategy is paramount. 

Source: ibm.com

Thursday, 1 June 2023

Keep it simple: How to succeed at business transformation using behavioral economics

Business leaders often think it’s impossible to predict the outcome of a transformation effort—whether employees will embrace a new process, for example, or how customers will react to a new service. They’re missing out on a secret of change management, says IBM Global Managing Partner Jesus Mantas: “You really can predict, for the most part, why people do what they do.” The answers, he says, come from behavioral economics.

In his role overseeing Business Transformation Services for IBM Consulting, Mantas guides organizations toward success as they redesign their businesses. Mantas has spent years combing through findings from behavioral economics and incorporating them into his consulting work. The principles of human behavior can seem simple and even obvious, he says, but time and again, companies ignore them, then wonder what went wrong. Here are a few essential—but often overlooked—guidelines for any leader aiming to influence people’s decisions and drive change.

Realize it’s less about the data—and more about the presentation


“In a business environment, we tend to think everybody makes rational decisions,” Mantas says. But emotion plays a much larger role than leaders think. Case in point: Take the same facts and present them differently, and you get a different reaction from customers. A pair of headphones selling for 50% off $60 feels more compelling than the same item selling for $30. Ground beef that’s labeled “85% lean” seems more appealing than an identical product labeled as “15% fat.”
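The two price framings above are numerically identical, which is the point: only the presentation differs. A minimal sketch (the `discounted_price` helper is hypothetical, not from the article) makes the equivalence explicit:

```python
def discounted_price(list_price, discount_pct):
    """Final price after applying a percentage discount."""
    return list_price * (1 - discount_pct / 100)

# "50% off $60" and "a flat $30" describe the same price;
# only the framing differs.
print(discounted_price(60, 50))          # 30.0
print(discounted_price(60, 50) == 30.0)  # True
```

Behavioral economists call this a framing effect: the decision-relevant number is unchanged, yet the reaction it produces is not.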

As another example of the power of presentation, Mantas cites studies showing that a powerful way to encourage a behavior is to sign people up for something—like a 401(k) savings plan—and allow them to opt out. That brings much higher adoption rates than a program requiring people to opt in. According to research from fund manager Vanguard, people who are auto-enrolled in a 401(k) have a 93% participation rate, compared to a 66% rate when people have to opt in.

In both cases, people are given the same choice—to join a 401(k) or not—but the facts are presented differently, using an opposite choice architecture, as behavioral economists call it. Research about the power of auto-enrollment is so persuasive, in fact, that a new U.S. federal spending package requires employers to automatically sign up their employees for 401(k) plans to improve their retirement security.
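Using the Vanguard participation rates quoted above, a quick sketch (the `expected_participants` helper is hypothetical) shows how much the default alone changes outcomes for a 1,000-person workforce:

```python
# Participation rates reported by Vanguard for 401(k) plans.
AUTO_ENROLL_RATE = 0.93  # opt-out choice architecture
OPT_IN_RATE = 0.66       # opt-in choice architecture

def expected_participants(n_employees, auto_enroll):
    """Expected number of enrollees under each choice architecture."""
    rate = AUTO_ENROLL_RATE if auto_enroll else OPT_IN_RATE
    return round(n_employees * rate)

print(expected_participants(1000, auto_enroll=True))   # 930
print(expected_participants(1000, auto_enroll=False))  # 660
```

The choice offered is identical in both cases; flipping the default accounts for the 270-person gap.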

Mantas believes data and facts make up 20% of a decision, while presentation is the other 80%. Factors like color and design “have disproportionately more impact than baseline statements,” he says. Efforts around transformation should always keep that in mind, and businesses should spend much more time getting the presentation right.

Stop making things so hard for people


The 401(k) research bolsters another point Mantas mentions frequently: If you want to influence behavior and encourage adoption, create the simplest possible path. What could be easier than joining a 401(k) through auto-enrollment? In other words, make things easy for people.

“‘Why is nobody following our new process?’ OK, well, it has 42 steps.” – Jesus Mantas

As Mantas says, “People will do what’s easy more often than they will do what is correct, right or expected. It’s so simple, so obvious. Nobody has ever disagreed with me when I say that. And yet people barely ever apply it in practice. And then they ask, ‘Why is nobody following our new process?’ OK, well, it has 42 steps.”

When Mantas worked with a company looking to build a network of charging points for electric vehicles, the company’s team was focused on getting the technology to work well and rolling the stations out widely. “That’s great,” Mantas recalls asking them, “but why will someone adopt yours versus any other option that they have?” His own answer: “The charging experience has to be easier than any other one on the market. If you do that, you will have more adoption than anybody else.”

Build strong and sticky habits


Mantas once spoke with a CEO who wondered how employees could adopt his company’s new principles as it underwent a transformation. It wasn’t about principles, Mantas told him, but habits.

The objective is to change what people do every day, which is very different from what they believe in or aspire to. Habits are what we do, who we are and how we think. If using new technology or processes doesn’t become a habit, the effort will ultimately fail.

The first step to developing a habit is, of course, making it easy and starting small; that’s an idea shared by BJ Fogg, a behavior scientist at Stanford University, in his book Tiny Habits. Leaders can establish habit-building cues, or reminders to do something.

IBM Consulting has its own list of habits, one of which is to build client trust. In practice, that means creating processes around transparency, like supplying data and metrics that measure success. “Building client trust is not a principle,” Mantas says. “That’s something you need to do in every interaction. That’s like brushing your teeth.”

The old adage is true: Humans are creatures of habit, and building routines will make your transformation stick. Like Mantas’s other recommendations, it’s a commonsense truth that’s backed up by research. The big picture, he says: “When you study behavioral economics and science, you really find new avenues and tools to accelerate transformation—and unlock a significant amount of value.”

Source: ibm.com