Thursday 18 April 2024

Understanding glue records and Dedicated DNS

Domain Name System (DNS) resolution is an iterative process in which a recursive resolver looks up a domain name by following a hierarchical resolution chain. First, the recursive resolver queries the root (.), which provides the nameservers for the top-level domain (TLD), for example .com. Next, it queries the TLD nameservers, which provide the domain's authoritative nameservers. Finally, the recursive resolver queries those authoritative nameservers.
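
The chain can also be walked manually. The sketch below uses the open source dnspython library to follow it for example.com; the root and .com TLD server addresses are publicly documented, and the code is an illustration of the lookup order rather than a production resolver:

    import dns.message
    import dns.query

    query = dns.message.make_query("example.com.", "A")

    # Step 1: ask a root server. The referral in its reply points to the
    # .com TLD nameservers.
    root_reply = dns.query.udp(query, "198.41.0.4", timeout=5)   # a.root-servers.net

    # Step 2: ask a .com TLD nameserver. Its reply refers the resolver to the
    # domain's authoritative nameservers, with glue records in the additional
    # section when those nameservers sit inside the domain being resolved.
    tld_reply = dns.query.udp(query, "192.5.6.30", timeout=5)    # a.gtld-servers.net
    for rrset in tld_reply.authority:
        print(rrset)          # NS referral records
    for rrset in tld_reply.additional:
        print(rrset)          # glue (A/AAAA) records, when present

    # Step 3: query one of the authoritative nameservers (whose address comes
    # from the glue or from a separate lookup) for the final answer.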

In many cases, we see domains delegated to nameservers inside their own domain; for instance, “example.com.” is delegated to “ns01.example.com.” In these cases, glue records are needed in the parent zone, usually entered through the domain registrar, to continue the resolution chain.

What is a glue record? 

Glue records are address (A or AAAA) records for a domain’s nameservers, created at the domain’s registrar and published in the parent zone. They provide a complete answer when the parent nameserver returns a referral to the domain’s authoritative nameservers. For example, the domain name “example.com” has nameservers “ns01.example.com” and “ns02.example.com”. To resolve the domain name, a recursive resolver would query, in order: the root, the TLD nameservers and the authoritative nameservers. The TLD referral to “ns01.example.com” is only usable if it also includes that nameserver’s IP address, which is exactly what the glue record supplies.

When nameservers for a domain are within the domain itself, a circular reference is created. Having glue records in the parent zone avoids the circular reference and allows DNS resolution to occur.  

Glue records can be created at the TLD via the domain registrar or at the parent zone’s nameservers if a subdomain is being delegated away.  

When are glue records required?

Glue records are needed for any nameserver whose hostname is inside the zone it is authoritative for. If a third party, such as a managed DNS provider, hosts the DNS for a zone, no glue records are needed.

IBM NS1 Connect Dedicated DNS nameservers require glue records 

IBM NS1 Connect requires that customers use a separate domain for their Dedicated DNS nameservers. As such, the nameservers within this domain will require glue records. For a Dedicated DNS domain such as exampledns.net, the glue records are configured at the registrar (for example, Google Domains) by entering each nameserver’s hostname together with its IP address.

Once the glue records have been added at the registrar, the Dedicated DNS domain should be delegated to the IBM NS1 Connect Managed nameservers and the Dedicated DNS nameservers. For most customers, there will be a total of 8 NS records in the domain’s delegation. 

What do glue records look like in the dig tool? 

Glue records appear in the ADDITIONAL SECTION of the response. To see a domain’s glue records using the dig tool, directly query a TLD nameserver for the domain’s NS record; the A and AAAA records returned in the ADDITIONAL SECTION are the glue records.
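
For the hypothetical Dedicated DNS domain above, that query would look like dig @a.gtld-servers.net exampledns.net NS. The same check can be scripted; the following minimal sketch uses the dnspython library, with the TLD server address and the exampledns.net delegation standing in as placeholders for a real setup:

    import dns.message
    import dns.query

    # Query a gTLD nameserver (which serves both .com and .net) directly for
    # the Dedicated DNS domain's NS records.
    query = dns.message.make_query("exampledns.net.", "NS")
    reply = dns.query.udp(query, "192.5.6.30", timeout=5)   # a.gtld-servers.net

    # The NS referral appears in the authority section; the glue A/AAAA
    # records for the in-domain nameservers appear in the additional section.
    expected_glue = {"ns1.exampledns.net.", "ns2.exampledns.net."}   # illustrative names
    for rrset in reply.additional:
        print(rrset)
        if rrset.name.to_text() in expected_glue:
            print("  found expected glue for", rrset.name)

The names and addresses printed here should match the values entered at the registrar and the nameserver records configured in IBM NS1 Connect.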

How do I know my glue records are correct? 


To verify that glue records are correctly listed at the TLD nameservers, directly query the TLD nameservers for the domain’s NS records using the dig tool as shown above. Compare the ADDITIONAL SECTION contents of the response to the expected values entered as NS records in IBM NS1 Connect. 

Source: ibm.com

Saturday 13 April 2024

Merging top-down and bottom-up planning approaches

This blog series discusses the complex tasks energy utility companies face as they shift to holistic grid asset management to manage through the energy transition. The first post of this series addressed the challenges of the energy transition with holistic grid asset management. The second post in this series addressed the integrated asset management platform and data exchange that unite business disciplines in different domains in one network.

Breaking down traditional silos


Many utility asset management organizations work in silos. A holistic approach that combines the siloed processes and integrates various planning management systems provides optimization opportunities on three levels:

1. Asset portfolio (AIP) level: Optimum project execution schedule
2. Asset (APMO) level: Optimum maintenance and replacement timing
3. Spare part (MRO) level: Optimum spare parts holding level

The combined planning exercises produce budgets for capital expenditures (CapEx) and operating expenses (OpEx), and set minimum requirements for grid outages for the upcoming planning period.

Asset investments are typically part of a grid planning department, which considers expansions, load studies, new customers and long-term grid requirements. Asset investment planning (AIP) tools bring value in optimizing various, sometimes conflicting, value drivers, and they combine new asset investments with existing asset replacements. However, these tools take a top-down approach to risk management, using a risk matrix to assess risk at the start of an optimization cycle. This top-down process is effective for new assets, since no information about those assets is available yet. For existing assets, a more accurate bottom-up risk approach is available from the continuous health monitoring process, which calculates the health index and the effective age based on the asset’s specific degradation curves. Dynamic health monitoring provides up-to-date risk data and accurate replacement timing, as opposed to the static approach used for AIP. Combining the asset performance management and optimization (APMO) and AIP processes allows this enhanced estimation data to be used to optimize in real time.

Maintenance and project planning take place in operations departments. The APMO process generates an optimized work schedule for maintenance tasks over a project period and calculates the optimum replacement moment for an existing asset at the end of its lifetime. The maintenance management and project planning systems load these tasks for execution by field service departments.

On the maintenance, repair and overhaul (MRO) side, spare part optimization is linked to asset criticality. Failure mode and effect analysis (FMEA) defines maintenance strategies and associated spare holding strategies. The main optimization parameters are stock value, asset criticality and spare part ordering lead times.

Traditional planning processes focus on disparate planning cycles for new and existing assets in a top-down versus bottom-up asset planning approach. This approach leads to suboptimization. An integrated planning process breaks down the departmental silos with optimization engines at three levels. Optimized planning results in fewer outages and less system downtime, and it increases the efficient use of scarce resources and budget.

Source: ibm.com

Friday 12 April 2024

IBM researchers to publish FHE challenges on the FHERMA platform

To foster innovation in fully homomorphic encryption (FHE), IBM researchers have begun publishing challenges on the FHERMA platform for FHE challenges launched in late 2023 by Fair Math and the OpenFHE community.

FHE: A new frontier in technology


Fully homomorphic encryption is a groundbreaking technology with immense potential. One of its notable applications lies in enhancing medical AI models. By enabling various research institutes to collaborate seamlessly in the training process, FHE opens doors to a new era of possibilities. The ability to process encrypted data without decryption marks a pivotal advancement, promising to revolutionize diverse fields.

IBM has been working to advance the domain of FHE for 15 years, since IBM Research scientist Craig Gentry introduced the first plausible fully homomorphic encryption scheme in 2009. The “bootstrapping” mechanism he developed reduces the amount of “noise” in encrypted information, which made the widespread commercial use of FHE possible.

Progress in FHE


FHE has experienced significant progress since the introduction of its first scheme. The transition from theoretical frameworks to practical implementations has been marked by countless issues that needed to be addressed. While there are already applications that use FHE, the community is constantly improving and innovating the algorithms to make FHE more popular and applicable to new domains.

Fostering innovation through challenges


The FHERMA platform was built to incentivize innovation in the FHE domain. Various challenges can be seen on the FHERMA site. The challenges are motivated by problems encountered by real-world machine learning and blockchain applications.

Solutions to challenges must be written by using known cryptographic libraries such as OpenFHE. Developers can also use higher-level libraries such as IBM’s HElayers to speed up their development and easily write robust and generic code.

The best solutions to the various challenges will win cash prizes from Fair Math, alongside contributing to the FHE community. Winners will also be offered the opportunity to present their solutions in a special workshop currently being planned.

The goal of the challenges is to foster research, popularize FHE, and develop cryptographic primitives that are efficient, generic, and support different hyperparameters (for example, writing matrix multiplication that is efficient for matrices of dimensions 1000×1000 and 10×10). This aligns with IBM’s vision for privacy-preserving computation by using FHE.

Driving progress and adoption


Introducing and participating in challenges that are listed on the FHERMA site is an exciting and rewarding way to advance the extended adoption of FHE, while helping to move development and research in the domain forward. We hope you join us in this exciting endeavor on the FHERMA challenges platform.

Teams and individuals who successfully solve the challenges will receive cash prizes from Fair Math. More importantly, the innovative solutions to the published challenges will help move the FHE community forward—a longstanding goal for IBM.

Source: ibm.com

Thursday 11 April 2024

Why CHROs are the key to unlocking the potential of AI for the workforce

It’s no longer a question of whether AI will transform business and the workforce, but how it will happen. A study by the IBM Institute for Business Value revealed that up to three-quarters of CEOs believe that competitive advantage will depend on who has the most advanced generative AI.

With so many leaders now embracing the technology for business transformation, some wonder which C-Suite leader will be in the driver’s seat to orchestrate and accelerate that change.

CHROs today are perfectly positioned to take the lead on both people skills and AI skills, ushering the workforce into the future. Here’s how top CHROs are already seizing the opportunity. 

Orchestrating the new human + AI workforce 


Today, businesses are no longer only focused on finding the human talent they need to execute their business strategy. They’re thinking more broadly about how to build, buy, borrow or “bot” the skills needed for the present and future.  

The CHRO’s primary challenge is to orchestrate the new human plus AI workforce. Top CHROs are already at work on this challenge, using their comprehensive understanding of the workforce and how to design roles and skills within an operating model to best leverage the strengths of both humans and AI.  

In the past, that meant analyzing the roles that the business needs to execute its strategy, breaking those roles down into their component skills and tasks and creating the skilling and hiring strategy to fill gaps. Going forward, that means assessing job descriptions, identifying the tasks best suited to technology and the tasks best suited to people and redesigning the roles and the work itself.  

Training the AI as well as the people 


As top CHROs partner with their C-Suite peers to reinvent roles and change how tasks get done with AI and automation, they are also thinking about the technology roadmap for skills. With the skills roadmap established, they can play a key role in building AI-powered solutions that fit the business’ needs.  

HR leaders have the deep expertise in training best practices that can inform not only how people are trained for skills, but how the AI solutions themselves are trained.  

To train a generative AI assistant to learn project management, for example, you need a strong set of unstructured data about the work and tasks required. HR leaders know the right steps to take around sourcing and evaluating content for training, collaborating with the functional subject matter experts for that area.  

That’s only the beginning. Going forward, business leaders will also need to consider how to validate, test and certify these AI skills.  

Imagine an AI solution trained to support accountants with key accounting tasks. How will businesses test and certify those skills and maintain compliance, as rigorously as is done for a human accountant getting an accounting license? What about certifications like CPP or Six Sigma? HR leaders have the experience and knowledge of leading practices around training, certification and more that businesses will need to answer these questions and truly implement this new operating model.  

Creating a culture focused on growth mindset and learning 


Successfully implementing technology depends on having the right operating model and talent to power it. Employees need to understand how to use the technology and buy in to adopting it. It is fundamentally a leadership and change journey, not a technology journey.  

Every organization will need to increase the overall technical acumen of their workforce and make sure that they have a basic understanding of AI so they can be both critical thinkers and users of the technology. Here, CHROs will lean into their expertise and play a critical role moving forward—upskilling people, creating cultures of growth mindset and learning, and driving sustained organizational change.

For employees to get the most out of AI, they need to understand how to prompt it, evaluate its outputs and then refine and modify. For example, when you engage with a generative AI-powered assistant, you will get very different responses if you ask it to “describe it to an executive” versus “describe it to a fifth-grader.” Employees also need to be educated and empowered to ask the right questions about AI’s outputs and source data and analyze them for accuracy, bias and more.  

While we’re still in the early phases of the age of AI, leading CHROs have a pulse on the anticipated impact of these powerful technologies. Those who can seize the moment to build a workforce and skills strategy that makes the most of human talent plus responsibly trained AI will be poised to succeed.

Source: ibm.com

Tuesday 9 April 2024

Product lifecycle management for data-driven organizations

In a world where every company is now a technology company, all enterprises must become well-versed in managing their digital products to remain competitive. In other words, they need a robust digital product lifecycle management (PLM) strategy. PLM delivers value by standardizing product-related processes, from ideation to product development to go-to-market to enhancements and maintenance. This ensures a modern customer experience. The key foundation of a strong PLM strategy is healthy and orderly product data, but data management is where enterprises struggle the most. To take advantage of new technologies such as AI for product innovation, it is crucial that enterprises have well-organized and managed data assets.

Gartner has estimated that 80% of organizations fail to scale digital businesses because of outdated governance processes. Data is an asset, but to provide value, it must be organized, standardized and governed. Enterprises must invest in data governance upfront, as it is challenging, time-consuming and computationally expensive to remedy vast amounts of unorganized and disparate data assets. In addition to providing data security, governance programs must focus on organizing data, identifying non-compliance and preventing data leaks or losses.  

In product-centric organizations, a lack of governance can lead to exacerbated downstream effects in two key scenarios:  


1. Acquisitions and mergers

Consider this fictional example: A company that sells three-wheeled cars has created a robust data model where it is easy to get to any piece of data and the format is understood across the business. This company is so successful that it acquires another company that also makes three-wheeled cars. The new company’s data model is completely different from the original company’s. Companies commonly ignore this issue and allow the two models to operate separately. Eventually, the enterprise will have woven a web of misaligned data requiring manual remediation.

2. Siloed business units

Now, imagine a company where the order management team owns order data and the sales team owns sales data. In addition, there is a downstream team that owns product transactional data. When each business unit or product team manages its own data, product data can overlap with another unit’s data, causing several issues, such as duplication, manual remediation, inconsistent pricing, unnecessary data storage and an inability to use data insights. It becomes increasingly difficult to get information in a timely fashion, and inaccuracies are bound to occur. Siloed business units hamper the leadership’s ability to make data-driven decisions. In a well-run enterprise, each team would connect their data across systems to enable unified product management and data-informed business strategy.

How to thrive in today’s digital landscape


In order to thrive in today’s data-driven landscape, organizations must proactively implement PLM processes, embrace a unified data approach and fortify their data governance structures. These strategic initiatives not only mitigate risks but also serve as catalysts for unleashing the full potential of AI technologies. By prioritizing these solutions, organizations can equip themselves to harness data as the fuel for innovation and competitive advantage. In essence, PLM processes, a unified data approach and robust data governance emerge as the cornerstone of a forward-thinking strategy, empowering organizations to navigate the complexities of the AI-driven world with confidence and success.

Source: ibm.com

Friday 5 April 2024

Accelerate hybrid cloud transformation through the IBM Cloud for Financial Services Validation Program

The cloud represents a strategic tool to enable digital transformation for financial institutions


As banking and other regulated industries continue to shift toward a digital-first approach, financial entities are eager to use the benefits of digital disruption. Lots of innovation is happening, with new technologies emerging in areas such as data and AI, payments, cybersecurity and risk management, to name a few. Most of these new technologies are born in the cloud, and banks want to tap into them. This shift is a significant change in their business models, moving from a capital expenditure approach to an operational expenditure approach and allowing financial organizations to focus on their primary business. However, the transformation from traditional on-prem environments to a public cloud PaaS or SaaS model presents significant cybersecurity, risk and regulatory concerns that continue to impede progress.

Balancing innovation, compliance, risk and market dynamics is a challenge 


While many organizations recognize the vast pool of innovations that public cloud platforms offer, financially regulated clients remain accustomed to the level of control and visibility provided by on-prem environments. Despite the potential benefits, cybersecurity remains the primary concern with public cloud adoption. The average cost of a mega-breach is an astonishing figure of more than $400 million, with misconfigured cloud as a leading attack vector. This leaves many organizations hesitant to make the transition, fearing they will lose the control and security they have with their on-prem environments. The banking industry’s continued shift toward a digital-first approach is encouraging. However, financial organizations must carefully consider the risks that are associated with public cloud adoption and ensure that they have the proper security measures in place before making the transition.

The traditional approach for banks and ISV application onboarding involves a review process, which consists of several key items like the following: 

  • A third-party architecture review, where the ISV needs to have an architecture document describing how they are deploying into the cloud and how it is secure. 
  • A third-party risk management review, where the ISV needs to describe how it is complying with required controls. 
  • A third-party investment review, where the ISV provides a bill of materials showing what services are being used and how they meet compliance requirements, along with price points. 

The ISV is expected to be prepared for all these reviews, and the overall onboarding lifecycle through this process takes more than 24 months today.

Why an FS Cloud and FS Validation Program? 


IBM has created the solution for this problem with its Financial Services Cloud offering and its ISV Financial Services Validation program, which is designed to de-risk the partner ecosystem for clients. This helps accelerate continuous integration and continuous delivery on the cloud. The program ensures that the new innovations coming out of these ISVs are validated, tested and ready to be deployed in a secure and compliant manner. With IBM’s ISV Validation program, banks can confidently adopt new innovative offerings on cloud and stay ahead in the innovation race. 

Ensuring the success of a cloud transformation journey requires a combination of modern governance, a standard control framework and automation. Different industry frameworks are available to help secure workloads and establish a compliance posture. Continuous compliance that is aligned to an industry framework, informed by an industry coalition that is composed of representation from key banks worldwide and other compliance bodies, is essential. The IBM Cloud Framework for Financial Services is uniquely positioned to meet all these requirements. 

IBM Cloud for Financial Services® is a secure cloud platform that is designed to reduce risk for clients by providing a high level of visibility, control, regulatory compliance, and the best-of-breed security. It allows financial institutions to accelerate innovation, unlock new revenue opportunities, and reduce compliance costs by providing access to pre-validated partners and solutions that conform to financial services security and controls. The platform also offers risk management and compliance automation, continuous monitoring, and audit reporting capabilities, as well as on-demand visibility for clients, auditors, and regulators. 

Our mission is to help ISVs adapt to the cloud and SaaS models and prepare ISVs to meet the security standards and compliance requirements necessary to do business with financial institutions on cloud. Our process brings the compliance and onboarding cycle time down to less than 6 months, a significant improvement. Through this process, we are creating an ecosystem of ISVs that are validated by IBM Cloud for Financial Services, providing customers with a trusted and reliable network of vendors. 

Streamlined process and tooling


IBM® has created a well-defined process and various tools, technologies and automation to assist ISVs as part of the validation program. We offer an integrated onboarding platform that ensures a smooth and uninterrupted experience. This platform serves as a centralized hub, guiding ISVs throughout the entire program, starting from initial engagements and leading up to the validation of final controls. The onboarding platform navigates the ISV through the following steps: 

Orientation and education

The platform provides a catalog of self-paced courses that help you become familiar with the processes and tools that are used during the IBM Cloud for Financial Services onboarding and validation. The self-paced format allows you to learn at your own pace and on your own schedule. 

ISV Controls analysis

The ISV Controls Analysis serves as an initial assessment of an organization’s security and risk posture, laying the groundwork for IBM to plan the necessary onboarding activities.

Architecture assessment

An architecture assessment evaluates the architecture of an ISV’s cloud environment. The assessment is designed to help ISVs identify gaps in their cloud architecture and recommend best practices to enhance the compliance and governance of their cloud environment.

Deployment planning

In this step, the ISV deploys its application in a secure environment and plans how to manage its workloads on IBM Cloud®. The step is designed to meet organizations’ security and compliance requirements, providing a comprehensive set of security controls and services to help protect customer data and applications and to meet the appropriate secure architecture requirements. 

Security Assessment

The security assessment is a process of evaluating the security controls of the proposed business processes against a set of enhanced, industry-specific, control requirements in the IBM Cloud for Financial Services Framework. The process helps to identify vulnerabilities, threats, and risks that might compromise the security of a system and allows for the implementation of appropriate security measures to address those issues. 

Professional guidance by IBM and KPMG teams


The IBM team provides guidance and assets to help accelerate the onboarding process in a shared trusted model. We also assist ISVs with deploying and testing their applications on the IBM Cloud for Financial Services approved architecture. We work with ISVs throughout the controls assessment process to help their application achieve the IBM Cloud for Financial Services validated status. Our goal is to ensure that ISVs meet our rigorous standards and comply with industry regulations. We are also partnering with KPMG, an industry leader in the security and regulatory compliance domain, to add value to the ISVs and clients. 

Time to revenue and cost savings


This process enables the ISV to be ready and go to market in less than eight weeks, reducing the overall time to market and the overall cost of onboarding for end clients. 

What are the benefits of partnering with IBM? 


As an ISV, you have access to our extensive base of financial institution clients. Our cloud is trusted by 92 of the top 100 banks, giving you a significant advantage in the industry. 

Co-create with IBM’s team of expert architects and developers to take your solutions to the next level with leading-edge capabilities. 

Partnering with us means you can elevate your Go-To-Market strategy through co-selling. We can help you tap into our vast sales channels, incentive programs, client relationships, and industry expertise. 

You have access to our technical services and cloud credits as an investment in your innovation. 

Our marketplaces, like the IBM Cloud® Catalog and Red Hat Marketplace, offer you an excellent opportunity to sell your products and services to a wider audience. 

Finally, our marketing and direct investments in your marketing can generate demand and help you reach your target audience effectively. 

Source: ibm.com

Thursday 4 April 2024

The winning combination for real-time insights: Messaging and event-driven architecture

In today’s fast-paced digital economy, businesses are fighting to stay ahead and devise new ways to streamline operations, enhance responsiveness and work with real-time insights. We are now in an era defined by being proactive, rather than reactive. In order to stay ahead, businesses need to enable proactive decision making—and this stems from building an IT infrastructure that provides the foundation for the availability of real-time data.

A core part of the solution comes from messaging infrastructure, and many businesses already have a strong foundation in place. IBM MQ, among others, has been recognized as the top messaging broker because of its simplicity of use, flexibility, scalability, security and more. Message queuing technology is essential for businesses to stay afloat, but building out an event-driven architecture fueled by messaging might just be your x-factor.

Messaging that can be relied on


IBM MQ facilitates the reliable exchange of messages between applications and systems, making sure that critical data is delivered promptly and exactly once to protect against duplicate or lost data. For 30 years, IBM MQ users have realized the immense value of investing in this secure messaging technology—but what if it could go further?

IBM MQ boasts the ability to seamlessly integrate with other processing tools with its connectors (including Kafka connectors), APIs and standard messaging protocols. Essentially, it sets an easy stage for building a strong real-time and fault-tolerant technology stack businesses once could only dream of.

IBM MQ is an industry leader for a reason; there’s no doubt about that. Investing in future-proof solutions is critical for businesses trying to thrive in such a dynamic environment. IBM MQ’s 30 years of success and reliability in a plethora of use cases is not something that should be ignored, especially when it has continuously reinvented itself and proven its adaptability as different technologies have emerged, with flexible deployment options (available on-prem, on cloud and hybrid). However, IBM MQ and Apache Kafka can sometimes be viewed as competitors, taking each other on in terms of speed, availability, cost and skills. Will picking one over the other provide the optimum solution for all your business operations?

MQ and Apache Kafka: Teammates


Simply put, they are different technologies with different strengths, albeit often perceived to be quite similar. Among other differences, MQ focuses on precise and asynchronous instant exchange of data with directed interactions, while Apache Kafka focuses on high throughput, high volume and data processing in sequence to reduce latency. So, if MQ is focused on directed interactions and Kafka is focused on gaining insights, what might the possibilities be if you used them together?

We know IBM MQ excels in ensuring precision and reliability in message delivery, making it perfect for critical workloads. The focus is on trusted delivery, regardless of the situation and provision of instantaneous responses. If combined with Apache Kafka’s high availability and streamlined data collection—enabling applications or other processing tools to spot patterns and trends—businesses would immediately be able to harness the MQ data along with other streams of events from Kafka clusters to develop real-time intelligent solutions.

The more intelligence, the better


Real-time responsiveness and intelligence should be injected as much as possible into every aspect of your technology stacks. With increasing amounts of data inundating your business operations, you need a streaming platform that helps you monitor the data and act on it before it’s too late. The core of building this real-time responsiveness lies in messaging, but its value can be expanded through event-driven architectures.

Consider a customer-centric business responding to thousands of orders and customer events coming through every minute. With a strong messaging infrastructure that prevents messages from falling through the cracks, your teams can build customer confidence through message resilience—no orders get lost and you can easily find them in your queue manager. But, with event-driven technologies, you can add an extra layer of stream processing to detect trends and opportunities, increase your customer retention, or adapt to dynamic pricing.
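
As a rough sketch of that extra layer, the following Python snippet uses the confluent-kafka client to tally order events from a Kafka topic and surface the products trending over the last minute. The broker address, topic name and message format (the product name carried as the message key) are assumptions for illustration; in practice the events might be bridged from IBM MQ through a Kafka connector:

    from collections import Counter
    import time

    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",   # assumed broker address
        "group.id": "order-trend-spotter",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["orders"])                # assumed topic fed by the MQ bridge

    window_start = time.time()
    counts = Counter()
    try:
        while True:
            msg = consumer.poll(1.0)
            if msg is None or msg.error():
                continue
            product = (msg.key() or b"unknown").decode("utf-8")
            counts[product] += 1
            # Every 60 seconds, report the most ordered products in the window.
            if time.time() - window_start >= 60:
                print("Trending products:", counts.most_common(3))
                counts.clear()
                window_start = time.time()
    finally:
        consumer.close()

The sections below describe how IBM Event Automation aims to give less technical users this kind of insight without writing code of their own.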

Event-driven technologies have been emerging in our digital landscape, starting with Apache Kafka as an industry leader in event streaming. However, IBM Event Automation’s advanced capabilities leverage the power of Apache Kafka and help enterprises bring their event-driven architectures to another level through event processing and event endpoint management capabilities. It takes the firehose of raw data streams coming from the directed interactions of all your applications and Kafka connectors or Kafka topics, and allows analysts and wider teams to derive insights without needing to write Java, SQL or other code. In other words, it provides the necessary context for your business events.

With a low-code and intuitive user interface and functionality, businesses can empower less technical users to fuel their work with real-time insights. This significantly lowers the skills barrier by enabling business technologists to use the power of events without having to go to advanced developer teams first and have them pull information from a data storage. Consequently, users can see the real-time messages and cleverly work around them by noticing order patterns and perhaps even sending out promotional offers among many other possibilities.

At the same time, event endpoint management capabilities help IT administrators to control who can access data by generating unique authentication credentials for every user. They can enable self-service access so users can keep up with relevant events, but they can also add layers of controls to protect sensitive information. Uniquely, it allows teams the opportunity to explore the possibilities of events while also controlling for sensitive information.

Source: ibm.com

Tuesday 2 April 2024

Using generative AI to accelerate product innovation

Generative artificial intelligence (GenAI) can be a powerful tool for driving product innovation, if used in the right ways. We’ll discuss select high-impact product use cases that demonstrate the potential of AI to revolutionize the way we develop, market and deliver products to customers. Stacking strong data management, predictive analytics and GenAI is foundational to taking your product organization to the next level.

1. Addressing customer inquiries with an AI-driven chatbot 


ChatGPT distinguished itself as the first publicly accessible GenAI-powered virtual chatbot. Now, enterprises can adopt the foundational principles of this technology and apply them within their operations, further enriched by contextualization and security. With IBM watsonx™ Assistant, companies can build large language models and train them using proprietary information, all while helping to ensure the security of their data.

Conversational AI solutions can have several product applications that drive revenue and improve customer experience. For instance, an intelligent chatbot can address common customer concerns regarding bill explanations. When customers seek explanations for their bills, a GenAI-powered chatbot can provide them with detailed explanations, including transaction logs for usage and overage charges.

It can also provide new product packages or contract terms that align with a customer’s past usage needs, identifying new revenue opportunities and improving customer satisfaction. Businesses that use IBM watsonx Assistant can expect to see a 30% reduction in customer support costs and a 20% increase in customer satisfaction.

2. Accelerating product modernization 


GenAI has the power to automate manual product modernization processes. GenAI technologies can survey publicly available sources, such as press releases, to collect competitor data and compare the current product mix to competitor offerings. It can also gain an understanding of market advantages and suggest strategic product changes. These new insights can be realized at greater speeds than ever before. 

A key benefit of GenAI is its ability to generate code. Now, a business user can use GenAI tools to develop preliminary code for new product features without as much reliance on technical teams. These same tools can analyze code and identify and fix bugs in the code to reduce testing efforts. 

GenAI solutions such as IBM watsonx™ Code Assistant meet the core technical needs of enterprises. Watsonx Code Assistant can help enterprises achieve a 30% reduction in development effort or a 30% productivity gain. These tools have the potential to revolutionize technical processes and increase the speed of technical product delivery. 

3. Analyzing customer behavior for tailored product recommendations 


With the power of predictive analytics and GenAI, businesses can understand when specific customers are best suited for new products, receive suggestions for the appropriate products, and receive suggested next steps for engaging with the client. For example, if a customer undergoes a major business change such as an acquisition, predictive models trained on previous transactions can analyze the potential need for new products. 

GenAI can then suggest upselling opportunities and write an email to the customer, to be reviewed by the salesperson. This empowers sales teams to increase speed to value while offering customers top-tier service. Using IBM® watsonx.data™, enterprise data can be prepared for various analytical and AI use cases. 

4. Analyzing customer feedback to inform business strategy 


Enterprises have the opportunity to use GenAI to improve customer experience by more readily actioning customer feedback. Through IBM® watsonx.ai™, various industry-leading models are available for different types of summarization. This technology can quickly interpret and summarize large volumes of customer feedback. 

It can then provide suggested product improvements with fleshed-out requirements and user stories, accelerating the speed of responsiveness and innovation. GenAI can pull themes from feedback from lost customers to illuminate trends, suggest new sales strategies, and arm sales teams with business intelligence and pre-scripted follow-ups. 
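
As a generic illustration of the summarization step (using the open source Hugging Face transformers library as a stand-in for whichever model an enterprise selects), a batch of feedback comments can be condensed into a short digest for the product team; the comments and model choice below are placeholders:

    from transformers import pipeline

    # Illustrative feedback; real input would come from surveys, tickets or CRM notes.
    feedback = [
        "Onboarding took too long and the setup guide skipped the billing step.",
        "Love the new dashboard, but exports time out for reports over 10,000 rows.",
        "Support resolved my issue quickly; invoices are still confusing though.",
    ]

    summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

    # Summarize the combined feedback into a short digest for the product team.
    digest = summarizer(" ".join(feedback), max_length=60, min_length=15, do_sample=False)
    print(digest[0]["summary_text"])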

5. Applying customer segmentation for intelligent marketing 


GenAI has the potential to revolutionize digital marketing by increasing the speed, effectiveness and personalization of marketing processes. Using standard data analytics practices, businesses can identify patterns and clusters within data to enable more accurate targeting of customers. 
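
As a minimal sketch of that clustering step, the snippet below uses scikit-learn’s KMeans on a handful of illustrative per-customer features (the column names and values are assumptions, not a prescribed feature set):

    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    # Illustrative per-customer features; real inputs depend on the business.
    customers = pd.DataFrame({
        "recency_days":    [5, 40, 3, 200, 7, 180],
        "orders_per_year": [24, 6, 30, 1, 18, 2],
        "avg_order_value": [80, 35, 120, 15, 95, 20],
    })

    # Standardize features so no single column dominates the distance metric.
    features = StandardScaler().fit_transform(customers)

    # Group customers into a small number of segments.
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
    customers["segment"] = kmeans.fit_predict(features)
    print(customers)

Each resulting segment label can then be used to drive the targeted content generation described next.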

Once the clusters are created, GenAI can power automated content creation processes that reach specific customer groups across various platforms. IBM watsonx™ Orchestrate enables the user to automate daily tasks and increase productivity. This tool can create content, connect to different platforms, and send out updates across them at the drop of a hat, saving marketing teams time and money as they deliver solutions. 

This content creation and customer outreach ability is the key differentiator of generative AI and part of what makes these new technologies so exciting. GenAI can take expensive, manual marketing processes and translate them into accelerated, automated processes. 

Source: ibm.com

Saturday 30 March 2024

Holistic asset management for utility network companies

Addressing challenges of the energy transition with grid asset management


The energy transition is gearing up to full speed as renewable energy sources replace fossil-based systems of energy production. The grid itself must become greener to operate within environmental, social and governance (ESG) objectives and become carbon neutral by 2050. This shift requires energy utility companies to plan their grid asset management holistically as they find a new balance between strategic objectives.

Sustainable asset performance has become one of the key drivers in decision-making for asset planning and grid modernization business processes. New emerging technology enables AI-powered digital twins to operate the smart grid. However, operators must balance intermittent renewable energy intake to produce a controlled, stable output.

A balanced transition between old and new systems


The demand to fulfill existing long-term contracts and an abundance of new demands for industrial electrification pose new challenges to grid management. Finding the right balance requires load forecasting and simulation to prevent net congestion. Economic optimization must factor in new market dynamics and ensure reliable operation.

Existing network assets are aging, and more intelligent asset management strategies must emerge to maintain and replace the grid within tightening budgets. Asset investment planning must find a balance between these systems while minimizing risk and carbon footprint.

To manage the grid of the future, utility companies must shift from traditional asset management to a holistic approach. This shift will broaden insights so these companies can take strategic, tactical steps to optimize operational network development and operation decisions.

Asset lifecycle management


Holistic grid asset management adopts a lifecycle view across the whole asset lifespan to obtain a safe, secure, reliable and affordable network. Utility companies must break down the internal departmental walls between the silos of grid planning, construction, operation, maintenance and replacement to allow end-to-end visibility. They must connect underlying technology systems to create a single pane of glass for all operations. A shared data model across operating systems serves as the basis for integration, simulation, prediction and optimization by using generative AI models to drive next-level business value.

The goal of asset management is to optimize capital expenditures (CapEx) and operating expenses (OpEx) in a seamless transition between the timescales of the planning horizons. Achieving this requires balancing complex planning and optimization objectives across the whole asset management lifecycle.

A top-down strategic approach for whole-life planning of the asset investment portfolio, matched to future ESG goals, needs to connect with a bottom-up maintenance and replacement strategy for existing assets. Asset investment planning (AIP) feeds project portfolio management and product lifecycle management to plan, prioritize and run asset expansion and replacement projects within the boundaries of available budget and resource capacity.

Real-time operational data provides an asset health view that drives condition-based maintenance and replacement planning. This is the domain of enterprise asset management (EAM) for maintenance execution and asset performance management (APM) for strategy optimization. Traditionally, a disconnect at the tactical level has separated these planning and optimization methodologies. At the same time, operational risk management requires respecting health, safety and environment (HSE) management and process safety management to manage potentially hazardous operations. The maintenance repair and overhaul (MRO) spare parts strategy must align with the asset strategy in terms of criticality and optimal stock value.

Acknowledgment of the complexity of planning with multidimensional objectives on different timescales is the starting point for adopting a holistic view of asset management.

Source: ibm.com

Friday 29 March 2024

The path to embedded sustainability

Businesses seeking to accelerate sustainability initiatives must take an integrated approach that brings together all business and technology functions. Sustainability is no longer the responsibility of only the chief sustainability officer (CSO). It is not managed by a single department in a silo. Driving true sustainable impact, at scale, takes place when an enterprise is fully aligned to that transformation. To scale progress in combating climate change, this alignment and collaboration must happen across value chain partners, ecosystems, and industries.

Sustainability and ESG: An opportunity for synergy

Sustainability and ESG are not synonymous. While ESG seeks to provide standard methods and approaches to measuring performance across environmental, social and governance KPIs, and holds organizations accountable for that performance, sustainability is far broader. ESG can serve as a vehicle to progress sustainability, but it can also distract from the urgent need to combat climate change and work toward the 17 UN SDGs.

As we have seen with any sort of external reporting liabilities, this type of accountability does drive action. It’s our responsibility to ensure we don’t just do ESG reporting for the sake of reporting, and that it doesn’t impede actual progress in sustainability. We must ensure ESG progress and sustainability are driving towards a common goal. The reality is companies might be ready to fund ESG initiatives, but not as ready to fund ‘sustainability’ initiatives.

If designed intentionally, these do not have to be separate initiatives. When something is ‘regulatory,’ ‘mandatory,’ or ‘involuntary,’ companies have no choice but to find a way. A pre-existing sustainability office may find resources or funds shifted to ESG, or a reprioritization of targets based on ESG measurements. However, capturing both the business value behind ESG compliance and its ability to drive impact requires a holistic approach that strategically captures these synergies.

We are helping our clients maximize those investments, leveraging the requirements of ESG to drive compliance as well as sustainability. Our clients are improving their ability to measure and track progress against ESG metrics, while concurrently operationalizing sustainability transformation.

Maximizing value with a holistic strategy

The first step in maximizing that dual value is upfront due diligence. It is necessary to assess the current state of reporting readiness, the alignment between ESG requirements and voluntary sustainability initiatives, and any consideration on how to drive acceleration with future-proofed solutions. Questions might include:

  • Where is the organization relative to its required and voluntary sustainability goals?
  • Have the sustainability goals evolved in response to recent regulation or market shifts? 
  • How aligned is the sustainability strategy to the business strategy? 
  • Is ownership of delivering sustainability goals distributed throughout the organization or is every leader aware of how they are expected to contribute?
  • How is sustainability managed—as an annual measuring exercise or an ongoing effort that supports business transformation?
  • What regulations are owned by specific functional areas that may contribute to a broader ESG roadmap if viewed holistically?
  • Are there in-flight business or technology initiatives where I can embed these requirements?

Up until recently, sustainability was most likely handled by one central team. Now, functional areas across the organization are recognizing their role in measuring ESG progress as well as their opportunities to help make their company more sustainable.  

Similar to a company executing any corporate strategy, progress is made when the organization understands it, and employees are aware of how they play a role in bringing it to life. All leaders must enable teams and departments to understand how sustainability is part of the corporate strategy. They must provide the enablement and tools so these teams can integrate the overarching sustainability purpose and objectives within the corporate strategy into their respective roles in accelerating sustainable outcomes.

I see a clear shift in companies becoming more aware that they must work across departments to drive sustainability. A company cannot report on scope 3, category 7 (employee commuting) without employee data from HR or facilities management data, or without the technology platform and data governance to have an auditable view of that data. Businesses cannot prove there is no forced labor in their supply chain without working with procurement to understand their supplier base, where suppliers are located and what might be high risk, and then building solutions that embed proactive risk management in vendor onboarding. 

Embedding sustainability in practice

Accountability is where an enterprise can ensure that sustainability is embedded and activated. The idea of embedding is integrating sustainability into the day-to-day role: enabling employees to make informed decisions and to understand the climate impact of those decisions. Any business or investment decision has a profit lever, a cost lever and sometimes a performance lever, such as a service level agreement (SLA). Now, sustainability can be a lever too, truly embedding impact into everyday operations. Employees can make more sustainable decisions knowing the tradeoff and impact.

A recent study from the IBM Institute for Business Value surveyed 5,000 global C-suite executives across 22 industries to find out why sustainability isn’t generating more impact for organizations. The study found companies were just “doing sustainability,” or approaching sustainability as a compliance task or accounting exercise rather than a business transformation accelerator.

Executives recognize the importance of data to achieve sustainability objectives; 82% of the study’s respondents agree that high-quality data and transparency are necessary to succeed. However, a consistent challenge they encounter in driving both ESG reporting and sustainable transformation is the shared reality that companies cannot manage what they cannot measure.

Data not only provides the quantitative requirements for ESG metrics, it also provides the visibility to manage the performance of those metrics. If the employees of a company don’t have the data, they cannot publish financial grade reporting, identify opportunities for decarbonization, or validate progress towards becoming a more sustainable company.

One point addressed in our study surrounds the data specific challenges that can come with sustainability. Findings revealed that “despite recognizing the link between data and sustainability success, only 4 in 10 organizations can automatically source sustainability data from core systems such as ERP, enterprise asset management, CRM, energy management, and facilities management.”

When clients embed the right processes and organizational accountability across ESG reporting and sustainability, they can make sure they are getting the right information and data into the hands of the right people, often system owners. Those ‘right people’ can then make more informed decisions in their respective roles and scale transformation from one team to the entire organization, while also addressing the needs of ESG data capture, collection and ingestion for the sake of both reporting and operationalizing.

The study found organizations that successfully embedded sustainability approached the data usability challenge through a firmer data foundation and better data governance. The criticality of a clear data strategy and foundation brings us to our final topic: how generative AI can further accelerate sustainability.

Utilizing generative AI to embed sustainability

There are many different applications for generative AI when it comes to embedding sustainability, especially when it comes to filling in data gaps. The data needed for ESG and sustainability reporting is immense and complex. Oftentimes, companies don’t have it available or lack the correct protocols to align their data and sustainability strategies.

Most clients, regardless of the size of the company, have sustainability teams that are stretched, trying to manually chase data instead of focusing on what the data is saying. Generative AI can unlock productivity potential, accelerating data collection, ingestion and reconciliation. As an example, instead of sustainability teams manually collecting and reviewing paper fuel receipts, technology can help translate receipt images into the necessary data elements for fuel-related metrics. This allows these teams to spend more time on how to optimize fuel use for decarbonization, using time for data insights instead of time chasing the data.

By spending all your time on reconciling invoices or collecting physical fuel receipts, how are you or others in your organization going to have the time to understand the data and in turn make changes to drive sustainability? If time is spent collecting data and then pulling together reports, there is little time left to garner actionable insights from that data and enact change. Systems and processes must be in place so that an organization can drive sustainability performance, while meeting ESG reporting requirements, and not use all of its resources and funding on data management that provides eventual visibility without the capacity to use it for impact.

As mentioned in the study, generative AI can be a “game changer for data-driven sustainability, enabling organizations to turn trade-offs into win-wins, identify improvement opportunities, and drive innovation at speed and scale.” It is little wonder why 73% of surveyed executives say they plan to increase their investment in generative AI for sustainability.

To truly leverage the power of generative AI tomorrow, companies must first understand their data readiness today. Then, we can prioritize how generative AI can improve existing data for visibility and use that data for performance insights.

Companies can identify immediate opportunities for generative AI to help them move faster, while concurrently ensuring that the core data collection and management is established to support current and future reporting needs. We want our clients to focus on leveraging ESG reporting to have a return on investment (ROI) financially, as well as in driving sustainable impact. While external mandatory requirements will be a driver for where an organization’s budget is allocated, organizations can intentionally embed sustainability as a part of those initiatives to capture the full value of their transformation efforts.

Source: ibm.com