Tuesday, 30 April 2024

VeloxCon 2024: Innovation in data management


VeloxCon 2024, the premier developer conference dedicated to the Velox open-source project, brought together industry leaders, engineers, and enthusiasts to explore the latest advancements and collaborative efforts shaping the future of data management. Hosted by IBM® in partnership with Meta, VeloxCon showcased the latest innovations in Velox, including the project roadmap, Prestissimo (Presto-on-Velox), Gluten (Spark-on-Velox), hardware acceleration, and much more.

An overview of Velox


Velox is a unified execution engine that is built and open-sourced by Meta, aimed at accelerating data management systems and streamlining their development. One of the biggest benefits of Velox is that it consolidates and unifies data management systems so you don’t need to keep rewriting the engine. Today Velox is in various stages of integration with several data systems including Presto (Prestissimo), Spark (Gluten), PyTorch (TorchArrow), and Apache Arrow.

Velox at IBM


Presto is the engine for watsonx.data, IBM’s open data lakehouse platform. Over the last year, we’ve been working hard on advancing Velox for Presto – Prestissimo – at IBM. Presto Java workers are being replaced by a C++ process based on Velox. We now have several committers to the Prestissimo project and continue to partner closely with Meta as we work on building Presto 2.0.

Some of the key benefits of Prestissimo include:

  • Huge performance boost: query processing can be done with much smaller clusters
  • No performance cliffs: no Java processes, JVM, or garbage collection, as memory arbitration improves efficiency
  • Easier to build and operate at scale: Velox gives you reusable and extensible primitives across data engines (like Spark)

This year, we plan to do even more with Prestissimo including:

  • The Iceberg reader
  • Production readiness (metrics collection with Prometheus)
  • New Velox system implementation
  • TPC-DS benchmark runs

VeloxCon 2024


We worked closely with Meta to organize VeloxCon 2024, and it was a fantastic community event. We heard speakers from Meta, IBM, Pinterest, Intel, Microsoft, and others share what they’re working on and their vision for Velox over two dynamic days.

Day 1 highlights

The conference kicked off with sessions from Meta including Amit Purohit reaffirming Meta’s commitment to open source and community collaboration. Pedro Pedreira, alongside Manos Karpathiotakis and Deblina Gupta, delved into the concept of composability in data management, showcasing Velox’s versatility and its alignment with Arrow.

Amit Dutta of Meta explored Prestissimo’s batch efficiency at Meta, shedding light on the advancements made in optimizing data processing workflows. Remus Lazar, VP Data & AI Software at IBM, presented Velox’s journey within IBM and the vision for its future. Aditi Pandit of IBM followed with insights into Prestissimo’s integration at IBM, highlighting feature enhancements and future plans.

The afternoon sessions were equally insightful, with Jimmy Lu of Meta unveiling the latest optimizations and features in Velox. Binwei Yang of Intel discussed the integration of Velox with the Apache Gluten project, emphasizing its global impact. Engineers from Pinterest and Microsoft shared their experiences of unlocking data query performance by using Velox and Gluten, showcasing tangible performance gains.

The day concluded with sessions from Meta on Velox’s memory management by Xiaoxuan Meng and a glimpse into the new simple aggregation function interface that was presented by Wei He.

Day 2 highlights

The second day began with a keynote from Orri Erling, co-creator of Velox. He shared insights into Velox Wave and Accelerators, showcasing its potential for acceleration. Krishna Maheshwari from NeuroBlade highlighted their collaboration with the Velox community, introducing NeuroBlade’s SPU (SQL Processing Unit) and its transformative impact on Velox’s computational speed and efficiency.

Sergei Lewis from Rivos explored the potential of offloading work to accelerators to enhance Velox’s pipeline performance. William Malpica and Amin Aramoon from Voltron Data introduced Theseus, a composable, scalable, distributed data analytics engine, using Velox as a CPU backend.

Yoav Helfman from Meta unveiled Nimble, a cutting-edge columnar file format that is designed to enhance data storage and retrieval. Pedro Pedreira and Sridhar Anumandla from Meta elaborated on Velox’s new technical governance model, emphasizing its importance in guiding the project’s sustainable development.

The day also featured sessions on Velox’s I/O optimizations by Deepak Majeti from IBM, strategies for safeguarding against Out-Of-Memory (OOM) kills by Vikram Joshi from ComputeAI, and a hands-on demo on debugging Velox applications by Deepak Majeti.

What’s next with Velox


VeloxCon 2024 was a testament to the vibrant ecosystem surrounding the Velox project, showcasing groundbreaking innovations and fostering collaboration among industry leaders and developers alike. The conference provided attendees with valuable insights, practical knowledge, and networking opportunities, solidifying Velox’s position as a leading open source project in the data management ecosystem.

Source: ibm.com

Saturday, 27 April 2024

Bigger isn’t always better: How hybrid AI pattern enables smaller language models


As large language models (LLMs) have entered the common vernacular, people have discovered how to use apps that access them. Modern AI tools can generate, create, summarize, translate, classify and even converse. Tools in the generative AI domain allow us to generate responses to prompts after learning from existing artifacts.

One area that has not seen much innovation is at the far edge and on constrained devices. We see some versions of AI apps running locally on mobile devices with embedded language translation features, but we haven’t reached the point where LLMs generate value outside of cloud providers.

However, there are smaller models that have the potential to innovate gen AI capabilities on mobile devices. Let’s examine these solutions from the perspective of a hybrid AI model.

The basics of LLMs


LLMs are a special class of AI models powering this new paradigm. Natural language processing (NLP) enables this capability. To train LLMs, developers use massive amounts of data from various sources, including the internet. The billions of parameters they contain are what make them so large.

While LLMs are knowledgeable about a wide range of topics, they are limited solely to the data on which they were trained. This means they are not always “current” or accurate. Because of their size, LLMs are typically hosted in the cloud, which requires beefy hardware deployments with lots of GPUs.

This means that enterprises looking to mine information from their private or proprietary business data cannot use LLMs out of the box. To answer specific questions, generate summaries or create briefs, they must include their data with public LLMs or create their own models. The way to append one’s own data to the LLM is known as retrieval-augmented generation, or the RAG pattern. It is a gen AI design pattern that adds external data to the LLM.
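
As a rough illustration of the RAG pattern, the minimal Python sketch below retrieves the most relevant private documents for a question and prepends them to the prompt. The toy documents, the word-overlap relevance score and the final print stand in for a real document store, embedding model and LLM call; none of them refer to a specific product.

    def relevance(question: str, document: str) -> float:
        """Toy relevance score based on word overlap; a real system uses vector embeddings."""
        q_words = set(question.lower().split())
        d_words = set(document.lower().split())
        return len(q_words & d_words) / max(len(q_words), 1)

    # Toy stand-ins for private enterprise documents that a public LLM has never seen.
    documents = [
        "Internal policy: refunds above 500 USD require manager approval.",
        "Network runbook: restart the edge gateway before escalating an outage.",
    ]

    def retrieve(question: str, k: int = 1) -> list[str]:
        """Pick the k most relevant documents to ground the model's answer."""
        ranked = sorted(documents, key=lambda doc: relevance(question, doc), reverse=True)
        return ranked[:k]

    question = "Do refunds above 500 USD need manager approval?"
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    # The augmented prompt is then sent to the LLM of your choice.
    print(prompt)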

Is smaller better?


Enterprises that operate in specialized domains, like telcos or healthcare or oil and gas companies, have a laser focus. While they can and do benefit from typical gen AI scenarios and use cases, they would be better served with smaller models.

In the case of telcos, for example, some of the common use cases are AI assistants in contact centers, personalized offers in service delivery and AI-powered chatbots for enhanced customer experience. Use cases that help telcos improve the performance of their network, increase spectral efficiency in 5G networks or help them determine specific bottlenecks in their network are best served by the enterprise’s own data (as opposed to a public LLM).

That brings us to the notion that smaller is better. There are now small language models (SLMs) that are “smaller” in size compared to LLMs. SLMs have tens of billions of parameters, while LLMs have hundreds of billions. More importantly, SLMs are trained on data pertaining to a specific domain. They might not have broad contextual information, but they perform very well in their chosen domain.

Because of their smaller size, these models can be hosted in an enterprise’s data center instead of the cloud. SLMs might even run on a single GPU chip at scale, saving thousands of dollars in annual computing costs. However, the delineation between what can only be run in a cloud or in an enterprise data center becomes less clear with advancements in chip design.

Whether it is because of cost, data privacy or data sovereignty, enterprises might want to run these SLMs in their data centers. Most enterprises do not like sending their data to the cloud. Another key reason is performance. Gen AI at the edge performs the computation and inferencing as close to the data as possible, making it faster and more secure than through a cloud provider.

It is worth noting that SLMs require less computational power and are ideal for deployment in resource-constrained environments and even on mobile devices.

An on-premises example might be an IBM Cloud® Satellite location, which has a secure high-speed connection to IBM Cloud hosting the LLMs. Telcos could host these SLMs at their base stations and offer this option to their clients as well. It is all a matter of optimizing the use of GPUs, as the distance that data must travel is decreased, resulting in improved bandwidth.

How small can you go?


Back to the original question: can these models run on a mobile device? The mobile device might be a high-end phone, an automobile or even a robot. Device manufacturers have discovered that significant bandwidth is required to run LLMs. Tiny LLMs are smaller-size models that can be run locally on mobile phones and medical devices.

Developers use techniques like low-rank adaptation to create these models. They enable users to fine-tune the models to unique requirements while keeping the number of trainable parameters relatively low. In fact, there is even a TinyLlama project on GitHub.  
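
As a rough sketch of the idea behind low-rank adaptation (assuming PyTorch; this illustrates the technique only and is not the TinyLlama implementation or a production fine-tuning recipe), the frozen weight matrix is augmented with a small trainable update:

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Wrap a frozen linear layer with a trainable low-rank update W + (alpha/r) * B @ A."""
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():      # the original weights stay frozen
                p.requires_grad = False
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))
            self.scale = alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

    layer = LoRALinear(nn.Linear(512, 512))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(f"trainable parameters: {trainable}")  # 8,192 instead of the 262,144 in the frozen weight matrix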

Chip manufacturers are developing chips that can run a trimmed-down version of LLMs through image diffusion and knowledge distillation. System-on-chip (SOC) and neural processing units (NPUs) assist edge devices in running gen AI tasks.

While some of these concepts are not yet in production,  solution architects should consider what is possible today. SLMs working and collaborating with LLMs may be a viable solution. Enterprises can decide to use existing smaller specialized AI models for their industry or create their own to provide a personalized customer experience.

Is hybrid AI the answer?


While running SLMs on-premises seems practical and tiny LLMs on mobile edge devices are enticing, what if the model requires a larger corpus of data to respond to some prompts? 

Hybrid cloud computing offers the best of both worlds. Might the same be applied to AI models? The image below shows this concept.

(Figure: the hybrid AI concept, with domain-specific SLMs on premises or at the edge and LLMs in the public cloud.)

When smaller models fall short, the hybrid AI model could provide the option to access an LLM in the public cloud. It makes sense to enable such technology. This would allow enterprises to keep their data secure within their premises by using domain-specific SLMs, and they could access LLMs in the public cloud when needed. As mobile devices with SOCs become more capable, this seems like a more efficient way to distribute generative AI workloads.
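
One way to picture this routing is a thin layer that prefers the local SLM and escalates to the public-cloud LLM only when the small model falls short. The sketch below is purely illustrative: local_slm, cloud_llm and the confidence threshold are hypothetical placeholders, not real APIs or IBM products.

    from dataclasses import dataclass

    @dataclass
    class Answer:
        text: str
        confidence: float

    def local_slm(prompt: str) -> Answer:
        """Hypothetical domain-specific SLM running on premises or on-device."""
        return Answer(text="(domain-specific answer)", confidence=0.55)

    def cloud_llm(prompt: str) -> str:
        """Hypothetical general-purpose LLM reached through a public-cloud API."""
        return "(broader answer from the large model)"

    def hybrid_answer(prompt: str, threshold: float = 0.7) -> str:
        """Prefer the local model so data stays on premises; escalate only when it falls short."""
        answer = local_slm(prompt)
        if answer.confidence >= threshold:
            return answer.text
        return cloud_llm(prompt)

    print(hybrid_answer("Where is the bottleneck in cell site 42?"))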

IBM® recently announced the availability of the open-source Mistral AI model on its watsonx™ platform. This compact LLM requires fewer resources to run, but it is just as effective and has better performance compared to traditional LLMs. IBM also released a Granite 7B model as part of its highly curated, trustworthy family of foundation models.

It is our contention that enterprises should focus on building small, domain-specific models with internal enterprise data to differentiate their core competency and use insights from their data (rather than venturing to build their own generic LLMs, which they can easily access from multiple providers).

Bigger is not always better


Telcos are a prime example of an enterprise that would benefit from adopting this hybrid AI model. They have a unique role, as they can be both consumers and providers. Similar scenarios may be applicable to healthcare, oil rigs, logistics companies and other industries. Are the telcos prepared to make good use of gen AI? We know they have a lot of data, but do they have a time-series model that fits the data?

When it comes to AI models, IBM has a multimodel strategy to accommodate each unique use case. Bigger is not always better, as specialized models outperform general-purpose models with lower infrastructure requirements. 

Source: ibm.com

Thursday, 25 April 2024

5 steps for implementing change management in your organization


Change is inevitable in an organization; especially in the age of digital transformation and emerging technologies, businesses and employees need to adapt. Change management (CM) is a methodology that ensures both leaders and employees are equipped and supported when implementing changes to an organization.

The goal of a change management plan, or more accurately an organizational change plan, is to embed processes that have stakeholder buy-in and support the success of both the business and the people involved. In practice, the most important aspect of organizational change is stakeholder alignment. This blog outlines five steps to support the seamless integration of organizational change management.

Steps to support organizational change management


1. Determine your audience

Who is impacted by the proposed change? It is crucial to determine the audience for your change management process.

Start by identifying key leaders and determining both their influence and their involvement in the history of organizational change. Your key leaders can provide helpful context and influence employee buy-in. You want to interview leaders to better understand ‘why’ the change is being implemented in the first place. Ask questions such as:

◉ What are the benefits of this change?
◉ What are the reasons for this change?
◉ What does the history of change in the organization look like?

Next, identify the other groups impacted by change, otherwise known as the personas. Personas are the drivers of successful implementation of a change management strategy. It is important to understand what the current day-to-day looks like for the persona, and then what tomorrow will look like once change is implemented.

A good example of change that an organization might implement is a new technology, like generative AI (Gen AI). Businesses are implementing this technology to augment work and make their processes more efficient. Throughout this blog, we use this example to better explain each step of implementing change management.

Who is impacted by the implementation of gen AI? The key leaders might be the vice president of the department that is adding the technology, along with a Chief Technical Officer, and team managers. The personas are those whose work is being augmented by the technology.

2. Align the key stakeholders

What are the messages that we will deliver to the personas? When key leaders come together to determine champion roles and behaviors for instituting change, it is important to remember that everyone will have a different perspective.

To best align leadership, take an iterative approach. Through a stakeholder alignment session, teams can co-create with key leaders, change management professionals, and personas to best determine a change management strategy that will support the business and employees.

Think back to the example of gen AI as the change implemented in the organization. Proper alignment of stakeholders would be bringing together the executives deciding to implement the technology, the technical experts on gen AI, the team managers implementing gen AI into their workflows, and even trusted personas—the personas might have experienced past changes in the organization.

3. Define the initiatives and scope

Why are you implementing the change? What are the main drivers of change? How large is the change to the current structure of the organization? Without a clear vision for change initiatives, there will be even more confusion from stakeholders. The scope of change should be easily communicated; it needs to make sense to your personas to earn their buy-in.

Generative AI augments workflows, making businesses more efficient. However, one obstacle to this technology is the psychological perception that it takes power away from the individuals who run administrative tasks. Clearly defining the benefits of gen AI and the goals of implementing the technology can help employees better understand the need.

Along with clear initiatives and communication, including a plan to skill employees to understand and use the technology as part of their scope also helps promote buy-in. Drive home the point that the change team members, through the stakeholders, become evangelists pioneering a new way of working. Show your personas how to prompt the tool, apply the technology, and other use cases to grow their excitement and support of the change.

4. Implement the change management plan

After much preparation on understanding the personas, aligning the stakeholders and defining the scope, it is time to run. ‘Go live’ with the change management plan and remember to be patient with employees and have clear communication. How are employees handling the process? Are there more resources needed? This is the stage where you carefully consider the feedback that is given and assess whether it helps achieve the shared goals of the organization.

Implementing any new technology invites the potential for bugs, lags or errors in usage. For our example with gen AI, a good implementation practice might be piloting the technology with a small team of expert users, who underwent training on the tool. After collecting feedback from their ‘go live’ date, the change management team can continue to phase the technology implementation across the organization. Remember to be mindful of employee feedback and keep an open line of communication.

5. Adapt to improve

Adapting the process is something that can be done throughout any stage of implementation, but time to analyze the return on investment (ROI) should be allocated at the ‘go live’ date of the change. Reviews can be run via the “sense and respond” approach.

Sense how the personas are reacting to said change. This can be done via sentiment analysis, surveys and information sessions. Then, analyze the data. Finally, based on the analysis, appropriately respond to the persona’s reaction.

Depending on how the business and personas are responding to change, determine whether the outlined vision and benefits of the change are being achieved. If not, identify the gaps and troubleshoot how to better support where you might be missing the mark. It is important to both communicate with the stakeholders and listen to the feedback from the personas.

To close out our example, gen AI is a tool that thrives on continuous usage and practices like fine-tuning. The organization can measure both the growth and success of the technology implemented and the efficiency of the personas that have adopted the tool into their workflows. Leaders can share surveys to pressure-test how the change is resonating. Any roadblocks, pain points or concerns should be responded to directly by the change management team, to continue to ensure a smooth implementation of gen AI.

How to ensure success when implementing organizational change


The success formula to implementing organizational change management includes the next generation of leadership, an accelerator culture that is adaptive to change, and a workforce that is both inspired and engaged.

Understanding the people involved in the process is important to prepare for a successful approach to change management. Everyone comes to the table with their own view of how to implement change. It is important to remain aligned on why the change is happening. The people are the drivers of change. Keep clear, open and consistent communication with your stakeholders and empathize with your personas to ensure that the change will resonate with their needs.

As you craft your change management plan, remember that change does not stop at the implementation date of the plan. It is crucial to continue to sense and respond.

Source: ibm.com

Tuesday, 23 April 2024

Deployable architecture on IBM Cloud: Simplifying system deployment


Deployable architecture (DA) refers to a specific design pattern or approach that allows an application or system to be easily deployed and managed across various environments. A deployable architecture organizes components, modules and dependencies in a way that allows for seamless deployment and makes it easy for developers and operations teams to quickly deploy new features and updates to the system, without requiring extensive manual intervention.

There are several key characteristics of a deployable architecture, which include:

  1. Automation: Deployable architecture often relies on automation tools and processes to manage the deployment process. This can involve using tools like continuous integration and continuous deployment (CI/CD) pipelines, configuration management tools and others.
  2. Scalability: The architecture is designed to scale horizontally or vertically to accommodate changes in workload or user demand without requiring significant changes to the underlying infrastructure.
  3. Modularity: Deployable architecture follows a modular design pattern, where different components or services are isolated and can be developed, tested and deployed independently. This allows for easier management and reduces the risk of dependencies causing deployment issues.
  4. Resilience: Deployable architecture is designed to be resilient, with built-in redundancy and failover mechanisms that ensure the system remains available even in the event of a failure or outage.
  5. Portability: Deployable architecture is designed to be portable across different cloud environments or deployment platforms, making it easy to move the system from one environment to another as needed.
  6. Customisable: Deployable architecture is designed to be customisable and can be configured according to need, which helps with deployment in diverse environments with varying requirements.
  7. Monitoring and logging: Robust monitoring and logging capabilities are built into the architecture to provide visibility into the system’s behaviour and performance.
  8. Secure and compliant: Deployable architectures on IBM Cloud® are secure and compliant by default for hosting your regulated workloads in the cloud. They follow security standards and guidelines, such as IBM Cloud for Financial Services® and SOC 2 Type 2, that ensure the highest levels of security and compliance requirements are met.

Overall, deployable architecture aims to make it easier for organizations to achieve faster, more reliable deployments, while also  making sure that the underlying infrastructure is scalable and resilient.

Deployable architectures on IBM Cloud


Deploying an enterprise workload with a few clicks can be challenging due to various factors such as the complexity of the architecture and the specific tools and technologies used for deployment. Creating a secure, compliant and tailored application infrastructure is often more challenging still and requires expertise. However, with careful planning and appropriate resources, it is feasible to automate most aspects of the deployment process. IBM Cloud provides you with well-architected patterns that are secure by default for regulated industries like financial services. These patterns can sometimes be consumed as-is, or you can add more resources to them as your requirements dictate. Check out the deployable architectures that are available in the IBM Cloud catalog.

Deployment strategies for deployable architecture


Deployable architectures provided on IBM Cloud can be deployed in multiple ways, using IBM Cloud projects, Schematics, directly via CLI or you can even download the code and deploy on your own.

Use-cases of deployable architecture


Deployable architecture is commonly used in industries such as finance, healthcare, retail, manufacturing and government, where compliance, security and scalability are critical factors. Deployable architecture can be utilized by a wide range of stakeholders, including:

  1. Software developers, IT professionals, system administrators and business stakeholders who need to ensure that their systems and applications are deployed efficiently, securely and cost-effectively. It helps in reducing time to market, minimizing manual intervention and decreasing deployment-related errors.
  2. Cloud service providers, managed service providers and infrastructure as a service (IaaS) providers to offer their clients a streamlined, reliable and automated deployment process for their applications and services.
  3. ISVs and enterprises to enhance the deployment experience for their customers, providing them with easy-to-install, customizable and scalable software solutions that help drive business value and competitive advantage.

Source: ibm.com

Saturday, 20 April 2024

The journey to a mature asset management system


This blog series discusses the complex tasks energy utility companies face as they shift to holistic grid asset management to manage through the energy transition. Earlier posts in this series addressed the challenges of the energy transition with holistic grid asset management, the integrated asset management platform and data exchange, and merging traditional top-down and bottom-up planning processes.

Asset management and technological innovation


Advancements in technology underpin the need for holistic grid asset management, making the assets in the grid smarter and equipping the workforce with smart tools.

Robots and drones perform inspections by using AI-based visual recognition techniques. Asset performance management (APM) processes, such as risk-based and predictive maintenance and asset investment planning (AIP), enable health monitoring technologies.

Technicians connect to the internet through wearable devices such as tablets, watches or VR glasses, providing fast access to relevant information or expert support from any place in the world. Technicians can resolve technical issues faster, improving asset usage and reducing asset downtime.

Mobile-connected technicians experience improved safety through measures such as access control, gas detection, warning messages or fall recognition, which reduces risk exposure and enhances operational risk management (ORM) during work execution. Cybersecurity reduces risk exposure for cyberattacks on digitally connected assets.

Sensing and monitoring also contribute to the direct measurement of environmental, social and governance (ESG) sustainability metrics such as energy efficiency, greenhouse gas emissions or wastewater flows. This approach provides actual data points for ESG reporting instead of model-based assumptions, which helps reduce carbon footprint and achieve sustainability goals.

The asset management maturity journey


Utility companies can view the evolution of asset management as a journey to a level of asset management excellence. The following figure shows the stages from a reactive to a proactive asset management culture, along with the various methods and approaches that companies might apply:

(Figure: the stages of the asset management maturity journey, from a reactive to a proactive asset management culture.)

In the holistic asset management view, a scalable platform offers functionalities to build capabilities along the way. Each step in the journey demands adopting new processes and ways of working, which dedicated best practice tools and optimization models support.

The enterprise asset management (EAM) system fundamentally becomes a preventive maintenance program in the early stages of the maturity journey, from “Innocence” through to “Understanding”. This transition drives down the cost of unplanned repairs.

To proceed to the next level of “Competence”, APM capabilities take the lead. The focus of the asset management organization shifts toward uptime and business value by preventing failures. This also prevents expensive machine downtime, production deferment and potential safety or environmental risks. Machine connectivity through Internet of Things (IoT) data exchange enables condition-based maintenance and health monitoring. Risk-based asset strategies align maintenance efforts to balance costs and risks.

Predictive maintenance applies machine learning models to predict imminent failures early in the potential failure curve, with sufficient warning time to allow for planned intervention. The final step at this stage is the optimization of the maintenance and replacement program based on asset criticality and available resources.
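
As a minimal sketch of that idea (assuming scikit-learn; the synthetic readings below stand in for real IoT condition-monitoring data, and the model is illustrative rather than a production recipe):

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    # Synthetic stand-in for historical sensor readings per asset:
    # columns = [temperature, vibration, load]; label 1 = failed within 30 days.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = ((X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500)) > 1.0).astype(int)

    model = GradientBoostingClassifier().fit(X, y)

    # Score a new reading: a high probability triggers a planned intervention
    # early in the potential failure curve instead of an unplanned repair.
    new_reading = np.array([[0.9, 1.6, -0.2]])
    print("failure probability:", round(model.predict_proba(new_reading)[0, 1], 2))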

APM and AIP combine in the “Excellence” stage, and predictive generative AI creates intelligent processes. At this stage, the asset management process becomes self-learning and prescriptive in making the best decision for overall business value.

New technology catalyzes the asset maturity journey: digital solutions connect the asset management systems, and smart connected tools improve quality of work and productivity. The introduction of (generative) AI models in the asset management domain has brought a full toolbox of new optimization tools. Gen AI use cases have been developed for each step of the journey to help companies develop more capabilities and become more efficient, safe, reliable and sustainable. As the maturity of the assets and asset managers grows, current and future grid assets generate more value.

Holistic asset management aligns with business goals, integrates operational domains of previously siloed disciplines, deploys digital innovative technology and enables excellence in asset management maturity. This approach allows utility companies to maximize their value and thrive as they manage through the energy transition.

Source: ibm.com

Friday, 19 April 2024

Using dig +trace to understand DNS resolution from start to finish


The dig command is a powerful tool for troubleshooting queries and responses received from the Domain Name Service (DNS). It is installed by default on many operating systems, including Linux® and Mac OS X. It can be installed on Microsoft Windows as part of Cygwin. 

One of the many things dig can do is perform recursive DNS resolution and display all of the steps that it took in your terminal. This is extremely useful not only for understanding how DNS works, but also for determining whether there is an issue somewhere within the resolution chain that causes resolution failures for your zones or domains.

First, let’s briefly review how a query receives a response in a typical recursive DNS resolution scenario:


  1. You as the DNS client (or stub resolver) query your recursive resolver for www.example.com. 
  2. Your recursive resolver queries the root nameserver for NS records for “com.” 
  3. The root nameserver refers your recursive resolver to the .com Top-Level Domain (TLD) authoritative nameserver. 
  4. Your recursive resolver queries the .com TLD authoritative server for NS records of “example.com.” 
  5. The .com TLD authoritative nameserver refers your recursive server to the authoritative servers for example.com. 
  6. Your recursive resolver queries the authoritative nameservers for example.com for the A record for “www.example.com” and receives 1.2.3.4 as the answer. 
  7. Your recursive resolver caches the answer for the duration of the time-to-live (TTL) specified on the record and returns it to you.

The above process basically looks like this:

(Figures: Steps 1 through 5 of the recursive resolution process, illustrated.)
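
Running dig +trace www.example.com automates exactly this walk: it starts at a root server and follows each referral itself rather than relying on your recursive resolver. For a rough picture of what happens under the hood, the Python sketch below (assuming the dnspython 2.x library) performs the same iterative walk for an A record; error handling and TCP fallback are omitted.

    import dns.message
    import dns.query
    import dns.rdatatype
    import dns.resolver

    def trace(name: str, server: str = "198.41.0.4") -> list:
        """Follow referrals from a root server (a.root-servers.net) to an authoritative answer."""
        while True:
            query = dns.message.make_query(name, dns.rdatatype.A)
            response = dns.query.udp(query, server, timeout=5)
            if response.answer:                  # an authoritative server answered
                return response.answer
            # Otherwise this is a referral: take a nameserver from the AUTHORITY section
            # and use its glue address from the ADDITIONAL section when one is provided.
            referred_ns = str(response.authority[0][0].target)
            glue = [str(rdata) for rrset in response.additional
                    if rrset.rdtype == dns.rdatatype.A for rdata in rrset]
            server = glue[0] if glue else dns.resolver.resolve(referred_ns, "A")[0].address
            print(f"referred to {referred_ns} ({server})")

    print(trace("www.example.com"))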

This process occurs every time you type a URL into your web browser or fire up your email client. This illustrates why DNS answer speed and accuracy are so important: if the answer is inaccurate, you may need to repeat this process several times; and if the speed with which you receive an answer is slow, then it will make everything you do online seem to take longer than it should.  

Driving both DNS answer speed and accuracy is at the core of the IBM NS1 Connect value proposition.

Source: ibm.com

Thursday, 18 April 2024

Understanding glue records and Dedicated DNS


Domain name system (DNS) resolution is an iterative process where a recursive resolver attempts to look up a domain name using a hierarchical resolution chain. First, the recursive resolver queries the root (.), which provides the nameservers for the top-level domain (TLD), for example, .com. Next, it queries the TLD nameservers, which provide the domain’s authoritative nameservers. Finally, the recursive resolver queries those authoritative nameservers.

In many cases, we see domains delegated to nameservers inside their own domain, for instance, “example.com.” is delegated to “ns01.example.com.” In these cases, we need glue records at the parent nameservers, usually the domain registrar, to continue the resolution chain.  

What is a glue record? 

Glue records are DNS records created at the domain’s registrar. These records provide a complete answer when a nameserver returns a referral to an authoritative nameserver for a domain. For example, the domain name “example.com” has nameservers “ns01.example.com” and “ns02.example.com”. To resolve the domain name, the DNS would query, in order: the root, the TLD nameservers and the authoritative nameservers.

When nameservers for a domain are within the domain itself, a circular reference is created. Having glue records in the parent zone avoids the circular reference and allows DNS resolution to occur.  

Glue records can be created at the TLD via the domain registrar or at the parent zone’s nameservers if a subdomain is being delegated away.  

When are glue records required?

Glue records are needed for any nameserver that is authoritative for itself. If a third party, such as a managed DNS provider, hosts the DNS for a zone, no glue records are needed.

IBM NS1 Connect Dedicated DNS nameservers require glue records 

IBM NS1 Connect requires that customers use a separate domain for their Dedicated DNS nameservers. As such, the nameservers within this domain require glue records. For example, glue records for a domain such as exampledns.net would be configured at the registrar (such as Google Domains) with the IP addresses of the nameservers.

Once the glue records have been added at the registrar, the Dedicated DNS domain should be delegated to the IBM NS1 Connect Managed nameservers and the Dedicated DNS nameservers. For most customers, there will be a total of 8 NS records in the domain’s delegation. 

What do glue records look like in the dig tool? 

Glue records appear in the ADDITIONAL SECTION of the response. To see a domain’s glue records using the dig tool, directly query a TLD nameserver for the domain’s NS records (for example, dig @a.gtld-servers.net example.com NS) and look for the nameservers’ A and AAAA records in the ADDITIONAL SECTION of the response.
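
The same check can be scripted. The sketch below (assuming the dnspython 2.x library) queries a .com TLD nameserver directly and prints the AUTHORITY and ADDITIONAL sections:

    import dns.message
    import dns.query
    import dns.rdatatype
    import dns.resolver

    # Look up the address of one of the .com TLD nameservers.
    tld_server = dns.resolver.resolve("a.gtld-servers.net", "A")[0].address

    # Ask the TLD server directly for the domain's NS records,
    # the same thing `dig @a.gtld-servers.net example.com NS` does.
    query = dns.message.make_query("example.com", dns.rdatatype.NS)
    response = dns.query.udp(query, tld_server, timeout=5)

    print("AUTHORITY section (the delegation):")
    for rrset in response.authority:
        print(rrset)

    print("ADDITIONAL section (glue records, when the nameservers sit inside the domain):")
    for rrset in response.additional:
        print(rrset)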

How do I know my glue records are correct? 


To verify that glue records are correctly listed at the TLD nameservers, directly query the TLD nameservers for the domain’s NS records using the dig tool as shown above. Compare the ADDITIONAL SECTION contents of the response to the expected values entered as NS records in IBM NS1 Connect. 

Source: ibm.com

Saturday, 13 April 2024

Merging top-down and bottom-up planning approaches


This blog series discusses the complex tasks energy utility companies face as they shift to holistic grid asset management to manage through the energy transition. The first post of this series addressed the challenges of the energy transition with holistic grid asset management. The second post in this series addressed the integrated asset management platform and data exchange that unite business disciplines in different domains in one network.

Breaking down traditional silos


Many utility asset management organizations work in silos. A holistic approach that combines the siloed processes and integrates various planning management systems provides optimization opportunities on three levels:

1. Asset portfolio (AIP) level: Optimum project execution schedule
2. Asset (APMO) level: Optimum maintenance and replacement timing
3. Spare part (MRO) level: Optimum spare parts holding level

The combined planning exercises produce budgets for capital expenditures (CapEx) and operating expenses (OpEx), and set minimum requirements for grid outages for the upcoming planning period, as shown in the following figure:

(Figure: the three optimization levels feeding the CapEx and OpEx budgets and the grid outage requirements for the planning period.)

Asset investments are typically part of a grid planning department, which considers expansions, load studies, new customers and long-term grid requirements. Asset investment planning (AIP) tools bring value in optimizing various, sometimes conflicting, value drivers. They combine new asset investments with existing asset replacements. However, they follow different approaches to risk management by using a risk matrix to assess risk at the start of an optimization cycle. This top-down process is effective for new assets since no information about the assets is available. For existing assets, a more accurate bottom-up risk approach is available from the continuous health monitoring process. This process calculates the health index and the effective age based on the asset’s specific degradation curves. Dynamic health monitoring provides up-to-date risk data and accurate replacement timing, as opposed to the static approach used for AIP. Combining the asset performance management and optimization (APMO) and AIP processes uses this enhanced estimation data to optimize in real time.

Maintenance and project planning take place in operations departments. The APMO process generates an optimized work schedule for maintenance tasks over a project period and calculates the optimum replacement moment for an existing asset at the end of its lifetime. The maintenance management and project planning systems load these tasks for execution by field service departments.

On the maintenance repair and overhaul (MRO) side, spare part optimization is linked to asset criticality. Failure mode and effect analysis (FMEA) defines maintenance strategies and associated spare holding strategies. The main parameters are optimizing for stock value, asset criticality and spare part ordering lead times.

Traditional planning processes focus on disparate planning cycles for new and existing assets in a top-down versus bottom-up asset planning approach. This approach leads to suboptimization. An integrated planning process breaks down the departmental silos with optimization engines at three levels. Optimized planning results in lower outages and system downtime, and it increases the efficient use of scarce resources and budget.

Source: ibm.com

Friday, 12 April 2024

IBM researchers to publish FHE challenges on the FHERMA platform


To foster innovation in fully homomorphic encryption (FHE), IBM researchers have begun publishing challenges on the FHERMA platform for FHE challenges launched in late 2023 by Fair Math and the OpenFHE community.

FHE: A new frontier in technology


Fully homomorphic encryption is a groundbreaking technology with immense potential. One of its notable applications lies in enhancing medical AI models. By enabling various research institutes to collaborate seamlessly in the training process, FHE opens doors to a new era of possibilities. The ability to process encrypted data without decryption marks a pivotal advancement, promising to revolutionize diverse fields.
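
In notation, the defining property of an FHE scheme with encryption Enc, decryption Dec and homomorphic ciphertext operations ⊕ and ⊗ is that Dec(Enc(a) ⊕ Enc(b)) = a + b and Dec(Enc(a) ⊗ Enc(b)) = a · b. A server can therefore evaluate additions and multiplications, and hence arbitrary computations built from them, without ever seeing a or b.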

IBM has been working to advance the domain of FHE for 15 years, since IBM Research scientist Craig Gentry introduced the first plausible fully homomorphic scheme in 2009. The “bootstrapping” mechanism he developed reduces the “noise” that accumulates in encrypted data, which made widespread commercial use of FHE possible.

Progress in FHE


FHE has experienced significant progress since the introduction of its first scheme. The transition from theoretical frameworks to practical implementations has been marked by countless issues that needed to be addressed. While there are already applications that use FHE, the community is constantly improving and innovating the algorithms to make FHE more popular and applicable to new domains.

Fostering innovation through challenges


The FHERMA platform was built to incentivize innovation in the FHE domain. Various challenges can be seen on the FHERMA site. The challenges are motivated by problems encountered by real-world machine learning and blockchain applications.

Solutions to challenges must be written using known cryptographic libraries such as OpenFHE. Developers can also use higher-level libraries such as IBM’s HElayers to speed up their development and easily write robust and generic code.

The best solutions to the various challenges will win cash prizes from Fair Math, alongside contributing to the FHE community. Winners will also be offered the opportunity to present their solutions in a special workshop currently being planned.

The goal of the challenges is to foster research, popularize FHE, and develop cryptographic primitives that are efficient, generic, and support different hyperparameters (for example, writing matrix multiplication that is efficient for matrices of dimensions 1000×1000 and 10×10). This aligns with IBM’s vision for privacy-preserving computation by using FHE.

Driving progress and adoption


Introducing and participating in challenges that are listed on the FHERMA site is an exciting and rewarding way to advance the extended adoption of FHE, while helping to move development and research in the domain forward. We hope you join us in this exciting endeavor on the FHERMA challenges platform.

Teams and individuals who successfully solve the challenges will receive cash prizes from Fair Math. More importantly, the innovative solutions to the published challenges will help move the FHE community forward—a longstanding goal for IBM.

Source: ibm.com

Thursday, 11 April 2024

Why CHROs are the key to unlocking the potential of AI for the workforce


It’s no longer a question of whether AI will transform business and the workforce, but how it will happen. A study by the IBM Institute for Business Value revealed that up to three-quarters of CEOs believe that competitive advantage will depend on who has the most advanced generative AI.

With so many leaders now embracing the technology for business transformation, some wonder which C-Suite leader will be in the driver’s seat to orchestrate and accelerate that change.

CHROs today are perfectly positioned to take the lead on both people skills and AI skills, ushering the workforce into the future. Here’s how top CHROs are already seizing the opportunity. 

Orchestrating the new human + AI workforce 


Today, businesses are no longer only focused on finding the human talent they need to execute their business strategy. They’re thinking more broadly about how to build, buy, borrow or “bot” the skills needed for the present and future.  

The CHRO’s primary challenge is to orchestrate the new human plus AI workforce. Top CHROs are already at work on this challenge, using their comprehensive understanding of the workforce and how to design roles and skills within an operating model to best leverage the strengths of both humans and AI.  

In the past, that meant analyzing the roles that the business needs to execute its strategy, breaking those roles down into their component skills and tasks and creating the skilling and hiring strategy to fill gaps. Going forward, that means assessing job descriptions, identifying the tasks best suited to technology and the tasks best suited to people and redesigning the roles and the work itself.  

Training the AI as well as the people 


As top CHROs partner with their C-Suite peers to reinvent roles and change how tasks get done with AI and automation, they are also thinking about the technology roadmap for skills. With the skills roadmap established, they can play a key role in building AI-powered solutions that fit the business’ needs.  

HR leaders have the deep expertise in training best practices that can inform not only how people are trained for skills, but how the AI solutions themselves are trained.  

To train a generative AI assistant to learn project management, for example, you need a strong set of unstructured data about the work and tasks required. HR leaders know the right steps to take around sourcing and evaluating content for training, collaborating with the functional subject matter experts for that area.  

That’s only the beginning. Going forward, business leaders will also need to consider how to validate, test and certify these AI skills.  

Imagine an AI solution trained to support accountants with key accounting tasks. How will businesses test and certify those skills and maintain compliance, as rigorously as is done for a human accountant getting an accounting license? What about certifications like CPP or Six Sigma? HR leaders have the experience and knowledge of leading practices around training, certification and more that businesses will need to answer these questions and truly implement this new operating model.  

Creating a culture focused on growth mindset and learning 


Successfully implementing technology depends on having the right operating model and talent to power it. Employees need to understand how to use the technology and buy in to adopting it. It is fundamentally a leadership and change journey, not a technology journey.  

Every organization will need to increase the overall technical acumen of their workforce and make sure that they have a basic understanding of AI so they can be both critical thinkers and users of the technology. Here, CHROs will lean into their expertise and play a critical role moving forward—up-skilling people, creating cultures of growth mindset and learning and driving sustained organizational change.  

For employees to get the most out of AI, they need to understand how to prompt it, evaluate its outputs and then refine and modify. For example, when you engage with a generative AI-powered assistant, you will get very different responses if you ask it to “describe it to an executive” versus “describe it to a fifth-grader.” Employees also need to be educated and empowered to ask the right questions about AI’s outputs and source data and analyze them for accuracy, bias and more.  

While we’re still in the early phases of the age of AI, leading CHROs have a pulse on the anticipated impact of these powerful technologies. Those who can seize the moment to build a workforce and skills strategy that makes the most of human talent plus responsibly trained AI will be poised to succeed.

Source: ibm.com

Tuesday, 9 April 2024

Product lifecycle management for data-driven organizations


In a world where every company is now a technology company, all enterprises must become well-versed in managing their digital products to remain competitive. In other words, they need a robust digital product lifecycle management (PLM) strategy. PLM delivers value by standardizing product-related processes, from ideation to product development to go-to-market to enhancements and maintenance. This ensures a modern customer experience. The key foundation of a strong PLM strategy is healthy and orderly product data, but data management is where enterprises struggle the most. To take advantage of new technologies such as AI for product innovation, it is crucial that enterprises have well-organized and managed data assets.

Gartner has estimated that 80% of organizations fail to scale digital businesses because of outdated governance processes. Data is an asset, but to provide value, it must be organized, standardized and governed. Enterprises must invest in data governance upfront, as it is challenging, time-consuming and computationally expensive to remedy vast amounts of unorganized and disparate data assets. In addition to providing data security, governance programs must focus on organizing data, identifying non-compliance and preventing data leaks or losses.  

In product-centric organizations, a lack of governance can lead to exacerbated downstream effects in two key scenarios:  


1. Acquisitions and mergers

Consider this fictional example: A company that sells three-wheeled cars has created a robust data model where it is easy to get to any piece of data and the format is understood across the business. This company is so successful that it acquired another company that also makes three-wheeled cars. The new company’s data model is completely different from the original company’s. Companies commonly ignore this issue and allow the two models to operate separately. Eventually, the enterprise will have woven a web of misaligned data requiring manual remediation.

2. Siloed business units

Now, imagine a company where the order management team owns order data and the sales team owns sales data. In addition, there is a downstream team that owns product transactional data. When each business unit or product team manages its own data, product data can overlap with another unit’s data, causing several issues, such as duplication, manual remediation, inconsistent pricing, unnecessary data storage and an inability to use data insights. It becomes increasingly difficult to get information in a timely fashion, and inaccuracies are bound to occur. Siloed business units hamper the leadership’s ability to make data-driven decisions. In a well-run enterprise, each team would connect its data across systems to enable unified product management and a data-informed business strategy.

How to thrive in today’s digital landscape


In order to thrive in today’s data-driven landscape, organizations must proactively implement PLM processes, embrace a unified data approach and fortify their data governance structures. These strategic initiatives not only mitigate risks but also serve as catalysts for unleashing the full potential of AI technologies. By prioritizing these solutions, organizations can equip themselves to harness data as the fuel for innovation and competitive advantage. In essence, PLM processes, a unified data approach and robust data governance emerge as the cornerstone of a forward-thinking strategy, empowering organizations to navigate the complexities of the AI-driven world with confidence and success.

Source: ibm.com

Friday, 5 April 2024

Accelerate hybrid cloud transformation through IBM Cloud for Financial Service Validation Program


The cloud represents a strategic tool to enable digital transformation for financial institutions


As banking and other regulated industries continue to shift toward a digital-first approach, financial entities are eager to use the benefits of digital disruption. Lots of innovation is happening, with new technologies emerging in areas such as data and AI, payments, cybersecurity and risk management, to name a few. Most of these new technologies are born in the cloud. Banks want to tap into these new innovations. This shift is a significant change in their business models, moving from a capital expenditure approach to an operational expenditure approach and allowing financial organizations to focus on their primary business. However, the transformation from traditional on-prem environments to a public cloud PaaS or SaaS model presents significant cybersecurity, risk and regulatory concerns that continue to impede progress.

Balancing innovation, compliance, risk and market dynamics is a challenge 


While many organizations recognize the vast pool of innovations that public cloud platforms offer, financially regulated clients remain accustomed to the level of control and visibility provided by on-prem environments. Despite the potential benefits, cybersecurity remains the primary concern with public cloud adoption. The average cost of a mega-breach is an astonishing $400-plus million, with misconfigured cloud environments as a leading attack vector. This leaves many organizations hesitant to make the transition, fearing they will lose the control and security they have with their on-prem environments. The banking industry’s continued shift toward a digital-first approach is encouraging. However, financial organizations must carefully consider the risks that are associated with public cloud adoption and ensure that they have the proper security measures in place before making the transition.

The traditional approach for banks to onboard ISV applications involves a review process, which consists of several key items like the following:

  • A third-party architecture review, where the ISV needs to have an architecture document describing how they are deploying into the cloud and how it is secure. 
  • A third-party risk management review, where the ISV needs to describe how it complies with the required controls.
  • A third-party investment review, where the ISV provides a bill of material showing what and how services are being used to meet compliance requirements, along with price points. 

The ISV is expected to be prepared for all these reviews and the overall onboarding lifecycle through this process takes more than 24 months today.

Why a FS Cloud and FS Validation Program? 


IBM has created the solution for this problem with its Financial Services Cloud offering and its ISV Financial Services validation program, which is designed to de-risk the partner ecosystem for clients. This helps accelerate continuous integration and continuous delivery on the cloud. The program ensures that the new innovations coming out of these ISVs are validated, tested and ready to be deployed in a secure and compliant manner. With IBM’s ISV validation program, banks can confidently adopt new innovative offerings on cloud and stay ahead in the innovation race.

Ensuring the success of a cloud transformation journey requires a combination of modern governance, a standard control framework and automation. Different industry frameworks are available to help secure workloads and establish a compliance posture. Continuous compliance that is aligned to an industry framework, informed by an industry coalition composed of representation from key banks worldwide and other compliance bodies, is essential. The IBM Cloud Framework for Financial Services is uniquely positioned for that, meeting all these requirements.

IBM Cloud for Financial Services® is a secure cloud platform that is designed to reduce risk for clients by providing a high level of visibility, control, regulatory compliance, and the best-of-breed security. It allows financial institutions to accelerate innovation, unlock new revenue opportunities, and reduce compliance costs by providing access to pre-validated partners and solutions that conform to financial services security and controls. The platform also offers risk management and compliance automation, continuous monitoring, and audit reporting capabilities, as well as on-demand visibility for clients, auditors, and regulators. 

Our mission is to help ISVs adapt to the cloud and SaaS models and prepare them to meet the security standards and compliance requirements necessary to do business with financial institutions on cloud. Our process brings the compliance and onboarding cycle time down to less than 6 months, a significant improvement. Through this process, we are creating an ecosystem of ISVs validated by IBM Cloud for Financial Services, providing customers with a trusted and reliable network of vendors.

Streamlined process and tooling


IBM® has created a well-defined process and a set of tools, technologies, and automation to assist ISVs as part of the validation program. We offer an integrated onboarding platform that ensures a smooth and uninterrupted experience. This platform serves as a centralized hub, guiding ISVs throughout the entire program, from initial engagements through the validation of final controls. The onboarding platform navigates the ISV through the following steps:

Orientation and education

The platform provides a catalog of self-paced courses that help you become familiar with the processes and tools that are used during the IBM Cloud for Financial Services onboarding and validation. The self-paced format allows you to learn at your own pace and on your own schedule. 

ISV controls analysis

The ISV Controls Analysis serves as an initial assessment of an organization’s security and risk posture, laying the groundwork for IBM to plan the necessary onboarding activities.

Architecture assessment

An architecture assessment evaluates the architecture of an ISV’s cloud environment. The assessment is designed to help ISVs identify gaps in their cloud architecture and recommends best practices to enhance the compliance and governance of their cloud environment.

Deployment planning

This step covers deploying the ISV application in a secure environment and managing its workloads on IBM Cloud®. It is designed to meet the security and compliance requirements of organizations, providing a comprehensive set of security controls and services to help protect customer data and applications and to meet the applicable secure architecture requirements.

Security assessment

The security assessment is the process of evaluating the security controls of the proposed business processes against a set of enhanced, industry-specific control requirements in the IBM Cloud for Financial Services Framework. The process helps to identify vulnerabilities, threats, and risks that might compromise the security of a system and allows for the implementation of appropriate security measures to address those issues.

Professional guidance by IBM and KPMG teams


The IBM team provides guidance and assets to help accelerate the onboarding process in a shared trusted model. We also assist ISVs with deploying and testing their applications on the IBM Cloud for Financial Services approved architecture. We work with ISVs throughout the controls assessment process to help their applications achieve IBM Cloud for Financial Services validated status. Our goal is to ensure that ISVs meet our rigorous standards and comply with industry regulations. We are also partnering with KPMG, an industry leader in the security and regulatory compliance domain, to add value for ISVs and clients.

Time to revenue and cost savings


This process enables the ISV to be ready and go to market in less than eight weeks, reducing the overall time to market and the overall cost of onboarding for end clients.

Benefits of partnering with IBM


As an ISV, you gain access to our extensive base of financial institution clients. Our cloud is trusted by 92 of the top 100 banks, giving you a significant advantage in the industry.

Co-create with IBM’s team of expert architects and developers to take your solutions to the next level with leading-edge capabilities.

Partnering with us means you can elevate your Go-To-Market strategy through co-selling. We can help you tap into our vast sales channels, incentive programs, client relationships, and industry expertise. 

You have access to our technical services and cloud credits as an investment in your innovation.

Our marketplaces, like the IBM Cloud® Catalog and Red Hat Marketplace, offer you an excellent opportunity to sell your products and services to a wider audience. 

Finally, our marketing support and direct investments in your marketing can generate demand and help you reach your target audience effectively.

Source: ibm.com

Thursday, 4 April 2024

The winning combination for real-time insights: Messaging and event-driven architecture

The winning combination for real-time insights: Messaging and event-driven architecture

In today’s fast-paced digital economy, businesses are fighting to stay ahead and devise new ways to streamline operations, enhance responsiveness and work with real-time insights. We are now in an era defined by being proactive, rather than reactive. In order to stay ahead, businesses need to enable proactive decision making—and this stems from building an IT infrastructure that provides the foundation for the availability of real-time data.

A core part of the solution comes from messaging infrastructure, and many businesses already have a strong foundation in place. Among others, IBM MQ has been recognized as the top messaging broker because of its simplicity of use, flexibility, scalability, security and many other reasons. Message queue technology is essential for businesses to stay afloat, but building out an event-driven architecture fueled by messaging might just be your x-factor.

Messaging that can be relied on


IBM MQ facilitates the reliable exchange of messages between applications and systems, making sure that critical data is delivered promptly and exactly once to protect against duplicate or lost data. For 30 years, IBM MQ users have realized the immense value of investing in this secure messaging technology—but what if it could go further?
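As a concrete illustration, the sketch below puts a message onto a queue and reads it back using the community pymqi Python client for IBM MQ. The queue manager, channel, connection details and queue name are placeholder values for a local development setup, not a prescribed configuration.

# A minimal sketch of point-to-point messaging with IBM MQ via the
# community pymqi client. All connection details are placeholders.
import pymqi

queue_manager = "QM1"
channel = "DEV.APP.SVRCONN"
conn_info = "localhost(1414)"
queue_name = "DEV.QUEUE.1"

# Connect to the queue manager over a client channel.
qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, queue_name)

# Put an order event onto the queue; MQ holds it reliably until a
# consuming application retrieves it.
queue.put(b'{"orderId": "1234", "status": "CREATED"}')

# Retrieve the message (in practice this would be a separate consumer).
message = queue.get()
print(message)

queue.close()
qmgr.disconnect()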

IBM MQ boasts the ability to integrate seamlessly with other processing tools through its connectors (including Kafka connectors), APIs and standard messaging protocols. Essentially, it sets the stage for building the strong, real-time, fault-tolerant technology stack businesses once could only dream of.

IBM MQ is an industry leader for a reason; there’s no doubt about that. Investing in future-proof solutions is critical for businesses trying to thrive in such a dynamic environment. IBM MQ’s 30 years of success and reliability across a plethora of use cases should not be ignored, especially given how it has continuously reinvented itself and proven its adaptability as new technologies have emerged, with flexible deployment options (available on-prem, on cloud and hybrid). However, IBM MQ and Apache Kafka can sometimes be viewed as competitors, taking each other on in terms of speed, availability, cost and skills. Will picking one over the other provide the optimum solution for all your business operations?

MQ and Apache Kafka: Teammates


Simply put, they are different technologies with different strengths, albeit often perceived to be quite similar. Among other differences, MQ focuses on precise and asynchronous instant exchange of data with directed interactions, while Apache Kafka focuses on high throughput, high volume and data processing in sequence to reduce latency. So, if MQ is focused on directed interactions and Kafka is focused on gaining insights, what might the possibilities be if you used them together?

We know IBM MQ excels in ensuring precision and reliability in message delivery, making it perfect for critical workloads. The focus is on trusted delivery, regardless of the situation and provision of instantaneous responses. If combined with Apache Kafka’s high availability and streamlined data collection—enabling applications or other processing tools to spot patterns and trends—businesses would immediately be able to harness the MQ data along with other streams of events from Kafka clusters to develop real-time intelligent solutions.
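One way to picture this combination is a small bridge that takes messages arriving on an MQ queue and republishes them to a Kafka topic, where they can be joined with other event streams. In practice this pattern is usually handled by a Kafka source connector for IBM MQ rather than hand-written code; the sketch below, using pymqi and kafka-python with placeholder queue, topic and connection names, is only meant to illustrate the flow.

# Illustrative bridge: consume from an IBM MQ queue and republish to a
# Kafka topic so the data can be processed alongside other event streams.
import pymqi
from kafka import KafkaProducer

qmgr = pymqi.connect("QM1", "DEV.APP.SVRCONN", "localhost(1414)")
queue = pymqi.Queue(qmgr, "ORDERS.QUEUE")
producer = KafkaProducer(bootstrap_servers="localhost:9092")

# Wait up to 5 seconds for each message before checking again.
gmo = pymqi.GMO(
    Options=pymqi.CMQC.MQGMO_WAIT | pymqi.CMQC.MQGMO_FAIL_IF_QUIESCING,
    WaitInterval=5000,
)

try:
    while True:
        md = pymqi.MD()
        try:
            message = queue.get(None, md, gmo)
        except pymqi.MQMIError as err:
            if err.reason == pymqi.CMQC.MQRC_NO_MSG_AVAILABLE:
                continue  # nothing to forward yet
            raise
        # Publish the MQ payload onto a Kafka topic for stream processing.
        producer.send("orders", value=message)
        producer.flush()
finally:
    queue.close()
    qmgr.disconnect()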

The more intelligence, the better


Real-time responsiveness and intelligence should be injected as much as possible into every aspect of your technology stacks. With increasing amounts of data inundating your business operations, you need a streaming platform that helps you monitor the data and act on it before it’s too late. The core of building this real-time responsiveness lies in messaging, but its value can be expanded through event-driven architectures.

Consider a customer-centric business responding to thousands of orders and customer events coming through every minute. With a strong messaging infrastructure that prevents messages from falling through the cracks, your teams can build customer confidence through message resilience—no orders get lost and you can easily find them in your queue manager. But, with event-driven technologies, you can add an extra layer of stream processing to detect trends and opportunities, increase your customer retention, or adapt to dynamic pricing.
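The kind of stream processing meant here can be pictured with a small, hand-rolled example: counting order events per customer in one-minute windows and flagging unusual spikes. Event-processing tooling such as IBM Event Automation surfaces this sort of logic through a low-code interface instead; the Python sketch below, using kafka-python with an assumed topic name, message shape and threshold, is only an illustration of the underlying idea.

# Illustrative stream processing over order events: count orders per
# customer in one-minute windows and flag unusual spikes.
import json
import time
from collections import Counter

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

WINDOW_SECONDS = 60
SPIKE_THRESHOLD = 10  # orders per customer per window

window_start = time.time()
counts = Counter()

for event in consumer:
    counts[event.value["customerId"]] += 1

    # At the end of each window, surface customers with unusually many orders.
    if time.time() - window_start >= WINDOW_SECONDS:
        for customer, n in counts.items():
            if n >= SPIKE_THRESHOLD:
                print(f"Possible trend: {customer} placed {n} orders in the last minute")
        counts.clear()
        window_start = time.time()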

Event-driven technologies have been emerging across the digital landscape, with Apache Kafka as an industry leader in event streaming. However, IBM Event Automation’s advanced capabilities leverage the power of Apache Kafka and help enterprises take their event-driven architectures to another level through event processing and event endpoint management capabilities. It takes the firehose of raw data streams coming from the directed interactions of all your applications and Kafka connectors or Kafka topics, and allows analysts and wider teams to derive insights without needing to write Java, SQL, or other code. In other words, it provides the necessary context for your business events.

With a low-code, intuitive user interface and functionality, businesses can empower less technical users to fuel their work with real-time insights. This significantly lowers the skills barrier by enabling business technologists to use the power of events without having to go to advanced developer teams first and have them pull information from data storage. Consequently, users can see real-time messages and work with them intelligently, noticing order patterns and perhaps even sending out promotional offers, among many other possibilities.

At the same time, event endpoint management capabilities help IT administrators control who can access data by generating unique authentication credentials for every user. They can enable self-service access so users can keep up with relevant events, but they can also add layers of controls to protect sensitive information. Uniquely, this gives teams the opportunity to explore the possibilities of events while also controlling access to sensitive information.

Source: ibm.com