Thursday, 29 February 2024

The difference between ALIAS and CNAME and when to use them

The chief difference between a CNAME record and an ALIAS record is not in the result (both point to another DNS record) but in how they resolve the target DNS record when queried. Because of this difference, one is safe to use at the zone apex (that is, a naked domain such as example.com), while the other is not.

Let’s start with the CNAME record type. It simply points a DNS name, like www.example.com, at another DNS name, like lb.example.net. This tells the resolver to look up the answer at the target name for all DNS record types (for example, A, AAAA, MX, NS, SOA, and others). This introduces a performance penalty, since at least one additional DNS lookup must be performed to resolve the target (lb.example.net). If neither record has ever been queried before by your recursive resolver, resolution is even more expensive timewise, as the full DNS hierarchy may be traversed for both records:

  1. You as the DNS client (or stub resolver) query your recursive resolver for www.example.com.
  2. Your recursive resolver queries the root name server for www.example.com.
  3. The root name server refers your recursive resolver to the .com Top-Level Domain (TLD) authoritative server.
  4. Your recursive resolver queries the .com TLD authoritative server for www.example.com.
  5. The .com TLD authoritative server refers your recursive server to the authoritative servers for example.com.
  6. Your recursive resolver queries the authoritative servers for www.example.com and receives lb.example.net as the answer.
  7. Your recursive resolver caches the answer and returns it to you.
  8. You now issue a second query to your recursive resolver for lb.example.net.
  9. Your recursive resolver queries the root name server for lb.example.net.
  10. The root name server refers your recursive resolver to the .net Top-Level Domain (TLD) authoritative server.
  11. Your recursive resolver queries the .net TLD authoritative server for lb.example.net.
  12. The .net TLD authoritative server refers your recursive server to the authoritative servers for example.net.
  13. Your recursive resolver queries the authoritative servers for lb.example.net and receives an IP address as the answer.
  14. Your recursive resolver caches the answer and returns it to you.

Each of these steps consumes at least several milliseconds, often more, depending on network conditions. The delays add up to a considerable wait for the final, actionable answer: an IP address.
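
You can observe this chain yourself. Here is a minimal sketch using the dnspython library (an assumption: install it with pip install dnspython); the record names are the illustrative ones from this article, so substitute a CNAME you control.

    import dns.resolver

    # Ask for the A record of a name that is a CNAME. The resolver follows
    # the chain, but on a cold cache each hop is a separate set of lookups.
    answer = dns.resolver.resolve("www.example.com", "A")

    print("queried:", answer.qname)                # www.example.com.
    print("resolved via:", answer.canonical_name)  # end of the CNAME chain, e.g. lb.example.net.
    for record in answer:
        print("final A record:", record.address)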

In the case of an ALIAS record, steps one through five proceed exactly as with the CNAME, except that at step six the authoritative server for example.com resolves lb.example.net on your behalf (performing the equivalent of steps nine through thirteen itself) and returns the final answer directly as both an IPv4 and IPv6 address, eliminating your second round of lookups. This offers two advantages and one significant drawback:

Advantages


Faster final answer resolution speed

In most cases, the authoritative servers for example.com will have the answer cached and thus can return the answer very quickly.

Usable at the zone apex

Since an ALIAS record returns an answer comprising one or more IP addresses (A and AAAA records), it can be used anywhere an A or AAAA record can be used, including the zone apex. This makes it more flexible than a CNAME, which cannot be used at the apex. That flexibility matters when your site is hosted on one of the popular CDNs that require CNAME-style pointing and you want users to reach it via the naked domain, such as example.com.

Disadvantages


Geotargeting information is lost

Since it is the authoritative server for example.com that issues the queries for lb.example.net, any intelligent routing functionality on the lb.example.net record will act on the location of the authoritative server, not on your location. The EDNS0 edns-client-subnet option does not apply here. This means you may be mis-routed: for example, if you are in New York and the authoritative server for example.com is in California, then lb.example.net will believe you to be in California and will return an answer that is distinctly sub-optimal for you in New York. However, if you are using a DNS provider with worldwide points of presence, the authoritative DNS server will likely be located in your region, mitigating this issue.

One important thing to note is that NS1 collapses CNAME records, provided that the entire chain falls within the NS1 system; that is, NS1’s nameservers are authoritative for both the CNAME and the target record. Collapsing simply means that the NS1 nameserver will return the full chain of records, from CNAME to final answer, in a single response. This eliminates all the additional lookup steps and allows you to use CNAME records, even in a nested configuration, without any performance penalty.

And even better, NS1 supports a unique record type called a Linked Record. This is basically a symbolic link within our platform that acts as an ALIAS record might, except with sub-microsecond resolution speed. To use a Linked Record, simply create the target record as you usually would (it can be of any type) and then create a second record to point to it and select the Linked Record option. Note that Linked Records can cross domain (zone) boundaries and even account boundaries within NS1 and offer a powerful way to organize and optimize your DNS record structure.

CNAME, ALIAS and Linked Record Reference Chart


                        CNAME                                 ALIAS    Linked Record
Use at Apex?            No                                    Yes      Yes (only to other NS1 zones)
Relative Speed (TTFB)   Fast                                  Faster   Faster
Collapses Responses     Yes (NS1 Connect exclusive feature)   Yes      Yes

Source: ibm.com

Tuesday, 27 February 2024

6 benefits of data lineage for financial services

The financial services industry has been in the process of modernizing its data governance for more than a decade. But as we inch closer to a global economic downturn, the need for top-notch governance has become increasingly urgent. How can banks, credit unions, and financial advisors keep up with demanding regulations while battling restricted budgets and higher employee turnover?

The answer is data lineage. We’ve compiled six key reasons why financial organizations are turning to lineage platforms like Manta to get control of their data.

1. Automated impact analysis


In business, every decision contributes to the bottom line. That’s why impact analysis is crucial—it predicts the consequences of a decision. How will one decision affect customers? Stakeholders? Sales?

Data lineage helps during these investigations. Because lineage creates an environment where reports and data can be trusted, teams can make more informed decisions. Data lineage provides that reliability—and more.

One often-overlooked area of impact analysis is IT resilience. This blind spot became apparent in March of 2021 when CNA Financial was hit by a ransomware attack that caused widespread network disruption. The company’s email was hacked, consumers panicked, and CNA Financial was forced to pay a record-breaking $40 million in ransom. This is where lineage-supported impact analysis is needed. If you experience a threat, you will want to be prepared to combat it, and know exactly how much of your business will be affected.

IT resilience is also threatened by natural disasters, user error, infrastructure failure, cloud transitions, and more. In fact, 76% of organizations experienced an incident during the past two years that required an IT disaster-recovery plan.

Most organizations struggle with impact analysis because it requires significant resources when done manually. But with automated lineage from Manta, financial organizations have seen as much as a 40% increase in engineering teams’ productivity.
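
To make lineage-based impact analysis concrete, here is a small, hypothetical sketch using Python and the networkx library. The table and report names are invented for illustration; a platform like Manta builds this graph automatically by scanning your systems.

    import networkx as nx

    # A tiny lineage graph: each edge points from a data source to a consumer.
    lineage = nx.DiGraph()
    lineage.add_edges_from([
        ("crm.accounts", "staging.accounts"),
        ("staging.accounts", "warehouse.dim_customer"),
        ("warehouse.dim_customer", "reports.quarterly_revenue"),
        ("warehouse.dim_customer", "reports.churn_dashboard"),
    ])

    # Impact analysis: everything downstream of a planned change.
    affected = nx.descendants(lineage, "crm.accounts")
    print("Changing crm.accounts affects:", sorted(affected))

    # Root-cause analysis is the same walk in reverse.
    upstream = nx.ancestors(lineage, "reports.churn_dashboard")
    print("churn_dashboard depends on:", sorted(upstream))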

2. Increased data pipeline observability


As discussed above, there are countless threats to your organization’s bottom line. Whether it is a successful ransomware attack or a poorly planned cloud migration, catching the problem before it can wreak havoc is always less expensive.

That’s why data pipeline observability is so important. It not only protects your organization but also your customers who trust you with their money.

Data lineage expands the scope of your data observability to include data processing infrastructure or data pipelines, in addition to the data itself. With this expanded observability, incidents can be prevented in the design phase or identified in the implementation and testing phase to reduce maintenance costs and achieve higher productivity.

Manta customers who have created complete lineage have been able to trace data-related issues back to the source 90% faster compared to their previous manual approach. This means the teams responsible for particular systems can fix any issue in a matter of minutes, according to Manta research.

3. Regulatory compliance


The financial space is highly regulated. Institutions must comply with regulations like Basel III, SOC 2, FACT, BSA/AML and CECL.

All of these regulations require accurate tracking of data. Your organization must be able to answer:

  • Where does it come from?
  • How did it get there?
  • Are we capable of proving it with up-to-date evidence whenever necessary?
  • Do we need weeks or months to complete a report?
  • Is that report even entirely reliable?

Data lineage helps you answer these questions by creating highly detailed visualizations of your data flows. You can use these reports to accurately track and report your data to ensure regulatory compliance.

4. Efficient cloud migrations


McKinsey predicts that $8 out of every $10 for IT hosting will go toward the cloud by 2024. In the financial space, 40% of banks and 41% of credit unions have already deployed cloud technologies.

However, if you have ever been involved in the migration of a data system, you know how complex the process is. Approximately $100 billion of cloud funding is expected to be wasted over the next three years—and most enterprises cite the costs around migration as a major inhibitor to adopting the cloud. The process is so complex (and expensive) because every system consists of thousands or millions of interconnected parts, and it is impossible to migrate everything in a single step.

Dividing the system into smaller chunks of objects (reports, tables, workflows, etc.) can make it more manageable, but poses another challenge—how to migrate one part without breaking another. How do you know what pieces can be grouped to minimize the number of external dependencies?

With data lineage, every object in the migrated system is mapped and dependencies are documented. Manta customers have used data lineage to complete their migration projects 40% faster with 30% fewer resources.

5. Improved workflow & IT retention


Data engineers, developers, and data scientists continue to be fast-growing and hard-to-fill roles in tech. The shortage of data engineering talent has ballooned from a problem to a crisis, made worse by the increasing complexity of data systems. The last thing you want is to continually overstretch your valuable data engineers with routine, manual (and frustrating) tasks like chasing data incidents, assessing the impacts of planned changes, or answering the same questions about the origins of data records again and again.

Data lineage can help to automate routine tasks and enable self-service wherever possible, allowing data scientists and other stakeholders to retrieve up-to-date lineage and data origin information on their own, whenever they need it. A detailed data lineage map also enables faster onboarding of data engineers to integrate new or less-experienced engineers into the role without impacting the stability and reliability of the data environment.

6. Trust and data governance


Data governance isn’t new, especially in the financial world. The Basel Committee released BCBS 239 as far back as 2013. The regulation was meant to strengthen banks’ risk-related data-aggregation and reporting capabilities—enhancing trust in data.

Report developers, data scientists, and data citizens need data they can trust for accurate, timely, and confident decision-making. But in today’s complex data environment, you’re dealing with dispersed servers and infrastructure, resulting in disparate sources of data and countless data dependencies. You need a complete overview of all your data sources to see how data moves through your organization, to understand all touchpoints, and to see how they interact with one another. You can only completely trust your data when you have a complete understanding of it.

Data lineage provides a comprehensive overview of all your data flows, sources, transformations, and dependencies. You’ll ensure accurate reporting, see how crucial calculations were derived, and gain confidence in your data management framework and strategy.

Why Manta is the right fit for data lineage in financial services


Manta has helped dozens of customers in the financial space realize the benefits of data lineage. We bring intelligence to metadata management by providing an automated solution that helps you drive productivity, gain trust in your data, and accelerate digital transformation.

The Manta platform includes unique features to get the most value out of your lineage, with more than 40 out-of-the-box, fully automated scanners. In addition, Manta works alongside the most popular data catalogs; our platform integrates with catalogs like Collibra, Informatica, Alation and more.

Don’t wait. Realize the benefits of automated data lineage today.

Source: ibm.com

Thursday, 22 February 2024

6 ways to elevate the Salesforce experience for your users

Customers and partners that interact with your business, as well as the employees who engage them, all expect a modern, digital experience. According to a Salesforce report, nearly 90% of buyers say the experience a company provides matters as much as its products or services. Whether using Experience Cloud, Sales Cloud, or Service Cloud, your Salesforce user experience should be seamless, personalized and hyper-relevant, reflecting all the right context behind every interaction.

At the same time, Salesforce is a big investment, and you need to show return on that investment as quickly as possible. Ensuring maximum user adoption and proficiency is key. The more useful and relevant the experience is, the more effective users will be on the platform—and the more frequently they will return to it.

Here are six ways you can elevate your Salesforce experience for customers, partners and employees.

1. Continuously inform and engage your users.


Keep users abreast of everything they need to know about your business, and share valuable, engaging content related to their needs and interests. Deliver timely information and critical alerts through tailored announcements. Keep your audience informed and engaged with virtual and in-person events and targeted news, blogs or other articles. Manage and surface all of this within Salesforce to minimize context switching and to keep users coming back to the platform.

2. Personalize the user experience for hyper-relevance.


Infuse context and personalized content to enrich the entire experience and make it more relevant to individual customers. Don’t make employees struggle with out-of-the-box search and list views; dynamically present what they need in the flow of work, so they don’t have to leave the current task to find it. Whether it is location mapping, embedded video, targeted news and events, assigned learning, or recommended products and knowledge articles, strive to give users the information they need when they need it.

3. Escape the confines of the typical Salesforce look and feel.


Break away from limiting, out-of-the-box layouts, views, and UI components to give users the beautiful, modern experience they expect. Follow current UX design principles and ensure that every touchpoint represents your unique branding look and feel, rather than just looking like any other Salesforce implementation.

4. Accelerate platform adoption and mastery.


Develop a plan to thoroughly onboard users and get them proficient with the platform as quickly as possible to start realizing value. Streamline and automate the onboarding process. Gather data to drive users to the site or platform, personalize the experience, and equip them with the knowledge and resources they need for success. Then, go deeper and give your employees, partners and customers an immersive digital learning experience tailored to their specific needs. A highly skilled ecosystem is a loyal and effective one, and educated customers are advocates for the brand.

5. Enable users to serve themselves and each other.


Give your customers, partners and employees the ability to serve themselves 24/7, whether researching products, making purchases, managing accounts or troubleshooting and solving issues. This means making your product information, knowledge articles and other content easily accessible, searchable and filterable. Deflect cases by giving customers access to the same content your service employees use via the knowledge base or a chatbot.

6. Empower your users to be your advocates.


An effective way to get your brand and messaging in front of as many potential customers as possible is to give your users ways to advocate for you. Organically expand the reach and influence of your brand by enabling users to share, contribute to and interact with your content. Enable partners and employees to contribute blogs and articles, empower customers to share your content in their social networks, and enable users to rate and review products, services and other records. Use this active user base to crowdsource the best ideas for improving your business and your Salesforce implementation.

Achieve an elevated experience with IBM Accelerators for Salesforce


You can achieve this elevated experience with IBM Accelerators for Salesforce. Its library of pre-built components can be used to quickly implement dozens of common use cases in Salesforce with clicks, not code. You can drag, drop, configure and customize components to create engaging, hyper-relevant experiences for your employees, partners and customers on Sales Cloud, Service Cloud, and Experience Cloud. Accelerators like Announcements, Experience Components, News, Ideas, Learning Adventure, Onboarding, and many more enable you to create a highly relevant and personalized experience.

IBM developed these accelerators with the expertise we gained through thousands of successful IBM Salesforce Services engagements. Now, these same products are available to purchase and use in your projects! Unleash the power of our pre-built components to reduce customization efforts, empower administrators and speed the ROI of your Salesforce implementation.

Source: ibm.com

Tuesday, 20 February 2024

Reducing defects and downtime with AI-enabled automated inspections

A large multinational automobile manufacturer, responsible for producing millions of vehicles annually, engaged IBM to streamline their manufacturing processes with seamless, automated inspections driven by real-time data and artificial intelligence (AI).

As an automobile manufacturer, our client has an inherent duty to provide high-quality products. Ideally, they need to discover and fix any defects well before the automobile reaches the consumer. Such defects are often expensive to fix, difficult to identify and pose significant risks to customer satisfaction.

Quality control and early defect detection are paramount to uphold standards, enhance operational efficiency, reduce costs and deliver vehicles that meet or exceed customer expectations while safeguarding the reputation of the manufacturer.

How IBM helped the client better detect and correct defects during manufacturing


IBM worked with the client’s technical experts to deploy IBM Inspection Suite solutions to help them reduce defects and downtime while enabling quick action and issue resolution. The solutions deployed include fixed-mounted inspections (IBM Maximo Visual Inspection Mobile) and handheld inspections (IBM Inspector Portable). Hands-free wearable inspections (IBM Inspector Wearable) were also made available for situations that required a head-mounted display.

While computer vision for quality has existed in more primitive states for the last 30 years, the lightweight and portable nature of IBM’s solution, which is based on a standard iPhone and uses readily available hardware, really got our client’s attention. The client loved the fact that the solution can be used anywhere, at any time, by any of their employees—even while objects are in motion.

Scaling to 30 million inspections for an immediate and significant reduction in defects


The IBM Inspection Suite improved the client’s quality inspection process without requiring coding. The client found the system to be simple to train and deploy, without the need for data scientists. The system learned quickly from images of acceptable and defective work products, which enabled the solution to be up and running within a matter of weeks. The implementation costs were also lower than those of viable alternatives.

The ability to deliver AI-enabled automation by using an intuitive process in their plants allowed this client to scale this user-friendly technology rapidly across numerous other facilities where it aided in over 30 million inspections. The customer almost immediately realized measurable success due to the significant reduction in defects.

In a vote by the leaders of the client’s technical community, the client awarded its annual IT Innovation award to IBM for the technology they believed delivered the greatest value-driving innovation to their company. In presenting the award, the client’s executives declared that a discussion with IBM about transformation led to a focus on improving manufacturing quality with AI automation.

The Inspection Suite supported the client’s quality initiatives with in-station process control and quality remediation at the point of assembly or installation. The solution also provided continuous process improvement that is helping the client lower repair and warranty costs, while improving their customer satisfaction.

Transparency and trust in AI


By bringing the power of IBM’s deep AI capabilities, deployable on cost-effective edge infrastructure, across the client’s plants, IBM Inspection Suite enabled the client to deliver higher quality vehicles to their customers. The client is now expanding to additional plants and use cases thanks to their collaboration and innovation with IBM.

All the team members at IBM were honored that this client recognized them for their business and technical achievements. We believe that this recognition reflects the IBM values of client dedication and innovation that matters. It is a direct acknowledgment of the value IBM Inspection Suite provides to clients.

IBM’s mission is to harness the power of data and AI to drive real-time, predictive business insights that help clients make intelligent decisions.

Source: ibm.com

Saturday, 17 February 2024

Unveiling the transformative AI technology behind watsonx Orders

You’re headed to your favorite drive-thru to grab fries and a cheeseburger. It’s a simple order and as you pull in you notice there isn’t much of a line. What could possibly go wrong? Plenty.

The restaurant is near a busy freeway with roaring traffic noise and airplanes fly low overhead as they approach the nearby airport. It’s windy. The stereo is blasting in the car behind you and the customer in the next lane is trying to order at the same time as you. The cacophony would challenge even the most experienced human order taker.

With IBM® watsonx Orders, we have created an AI-powered voice agent to take drive-thru orders without human intervention. The product uses bleeding-edge technology to isolate and understand the human voice in noisy conditions while simultaneously supporting a natural, free-flowing conversation between the customer placing the order and the voice agent.

Watsonx Orders understands speech and delivers orders


IBM watsonx Orders begins the process when it detects a vehicle pulling up to the speaker post. It greets customers and asks what they’d like to order. It then processes the incoming audio and isolates the human voice. From that, it detects the order and the items, then shows the customer what it heard on the digital menu board. If the customer says everything looks right, watsonx Orders sends the order to the point of sale and the kitchen. Finally, the kitchen prepares the food. The full ordering process is shown in the figure below:

[Figure: the full watsonx Orders ordering process, from vehicle detection through order confirmation to the kitchen]

There are three parts to understanding a customer order. The first part is isolating the human voice and ignoring conflicting environmental sounds. The second part is then understanding speech, including the complexity of accents, colloquialisms, emotions and misstatements. Finally, the third part is translating speech data into an action that reflects customer intent.

Isolating the human voice


When you call your bank or utilities company, a voice agent chatbot probably answers the call first to ask why you’re calling. That chatbot is expecting relatively quiet audio from a phone with little to no background noise.

In the drive-thru, there will always be background noise. No matter how good the audio hardware is, human voices can be drowned out by loud noises, such as a passing train horn.

As watsonx Orders captures audio in real time, it uses machine-learning techniques to perform digital noise and echo cancellation. It ignores noises from wind, rain, highway traffic and airports. Other challenges include unexpected background noises and cross-talk, where other people are talking while the customer places an order. Watsonx Orders uses advanced techniques to minimize these disruptions.

Understanding speech


Most voice chatbots began as text chatbots. Traditional voice agents first turn spoken words into written text, then they analyze the written sentence to figure out what the speaker wants.

This is computationally slow and wasteful. Instead of first trying to transcribe sounds into words and sentences, watsonx Orders turns speech into phonemes (the smallest units of sound in speech that convey a distinct meaning). For example, when you say “shake,” watsonx Orders parses that word into “sh,” “ay,” and hard “k.” Converting speech into phonemes, instead of full English text, also increases accuracy over different accents and actively supports a real-time conversation flow by reducing intra-dialog latency.
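
To see what a phoneme decomposition looks like (as an illustration of the concept, not of how watsonx Orders computes it), here is a short sketch using the CMU Pronouncing Dictionary via the NLTK library; it assumes nltk is installed and downloads the cmudict corpus on first run.

    import nltk

    # One-time download of the CMU Pronouncing Dictionary corpus.
    nltk.download("cmudict", quiet=True)
    from nltk.corpus import cmudict

    pronunciations = cmudict.dict()

    # "shake" decomposes into three phonemes: SH, EY (the "ay" sound) and K.
    for phones in pronunciations["shake"]:
        print(phones)  # ['SH', 'EY1', 'K'] -- the digit marks vowel stress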

Translating understanding into action


Next, watsonx Orders identifies intent, such as “I want” or “cancel that.” It then identifies the items that pertain to the commands, like “cheeseburger” or “apple pie.”

There are several machine learning techniques for intent recognition. The latest technique uses foundation and large language models, which theoretically can understand any question and respond with an appropriate answer. This is too slow and computationally expensive for hardware-constrained use cases. While it might be impressive for a drive-thru voice agent to answer, “Why is the sky blue?”, it would slow the drive-thru, frustrating the people in line and decreasing revenue.

Watsonx Orders uses a highly specific model that is optimized to understand the hundreds of millions of ways that you can order a cheeseburger, such as “No onions, light on the special sauce, or extra tomatoes.” The model also allows customers to modify the menu mid-order: “Actually, no tomatoes on that burger.”
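
As a toy illustration of intent and item recognition, here is a deliberately simplified, rule-based stand-in for the specialized model described above; the intents, menu items and modifiers are invented for the example.

    INTENTS = {"add": ["i want", "i'll have", "give me"],
               "remove": ["cancel", "actually, no", "take off"]}
    MENU_ITEMS = ["cheeseburger", "apple pie", "fries", "shake"]
    MODIFIERS = ["no onions", "extra tomatoes", "light on the special sauce"]

    def parse_utterance(text):
        """Map a customer utterance to an intent, menu items and modifiers."""
        text = text.lower()
        intent = next((name for name, phrases in INTENTS.items()
                       if any(p in text for p in phrases)), "add")
        items = [item for item in MENU_ITEMS if item in text]
        mods = [m for m in MODIFIERS if m in text]
        return {"intent": intent, "items": items, "modifiers": mods}

    print(parse_utterance("I want a cheeseburger, no onions, extra tomatoes"))
    # {'intent': 'add', 'items': ['cheeseburger'], 'modifiers': ['no onions', 'extra tomatoes']}

A production system handles vastly more variation than keyword matching can; the point is only to show the shape of the problem: one intent plus the entities it applies to.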

In production, watsonx Orders can complete more than 90% of orders by itself without any human intervention. It’s worth noting that other vendors in this space use contact centers with human operators to take over when the AI agent gets stuck and they count the interaction as “automated.” By our IBM watsonx Orders standards, “automated” means handling an order end-to-end without any humans involved.

Real-world implementation drives profits


During peak times, watsonx Orders can handle more than 150 cars per hour in a dual-lane restaurant, which is better than most human order takers. More cars per hour means more revenue and profit, so our engineering and modeling approaches are constantly optimizing for this metric.

Watsonx Orders has taken 60 million real-world orders in dozens of restaurants, even with challenging noise, cross-talk and order complexity. We built the platform to easily adapt to new menus, restaurant technology stacks and centralized menu management systems in hopes that we can work with every quick-serve restaurant chain across the globe.

Source: ibm.com

Thursday, 15 February 2024

The customer experience evolution: Today’s data-driven, real-time discipline

An evolution of customer experience (CX) was to be expected. Throughout modern history, organizations have encountered internal and external challenges that changed how they interact with customers and how customers view those organizations.


Advancements in technology mean customers can order virtually any product and receive it in less than a week. For software solutions, they can get access immediately and often in a seamless experience.

Across arguably every industry, business leaders view a great customer experience strategy as a key differentiator. Brand loyalty, already on the wane, became even weaker due to the pandemic. For example, McKinsey found that 50% of consumers reported they would switch brands if their preferred brand was unavailable due to shortages.

Customer needs have changed, and customer retention has become harder to maintain. Today, providing a positive customer experience is both more challenging and more critical for companies. To achieve this, more organizations must prioritize being customer-centric.

How we arrived at this CX environment


Early days of retail

Before mass media, it was harder to know what other products were available outside of the ones offered by the local store. Before globalization, it was more difficult to purchase products from far-flung locations. Many customers were limited to the goods and services in their near vicinity. And if something went wrong with a product that could be fixed, they would go to a local mechanic. They likely had strong ties to their local merchants and trusted their opinions on which products they should buy. They were much less likely to have any meaningful relationship with the product manufacturer unless those products were made and sold locally.

As a result, brand loyalty was stronger and customer preferences changed less frequently. Today’s customers, however, have a wide range of options and are less loyal due to several external circumstances. It is harder to maintain customer satisfaction and increase customer loyalty.

The Internet and rise of e-commerce

It is no overstatement to say the Internet changed everything about business. Consumers could learn about new products and services without leaving their homes or turning on the TV. They could shop online and buy products directly from the companies that made them. For product manufacturers, this is perhaps the biggest leap forward for customer experience. Previously, their direct customers were mostly retailers or resellers, who sold to the end users in stores.

Being able to sell directly to customers meant many of these companies had, for the first time, direct relationships with those end users. They could more directly influence customer loyalty beyond the quality and price of their solutions. They were more directly responsible for offering memorable experiences and providing customer support. And thanks to online metrics, specific customer feedback, and data analytics, these retailers had more information about their customers than ever before.

Increasingly, organizations expanded what they offered. Look at Amazon, which started with books and moved into virtually everything else. Long-standing UK pharmacy Boots saw its website visitors rise from 7,000 people a minute to 19,000 during the pandemic, so it needed to upgrade its entire infrastructure and tools. It turned to IBM to provide the solution, enabling Boots to easily handle, at peak, over 27,000 visitors without an issue.

Customer journey mapping

The customer journey has become more complicated. Today, a customer could be influenced by one channel (e.g., out of home) and make their purchase through another channel (e.g., a mobile app). This omnichannel revolution means organizations must monitor multiple customer touchpoints and better understand which mediums feature more customer interactions. This means organizations need to devote more resources to improving their SEO, mobile apps and social media presences. They need to determine where they should spend their advertising dollars to reach the most persuadable prospects and create an organic inbound engine to capture them.

Consumer advocacy

The rise of social media platforms, chatrooms and message boards gave consumers a voice. They became more willing to express their interests and frustrations with brands, creating a scenario where those organizations needed to monitor conversations and triage responses based on the issue and the influence of the consumer. In some respects, this has given organizations valuable insights from real-time market research and customer feedback loops. But it also raises the bar for what organizations need to do to meet customer expectations. That extended the responsibilities of contact centers and social media or PR teams to respond in real-time.

Segmentation

The rise of cookies, digital media and third-party tracking enabled personalization and segmentation. This allowed organizations to send individual customers messages that felt tailored to them. Now organizations try to improve the user experience on their website by organizing information based on individual consumer preferences. Social ads can target specific users based on demographics. And they can segment which audiences they target based on purchasing power. A recent IBM Institute for Business Value survey found 57 percent of respondents said meeting customer demands for more personalized experiences was their top reason for adopting AI.

The next wave of technology-driven CX


We’re entering a new age of customer experience driven by digital transformation. New technologies like artificial intelligence (AI) and machine learning (ML) will drive automation and further enhance the CX suite. Chatbots using generative AI and natural language processing will encourage more customers to use self-service tools for their simplest problems. That frees up human customer service representatives to focus on the biggest issues that can create happier and more loyal customers.

AI will power predictive analytics that will help organizations understand better when customers may have an issue or when it would be an opportune time to reach out to them. Organizations that can create compelling customer experiences can use virtual reality (VR) and augmented reality (AR) to show potential customers a facsimile of their services.

Customer experience will continue to evolve


The future of customer experience is bright. Providing a positive customer experience can become a competitive advantage. IBM has been helping enterprises apply trusted AI in this space for more than a decade. Generative AI has further potential to significantly transform customer and field service with the ability to understand complex inquiries and generate more human-like, conversational responses.

IBM puts customer experience strategy at the center of your business, helping you position it as a competitive advantage. With deep expertise in customer journey mapping and design, platform implementation, and data and AI consulting, IBM can help you harness best-in-class technologies to drive transformation across the customer lifecycle.

Source: ibm.com

Tuesday, 13 February 2024

Maximizing your event-driven architecture investments: Unleashing the power of Apache Kafka with IBM Event Automation

In today’s rapidly evolving digital landscape, enterprises are facing the complexities of information overload. This leaves them struggling to extract meaningful insights from the vast digital footprints they leave behind.

Recognizing the need to harness real-time data, businesses are increasingly turning to event-driven architecture (EDA) as a strategic approach to stay ahead of the curve. 

Companies and executives are realizing that they need to stay ahead by deriving actionable insights from the sheer amount of data generated every minute in their digital operations. As IDC reported, as of 2022, 36% of IT leaders identified the use of technologies to achieve real-time decision-making as critical for business success, and 45% of IT leaders reported a general shortage of skilled personnel for real-time use cases.

This trend grows stronger as organizations realize the benefits that come from the power of real-time data streaming. However, they need to find the right technologies that adapt to their organizational needs. 

At the forefront of this event-driven revolution is Apache Kafka, the widely recognized and dominant open-source technology for event streaming. It offers businesses the capability to capture and process real-time information from diverse sources, such as databases, software applications and cloud services. 
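
As a minimal sketch of what event capture looks like in practice, here is an example using the community kafka-python client; the broker address and topic name are invented for illustration.

    from kafka import KafkaConsumer, KafkaProducer

    # Producer side: a hypothetical order service emits one event per transaction.
    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    producer.send("orders", b'{"order_id": 42, "amount_usd": 19.99}')
    producer.flush()

    # Consumer side: any downstream team can subscribe to the same stream.
    consumer = KafkaConsumer(
        "orders",
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",
        consumer_timeout_ms=5000,  # stop iterating if no new events arrive
    )
    for event in consumer:
        print(event.value)  # raw event bytes, ready for downstream processing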

While most enterprises have already recognized how Apache Kafka provides a strong foundation for EDA, they often fall short of unlocking its true potential, typically because they lack advanced event processing and event endpoint management capabilities.

Socialization and management in EDA


While Apache Kafka enables businesses to construct resilient and scalable applications, helping to ensure prompt delivery of business events, businesses need to effectively manage and socialize these events.

To be productive, teams within an organization require access to events. But how can you help ensure that the right teams have access to the right events? An event endpoint management capability becomes paramount in addressing this need. It allows for sharing events through searchable and self-service catalogs while simultaneously maintaining proper governance and controls with access based on applied policies.

The importance is clear: you can protect your business events with custom policy-based controls, while also allowing your teams to safely work with events through credentials created for role-based access. Do you remember playing in the sandbox as a kid? Now, your teams can learn to build sandcastles within the box by allowing them to safely share events with certain guardrails, so they don’t exceed specified boundaries. 

Therefore, your business maintains control of the events while also facilitating the sharing and reuse of events, allowing your teams to enhance their daily operations fueled by reliable access to the real-time data they need.
 
Also, granting teams reliable access to relevant event catalogs allows them to reuse events to gain more benefits from individual streams. This allows businesses and teams to avoid duplication and siloing of data that might be immensely valuable. Teams innovate faster when they easily find reusable streams without being hindered by the need to source new streams for every task. This helps ensure that they not only access data but also use it efficiently across multiple streams, maximizing its potential positive impact on the business. 

Level up: Build a transformative business strategy


A substantial technological investment demands tangible returns in the form of enhanced business operations, and enabling teams to access and use events is a critical aspect of this transformative journey.

However, Apache Kafka isn’t always enough. You might receive a flood of raw events, but you need Apache Flink to make them relevant to your business. When used together, Apache Kafka’s event streaming capabilities and Apache Flink’s event processing capabilities empower organizations to gain critical real-time insights from their data.

Many platforms that use Apache Flink often come with complexities and a steep learning curve, requiring deep technical skills and extensive knowledge of this powerful real-time processing platform. This restricts real-time event accessibility to a select few, increasing costs for companies as they support highly technical teams. Businesses should maximize their investments by enabling a broad range of users to work with real-time events instead of being overwhelmed by intricate Apache Flink settings.

This is where a low-code event processing capability comes in: it removes the steep learning curve by simplifying these processes and allowing users across diverse roles to work with real-time events. Instead of requiring skilled Flink structured query language (SQL) programmers, other business teams can immediately extract actionable insights from relevant events.

When you remove the Apache Flink complexities, business teams can focus on driving transformative strategies with their newfound access to real-time data. Immediate insights can now fuel their projects, allowing them to experiment and iterate quickly to accelerate time to value. Properly informing your teams and providing them with the tools to promptly respond to events as they unfold gives your business a strategic advantage. 

Finding the right strategic solution


As the need for building an EDA remains recognized as a strategic business imperative, the presence of EDA solutions increases. Platforms in the market have recognized the value of Apache Kafka, enabling them to build resilient, scalable solutions ready for the long term. 

IBM Event Automation, in particular, stands out as a comprehensive solution that seamlessly integrates with Apache Kafka, offering an intuitive platform for event processing and event endpoint management. By simplifying complex tech-heavy processes, IBM Event Automation maximizes the accessibility of Kafka settings. This helps ensure that businesses can harness the true power of Apache Kafka and drive transformative value across their organization. 

Taking an open, community-based approach backed by multiple vendors reduces concerns about the need for future migrations as individual vendors make different strategic choices, for example, Confluent adopting Apache Flink instead of KSQL. Composability also plays a significant role here. As we face a world saturated with various technological solutions, businesses need the flexibility to find and integrate those that enhance their existing investments seamlessly. 

As enterprises continue to navigate the ever-evolving digital landscape, the integration of Apache Kafka with IBM Event Automation emerges as a strategic imperative. This integration is crucial for those aiming to stay at the forefront of technological innovation. 

Source: ibm.com

Monday, 12 February 2024

Creating exceptional employee experiences

As the line between employees’ personal and professional lives becomes more blurred than ever, employees expect a more flexible and empathetic workplace that takes their full selves into account. This shift in employee expectation is happening in a challenging environment of rapid technological advancements, widening skills gaps and unpredictable socioeconomic issues. And to top it all off, the very nature of HR resists optimization, as employee experiences are hard to quantify, and organizations are difficult to change.

Despite these challenges, employees expect their interactions and experiences with a company to live up to their brand values. HR has a vital role to play in delivering on these expectations to stay competitive, and leaders believe generative AI can help drive HR and talent transformation. If businesses adopt a proactive and dynamic HR approach to address current and future challenges, attract and retain top talent and develop future skills, they can provide exceptional employee experiences.

Rewriting the employee experience with advocacy and AI


HR leaders think of the workforce in a holistic sense, including current employees, contractors, candidates and past employees.

HR leaders increasingly view employees not only as part of the workforce, but also as unique individuals who play many roles. This whole-self approach is a paradigm shift that embraces the multifaceted nature of people and the landscapes they navigate. Meanwhile, employees already think of their personal and professional priorities as intertwined and mutually influential, and they want their employers to do the same. In a recent survey, well over 70% of employees highlighted working conditions, work–life balance and flexibility as most important to them.

During their time with a company, people undergo many changes that impact not only what they contribute, but also what kind of support they require from their employer. HR can help people navigate these changes in several ways, including developing a career roadmap that aligns with their experience and goals, providing support as they navigate different stages of family planning, creating a physical workspace that accommodates their accessibility needs and personality traits or providing flexible work arrangements.

HR is now tasked with enhancing employees’ full lives and transcending short-lived interactions through technology-driven experiences, such as generative AI-powered automation, intelligent workflows, virtual assistants, chatbots and digital assistants. Infusing AI and automation into every process frees up employees to focus on what is important to them as individuals. This charge is complex, but when thoughtfully executed, these investments yield a strong employee experience that can potentially help employees be both happier and more productive in their work.

While HR is not solely responsible for an organization’s employee experience, HR departments are increasingly becoming key advocates for the new employee experience. As HR helps reimagine the employee experience, they’re shifting to a consumer-inspired, flexible delivery model that is proactive, personalized, relevant and outcome-focused. Organizations must also harness enterprise-wide imagination, vision and empathy to creatively engage and serve individuals who yearn for simplicity and agency.

Holistic solutions through a whole-self employee experience


This whole-self HR approach to employee experience is about more than mandating digital access channels; it’s about making these channels so compelling that they naturally attract users. And crucially, employees still have access to human assistance for time-sensitive, urgent and highly personal situations.

The whole-self mindset empowers employees to access resources independently, anticipates their needs and provides proactive solutions. 

The benefits of this transformation drive business value through improvements such as:

  • Employee engagement and satisfaction
  • Talent attraction, development and retention
  • Personal and professional growth, and other dimensions of well-being
  • Workforce productivity, operational efficiency, effectiveness and cost savings

To navigate this shift to a whole-self employee experience and reap these benefits, companies must do three crucial things:

1. Understand and anticipate employee needs and expectations. It is crucial to move beyond static segmentation models and acknowledge the deeper insights of underlying behavior. This means taking a holistic, dynamic view of who employees are and what motivates them. This will require managers and executives to develop, strengthen and maintain relationships with their employees and understand who they are within and outside of the workplace. Trust, communication, respect and collaboration must become central tenets to cultivating these relationships.

2. Solve for shifting scenarios. Companies must adapt to a dynamic environment and offer relevant options across their services to accommodate changes in their employees’ lives (such as marriage, new parenthood, emergent health issues, caring for ailing relatives and retirement).

3. Simplify for relevance. To create a culture of simplicity, companies must make decision-making easier for employees. This requires leveraging data, AI and expert inputs, so employees can identify relevant information and easily navigate systems.

The goal is to pivot HR into a modern service delivery model that flips the perspective and focus to the employee. It’s about understanding why people need specific experiences at specific times and making those experiences as seamless as possible. Designing from the consumer’s viewpoint means breaking through traditional silos both within and outside of HR in order to simplify tasks, support employees through life transitions and provide access to information and skills development resources. With generative AI, HR finally has a technology that can help scale these highly personalized, highly customized interactions.

The transition to delivering a whole-self employee experience marks the next evolution of HR. It acknowledges the complexity and dynamism of individuals, the ever-shifting nature of external forces and the pivotal role of employee experience in navigating this reality. Businesses that embrace this model are positioned to attract, develop and retain the talent they need to compete.

Source: ibm.com

Saturday, 10 February 2024

The case for separating DNS from your CDN

If you’re signing on with a content delivery network (CDN) provider, you’ll probably see DNS as part of the standard service package. It’s only natural—to access your content delivered by the CDN, the Internet has to know where to send the traffic. CDNs make it easy to configure and manage those DNS settings.

It’s easy to accept DNS services as part of a CDN package. Most organizations that are just starting out with a CDN probably don’t give DNS a second thought. They just assume that the two services should naturally go together. 

From a management perspective, co-location of DNS within a content delivery service certainly makes sense. The ability to configure and manage DNS alongside other CDN settings saves a little bit of time and effort, and who doesn’t want to save time and effort? 

Here’s the issue: that seemingly minor DNS feature actually has a significant impact on how much you pay for content delivery, the quality of what you deliver and the resilience of what you’re delivering. Even that ease-of-use argument gets flipped on its head. 

We’re more than a little biased here, but that’s because we believe that for many CDN users the case for using a separate DNS service is overwhelming. 

It all boils down to this: if you’re using multiple CDNs now or see yourself using multiple CDNs in the future, you’ll want to avoid getting locked into a single provider’s ecosystem and cost structure. Doing this effectively requires a separate DNS system that works across providers, allowing you to pick the best option at any particular moment. 

Let’s look at some of the benefits of using an independent DNS provider, including: 

  • cost 
  • performance 
  • resilience 
  • ease of management 

Cost 


The DNS offering that comes bundled with CDN services has one job: to make that CDN stickier. These offerings make it possible to steer traffic elsewhere using DNS, but it’s not exactly in the CDN’s interest to send you anywhere else; they only get paid when they’re delivering your content, after all. 

The cost of content delivery varies greatly across different ISPs and geographical regions. The CDNs don’t use this data to optimize traffic as it would impact their bottom line. 

While the cost differences can seem minuscule for individual queries, when you multiply those by the number of queries around the world and look at it over time, the total adds up quickly. The ability to steer traffic to the lowest-cost CDN using DNS can end up saving you quite a lot. 

Performance 


Just as the cost of content delivery can change from moment to moment, there are significant differences in performance between CDNs. Here’s a random sample of real user monitoring data from some of the major content delivery providers. 

[Figure: real user monitoring data from several major content delivery providers]

As you can see, the data is all over the place. At any given point in time, various CDNs might deliver significantly better (or worse) performance. 

If you’re using the default DNS that comes with your CDN, it’s very difficult to switch to a better performing CDN in real time. Doing that would require both the knowledge of which CDN is the best option and the ability to rapidly configure DNS to steer traffic between providers. CDN providers don’t provide that data. 

Resilience 


Any CDN worth its salt has a 100% uptime SLA. Even so, outages are inevitable and more frequent than the providers care to admit. (Monday.com has an excellent piece about this.) 

When these outages occur, your content will go offline if you don’t have a way to easily fail over to a different service. Because the DNS that comes bundled with CDN packages is only designed to send traffic to one place, it can leave you without many options when that one place goes dark. 

Using an external DNS provider gives you the ability to automatically switch from one CDN to another in the event of an outage, keeping your content online and the revenue flowing. 

Management 


Remember at the beginning when we said that managing your DNS settings from within a CDN platform can save you some time and effort? That’s true, but only if you’re only using one CDN. 

If you’re using multiple CDNs, however, managing DNS can be a big hassle. Any time you want to shift traffic between providers, you have to go into each platform and manually reconfigure it all. And let’s face it: nobody wants to do that. Separate DNS configuration steps usually mean that only a major change prompts any shift in where traffic is going. That’s how the CDN providers like it. 

The benefits of a separate DNS layer 


If you’re using multiple CDNs, separating out the DNS layer helps you optimize for the best of each provider through the magic of traffic steering. 

Want to optimize for cost? A DNS provider that sits apart from your CDNs can analyze the data in real time and automatically steer traffic to the cheapest option for that particular moment. 

Want to optimize for performance? Analysis of geographies, ISPs, devices and other factors can be fed into an automated DNS logic, which sends users to the best CDN available. 

Want to keep your content online in the face of periodic outages? Dedicated DNS providers can automatically fail over to whichever CDN is up and running at that moment, offering seamless content delivery across providers. 

Want to save time on DNS management? Using a single, fully automated DNS control plane across CDNs gives you the power to make necessary changes without the annoyance of manual configurations. 
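
Here is a simplified, hypothetical sketch of the kind of steering logic such a DNS layer applies at resolution time. Real platforms (NS1 included) implement this inside the nameserver itself, driven by live health checks and telemetry; the CDN names and numbers below are invented.

    # Hypothetical per-CDN telemetry a steering DNS layer might maintain.
    CDNS = [
        {"cname": "site.cdn-a.example.net", "healthy": True,  "latency_ms": 48, "cost_per_gb": 0.011},
        {"cname": "site.cdn-b.example.net", "healthy": True,  "latency_ms": 35, "cost_per_gb": 0.019},
        {"cname": "site.cdn-c.example.net", "healthy": False, "latency_ms": 22, "cost_per_gb": 0.008},
    ]

    def steer(optimize_for="latency_ms"):
        """Return the CNAME target to answer with: fail over first, then optimize."""
        candidates = [cdn for cdn in CDNS if cdn["healthy"]]  # automatic failover
        if not candidates:
            raise RuntimeError("all CDNs down; serve the last known good answer")
        return min(candidates, key=lambda cdn: cdn[optimize_for])["cname"]

    print(steer("latency_ms"))   # site.cdn-b.example.net (fastest healthy CDN)
    print(steer("cost_per_gb"))  # site.cdn-a.example.net (cheapest healthy CDN)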

Needless to say, NS1 is designed to do all of this and more. We leverage the power of DNS so some of the biggest, most consequential content platforms out there can deliver the lowest cost, best-performing, most resilient, easiest to manage operations available. Our advanced traffic steering options make it all possible. 

Source: ibm.com

Thursday, 8 February 2024

6 ways generative AI can optimize asset management

Every asset manager, regardless of the organization’s size, faces similar mandates: streamline maintenance planning, enhance asset or equipment reliability and optimize workflows to improve quality and productivity. In a recent IBM Institute for Business Value study of chief supply chain officers, nearly half of the respondents stated that they have adopted new technologies in response to challenges.

There is even more help on the horizon with the power of generative artificial intelligence (AI) foundation models, combined with traditional AI, to exert greater control over complex asset environments. These foundation models, built on large language models, are trained on vast amounts of unstructured and external data. They can generate responses like text and images, while simultaneously interpreting and manipulating existing data.

Let’s explore 6 ways generative AI can optimize your enterprise asset management operations, including field service, maintenance and compliance. Generative AI can:

1. Generate work instructions


Field service technicians, maintenance planners and field performance supervisors comprise your front-line team. They require job plans and work instructions for asset failures and repairs. You can train a hybrid AI or machine learning (ML) model on enterprise and published data, including data from newly acquired assets and sites.

Through interactive dialog, it can generate visual analytics and promptly deliver content to your team. Access to this knowledge can boost field service uptime by 10%–30% and increase first-time fix rates by 20%, resulting in cost savings, improved worker productivity and increased client satisfaction.
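
As a rough illustration of the pattern, here is a minimal Python sketch that assembles asset context into a prompt for a text-generation model. The field names and the call_foundation_model() stub are hypothetical placeholders, not a specific IBM product API.

    def call_foundation_model(prompt: str) -> str:
        """Hypothetical stub; replace with your provider's text-generation call."""
        return "1. Lock out and tag out the pump.\n2. ..."

    def work_instructions(asset: dict, failure_code: str, history: list) -> str:
        # Assemble asset context and similar past work orders into one prompt.
        prompt = (
            f"Asset: {asset['name']} ({asset['model']})\n"
            f"Reported failure: {failure_code}\n"
            "Similar past work orders:\n- " + "\n- ".join(history) + "\n\n"
            "Write step-by-step repair instructions for a field technician, "
            "listing required parts, tools and safety precautions."
        )
        return call_foundation_model(prompt)

    print(work_instructions(
        {"name": "Feedwater pump 7", "model": "XJ-200"},
        failure_code="seal leak",
        history=["WO-1189: replaced mechanical seal", "WO-0472: realigned coupling"],
    ))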

2. Increase the efficiency of work order planning


Work orders drive activity, relying on work plans and job plans to authorize and provide resources to handle tasks. The process itself, although straightforward, is time-intensive, making it no surprise that work order planning often experiences delays.

With generative AI, foundation models can be trained on all the necessary instructions, parts, tools and skills for a specific asset or asset class, enabling them to generate work plans. This enhances your staff’s capabilities, resulting in a 10%–20% increase in planning proficiency. Also, generative AI can facilitate automation and recommend updates to maintenance standards, potentially leading to a 10%–25% increase in compliance.

3. Support reliability engineering


Reliability is a critical key performance indicator in any asset-driven business. Unfortunately, experienced reliability engineers are leaving many sites, leaving limited resources for training replacements. By using hybrid AI/ML models, generative AI can produce failure mode and effects analyses from historical data, enabling you to prioritize and reduce serial failures by up to 25%–50% while increasing site reliability by 10%–15%.
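
As a simplified illustration of the underlying analysis, here is a minimal Python sketch that ranks failure modes from work-order history by cumulative downtime, the kind of signal a failure mode and effects analysis would prioritize. The records and the scoring are illustrative assumptions.

    from collections import Counter

    # Simplified work-order history; records are illustrative.
    work_orders = [
        {"asset": "pump-7", "failure_mode": "seal leak", "downtime_h": 6},
        {"asset": "pump-7", "failure_mode": "seal leak", "downtime_h": 8},
        {"asset": "fan-2", "failure_mode": "bearing wear", "downtime_h": 3},
        {"asset": "pump-7", "failure_mode": "motor trip", "downtime_h": 1},
    ]

    # Risk proxy: frequency of each failure mode weighted by downtime.
    risk = Counter()
    for wo in work_orders:
        risk[wo["failure_mode"]] += wo["downtime_h"]

    for mode, hours in risk.most_common():
        print(f"{mode}: {hours} h of cumulative downtime")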

4. Analyze and apply maintenance standards


Generative AI foundation models can train on asset class standards, including work history, maintenance plans, job plans and spare parts. They identify and recommend compliance with current standards for existing assets. By enhancing staff skills, generative AI analyses extend asset lifespan by 15%–20% and increase uptime by approximately 5%–10%.

5. Update maintenance quality


When work orders are completed, they often simply signal the need to move on to the next one. However, intelligent analysis of completed work orders can reveal areas where compliance or maintenance processes need improvement. Generative AI can recommend updates that improve the effectiveness of planned maintenance by 15%–25%, and it can create new job plans from the completed work plan where they are absent, thereby increasing planning proficiency by 10%–20%.

6. Assist with safety and regulatory compliance


Generative AI provides real-time assistance for industry, site or asset safety and regulatory compliance, reducing fines by 25% and improving compliance by up to 50%. Training models on published safety guidelines, regulations, regulatory filings, rulings and internal data sources significantly improves the speed, accuracy and success rate of regulatory filings for planners and technicians.

Source: ibm.com

Tuesday, 6 February 2024

Common cloud migration challenges and how to manage them

Common cloud migration challenges and how to manage them

Cloud computing continues to grow in popularity, and its scalability, functionality, cost-effectiveness and other potential benefits have helped transform traditional business models and update legacy systems, creating opportunities for various organizations. A cloud migration, however, is a huge undertaking that requires thorough planning and execution of a comprehensive strategy to successfully achieve business goals. Cloud services are readily available and come in all shapes and sizes—it’s important to determine which is best for your organization.

Though it can deliver many benefits, a migration project poses some risks to an organization and presents a unique set of challenges. The better an organization understands those challenges, and prepares to proactively address them, the more successful (and less stressful!) a migration will be.

In this blog, we review the benefits of cloud migration and some common challenges organizations face, to help your organization better prepare for a cloud migration.

Cloud migration benefits


Cloud migration—the process of moving data, applications and workloads from an on-premises data center to a cloud-based infrastructure, or from one cloud environment to another—offers several significant benefits: 

  • Scalability: A move to cloud removes the physical constraints to scalability; it reduces or eliminates the need to add servers and supporting infrastructure to a data center.
  • Cost-effectiveness: With cloud pricing models, you can pay for only the capacity you use. Rather than adding on-premises capacity in anticipation of scalability you may or may not need, cloud deployment enables you to pay for the capacity you currently need and scale on demand when required.
  • Security: Leading cloud providers offer secure environments that comply with applicable industry standards and government regulations. They protect these environments with security tools, best practices and policies, updating them regularly and at scale as needed.
  • Accelerated adoption: Migrating applications to the cloud allows your company to adopt new technologies faster and increase compatibility. In short, enterprises typically migrate workloads to cloud to improve operational performance and agility, workload scalability and security.

Cloud migration challenges


Though cloud migration might be the right move for your organization, it doesn’t mean that there aren’t challenges involved. The nature of these obstacles depends on your organization’s migration plan. The outcome of your migration might be affected by whether your organization plans to migrate all its computing assets to the cloud or run a partial migration while some applications and services remain on-premises. Either way, the exercise can reveal vulnerabilities and help clarify your business needs.

In the following sections, we explore some of the most common cloud migration challenges and offer cloud solutions to help your organization manage a smooth migration and avoid issues such as data loss and performance degradation.

Lack of cloud migration strategy


While businesses and organizations are often eager to take advantage of cloud infrastructure, it’s important to approach the process with a clear design and plan of action to achieve the best outcomes. A cloud migration strategy should consider many different factors, including overall migration goals and how to avoid downtime.

It’s important to carefully examine which workloads are most appropriate for cloud and avoid selecting applications to run in cloud that might be better suited on-premises. This factor differs depending on your company’s specific needs and business objectives, chosen cloud service providers and cloud distribution models (SaaS, PaaS or IaaS, for example).

Solution: After you have decided to adopt cloud, creating a detailed roadmap outlining the migration process and necessary resources is crucial for success. Things to consider include what you migrate and why, and who is responsible for each part of the migration. Clearly defining these workflows makes for a smoother transition.

The roadmap should account for factors like dependencies, latency and security concerns, and for the leverage that technologies like automation can provide. Organizations must decide whether a private cloud, a public cloud or a hybrid environment is best suited for their cloud data needs.
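
One lightweight way to keep such a roadmap actionable is to capture it as structured data, so scope, ownership and dependencies stay explicit. Below is a minimal Python sketch; the workload names, owners and fields are illustrative assumptions, not a prescribed format.

    # A migration roadmap captured as data: what moves, why, who owns it,
    # and what it depends on. All names and fields are illustrative.
    roadmap = [
        {"workload": "billing-api", "strategy": "replatform",
         "owner": "payments team", "depends_on": ["customer-db"]},
        {"workload": "customer-db", "strategy": "rehost",
         "owner": "data team", "depends_on": []},
        {"workload": "reporting", "strategy": "retain on-premises",
         "owner": "BI team", "depends_on": []},
    ]

    # Order waves so nothing migrates before its prerequisites.
    done, pending = set(), list(roadmap)
    while pending:
        ready = [w for w in pending if all(d in done for d in w["depends_on"])]
        if not ready:
            raise ValueError("circular dependency in the roadmap")
        for w in ready:
            print(f'{w["workload"]}: {w["strategy"]} (owner: {w["owner"]})')
            done.add(w["workload"])
            pending.remove(w)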

Migration cost


Calculating cloud migration costs can be one of the most challenging aspects of migration. Organizations often underestimate the full scope of expenses incurred during data migration, such as new network connectivity needs to handle increased bandwidth demands and post-migration costs to run workloads in cloud environments.

Solution: The best way to plan your cloud costs is by taking advantage of cloud migration planning tools that walk you through all the considerations of the cloud migration process. Set a budget for the migration and be sure to factor in current costs associated with moving the workloads and the expense of running them in the cloud.

Complex architecture


Matching your organization’s cloud strategy with the overall IT strategy can present a challenge, particularly if the current IT infrastructure is complex. IT complexity can make it a bit more difficult to develop and run a compatible cloud migration strategy.

Solution: To prevent creating a complex new cloud or hybrid environment, along with its accompanying costs and challenges, your organization needs to carefully plan and establish a realistic vision. Strive to design a cloud architecture that is compatible with the existing in-house IT infrastructure to minimize inconsistencies and interoperability problems between different systems.

Data security and compliance risks


There is security risk involved with any transfer of information, including when moving data to and from cloud platforms. Cloud-hosted data and apps must follow the same security protocols as those on-premises, and a successful cloud migration carries these security measures through any refactoring.

Solution: To address this risk, choose a cloud service provider with robust security features and a demonstrated history of platform security. Ensure that network connections are appropriate for the task at hand. For example, always opt for a secure private connection to handle sensitive data. Additionally, ensure that cloud providers have the tools, practices and policies in place to comply with relevant security requirements.

Cloud migration and IBM


Cloud migration doesn’t have to be overwhelming. The IBM Instana™ and IBM® Turbonomic® platforms provide cloud migration solutions and tools that simplify the migration process and help your organization find the most efficient and cost-effective path to the cloud.

Here’s how it works:

With IBM Turbonomic you can optimize your cloud consumption from the start, expedite cloud migration initiatives and enable cloud security.

Turbonomic analyzes the real-time resource needs of application workloads, whether they’re cloud-based or running on-prem. The platform then delivers potential application migration plans. These plans detail specific actions and indicate which cloud configurations will support your workloads, whether you take a “lift-and-shift” approach or optimize workloads as part of the migration. This migration assessment strategy can help with cost savings by avoiding unnecessarily expensive lift-and-shift migrations.
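
The matching step is easy to picture. Here is a minimal Python sketch, under our own assumptions, that picks the cheapest instance type satisfying a workload’s observed peak needs; the catalog and prices are made up, and this is not Turbonomic’s actual algorithm.

    # Illustrative instance catalog; not any provider's real pricing.
    instance_types = [
        {"name": "small", "vcpu": 2, "mem_gb": 4, "usd_per_h": 0.05},
        {"name": "medium", "vcpu": 4, "mem_gb": 16, "usd_per_h": 0.17},
        {"name": "large", "vcpu": 16, "mem_gb": 64, "usd_per_h": 0.60},
    ]

    def recommend(workload: dict) -> str:
        """Pick the cheapest instance type that covers observed peak usage."""
        fits = [t for t in instance_types
                if t["vcpu"] >= workload["peak_vcpu"]
                and t["mem_gb"] >= workload["peak_mem_gb"]]
        if not fits:
            return "no single instance fits; consider scaling out"
        return min(fits, key=lambda t: t["usd_per_h"])["name"]

    print(recommend({"name": "erp-app", "peak_vcpu": 3, "peak_mem_gb": 12}))  # medium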

Whether your organization is pursuing a cloud-first, hybrid cloud or multicloud strategy, Turbonomic can deliver cloud migration planning tools to accelerate your digital transformation and help you take full advantage of discounted pricing.

With IBM Instana you can establish pre-migration performance baselines, understand your infrastructure needs, automate cloud provisioning and orchestration, pinpoint the root cause of issues during migration and establish proactive post-migration monitoring processes. IBM Instana™ Observability helps to ensure your success throughout the entire cloud migration process (plan, migrate and run) for smooth and efficient application and infrastructure performance, without disruptions to your users.

Source: ibm.com

Saturday, 3 February 2024

IBM Databand: Self-learning for anomaly detection

IBM Databand: Self-learning for anomaly detection

Almost a year ago, IBM encountered a data validation issue during one of our time-sensitive mergers and acquisitions data flows. We faced several challenges as we worked to resolve the issue, including troubleshooting, identifying the problem, fixing the data flow, making changes to downstream data pipelines and performing an ad hoc run of an automated workflow.

Enhancing data resolution and monitoring efficiency with Databand


After the immediate issue was resolved, a retrospective analysis revealed that proper data validation and intelligent monitoring might have alleviated the pain and accelerated the time to resolution. Instead of developing a custom solution solely for the immediate concern, IBM sought a widely applicable data validation solution capable of handling not only this scenario but also potentially overlooked issues.

That is when I discovered one of our recently acquired products, IBM® Databand® for data observability. Unlike traditional monitoring tools with rule-based monitoring or hundreds of custom-developed monitoring scripts, Databand offers self-learning monitoring: it observes past data behavior and identifies deviations that exceed learned thresholds. This machine learning capability enables users to monitor data and detect anomalies with minimal rule configuration, even if they have limited knowledge about the data or its behavioral patterns.
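
To illustrate the idea (not Databand’s actual model), here is a minimal Python sketch that learns “normal” from past run durations and flags deviations beyond a z-score threshold; the history and the threshold are illustrative assumptions.

    import statistics

    history = [41, 39, 44, 40, 42, 38, 43, 41]  # past run durations, in minutes

    def is_anomalous(duration: float, history: list, z_threshold: float = 3.0) -> bool:
        """Flag a run whose duration deviates too far from the learned baseline."""
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        return abs(duration - mean) > z_threshold * stdev

    print(is_anomalous(42, history))  # False: within the learned normal range
    print(is_anomalous(95, history))  # True: flag it and alert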

Optimizing data flow observability with Databand’s self-learning monitoring


Databand considers the data flow’s historical behavior and flags suspicious activities while alerting the user. IBM integrated Databand into our data flow, which comprised over 100 pipelines. It provided easily observable status updates for all runs and pipelines and, more importantly, highlighted failures. This allowed us to concentrate on and accelerate the remediation of data flow incidents.

Databand for data observability uses self-learning to monitor the following:  

  • Schema changes: When a schema change is detected, Databand flags it on a dashboard and sends an alert. Anyone working with data has likely encountered scenarios where a data source undergoes schema changes, such as adding or removing columns. These changes impact workflows, which in turn affect downstream data pipeline processing, leading to a ripple effect. Databand can analyze schema history and promptly alert us to any anomalies, preventing potential disruptions (a minimal illustration follows this list).
  • Service level agreement (SLA) impact: Databand shows data lineage and identifies downstream data pipelines affected by a data pipeline failure. If there is an SLA defined for data delivery, alerts help recognize and maintain SLA compliance.
  • Performance and runtime anomalies: Databand monitors the duration of data pipeline runs and learns to detect anomalies, flagging them when necessary. Users do not need to know a pipeline’s typical duration; Databand learns it from historical data.
  • Status: Databand monitors the status of runs, including whether they failed, were canceled or succeeded.
  • Data validation: Databand observes data value ranges over time and sends an alert upon detecting anomalies. This includes typical statistics such as mean, standard deviation, minimum, maximum and quartiles.
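
As a minimal illustration of the schema-change case from the list above, here is a Python sketch comparing the columns observed in the latest run against a learned baseline. The column names are illustrative assumptions, and Databand’s own detection is of course richer than this.

    # Compare the latest run's columns against a learned baseline.
    baseline = {"order_id", "customer_id", "amount", "created_at"}
    latest = {"order_id", "customer_id", "amount_usd", "created_at", "channel"}

    added, removed = latest - baseline, baseline - latest
    if added or removed:
        print(f"schema change detected: added {sorted(added)}, removed {sorted(removed)}")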

Transformative Databand alerts for enhanced data pipelines


Users can set alerts by using the Databand user interface, which is uncomplicated and features an intuitive dashboard that monitors and supports workflows. It provides in-depth visibility through directed acyclic graphs, which is useful when dealing with many data pipelines. This all-in-one system empowers support teams to focus on areas that require attention, enabling them to accelerate deliverables.

IBM Enterprise Data’s mergers and acquisitions have enabled us to enhance our data pipelines with Databand, and we haven’t looked back. We are excited to offer you this transformative software that helps identify data incidents earlier, resolve them faster and deliver more reliable data to businesses.

Source: ibm.com

Thursday, 1 February 2024

ManagePlus—your journey before, with and beyond RISE with SAP

ManagePlus—your journey before, with and beyond RISE with SAP

RISE with SAP has not only gained major cloud traction in recent years; it has also become the standard cloud offering from SAP across different products.

But when assessing what it takes to onboard into RISE with SAP, there are multiple points to consider. Especially important is a good understanding of the RACI split around Standard, Additional and Optional Services, along with relevant CAS (Cloud Application Service) packages. 

If you’re wondering whether RISE with SAP is the right solution for you, consider the following scenarios: 

Data centre move 


You’re looking to move from Capex to Opex in IT spend, or an expiring data centre contract is triggering an evaluation of alternate hosting options, this time a hyperscaler-based journey (Azure, AWS, GCP, IBM Cloud® and so forth). Also consider the cost of a hardware refresh and possible opportunities around on-demand cloud computing.

S/4HANA contract conversion 


If you’re on a journey toward S/4HANA adoption, whether Greenfield, Brownfield or Bluefield, and planning a potential contract renegotiation and restructuring with SAP, then RISE with SAP is, the majority of the time, the only contract SAP presents to customers that offers S/4HANA capabilities and a move to cloud computing.

Enabling true transformation 


You’re looking to adopt industry best practices, along with process mining and process discovery capabilities, to both simplify and standardize process flows. From a business and IT perspective, this helps reduce cycle time and, eventually, the price per business object.

M&A and divestiture 


You need to simplify the divestiture of Company Codes and, with that, the split of IT systems, with straightforward license segregation.

System consolidation 


You want to reduce the solution footprint and by doing so, reduce infrastructure cost and help shift toward a single source of truth across different components. 

License audit gap/shelfware 


You’re looking for a long-term solution that’s not only subscription-based, but can also address potential compliance gaps. 

IT project issues 


You’d like to worry less about physically provisioning capacity in your data centre and rely instead on on-demand capacity for faster onboarding of new resources.

End-to-end security


You’re required to maintain a gold standard of security from both a platform and an application point of view, cutting across different security requirements.

IT Ops issues 


You don’t want to deal with the challenges in managing multiple vendors and SLAs. 

Industry focus 


To complement your current IT setup and add operating flexibility, you’re also looking to address the rapid demand for cloud ERP in industries such as healthcare, retail, education and telecom. In addition, you want automated resource management and industry-specific functions, such as sales and customer support, at the core of operations. The image below shows what’s included within RISE at a high level.

[Image: what’s included within RISE with SAP, at a high level]

To top it off, SAP recently announced “Grow with SAP,” an offering that includes products, best practice support, adoption acceleration services, community and learning opportunities to help customers adopt SAP S/4HANA public cloud edition with speed, predictability and continuous innovation.

What else should you know about RISE with SAP? 


RISE with SAP comes with different activities, included as part of the standard or tailored RACI published by SAP. These activities are categorized as:

  • Standard: Default activities that maintain systems at no additional charge, such as DB, network, platform and system maintenance.
  • Additional: One-time activities, such as additional DR testing, that SAP performs beyond what’s standard.
  • Optional: One-time and recurring changes to the solution, such as adding memory or upgrading size and scale.
  • Cloud Application Services (CAS) packages: Activities that a client might handle within its internal IT team, or through a vendor that is already part of its IT landscape. These activities may include supporting the client’s transport management, release version upgrades and test management needs.

Based on the requirements, it’s not only additional CAS packages to consider; it’s also the SAP and non-SAP systems you don’t want to include in the RISE landscape. This could be for multiple reasons, primarily that certain combinations of hardware and DB might not be supported in RISE.

To address this, consider ManagePlus, which maintains standards similar to RISE without some of the incentives that may not be required. A few other compelling reasons favor ManagePlus:

  • You have not yet decided whether RISE with SAP is appropriate for your business
  • You have multiple add-ons which might not be allowed into RISE with SAP 
  • You want a single vendor to take responsibility for both RISE-based workloads and non-RISE-based workloads 
  • You would still like to transition to cloud services before making the jump to S/4HANA, but want to make sure you don’t end up paying twice: once for the current move through ManagePlus, and again at a later point for RISE with SAP
  • You’re looking for the same landing zone with pre-defined architectural patterns being leveraged for a true hybrid cloud deployment 

As announced earlier, IBM® is at the frontier as a premium supplier, successfully delivering RISE with SAP from both a cloud provisioning and a technical managed services point of view. As part of this breakthrough partnership between IBM and SAP, global clients have been successfully provided with advisory, support and implementation services, including CAS package-related services.

To add to these solutions and services, clients are also offered a comprehensive end-to-end solution under “ManagePlus,” which can address all of the above pain points around becoming RISE-ready while you decide when to move to RISE with SAP. ManagePlus also covers SAP and non-SAP workloads that are not on RISE but can be maintained to similar standards. Last but not least, it covers CAS package-based services to support your RISE with SAP scope.

We leverage our master control plane driven approach, guided by AI and automation, to support your requirements for both building and managing SAP and non-SAP workloads. This gives you a seamless transition across systems with a single-vendor experience and improvements in Mean Time to Diagnose (MTTD) and Mean Time to Resolve (MTTR). The control plane also allows you to easily extend across multiple hyperscalers using the same architectural patterns established earlier, providing ease of maintenance and repeatability of effort without managing different integration points between each solution.

This solution brings the best capabilities to the forefront when it comes to industry best practices, industry solutions, technical core operations, migration and implementation of functionality, functional and technical application management services, and the adoption of AI, helping you achieve a no-touch or low-touch operations model. We leverage our SAP-certified Digital Operations framework at the core of the solution to generate insights and reduce noise in even the most complex landscape management functions.

The following are the benefits of using this solution either with RISE with SAP or beyond RISE with SAP: 

  • Provides consistent end-to-end system provisioning in less than 3 days
  • Provides a single platform not only for RISE with SAP-based workloads, but for all systems, both SAP and non-SAP, with the same standards as RISE with SAP
  • Well-defined CAS package-related services around system monitoring, job scheduling and monitoring, print setup and monitoring, transport management, ChaRM maintenance and custom add-on management (along with support for databases such as Oracle and DB2, and operating systems such as AIX)
  • 50–60% improvement in MTTD and MTTR leveraging AI

Source: ibm.com