
Monday, 2 September 2024

Apache Flink for all: Making Flink consumable across all areas of your business


In an era of rapid technological advancements, responding quickly to changes is crucial. Event-driven businesses across all industries thrive on real-time data, enabling companies to act on events as they happen rather than after the fact. These agile businesses recognize needs, fulfill them and secure a leading market position by delighting customers.


This is where Apache Flink shines, offering a powerful solution to harness the full potential of an event-driven business model through efficient computing and processing capabilities. Flink jobs, designed to process continuous data streams, are key to making this possible.

How Apache Flink enhances real-time event-driven businesses


Imagine a retail company that can instantly adjust its inventory based on real-time sales data pipelines. It can adapt quickly to changing demand and seize new opportunities. Or consider a FinTech organization that can detect and prevent fraudulent transactions as they occur. By countering threats, the organization prevents both financial losses and customer dissatisfaction. These real-time capabilities are no longer optional but essential for any company looking to lead in today’s market.

Apache Flink takes raw events and processes them, making them more relevant in the broader business context. During event processing, events are combined, aggregated and enriched, providing deeper insights and enabling many types of use cases, such as: 

  1. Data analytics: Enables analytics on continuous data streams, such as monitoring user activities, financial transactions or IoT device data. 
  2. Pattern detection: Enables identifying and extracting complex event patterns from continuous data streams. 
  3. Anomaly detection: Identifies unusual patterns or outliers in streaming data to pinpoint irregular behaviors quickly. 
  4. Data aggregation: Ensures efficient summarization and processing of continuous data flows for timely insights and decision-making. 
  5. Stream joins: Combines data from multiple streaming platforms and data sources for further event correlation and analysis. 
  6. Data filtering: Extracts relevant data by applying specific conditions to streaming data.
  7. Data manipulation: Transforms and modifies data streams with data mapping, filtering and aggregation (a minimal sketch combining several of these operations follows this list).
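
To make a few of these operations concrete, here is a minimal, illustrative PyFlink sketch. It assumes the apache-flink Python package and a local Java runtime are installed; the table definition, field ranges and threshold are hypothetical. It filters a demo stream of orders and aggregates it per customer:

```python
# A minimal PyFlink sketch: filter and aggregate a generated stream of orders.
# Assumes the apache-flink Python package (PyFlink) and a local JVM are available.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# A bounded demo source; in practice this would be a Kafka topic or another connector.
t_env.execute_sql("""
    CREATE TABLE orders (
        customer_id INT,
        amount DOUBLE
    ) WITH (
        'connector' = 'datagen',
        'number-of-rows' = '100',
        'fields.customer_id.min' = '1',
        'fields.customer_id.max' = '5',
        'fields.amount.min' = '1',
        'fields.amount.max' = '100'
    )
""")

# Filtering plus aggregation: keep orders above a threshold and summarize per customer.
# On a stream this prints a changelog of incremental updates as results refine.
t_env.execute_sql("""
    SELECT customer_id, COUNT(*) AS order_count, SUM(amount) AS revenue
    FROM orders
    WHERE amount > 10
    GROUP BY customer_id
""").print()
```

In practice, the datagen source would be replaced by a connector pointing at a real Kafka topic.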

The unique advantages of Apache Flink


Apache Flink augments event streaming technologies like Apache Kafka to enable businesses to respond to events more effectively in real time. While both Flink and Kafka are powerful tools, Flink provides additional unique advantages:

  • Data stream processing: Enables stateful, time-based processing of data streams to power use cases such as transaction analysis, customer personalization and predictive maintenance through optimized computing. 
  • Integration: Integrates seamlessly with other data systems and platforms, including Apache Kafka, Spark, Hadoop and various databases. 
  • Scalability: Handles large datasets across distributed systems, ensuring performance at scale, even in the most demanding Flink jobs.
  • Fault tolerance: Recovers from failures without data loss, ensuring reliability.

IBM empowers customers and adds value to Apache Kafka and Flink


It comes as no surprise that Apache Kafka is the de-facto standard for real-time event streaming. But that’s just the beginning. Most applications require more than just a single raw stream and different applications can use the same stream in different ways.

Apache Flink provides a means of distilling events so they can do more for your business. With this combination, the value of each event stream can grow exponentially. Enrich your event analytics, leverage advanced ETL operations and respond to increasing business needs more quickly and efficiently. You can harness the ability to generate real-time automation and insights at your fingertips.

IBM® is at the forefront of event streaming and stream processing providers, adding more value to Apache Flink’s capabilities. Our approach to event streaming and streaming applications is to provide an open and composable solution to address these large-scale industry concerns. Apache Flink will work with any Kafka topic, making it consumable for all.

The IBM technology builds on what customers already have, avoiding vendor lock-in. With its easy-to-use and no-code format, users without deep skills in SQL, Java, or Python can leverage events, enriching their data streams with real-time context, irrespective of their role. Users can reduce dependencies on highly skilled technicians and free up developers’ time to accelerate the number of projects that can be delivered. The goal is to empower them to focus on business logic, build highly responsive Flink applications and lower their application workloads.

Take the next step


IBM Event Automation, a fully composable event-driven service, enables businesses to drive their efforts wherever they are on their journey. The event streams, event endpoint management and event processing capabilities help lay the foundation of an event-driven architecture for unlocking the value of events. You can also manage your events like APIs, driving seamless integration and control.

Take a step towards an agile, responsive and competitive IT ecosystem with Apache Flink and IBM Event Automation.

Source: ibm.com

Saturday, 13 July 2024

IBM continues to support OpenSource AsyncAPI in breaking the boundaries of event driven architectures


IBM Event Automation’s event endpoint management capability makes it easy to describe and document your Kafka topics (event sources) according to the open source AsyncAPI Specification.

Why is this important? AsyncAPI already fuels clarity, standardization, interoperability, real-time responsiveness and beyond. Event endpoint management brings this to your ecosystem and helps you seamlessly manage the complexities of modern applications and systems.


The immense utility of application programming interfaces (APIs) and API management is already widely recognized: they enable developers to collaborate effectively, helping them discover, access and build on existing solutions. As events are used for communication between applications, these same benefits can be delivered by formalizing event-based interfaces:

  • Describing events in a standardized way: Developers can quickly understand what they are and how to consume them
  • Event discovery: Interfaces can be added to catalogs, so they are advertised and searchable
  • Decentralized access: Self-service access with trackable use for interface owners
  • Lifecycle management: Interface versioning so that changes do not unexpectedly break consuming teams

Becoming event driven has never been more important as customers demand immediate responsiveness and businesses need ways to quickly adapt to changing market dynamics. Thus, events need to be fully leveraged and spread across the organization in order for businesses to truly move with agility. This is where the value of event endpoint management becomes evident: event sources can be managed easily and consistently like APIs to securely reuse them across the enterprise; and then they can be discovered and utilized by any user across your teams.

One of the key benefits of event endpoint management is that it allows you to describe events in a standardized way according to the AsyncAPI specification. Its intuitive UI makes it easy to produce a valid AsyncAPI document for any Kafka cluster or system that adheres to the Apache Kafka protocol.
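
For illustration, here is what a minimal AsyncAPI 3.0 document for a single Kafka topic can look like, built as a Python dictionary and printed as JSON. The title, host, channel and payload fields are hypothetical placeholders; event endpoint management generates the real document for you:

```python
import json

# A hypothetical, minimal AsyncAPI 3.0 document describing one Kafka topic.
async_api_doc = {
    "asyncapi": "3.0.0",
    "info": {"title": "Order events", "version": "1.0.0"},
    "servers": {
        "demo-kafka": {"host": "kafka.example.com:9092", "protocol": "kafka"}
    },
    "channels": {
        "orders": {
            "address": "orders.created",
            "messages": {
                "orderCreated": {
                    "payload": {
                        "type": "object",
                        "properties": {
                            "orderId": {"type": "string"},
                            "total": {"type": "number"},
                        },
                    }
                }
            },
        }
    },
    "operations": {
        "receiveOrderCreated": {
            "action": "receive",
            "channel": {"$ref": "#/channels/orders"},
        }
    },
}

print(json.dumps(async_api_doc, indent=2))  # AsyncAPI documents can be JSON or YAML
```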

Our AsyncAPI support continues to broaden. Our latest event endpoint management release introduces the ability for client applications to write to an event source through the event gateway. This means an application developer can now produce to an event source that is published to the catalog, rather than just consume events. On top of this, we have added controls such as schema enforcement to manage the kind of data a client can write to your topic.

Alongside self-service access to the event sources found in the catalog, we provide finer-grained approval controls. Access to the event sources is managed by the event gateway: it handles incoming requests from applications to access a topic, routing traffic securely between the Kafka cluster and the application.
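
As a rough sketch of what producing through the event gateway looks like from a client application, the snippet below uses the standard Kafka protocol via the kafka-python library. Every connection detail shown (gateway address, credentials, topic name, payload) is a placeholder; the real values come from the catalog entry and the credentials issued when access is granted:

```python
# A hedged sketch of producing to an event source through an event gateway,
# using the standard Kafka protocol (kafka-python). All connection details below
# (address, credentials, topic name, payload) are hypothetical placeholders.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="event-gateway.example.com:443",  # gateway endpoint, not the broker
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="generated-client-id",
    sasl_plain_password="generated-client-secret",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# The gateway can enforce controls such as schema validation on what is written.
producer.send("orders.created", {"orderId": "1234", "total": 42.50})
producer.flush()
```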

Open innovation has rapidly become an engine of revenue growth and business performance. Organizations that embrace open innovation had a 59% higher rate of revenue growth compared to those that don’t. (IBM Institute for Business Value)

Since its inception, event endpoint management has adopted and promoted AsyncAPI, which is the industry standard for defining asynchronous APIs. AsyncAPI version 3 was released in December 2023, and within a couple of weeks IBM supported generating v3 AsyncAPI docs in event endpoint management. Additionally, as part of giving back to the open-source community, IBM updated the open source AsyncAPI generator templates to support the latest version 3 updates.

Source: ibm.com

Tuesday, 9 July 2024

Why an SBOM should be at the center of your application management strategy


The concept of a Software Bill of Materials (SBOM) was originally focused on supply chain security or supply chain risk management. The idea was that if you know how all the different tools and components of your application are connected, you can minimize the risk associated with any component if it becomes compromised. SBOMs have become a staple of most security teams because they offer a quick way to trace the “blast radius” of a compromised piece of an application.

Yet the value of an SBOM goes well beyond application security. If you know how an application is put together (all the connections and dependencies that exist between components), then you can also use that perspective to improve how an application operates. Think of it as the reverse of the security use case. Instead of cutting off a compromised application component to avoid downstream impacts, you’re optimizing a component so downstream systems will benefit.
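
As a simple illustration of both uses, the sketch below walks a CycloneDX-style SBOM (the components and dependencies are made up) and answers the question: if this component is compromised or changed, what else is affected?

```python
# A minimal sketch: reading a CycloneDX-style SBOM (JSON) and tracing which
# components are affected if one of them is compromised ("blast radius").
# The SBOM content below is a hypothetical example, not a real application.
from collections import defaultdict, deque

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"bom-ref": "web-frontend", "type": "application", "name": "web-frontend"},
        {"bom-ref": "orders-api", "type": "application", "name": "orders-api"},
        {"bom-ref": "libssl", "type": "library", "name": "libssl", "version": "3.0.7"},
    ],
    "dependencies": [
        {"ref": "web-frontend", "dependsOn": ["orders-api"]},
        {"ref": "orders-api", "dependsOn": ["libssl"]},
    ],
}

# Build reverse edges: for each component, which components depend on it?
dependents = defaultdict(set)
for entry in sbom["dependencies"]:
    for dep in entry.get("dependsOn", []):
        dependents[dep].add(entry["ref"])

def blast_radius(compromised: str) -> set:
    """Return every component that directly or transitively depends on `compromised`."""
    affected, queue = set(), deque([compromised])
    while queue:
        current = queue.popleft()
        for parent in dependents[current]:
            if parent not in affected:
                affected.add(parent)
                queue.append(parent)
    return affected

print(blast_radius("libssl"))  # {'orders-api', 'web-frontend'}
```

The same dependency graph, read in the other direction, shows which downstream systems stand to benefit when a component is optimized.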

The role of SBOMs in application management


In this sense, SBOMs fill a critical gap in the discipline of application management. Most application teams use many different single-use tools to manage specific aspects of application operations and performance. Yet it’s easy to lose the broader strategic perspective of an application in the silos that those toolsets create. 

That loss of perspective is particularly concerning given the proliferation of application tools and the huge amount of data they create every day. All the widgets that optimize, monitor and report on applications can become so noisy that an application owner can simply drown in all that data.  All that data exists for a reason: someone thought it needed to be measured. But it’s only useful if it contributes to a broader application strategy.

An SBOM provides a more strategic view that can help application owners prioritize and analyze all the information they’re seeing from scattered toolsets and operating environments. It gives you a sense of the whole application, in all its glorious complexity and interconnectedness. That strategic view is a critical foundation for any application owner, because it places the data and dashboards created by siloed toolsets in context. It gives you a sense of what application tooling does and, more importantly, does not know.

SBOM maps of application dependencies and data flows can also point out observability gaps. Those gaps might be in operational components, which aren’t collecting the data that you need to gauge their performance. They could also be gaps between siloed data sources that require some way to provide context on how they interact.

SBOMs in action with IBM Concert


SBOMs play a key role in IBM Concert, a new application management tool which uses AI to contextualize and prioritize the information that flows through siloed application toolsets and operating environments. Uploading an SBOM is the easiest way to get started with IBM Concert, opening the door to a 360° view of your application.

IBM Concert uses SBOMs first to define the contours of an application. Associating data flows and operational elements with a particular application can be tricky, especially when you’re dealing with an application that spans on-prem and cloud environments with interconnected data flows. An SBOM draws a definitive boundary around an application, so IBM Concert can focus on the data sets that matter.

SBOMs also give IBM Concert a handy overview of how different data elements within an application are related to one another. By defining those connections and dependencies in advance, IBM Concert can then focus on analyzing data flows across that architecture instead of trying to generate a theory of how an application operates from scratch.

SBOMs also assist IBM Concert by providing a standardized data format which identifies relevant data sources. While the “language” of every application may be different, SBOMs serve as a type of translation layer, which helps to differentiate risk data from network data, cost information from security information. With these guardrails in place, IBM Concert has a reference point to start its analysis.

Your next step: SBOMs as a source of truth


Since SBOMs are a staple of security and compliance teams, it’s likely that your application already has this information ready for use. It’s simply a matter of making sure your SBOM is up to date and then repurposing that information by uploading it into IBM Concert. Even this simple step will pave the way for valuable strategic insights into your application.

Source: ibm.com

Saturday, 29 June 2024

Applying generative AI to revolutionize telco network operations


Generative AI is shaping the future of telecommunications network operations. The potential applications for enhancing network operations include predicting the values of key performance indicators (KPIs), forecasting traffic congestion, enabling the move to prescriptive analytics, providing design advisory services and acting as network operations center (NOC) assistants.

In addition to these capabilities, generative AI can revolutionize drive tests, optimize network resource allocation, automate fault detection, optimize truck rolls and enhance customer experience through personalized services. Operators and suppliers are already identifying and capitalizing on these opportunities.

Nevertheless, challenges persist in the speed of implementing generative AI-supported use cases, as well as avoiding siloed implementations that impede comprehensive scaling and hinder the optimization of return on investment.

In a previous blog, we presented the three-layered model for efficient network operations. The main challenges in the context of applying generative AI across these layers are: 

  • Data layer: Generative AI initiatives are data projects at their core, with inadequate data comprehension being one of the primary complexities. In telco, network data is often vendor-specific, which makes it hard to understand and consume efficiently. It is also scattered across multiple operational support system (OSS) tools, complicating efforts to obtain a unified view of the network. 
  • Analytics layer: Foundation models have different capabilities and applications for different use cases. The perfect foundation model does not exist because a single model cannot uniformly address identical use cases across different operators. This complexity arises from the diverse requirements and unique challenges that each network presents, including variations in network architecture, operational priorities and data landscapes. This layer hosts a variety of analytics, including traditional AI and machine learning models, large language models and highly customized foundation models tailored for the operator. 
  • Automation layer: Foundation models excel at tasks such as summarization, regression and classification, but they are not stand-alone solutions for optimization. While foundation models can suggest various strategies to proactively address predicted issues, they cannot identify the absolute best strategy. To evaluate the correctness and impact of each strategy and to recommend the optimal one, we require advanced simulation frameworks. Foundation models can support this process but cannot replace it. 

Essential generative AI considerations across the 3 layers 


Instead of providing an exhaustive list of use cases or detailed framework specifics, we will highlight key principles and strategies. These focus on effectively integrating generative AI into telco network operations across the three layers, as illustrated in Figure 1.

Figure 1: Generative AI in the three-layered model for future network operations

We aim to emphasize the importance of robust data management, tailored analytics and advanced automation techniques that collectively enhance network operations, performance and reliability. 

1. Data layer: optimizing telco network data using generative AI 


Understanding network data is the starting point for any generative AI solution in telco. However, each vendor in the telecom environment has unique counters, with specific names and value ranges, which makes it difficult to understand data. Moreover, the telco landscape often features multiple vendors, adding to the complexity. Gaining expertise in these vendor-specific details requires specialized knowledge, which is not always readily available. Without a clear understanding of the data they possess, telecom companies cannot effectively build and deploy generative AI use cases. 

We have seen that retrieval-augmented generation (RAG)-based architectures can be highly effective in addressing this challenge. Based on our experience from proof-of-concept (PoC) projects with clients, here are the best ways to leverage generative AI in the data layer: 

  • Understanding vendor data: Generative AI can process extensive vendor documentation to extract critical information about individual parameters. Engineers can interact with the AI using natural language queries, receiving instant, precise responses. This eliminates the need to manually browse through complex and voluminous vendor documentation, saving significant time and effort. 
  • Building knowledge graphs: Generative AI can automatically build comprehensive knowledge graphs by understanding the intricate data models of different vendors. These knowledge graphs represent data entities and their relationships, providing a structured and interconnected view of the vendor ecosystem. This aids in better data integration and utilization in the upper layers. 
  • Data model translation: With an in-depth understanding of different vendors’ data models, generative AI can translate data from one vendor’s model to another. This capability is crucial for telecom companies that need to harmonize data across diverse systems and vendors, ensuring consistency and compatibility. 

Automating the understanding of vendor-specific data, generating metadata, constructing detailed knowledge graphs and facilitating seamless data model translation are key processes. Together, these processes, supported by a data layer with a RAG-based architecture, enable telecom companies to harness the full potential of their data. 
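
As a schematic illustration of the retrieval step in such a RAG-based flow, the sketch below ranks a few made-up vendor documentation snippets against an engineer's question and assembles the context that would be passed to the model. A production system would use embeddings and a vector store rather than the simple word-overlap score shown here:

```python
# A minimal, illustrative retrieval step for a RAG-style flow over vendor
# documentation. The counter/parameter snippets are hypothetical examples.
def score(query: str, passage: str) -> float:
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / max(len(q), 1)

vendor_docs = [
    "Counter pmActiveUeDlMax reports the maximum number of active downlink users per cell.",
    "Parameter qRxLevMin sets the minimum required receive level for cell selection, in dBm.",
    "KPI dropped_call_rate is the ratio of abnormally released calls to total calls.",
]

question = "What does the counter for maximum active downlink users mean?"

# Rank passages, keep the best ones, and hand them to the model as context.
top_passages = sorted(vendor_docs, key=lambda d: score(question, d), reverse=True)[:2]
prompt = "Answer using only this context:\n" + "\n".join(top_passages) + f"\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to the chosen LLM
```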

2. Analytics layer: harnessing diverse models for network insights 


At a high level, we can split the use cases of network analytics into two categories: use cases that revolve around understanding the past and current network state, and use cases that predict the future network state. 

For the first category, which involves advanced data correlations and creating insights about the past and current network state, operators can leverage large language models (LLMs) such as Granite™, Llama, GPT, Mistral and others. Although the training of these LLMs did not particularly include structured operator data, we can effectively use them in combination with multi-shot prompting. This approach helps in bringing additional knowledge and context to operator data interpretation. 

For the second category, which focuses on predicting the future network state, such as anticipating network failures and forecasting traffic loads, operators cannot rely on generic LLMs. This is because these models lack the necessary training to work with network-specific structured and semi-structured data. Instead, operators need foundation models specifically tailored to their unique data and operational characteristics. To accurately forecast future network behavior, we must train these models on the specific patterns and trends unique to the operator, such as historical performance data, incident reports and configuration changes. 
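
To ground the idea that forecasting must be learned from the operator's own history, here is a deliberately simple baseline: a seasonal-naive forecast over made-up hourly load values. A specialized foundation model would replace this baseline, but the framing is the same: fit to the operator's past data, then predict the next values.

```python
# A toy forecasting baseline over a made-up hourly traffic KPI. Specialized
# foundation models trained on the operator's history would replace this.
import numpy as np

hourly_load = np.array([62, 58, 55, 60, 71, 84, 92, 97, 95, 88, 80, 74], dtype=float)

def seasonal_naive_forecast(history: np.ndarray, season: int, steps: int) -> np.ndarray:
    """Predict the next `steps` points by repeating the last observed season."""
    last_season = history[-season:]
    reps = int(np.ceil(steps / season))
    return np.tile(last_season, reps)[:steps]

print(seasonal_naive_forecast(hourly_load, season=12, steps=6))
```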

To implement specialized foundation models, network operators should collaborate closely with AI technology providers. Establishing a continuous feedback loop is essential, wherein you regularly monitor model performance and use the data to iteratively improve the model. Additionally, hybrid approaches that combine multiple models, each specializing in different aspects of network analytics, can enhance overall performance and reliability. Finally, incorporating human expertise to validate and fine-tune the model’s outputs can further improve accuracy and build trust in the system. 

3. Automation layer: integrating generative AI and network simulations for optimal solutions 


This layer is responsible for determining and enforcing optimal actions based on insights from the analytics layer, such as future network state predictions, as well as network operational instructions or intents from the operations team. 

There is a common misconception that generative AI handles optimization tasks and can determine the optimal response to predicted network states. However, for use cases of optimal action determination, the automation layer must integrate network simulation tools. This integration enables detailed simulations of all potential optimization actions using a digital network twin (a virtual replica of the network). These simulations create a controlled environment for testing different scenarios without affecting the live network.

By leveraging these simulations, operators can compare and analyze outcomes to identify the actions that best meet optimization goals. It is worth highlighting that simulations often leverage specialized foundation models from the analytics layer, like masked language models. These models allow manipulating parameters and evaluating their impact on specific masked parameters within the network context. 
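
The overall simulate-then-choose loop can be sketched as follows; the candidate actions, the scoring function standing in for the digital twin, and the weights are all toy values for illustration:

```python
# A schematic sketch of the simulate-then-choose loop: candidate actions are
# scored against a digital-twin-style simulator and the best one is selected.
candidate_actions = [
    {"name": "add_carrier", "capacity_gain": 0.30, "energy_cost": 0.20},
    {"name": "retune_tilt", "capacity_gain": 0.10, "energy_cost": 0.02},
    {"name": "offload_to_neighbor", "capacity_gain": 0.18, "energy_cost": 0.05},
]

def simulate(action: dict, predicted_load: float) -> float:
    """Toy 'digital twin': score how well an action absorbs the predicted load."""
    relieved = min(predicted_load, action["capacity_gain"])
    return relieved - 0.5 * action["energy_cost"]  # weight the capacity/energy trade-off

predicted_overload = 0.25  # e.g. 25% over capacity, predicted by the analytics layer
best = max(candidate_actions, key=lambda a: simulate(a, predicted_overload))
print(best["name"])  # the action recommended to the operations team
```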

The automation layer leverages another set of use cases for generative AI, namely the automated generation of scripts for action execution. These actions, triggered by network insights or human-provided intents, require tailored scripts to update network elements accordingly. Traditionally, this process has been manual within telcos, but with advancements in generative AI, there’s potential for automatic script generation. Architectures with generic LLMs augmented with retrieval-augmented generation (RAG) show good performance in this context, provided operators ensure access to vendor documentation and suitable methods of procedure (MOP). 

Generative AI plays a significant role in future telco operations, from predicting KPIs to responding to network insights and user intents. However, addressing challenges such as efficient data comprehension, specialized predictive analytics and automated network optimization is crucial. IBM has hands-on experience in each of these areas, offering solutions for efficient data integration, specialized foundation models and automated network optimization tools.

Source: ibm.com

Thursday, 13 June 2024

5 SLA metrics you should be monitoring


In business and beyond, communication is king. Successful service level agreements (SLAs) operate on this principle, laying the foundation for successful provider-customer relationships.

A service level agreement (SLA) is a key component of technology vendor contracts that describes the terms of service between a service provider and a customer. SLAs describe the level of performance to be expected, how performance will be measured and repercussions if levels are not met. SLAs make sure that all stakeholders understand the service agreement and help forge a more seamless working relationship.

Types of SLAs


There are three main types of SLAs:

Customer-level SLAs

Customer-level SLAs define the terms of service between a service provider and a customer. A customer can be external, such as a business purchasing cloud storage from a vendor, or internal, as is the case with an SLA between business and IT teams regarding the development of a product.

Service-level SLAs

Service providers who offer the same service to multiple customers often use service-level SLAs. Service-level SLAs do not change based on the customer, instead outlining a general level of service provided to all customers.

Multilevel SLAs

When a service provider offers a multitiered pricing plan for the same product, they often offer multilevel SLAs to clearly communicate the service offered at each level. Multilevel SLAs are also used when creating agreements between more than two parties.

SLA components


SLAs include an overview of the parties involved, services to be provided, stakeholder role breakdowns, performance monitoring and reporting requirements. Other SLA components include security protocols, redressing agreements, review procedures, termination clauses and more. Crucially, they define how performance will be measured.

SLAs should precisely define the key metrics—service-level agreement metrics—that will be used to measure service performance. These metrics are often related to organizational service level objectives (SLOs). While SLAs define the agreement between organization and customer, SLOs set internal performance targets. Fulfilling SLAs requires monitoring important metrics related to business operations and service provider performance. The key is monitoring the right metrics.

What is a KPI in an SLA?


Metrics are specific measures of an aspect of service performance, such as availability or latency. Key performance indicators (KPIs) are linked to business goals and are used to judge a team’s progress toward those goals. KPIs don’t exist without business targets; they are “indicators” of progress toward a stated goal.

Let’s use annual sales growth as an example, with an organizational goal of 30% growth year-over-year. KPIs such as subscription renewals to date or leads generated provide a real-time snapshot of business progress toward the annual sales growth goal.

Metrics such as application availability and latency help provide context. For example, if the organization is losing customers and not on track to meet the annual goal, an examination of metrics related to customer satisfaction (that is, application availability and latency) might provide some answers as to why customers are leaving.

What SLA metrics to monitor


SLAs contain different terms depending on the vendor, type of service provided, client requirements, compliance standards and more, and metrics vary by industry and use case. However, certain SLA performance metrics such as availability, mean time to recovery, response time, error rates and security and compliance measurements are commonly used across services and industries. These metrics set a baseline for operations and the quality of services provided.

Clearly defining which metrics and key performance indicators (KPIs) will be used to measure performance and how this information will be communicated helps IT service management (ITSM) teams identify what data to collect and monitor. With the right data, teams can better maintain SLAs and make sure that customers know exactly what to expect.

Ideally, ITSM teams provide input when SLAs are drafted, in addition to monitoring the metrics related to their fulfillment. Involving ITSM teams early in the process helps make sure that business teams don’t make agreements with customers that are not attainable by IT teams.

SLA metrics that are important for IT and ITSM leaders to monitor include:

1. Availability

Service disruptions, or downtime, are costly, can damage enterprise credibility and can lead to compliance issues. The SLA between an organization and a customer dictates the expected level of service availability or uptime and is an indicator of system functionality.

Availability is often measured in “nines on the way to 100%”: 90%, 99%, 99.9% and so on. Many cloud and SaaS providers aim for an industry standard of “five 9s” or 99.999% uptime.
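
A quick worked example shows what those targets mean in allowed downtime per year:

```python
# Translating availability "nines" into allowed downtime per year (365 days).
minutes_per_year = 365 * 24 * 60

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_min = minutes_per_year * (1 - availability)
    print(f"{availability:.3%} uptime -> about {downtime_min:,.1f} minutes of downtime per year")
```

At 99% that is roughly 3.7 days of downtime a year; at 99.999% it is only a little over 5 minutes.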

For certain businesses, even an hour of downtime can mean significant losses. If an e-commerce website experiences an outage during a high traffic time such as Black Friday, or during a large sale, it can damage the company’s reputation and annual revenue. Service disruptions also negatively impact the customer experience. Services that are not consistently available often lead users to search for alternatives. Business needs vary, but the need to provide users with quick and efficient products and services is universal.

Generally, maximum uptime is preferred. However, providers in some industries might find it more cost effective to offer a slightly lower availability rate if it still meets client needs.

2. Mean time to recovery

Mean time to recovery measures the average amount of time that it takes to recover a product during an outage or failure. No system or service is immune from an occasional issue or failure, but enterprises that can quickly recover are more likely to maintain business profitability, meet customer needs and uphold SLAs.

3. Response time and resolution time

SLAs often state the amount of time in which a service provider must respond after an issue is flagged or logged. When an issue is logged or a service request is made, the response time indicates how long it takes for a provider to respond to and address the issue. Resolution time refers to how long it takes for the issue to be resolved. Minimizing these times is key to maintaining service performance.

Organizations should seek to address issues before they become system-wide failures and cause security or compliance issues. Software solutions that offer full-stack observability into business functions can play an important role in maintaining optimized systems and service performance. Many of these platforms use automation and machine learning (ML) tools to automate the process of remediation or identify issues before they arise.

For example, AI-powered intrusion detection systems (IDS) constantly monitor network traffic for malicious activity, violations of security protocols or anomalous data. These systems deploy machine learning algorithms to monitor large data sets and use them to identify anomalous data. Anomalies and intrusions trigger alerts that notify IT teams. Without AI and machine learning, manually monitoring these large data sets would not be possible.

4. Error rates

Error rates measure service failures and the number of times service performance dips below defined standards. Depending on your enterprise, error rates can relate to any number of issues connected to business functions.

For example, in manufacturing, error rates correlate to the number of defects or quality issues on a specific product line, or the total number of errors found during a set time interval. These error rates, or defect rates, help organizations identify the root cause of an error and whether it’s related to the materials used or a broader issue.

There is a subset of customer-based metrics that monitor customer service interactions, which also relate to error rates.

◉ First call resolution rate: In the realm of customer service, issues related to help desk interactions can factor into error rates. The success of customer service interactions can be difficult to gauge. Not every customer fills out a survey or files a complaint if an issue is not resolved—some will just look for another service. One metric that can help measure customer service interactions is the first call resolution rate. This rate reflects whether a user’s issue was resolved during the first interaction with a help desk, chatbot or representative. Every escalation of a customer service query beyond the initial contact means spending on extra resources. It can also impact the customer experience.
◉ Abandonment rate: This rate reflects the frequency with which a customer abandons their inquiry before finding a resolution. Abandonment rate can also add to the overall error rate and helps measure the efficacy of a service desk, chatbot or human workforce (a small worked example of both rates follows).
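
Both rates are simple ratios; here is a small worked example with made-up counts:

```python
# A worked example of the two customer-interaction rates described above.
# The counts are illustrative.
total_contacts = 1_200
resolved_on_first_contact = 930
abandoned_before_resolution = 84

first_call_resolution_rate = resolved_on_first_contact / total_contacts
abandonment_rate = abandoned_before_resolution / total_contacts

print(f"First call resolution rate: {first_call_resolution_rate:.1%}")  # 77.5%
print(f"Abandonment rate: {abandonment_rate:.1%}")                      # 7.0%
```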

5. Security and compliance

Large volumes of data and the use of on-premises servers, cloud servers and a growing number of applications create a greater risk of data breaches and security threats. If not monitored appropriately, security breaches and vulnerabilities can expose service providers to legal and financial repercussions.

For example, the healthcare industry has specific requirements around how to store, transfer and dispose of a patient’s medical data. Failure to meet these compliance standards can result in fines and indemnification for losses incurred by customers.

While there are countless industry-specific metrics defined by the different services provided, many of them fall under larger umbrella categories. To be successful, it is important for business teams and IT service management teams to work together to improve service delivery and meet customer expectations.

Benefits of monitoring SLA metrics


Monitoring SLA metrics is the most efficient way for enterprises to gauge whether IT services are meeting customer expectations and to pinpoint areas for improvement. By monitoring metrics and KPIs in real time, IT teams can identify system weaknesses and optimize service delivery.

The main benefits of monitoring SLA metrics include:

Greater observability

A clear end-to-end understanding of business operations helps ITSM teams find ways to improve performance. Greater observability enables organizations to gain insights into the operation of systems and workflows, identify errors, balance workloads more efficiently and improve performance standards.

Optimized performance

By monitoring the right metrics and using the insights gleaned from them, organizations can provide better services and applications, exceed customer expectations and drive business growth.

Increased customer satisfaction

Similarly, monitoring SLA metrics and KPIs is one of the best ways to make sure services are meeting customer needs. In a crowded business field, customer satisfaction is a key factor in driving customer retention and building a positive reputation.

Greater transparency

By clearly outlining the terms of service, SLAs help eliminate confusion and protect all parties. Well-crafted SLAs make it clear what all stakeholders can expect, offer a well-defined timeline of when services will be provided and which stakeholders are responsible for specific actions. When done right, SLAs help set the tone for a smooth partnership.

Understand performance and exceed customer expectations


The IBM® Instana® Observability platform and IBM Cloud Pak® for AIOps can help teams get stronger insights from their data and improve service delivery.

IBM® Instana® Observability offers full-stack observability in real time, combining automation, context and intelligent action into one platform. Instana helps break down operational silos and provides access to data across DevOps, SRE, platform engineering and ITOps teams.

IT service management teams benefit from IBM Cloud Pak for AIOps through automated tools that address incident management and remediation. IBM Cloud Pak for AIOps offers tools for innovation and the transformation of IT operations. Meet SLAs and monitor metrics with an advanced visibility solution that offers context into dependencies across environments.

IBM Cloud Pak for AIOps is an AIOps platform that delivers visibility into performance data and dependencies across environments. It enables ITOps managers and site reliability engineers (SREs) to use artificial intelligence, machine learning and automation to better address incident management and remediation. With IBM Cloud Pak for AIOps, teams can innovate faster, reduce operational cost and transform IT operations (ITOps).

Source: ibm.com

Tuesday, 4 June 2024

Streamlining digital commerce: Integrating IBM API Connect with ONDC


In the dynamic landscape of digital commerce, seamless integration and efficient communication drive the success of buyers, sellers and logistics providers. The Open Network for Digital Commerce (ONDC) platform stands as a revolutionary initiative to streamline the digital commerce ecosystem in India. When coupled with the robust capabilities of IBM API Connect, this integration presents a game-changing opportunity for buyers, sellers and logistics partners to thrive in the digital marketplace. Let’s delve into its benefits and potential impact on business.

Introduction to ONDC and IBM API Connect 


The ONDC platform, envisioned by the Government of India, aims to create an inclusive and interoperable digital commerce ecosystem. It facilitates seamless integration among various stakeholders—including buyers, sellers, logistics providers and financial institutions—fostering transparency, efficiency and accessibility in digital commerce. 

IBM API Connect is a comprehensive API management solution that enables organizations to create, secure, manage and analyze APIs throughout their lifecycle. It provides capabilities for designing, deploying and consuming APIs, thereby enabling secure and efficient communication between different applications and systems. 

Benefits for buyers and sellers apps 


1. Enhanced integration: Integration with IBM API Connect allows buyers and sellers apps to seamlessly connect with the ONDC platform, enabling real-time data exchange and transaction processing. This makes for smoother operations and improved user experience for buyers and sellers alike. 
2. Expanded services: Buyers and sellers apps can leverage the ONDC platform’s wide range of services, including inventory management, order processing and payment solutions. Integration with IBM API Connect enables easy access to these services, empowering apps to offer comprehensive solutions to their users. 
3. Improved efficiency: By automating processes and streamlining communication, the integration enhances the overall efficiency of buyers and sellers apps. Tasks such as inventory updates, order tracking and payment reconciliation can be performed seamlessly, reducing manual effort and minimizing errors. 
4. Better data insights: IBM API Connect provides advanced analytics capabilities that enable buyers and sellers apps to gain valuable insights into customer behavior, market trends and inventory management. By leveraging these insights, apps can optimize their operations, personalize user experiences and drive business growth.

Impact on business and logistics 


1. Operational efficiency: The integration of IBM API Connect with the ONDC platform streamlines operations for buyers, sellers and logistics partners, reducing costs and improving productivity. Automated processes and real-time data exchange enable faster order fulfilment and smoother logistics operations. 
2. Customer experience boost: Seamless communication between buyers and sellers apps and the ONDC platform translates into a better customer experience. From faster order processing to accurate inventory information, customers benefit from a more efficient and transparent shopping experience. 
3. Business growth: By leveraging the integrated capabilities of IBM API Connect and the ONDC platform, buyers and sellers apps can expand their reach, attract more customers and increase sales. The ability to offer a seamless and comprehensive shopping experience gives apps a competitive edge in the market. 
4. Logistics optimization: Logistics providers can also benefit from this integration by gaining access to real-time shipment data, optimizing delivery routes and improving inventory management. This leads to faster delivery times, reduced transportation costs and enhanced customer satisfaction. 

Enhanced integration with IBM API Connect


The integration of IBM API Connect with the ONDC network platform represents a significant advancement in the digital commerce ecosystem. Buyers, sellers and logistics partners stand to benefit from enhanced integration, expanded services, improved efficiency and valuable data insights.

As businesses embrace this integration, they can expect to see tangible impacts on operational efficiency, customer experience and overall business growth. By leveraging the combined capabilities of IBM API Connect and the ONDC platform, stakeholders can navigate the complexities of digital commerce with confidence and unlock new opportunities for success. 

Source: ibm.com

Saturday, 1 June 2024

How an AI Gateway provides leaders with greater control and visibility into AI services


Generative AI is a transformative technology that many organizations are experimenting with or already using in production to unlock rapid innovation and drive massive productivity gains. However, we have seen that this breakneck pace of adoption has left business leaders wanting more visibility and control around the enterprise usage of GenAI.

When I talk with clients about their organization’s use of GenAI, I ask them these questions:

  • Do you have visibility into which third-party AI services are being used across your company and for what purposes?
  • How much is your company cumulatively paying for LLM subscriptions, including signups by teams and individuals, and are those costs predictable and controllable?
  • Are you able to address common vulnerabilities when invoking LLMs, such as the leakage of sensitive data, unauthorized user access and policy violations?

These questions can all be answered if you have an AI Gateway.

What is an AI Gateway? 


An AI gateway provides a single point of control for organizations to access AI services via APIs in the public domain and brokers secured connectivity between your different applications and third-party AI APIs both within and outside an organization’s infrastructure. It acts as the gatekeeper for data and instructions that flow between those components. An AI Gateway provides policies to centrally manage and control the use of AI APIs with your applications, as well as key analytics and insights to help you make decisions faster on LLM choices. 
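
Conceptually, two such policies, rate limiting and response caching, can be sketched in a few lines. This is a schematic illustration of the gateway idea, not the IBM API Connect implementation:

```python
# A schematic sketch of two controls an AI gateway can apply in front of LLM
# APIs: a per-client rate limit and a cache of identical prompts.
import time
from collections import defaultdict, deque

RATE_LIMIT = 5          # requests allowed per client
WINDOW_SECONDS = 60     # within this rolling window

request_log = defaultdict(deque)   # client_id -> timestamps of recent requests
response_cache = {}                # (model, prompt) -> cached response

def call_llm(model: str, prompt: str) -> str:
    """Placeholder for the real downstream LLM API call."""
    return f"[{model}] response to: {prompt}"

def gateway_request(client_id: str, model: str, prompt: str) -> str:
    now = time.time()
    window = request_log[client_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()                      # drop timestamps outside the window
    if len(window) >= RATE_LIMIT:
        return "429: rate limit exceeded"     # policy enforced centrally
    window.append(now)

    key = (model, prompt)
    if key not in response_cache:             # serve repeated prompts from cache
        response_cache[key] = call_llm(model, prompt)
    return response_cache[key]

print(gateway_request("team-a", "example-model", "Summarize this ticket."))
```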

Announcing AI Gateway for IBM API Connect 


Today IBM is announcing the launch of AI Gateway for IBM API Connect, a feature of our market-leading and award-winning API management platform. This new AI gateway feature, generally available by the end of June, will empower customers to accelerate their AI journey. When this feature launches in June, you will be able to get started with centrally managing watsonx.ai APIs, with the ability to manage additional LLM APIs planned for later this year. 

Key benefits of AI Gateway for IBM API Connect 


1. Faster and more responsible adoption of GenAI: Centralized, controllable self-service access to enterprise AI APIs for developers. 
2. Insights and cost management: Address unexpected or excessive costs for AI services by limiting the rate of requests within a certain duration and by caching AI responses; use built-in analytics and dashboards to gain visibility into enterprise-wide use of AI APIs. 
3. Governance and compliance: By funneling LLM API traffic through the AI Gateway, you can centrally manage the use of AI services through policy enforcement, data encryption, masking of sensitive data, access control, audit trails and more, in support of your compliance obligations. 

Take the next step 


Learn how to complement watsonx.ai and watsonx.governance with AI Gateway for IBM API Connect by visiting our webpage or requesting a live demo to see it in action.

Source: ibm.com

Thursday, 30 May 2024

Empower developers to focus on innovation with IBM watsonx


In the realm of software development, efficiency and innovation are of paramount importance. As businesses strive to deliver cutting-edge solutions at an unprecedented pace, generative AI is poised to transform every stage of the software development lifecycle (SDLC).

A McKinsey study shows that software developers can complete coding tasks up to twice as fast with generative AI. From use case creation to test script generation, generative AI offers a streamlined approach that accelerates development, while maintaining quality. This ground-breaking technology is revolutionizing software development and offering tangible benefits for businesses and enterprises.

Bottlenecks in the software development lifecycle


Traditionally, software development involves a series of time-consuming and resource-intensive tasks. For instance, creating use cases requires meticulous planning and documentation, often involving multiple stakeholders and iterations. Designing data models and generating Entity-Relationship Diagrams (ERDs) demand significant effort and expertise. Moreover, techno-functional consultants with specialized expertise need to be onboarded to translate the business requirements (for example, converting use cases into process interactions in the form of sequence diagrams).

Once the architecture is defined, translating it into backend Java Spring Boot code adds another layer of complexity. Developers must write and debug code, a process that is prone to errors and delays. Crafting frontend UI mock-ups involves extensive design work, often requiring specialized skills and tools.

Testing further compounds these challenges. Writing test cases and scripts manually is laborious and maintaining test coverage across evolving codebases is a persistent challenge. As a result, software development cycles can be prolonged, hindering time-to-market and increasing costs.

In summary, traditional SDLC can be riddled with inefficiencies. Here are some common pain points:

  • Time-consuming tasks: Creating use cases, data models, entity relationship diagrams (ERDs), sequence diagrams, test scenarios and test cases often involves repetitive, manual work.
  • Inconsistent documentation: Documentation can be scattered and outdated, leading to confusion and rework.
  • Limited developer resources: Highly skilled developers are in high demand and repetitive tasks can drain their time and focus.

The new approach: IBM watsonx to the rescue


Tata Consultancy Services, in partnership with IBM®, developed a point of view that incorporates IBM watsonx™. It can automate many tedious tasks and empower developers to focus on innovation. Features include:

  • Use case creation: Users can describe a desired feature in natural language; watsonx then analyzes the input and drafts comprehensive use cases, saving valuable time.
  • Data model creation: Based on use cases and user stories, watsonx can generate robust data models representing the software’s data structure.
  • ERD generation: The data model can be automatically translated into a visual ERD, providing a clear picture of the relationships between entities.
  • DDL script generation: Once the ERD is defined, watsonx can generate the DDL scripts for creating the database (a minimal sketch of this step follows the list).
  • Sequence diagram generation: watsonx can automatically generate the visual representation of the process interactions of a use case and data models, providing a clear understanding of the business process.
  • Back-end code generation: watsonx can translate data models and use cases into functional back-end code, such as Java Spring Boot. This doesn’t eliminate developers, but allows them to focus on complex logic and optimization.
  • Front-end UI mock-up generation: watsonx can analyze user stories and data models to generate mock-ups of the software’s user interface (UI). These mock-ups help visualize the application and gather early feedback.
  • Test case and script generation: watsonx can analyze code and use cases to create automated test cases and scripts, thereby boosting software quality.
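
To illustrate the flavor of the DDL script generation step referenced above, here is a small sketch that turns a hand-written data model (standing in for one generated by watsonx) into CREATE TABLE statements; the tables and columns are hypothetical:

```python
# An illustrative sketch: turning a simple data model into DDL statements.
# The data model here is hand-written; in the flow above it would be generated.
data_model = {
    "customer": {"id": "BIGINT PRIMARY KEY", "name": "VARCHAR(100)", "email": "VARCHAR(255)"},
    "orders": {"id": "BIGINT PRIMARY KEY", "customer_id": "BIGINT", "total": "DECIMAL(10,2)"},
}

def to_ddl(model: dict) -> str:
    statements = []
    for table, columns in model.items():
        cols = ",\n  ".join(f"{name} {col_type}" for name, col_type in columns.items())
        statements.append(f"CREATE TABLE {table} (\n  {cols}\n);")
    return "\n\n".join(statements)

print(to_ddl(data_model))
```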

Efficiency, speed, and cost savings


All of these watsonx automations lead to benefits, such as:

  • Increased developer productivity: By automating repetitive tasks, watsonx frees up developers’ time for creative problem-solving and innovation.
  • Accelerated time-to-market: With streamlined processes and automated tasks, businesses can get their software to market quicker, capitalizing on new opportunities.
  • Reduced costs: Less manual work translates to lower development costs. Additionally, catching bugs early with watsonx-powered testing saves time and resources.

Embracing the future of software development


TCS and IBM believe that generative AI is not here to replace developers, but to empower them. By automating mundane tasks and generating artifacts throughout the SDLC, watsonx paves the way for faster, more efficient and more cost-effective software development. Embracing platforms like IBM watsonx is not just about adopting new technology; it’s about unlocking the full potential of efficient software development in a digital age.

Source: ibm.com

Tuesday, 28 May 2024

Achieving cloud excellence and efficiency with cloud maturity models


  • Cloud maturity models (CMMs) are helpful tools for evaluating an organization’s cloud adoption readiness and cloud security posture.
  • Cloud adoption presents tremendous business opportunity—to the tune of USD 3 trillion—and more mature cloud postures drive greater cloud ROI and more successful digital transformations.
  • There are many CMMs in practice and organizations need to decide which are most appropriate for their business and their needs. CMMs can be used individually, or in conjunction with one another.

Why move to the cloud?


Business leaders worldwide are asking their teams the same question: “Are we using the cloud effectively?” This quandary often comes with an accompanying worry: “Are we spending too much money on cloud computing?” Given the statistics—82% of surveyed respondents in a 2023 Statista study cited managing cloud spend as a significant challenge—it’s a legitimate concern.

Concerns around security, governance and lack of resources and expertise also top the list of respondents’ concerns. Cloud maturity models are a useful tool for addressing these concerns, grounding organizational cloud strategy and proceeding confidently in cloud adoption with a plan.

Cloud maturity models (or CMMs) are frameworks for evaluating an organization’s cloud adoption readiness on both a macro and individual service level. They help an organization assess how effectively it is using cloud services and resources and how cloud services and security can be improved.

Why move to cloud?


Organizations face increased pressure to move to the cloud in a world of real-time metrics, microservices and APIs, all of which benefit from the flexibility and scalability of cloud computing. An examination of cloud capabilities and maturity is a key component of this digital transformation and cloud adoption presents tremendous upside. McKinsey believes it presents a USD 3 trillion opportunity and nearly all responding cloud leaders (99%) view the cloud as the cornerstone of their digital strategy, according to a Deloitte study.

A successful cloud strategy requires a comprehensive assessment of cloud maturity. This assessment is used to identify the actions—such as upgrading legacy tech and adjusting organizational workflows—that the organization needs to take to fully realize cloud benefits and pinpoint current shortcomings. CMMs are a great tool for this assessment.

There are many CMMs in practice and organizations must decide what works best for their business needs. A good starting point for many organizations is to engage in a three-phase assessment of cloud maturity using the following models: a cloud adoption maturity model, a cloud security maturity model and a cloud-native maturity model.

Cloud adoption maturity model


This maturity model helps measure an organization’s cloud maturity in aggregate. It identifies the technologies and internal knowledge that an organization has, how suited its culture is to embrace managed services, the experience of its DevOps team, the initiatives it can begin to migrate to cloud and more. Progress along these levels is linear, so an organization must complete one stage before moving to the next stage.

  • Legacy: Organizations at the beginning of their journey will have no cloud-ready applications or workloads, cloud services or cloud infrastructure.
  • Ad hoc: Next is ad hoc maturity, which likely means the organization has begun its journey through cloud technologies like infrastructure as a service (IaaS), the lowest-level control of resources in the cloud. IaaS customers receive compute, network and storage resources over the internet on an on-demand, pay-as-you-go basis.
  • Repeatable: Organizations at this stage have begun to make more investments in the cloud. This might include establishing a Cloud Center of Excellence (CCoE) and examining the scalability of initial cloud investments. Most importantly, the organization has now created repeatable processes for moving apps, workstreams and data to the cloud.
  • Optimized: Cloud environments are now working efficiently and every new use case follows the same foundation set forth by the organization.
  • Cloud-advanced: The organization now has most, if not all, of its workstreams on the cloud. Everything runs seamlessly and efficiently and all stakeholders are aware of the cloud’s potential to drive business objectives.

Cloud security maturity model


The optimization of security is paramount for any organization that moves to the cloud. The cloud can be more secure than on-premises data centers, thanks to robust policies and postures used by cloud providers. Prioritizing cloud security is important considering that public cloud-based breaches often take months to correct and can have serious financial and reputational consequences.

Cloud security represents a partnership between the cloud service provider (CSP) and the client. CSPs provide certifications on the security inherent in their offerings, but clients that build in the cloud can introduce misconfigurations or other issues when they build on top of the cloud infrastructure. So CSPs and clients must work together to create and maintain secure environments.

The Cloud Security Alliance, of which IBM® is a member, has a widely adopted cloud security maturity model (CSMM). The model provides a good foundation for organizations looking to better embed security into their cloud environments.

Organizations may not want or need to adopt the entire model, but can use whichever components make sense. The model’s five stages revolve around the organization’s level of security automation.

  • No automation: Security professionals identify and address incidents and problems manually through dashboards.
  • Simple SecOps: This phase includes some infrastructure-as-code (IaC) deployments and federation on some accounts.
  • Manually executed scripts: This phase incorporates more federation and multi-factor authentication (MFA), although most automation is still executed manually.
  • Guardrails: This phase includes a larger library of automation, expanding into multiple account guardrails, which are high-level governance policies for the cloud environment (a minimal guardrail check is sketched after this list).
  • Automation everywhere: At this stage, everything is integrated into IaC, and MFA and federation usage is pervasive.
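To make the guardrails stage more concrete, here is a minimal sketch, assuming an AWS environment and the boto3 SDK, of an automated policy check that flags S3 buckets without a default encryption configuration. It is illustrative only and not part of the CSA model; the choice of control is an assumption.

```python
# Minimal sketch of an automated guardrail check (illustrative only):
# flag any S3 bucket that lacks a default encryption configuration.
import boto3
from botocore.exceptions import ClientError

def buckets_without_default_encryption():
    s3 = boto3.client("s3")
    non_compliant = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)  # raises if no config exists
        except ClientError as err:
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                non_compliant.append(name)
            else:
                raise  # surface unrelated errors such as missing permissions
    return non_compliant

if __name__ == "__main__":
    for name in buckets_without_default_encryption():
        print(f"Guardrail violation: bucket '{name}' has no default encryption")
```

In a mature environment, a check like this would run continuously and trigger automated remediation rather than a console printout.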

Cloud-native maturity model


The first two maturity models refer more to an organization’s overall readiness; the cloud-native maturity model (CNMM) is used to evaluate an organization’s ability to create apps (whether built internally or through open source tooling) and workloads that are cloud-native. According to Deloitte, 87% of cloud leaders embrace cloud-native development.

As with other models, business leaders should first understand their business goals before diving into this model. These objectives will help determine what stage of maturity is necessary for the organization. Business leaders also need to look at their existing enterprise applications and decide which cloud migration strategy is most appropriate.

Most “lifted and shifted” apps can operate in a cloud environment but might not reap the full benefits of the cloud. Cloud-mature organizations often decide it’s most effective to build cloud-native applications for their most important tools and services.

The Cloud Native Computing Foundation has put forth its own model.

  1. Level 1 – Build: An organization is in pre-production with a single proof of concept (POC) application and currently has limited organizational support. Business leaders understand the benefits of cloud native and, though new to the technology, team members have a basic technical understanding.
  2. Level 2 – Operate: Teams are investing in training and new skills and SMEs are emerging within the organization. A DevOps practice is being developed, bringing together cloud engineers and developer groups. With this organizational change, new teams are being defined, agile project groups created and feedback and testing loops established.
  3. Level 3 – Scale: Cloud-native strategy is now the preferred approach. Competency is growing, there is increased stakeholder buy-in and cloud-native has become a primary focus. The organization is beginning to implement shift-left policies and actively training all employees on security initiatives. This level is often characterized by a high degree of centralization and clear delineation of responsibilities; however, bottlenecks in the process emerge and velocity might decrease.
  4. Level 4 – Improve: At level 4, the cloud is the default infrastructure for all services. There is full commitment from leadership and team focus revolves heavily around cloud cost optimization. The organization explores areas to improve and processes that can be made more efficient. Cloud expertise and responsibilities are shifting from developers to all employees through self-service tools. Multiple groups have adopted Kubernetes for deploying and managing containerized applications.  With a strong, established platform, the decentralization process can begin in earnest.
  5. Level 5 – Optimize: At this stage, the business has full trust in the technology team and employees company-wide are onboarded to the cloud-native environment. Service ownership is established and distributed to self-sufficient teams. DevOps and DevSecOps are operational, highly skilled and fully scaled. Teams are comfortable with experimentation and skilled in using data to inform business decisions. Accurate data practices boost optimization efforts and enable the organization to further adopt FinOps practices. Operations are smooth, goals outlined in the initial phase have been achieved and the organization has a flexible platform that suits its needs.

What’s best for my organization?


An organization’s cloud maturity level dictates which benefits it stands to gain from a move to the cloud, and to what degree. Not every organization will reach, or want to reach, the top level of maturity in each, or all, of the three models discussed here. However, it’s likely that organizations will find it difficult to compete without some level of cloud maturity, since 70% of workloads will be on the cloud by 2024, according to Gartner.

The more mature an organization’s cloud infrastructure, security and cloud-native application posture, the more the cloud becomes advantageous. With a thorough examination of current cloud capabilities and a plan to improve maturity moving forward, an organization can increase the efficiency of its cloud spend and maximize cloud benefits.

Advancing cloud maturity with IBM


Cloud migration with IBM® Instana® Observability helps set organizations up for success at each phase of the migration process (plan, migrate, run) to make sure that applications and infrastructure run smoothly and efficiently. From setting performance baselines and right-sizing infrastructure to identifying bottlenecks and monitoring the end-user experience, Instana provides several solutions that help organizations create more mature cloud environments and processes. 

However, migrating applications, infrastructure and services to cloud is not enough to drive a successful digital transformation. Organizations need an effective cloud monitoring strategy that uses robust tools to track key performance metrics—such as response time, resource utilization and error rates—to identify potential issues that could impact cloud resources and application performance.
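As a simple illustration of such a strategy (generic, not tied to Instana or any other product), the sketch below evaluates a batch of metric samples against alert thresholds; the metric names and limits are assumptions chosen for the example.

```python
# Illustrative sketch: evaluate metric samples against alert thresholds.
# Metric names and limits are assumptions, not values from any specific tool.
THRESHOLDS = {
    "response_time_ms": 500,    # alert if latency exceeds 500 ms
    "cpu_utilization_pct": 85,  # alert if CPU utilization exceeds 85%
    "error_rate_pct": 1.0,      # alert if more than 1% of requests fail
}

def check_metrics(samples: dict) -> list[str]:
    """Return alert messages for any sample that breaches its threshold."""
    alerts = []
    for metric, limit in THRESHOLDS.items():
        value = samples.get(metric)
        if value is not None and value > limit:
            alerts.append(f"{metric}={value} exceeds threshold {limit}")
    return alerts

print(check_metrics({"response_time_ms": 620, "cpu_utilization_pct": 40, "error_rate_pct": 0.2}))
# ['response_time_ms=620 exceeds threshold 500']
```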

Instana provides comprehensive, real-time visibility into the overall status of cloud environments. It enables IT teams to proactively monitor and manage cloud resources across multiple platforms, such as AWS, Microsoft Azure and Google Cloud Platform.

The IBM Turbonomic® platform proactively optimizes the delivery of compute, storage and network resources across stacks to avoid overprovisioning and increase ROI. Whether your organization is pursuing a cloud-first, hybrid cloud or multicloud strategy, the Turbonomic platform’s AI-powered automation can help contain costs while preserving performance with automatic, continuous cloud optimization.

Source: ibm.com

Saturday, 25 May 2024

Enhancing triparty repo transactions with IBM MQ for efficiency, security and scalability

Enhancing triparty repo transactions with IBM MQ for efficiency, security and scalability

The exchange of securities between parties is a critical aspect of the financial industry that demands high levels of security and efficiency. Triparty repo dealing systems, central to these exchanges, require seamless and secure communication across different platforms. The Clearing Corporation of India Limited (CCIL) recently recommended (link resides outside ibm.com) IBM® MQ as the messaging software requirement for all its members to manage the triparty repo dealing system.

Read on to learn more about the impact of IBM MQ on triparty repo dealing systems and how you can use IBM MQ effectively for smooth and safe transactions.

IBM MQ and its effect on triparty repo dealing system


IBM MQ is a messaging system that allows parties to communicate with each other in a protected and reliable manner. In a triparty repo dealing system, IBM MQ acts as the backbone of communication, enabling the parties to exchange information and instructions related to the transaction. IBM MQ enhances the efficiency of a triparty repo dealing system across various factors:

  • Efficient communication: IBM MQ enables efficient communication between parties, allowing them to exchange information and instructions in real time. This reduces the risk of errors and miscommunication, which can lead to significant losses in the financial industry. With IBM MQ, parties can be confident that transactions are executed accurately and efficiently, and each message is delivered exactly once, a guarantee that is particularly important in the financial industry (see the minimal sketch after this list).
  • Scalability: IBM MQ is designed to handle a large volume of messages, making it an ideal solution for triparty repo dealing systems. As the system grows, IBM MQ can scale up to meet the increasing demands of communication, helping the system remain efficient and reliable.
  • Robust security: IBM MQ provides a safe communication channel between parties, protecting sensitive information from unauthorized access. This is critical in the financial industry, where security is paramount. IBM MQ uses encryption and other security measures to protect data, so that transactions are conducted safely and securely.
  • Flexible and easy to integrate: IBM MQ is a flexible messaging system that can be seamlessly integrated with other systems and applications. This makes it easy to incorporate new features and functionalities into the triparty repo dealing system, allowing it to adapt to changing market conditions and customer needs.
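The snippet below is a minimal sketch of how a member application might put a structured trade-confirmation message onto an IBM MQ queue using the open source pymqi client for Python. The queue manager name, channel, connection details, queue name and JSON message layout are all placeholder assumptions, not values mandated by CCIL or IBM.

```python
# Minimal sketch: put a JSON-formatted trade confirmation onto an IBM MQ queue
# using the open source pymqi client. All connection details are placeholders.
import json
import pymqi

QUEUE_MANAGER = "QM1"                  # assumed queue manager name
CHANNEL = "DEV.APP.SVRCONN"            # assumed server-connection channel
CONN_INFO = "mqhost(1414)"             # assumed host(port)
QUEUE_NAME = "TRIPARTY.TRADE.CONFIRM"  # assumed queue name

message = {
    "msg_type": "TRADE_CONFIRMATION",  # illustrative message format
    "trade_id": "TRP-000123",
    "security": "GSEC-2029",
    "amount": 10000000,
    "currency": "INR",
}

qmgr = pymqi.connect(QUEUE_MANAGER, CHANNEL, CONN_INFO)
try:
    queue = pymqi.Queue(qmgr, QUEUE_NAME)
    queue.put(json.dumps(message))  # production code would also set persistence and TLS
    queue.close()
finally:
    qmgr.disconnect()
```

An agreed, versioned message structure like the JSON above also supports the first guideline in the next section: every party knows exactly which fields to expect.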

How to use IBM MQ effectively in triparty repo dealing systems


Follow these guidelines to use IBM MQ effectively in a triparty repo dealing system:

  • Define clear message formats for different types of communications, such as trade capture, confirmation and settlement. This will make sure that parties understand the structure and content of messages, reducing errors and miscommunications.
  • Implement strong security measures, such as encryption and access controls, to protect sensitive information. This will protect the data from unauthorized access and tampering.
  • Monitor message queues to verify that messages are being processed efficiently and that there are no errors or bottlenecks. This will help identify issues early, reducing the risk of disruptions to the system (see the queue-depth sketch after this list).
  • Use message queue management tools to manage and monitor message queues. These tools can help optimize message processing, reduce latency and improve system performance.
  • Test and validate messages regularly to ensure that they are formatted correctly and that the information is accurate. This will help reduce errors and miscommunications, enabling transactions to be executed correctly.
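As an illustration of the monitoring guideline above, this sketch uses pymqi to inquire the current depth of a queue and warn when a backlog builds up; the connection details and the threshold are assumptions.

```python
# Illustrative sketch: check current queue depth with pymqi and warn when a
# backlog builds up. Connection details and the threshold are assumptions.
import pymqi

QUEUE_MANAGER = "QM1"
CHANNEL = "DEV.APP.SVRCONN"
CONN_INFO = "mqhost(1414)"
QUEUE_NAME = "TRIPARTY.TRADE.CONFIRM"
DEPTH_WARNING_THRESHOLD = 1000  # assumed backlog limit

qmgr = pymqi.connect(QUEUE_MANAGER, CHANNEL, CONN_INFO)
try:
    queue = pymqi.Queue(qmgr, QUEUE_NAME)
    depth = queue.inquire(pymqi.CMQC.MQIA_CURRENT_Q_DEPTH)
    if depth > DEPTH_WARNING_THRESHOLD:
        print(f"WARNING: {QUEUE_NAME} depth {depth} exceeds {DEPTH_WARNING_THRESHOLD}")
    else:
        print(f"{QUEUE_NAME} depth is {depth}")
    queue.close()
finally:
    qmgr.disconnect()
```

A scheduled check like this, combined with the queue management tools mentioned above, helps surface bottlenecks before they disrupt settlement.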

CCIL as triparty repo dealing system and IBM MQ


The Clearing Corporation of India Ltd. (CCIL) is a central counterparty (CCP) that was set up in April 2001 to provide clearing and settlement for transactions in government securities, foreign exchange and money markets in India. CCIL acts as a central counterparty in various segments of the financial markets regulated by the Reserve Bank of India (RBI): the government securities segment (outright, market repo and triparty repo) and the USD-INR and forex forward segments.

As recommended by CCIL, all members are required to use IBM MQ as the messaging software for the triparty repo dealing system. The IBM MQ v9.3 Long Term Support (LTS) release and above is the recommended software to have in the members’ software environment.

IBM MQ plays a critical role in triparty repo dealing systems, enabling efficient, secure, and reliable communication between parties. By following the guidelines outlined above, parties can effectively use IBM MQ to facilitate smooth and secure transactions. As the financial industry continues to evolve, the importance of IBM MQ in triparty repo dealing systems will only continue to grow, making it an essential component of the system.

Source: ibm.com

Thursday, 23 May 2024

How AI-powered recruiting helps Spain’s leading soccer team score

How AI-powered recruiting helps Spain’s leading soccer team score

Phrases like “striking the post” and “direct free kick outside the 18” may seem foreign if you’re not a fan of football (for Americans, see: soccer). But for a football scout, it’s the daily lexicon of the job, representing crucial language that helps assess a player’s value to a team. And now, it’s also the language spoken and understood by Scout Advisor—an innovative tool using natural language processing (NLP) and built on the IBM® watsonx™ platform especially for Spain’s Sevilla Fútbol Club. 

On any given day, a scout has several responsibilities: observing practices, talking to families of young players, taking notes on games and recording lots of follow-up paperwork. In fact, paperwork is a much more significant part of the job than one might imagine. 

As Victor Orta, Sevilla FC Sporting Director, explained at his conference during the World Football Summit in 2023: “We are never going to sign a player with data alone, but we will never do it without resorting to data either. In the end, the good player will always have good data, but then there is always the human eye, which is the one that must evaluate everything and decide.” 

Read on to learn more about IBM and Sevilla FC’s high-scoring partnership. 

Benched by paperwork 


Back in 2021, an avalanche of paperwork plagued Sevilla FC, a top-flight team based in Andalusia, Spain. With an elite scouting team of 20 to 25 scouts, a single player can accumulate up to 40 scout reports, which require 200 to 300 hours of review. Overall, Sevilla FC was tasked with organizing more than 200,000 total reports on potential players—an immensely time-consuming job. 

Combining expert observation with the value of data remained key for the club. Scout reports capture quantitative data on game-time minutiae, such as scoring attempts, accurate pass percentages and assists, as well as qualitative data such as a player’s attitude and alignment with team philosophy. At the time, Sevilla FC could efficiently access and use quantitative player data in a matter of seconds, but the process of extracting qualitative information from the database was much slower.  

In the case of Sevilla FC, using big data to recruit players had the potential to change the core business. Instead of scouts choosing players based on intuition and bias alone, they could also use statistics, and confidently make better business decisions on multi-million-dollar investments (that is, players). Not to mention, when, where and how to use said players. But harnessing that data was no easy task. 

Getting the IBM assist


Sevilla FC takes data almost as seriously as scoring goals. In 2021, the club created a dedicated data department specifically to help management make better business decisions. It has now grown to be the largest data department in European football, developing its own AI tool to help track player movements through news coverage, as well as internal ticketing solutions.  

But when it came to the massive amount of data collected by scouters, the department knew it had a challenge that would take a reliable partner. Initially, the department consulted with data scientists at the University of Sevilla to develop models to organize all their data. But soon, the club realized it would need more advanced technology. A cold call from an IBM representative was fortuitous. 

“I was contacted by [IBM Client Engineering Manager] Arturo Guerrero to know more about us and our data projects,” says Elias Zamora, Sevilla FC chief data officer. “We quickly understood there were ways to cooperate. Sevilla FC has one of the biggest scouting databases in the professional football, ready to be used in the framework of generative AI technologies. IBM had just released watsonx, its commercial generative AI and scientific data platform based on cloud. Therefore, a partnership to extract the most value from our scouting reports using AI was the right initiative.”  

Coordinating the play 


Sevilla FC connected with the IBM Client Engineering team to talk through its challenges and a plan was devised.  

Because Sevilla FC was able to clearly explain its challenges and goals—and IBM asked the right questions—the technology soon followed. The partnership determined that IBM watsonx.ai™ would be the best solution to quickly and easily sift through a massive player database using foundation models and generative AI to process prompts in natural language. Using semantic language for search provided richer results: for instance, a search for “talented winger” translated to “a talented winger is capable of taking on defenders with dribbling to create space and penetrate the opposition’s defense.”  
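Scout Advisor’s internal implementation is not public, but the general idea of semantic search can be sketched with an open source embedding model: encode the scout-report text and the query as vectors, then rank reports by cosine similarity. The model name and the example sentences below are assumptions for illustration, not Sevilla FC’s data or watsonx.ai code.

```python
# Generic illustration of semantic search over scouting notes (not Sevilla FC's
# actual Scout Advisor implementation). Model and example texts are assumptions.
from sentence_transformers import SentenceTransformer, util

reports = [
    "Takes on defenders with quick dribbling and creates space on the right wing.",
    "Strong aerial presence, wins most duels inside the 18-yard box.",
    "Composed goalkeeper who distributes accurately under pressure.",
]
query = "talented winger"

model = SentenceTransformer("all-MiniLM-L6-v2")
report_vectors = model.encode(reports, convert_to_tensor=True)
query_vector = model.encode(query, convert_to_tensor=True)

# Rank reports by cosine similarity to the query; the winger note should score highest.
scores = util.cos_sim(query_vector, report_vectors)[0]
for score, text in sorted(zip(scores.tolist(), reports), reverse=True):
    print(f"{score:.2f}  {text}")
```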

The solution—titled Scout Advisor—presents a curated list of players matching search criteria in a well-designed, user-friendly interface. Its technology helps unlock the entire potential of Sevilla FC’s database, from the intangible impressions of a scout to specific data assets. 

Sevilla FC Scout Advisor UI 

Scoring the goal 


Scout Advisor’s pilot program went into production in January 2024, and is currently training with 200,000 existing reports. The club’s plan is to use the tool during the summer 2024 recruiting season and see results in September. So far, the reviews have been positive.   
 
“Scout Advisor has the capability to revolutionize the way we approach player recruitment,” Zamora says. “It permits the identification of players based on the opinion of football experts embedded in the scouting reports and expressed in natural language. That is, we use the technology to fully extract the value and knowledge of our scouting department.”  

And with the time saved, scouts can now concentrate on human tasks: connecting with recruits, watching games and making decisions backed by data. 

When considering the high functionality of Scout Advisor’s NLP technology, it’s natural to think about how the same technology can be applied to other sports recruiting and other functions. But one thing is certain: making better decisions about who, when and why to play a footballer has transformed the way Sevilla FC recruits.  

Says Zamora: “This is the most revolutionary technology I have seen in football.” 

Source: ibm.com