Friday, 31 May 2024

Responsible AI can revolutionize tax agencies to improve citizen services

The new era of generative AI has spurred the exploration of AI use cases to enhance productivity, improve customer service, increase efficiency and scale IT modernization.

Recent research commissioned by IBM® indicates that as many as 42% of surveyed enterprise-scale businesses have actively deployed AI, while an additional 40% are actively exploring the use of AI technology. But the rates of exploration of AI use cases and deployment of new AI-powered tools have been slower in the public sector because of potential risks.

However, the latest CEO Study by the IBM Institute for Business Value found that 72% of the surveyed government leaders say that the potential productivity gains from AI and automation are so great that they must accept significant risk to stay competitive.

Driving innovation for tax agencies with trust in mind

Tax or revenue management agencies are a part of the public sector that is likely to benefit from the use of responsible AI tools. Generative AI can revolutionize tax administration and drive toward a more personalized and ethical future. But tax agencies must adopt AI tools with adequate oversight and governance to mitigate risks and build public trust.

These agencies have a myriad of complex challenges unique to each country, but most of them share the goal of increasing efficiency and providing the transparency that taxpayers demand.

In the world of government agencies, risks associated with the deployment of AI present themselves in many ways, often with higher stakes than in the private sector. Mitigating data bias, unethical use of data, lack of transparency or privacy breaches is essential.

Governments can help manage and mitigate these risks by relying on IBM’s five fundamental properties for trustworthy AI: explainability, fairness, transparency, robustness and privacy. Governments can also create and execute AI design and deployment strategies that keep humans at the heart of the decision-making process.

Exploring the views of global tax agency leaders

To explore the point of view of global tax agency leaders, the IBM Center for The Business of Government, in collaboration with the American University Kogod School of Business Tax Policy Center, organized a series of roundtables with key stakeholders and released a report exploring AI and taxes in the modern age. Drawing on insights from academics and tax experts from around the world, the report helps us understand how these agencies can harness technology to improve efficiencies and create a better experience for taxpayers.

The report details the potential benefits of scaling the use of AI by tax agencies, including enhancing customer service, detecting threats faster, identifying and tackling tax scams effectively and allowing citizens to access benefits faster.

Since the release of the report, a subsequent roundtable allowed global tax leaders to explore what is next in their journey to bring tax agencies around the globe closer to the future. At both gatherings, participants emphasized the importance of effective governance and risk management.

Responsible AI services improve taxpayer experiences

According to the Tax Administration 2023 report from the OECD’s Forum on Tax Administration (FTA), 85% of individual taxpayers and 90% of businesses now file taxes digitally. And 80% of tax agencies around the world are implementing leading-edge techniques to capture taxpayer data, with over 60% using virtual assistants. The FTA research indicates that this represents a 30% increase from 2018.

For tax agencies, virtual assistants can be a powerful way to reduce waiting time to answer citizen inquiries; advanced AI chatbots that are available 24/7 can help tax agencies by decentralizing tax support and reducing errors to prevent incorrect processing of tax filings. The use of these AI assistants also helps streamline fast, accurate answers that deliver elevated experiences with measurable cost savings. It also allows for compliance-by-design tax systems, providing early warnings of incidental errors made by taxpayers that can contribute to significant tax losses for governments if left unresolved.

However, these advanced AI and generative AI applications come with risks, and agencies must address concerns around data privacy and protection, reliability, tax rights and hallucinations from generative models.

Furthermore, biases against marginalized groups remain a risk. Current risk mitigation strategies (including having human-in-system roles and robust training data) are not necessarily enough. Every country needs to independently determine appropriate risk management strategies for AI, accounting for the complexity of their tax policies and public trust.

What’s next?

Whether using existing large language models or creating their own, global tax leaders should prioritize AI governance frameworks to manage risks, mitigate damage to their reputation and support their compliance programs. This is possible by training generative AI models using their own quality data and by having transparent processes with safeguards that identify and alert for risk mitigation and for instances of drift and toxic language.
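
To make the "drift" safeguard concrete, here is a minimal, illustrative sketch of a data-drift check using a two-sample Kolmogorov-Smirnov test. The feature values and threshold are hypothetical; a governance platform such as watsonx.governance would wrap checks like this with alerting and audit trails rather than relying on a standalone script.

```python
# Illustrative sketch of one safeguard: a simple data-drift check comparing a
# reference feature distribution to recent production values. Values and the
# 0.05 threshold are hypothetical.
from scipy import stats

reference = [0.21, 0.25, 0.22, 0.27, 0.24, 0.26, 0.23, 0.25]  # training-time feature values
live      = [0.35, 0.38, 0.33, 0.41, 0.36, 0.39, 0.37, 0.40]  # recent production values

statistic, p_value = stats.ks_2samp(reference, live)
if p_value < 0.05:
    print(f"Drift alert: distributions differ (KS statistic {statistic:.2f}, p={p_value:.3f})")
```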

Tax agencies should make sure that technology delivers benefits and produces results that are transparent, unbiased and appropriate. As leaders of these agencies continue to scale the use of generative AI, IBM can help global tax agency leaders deliver a personalized and supportive experience for taxpayers.

IBM’s decades of work with the largest tax agencies around the world, paired with leading AI technology with watsonx™ and watsonx.governance™, can help scale and accelerate the responsible and tailored deployment of governed AI in tax agencies.

Source: ibm.com

Thursday, 30 May 2024

Empower developers to focus on innovation with IBM watsonx

In the realm of software development, efficiency and innovation are of paramount importance. As businesses strive to deliver cutting-edge solutions at an unprecedented pace, generative AI is poised to transform every stage of the software development lifecycle (SDLC).

A McKinsey study shows that software developers can complete coding tasks up to twice as fast with generative AI. From use case creation to test script generation, generative AI offers a streamlined approach that accelerates development while maintaining quality. This groundbreaking technology is revolutionizing software development and offering tangible benefits for businesses and enterprises.

Bottlenecks in the software development lifecycle


Traditionally, software development involves a series of time-consuming and resource-intensive tasks. For instance, creating use cases requires meticulous planning and documentation, often involving multiple stakeholders and iterations. Designing data models and generating Entity-Relationship Diagrams (ERDs) demand significant effort and expertise. Moreover, techno-functional consultants with specialized expertise need to be onboarded to translate business requirements (for example, converting use cases into process interactions in the form of sequence diagrams).

Once the architecture is defined, translating it into backend Java Spring Boot code adds another layer of complexity. Developers must write and debug code, a process that is prone to errors and delays. Crafting frontend UI mock-ups involves extensive design work, often requiring specialized skills and tools.

Testing further compounds these challenges. Writing test cases and scripts manually is laborious and maintaining test coverage across evolving codebases is a persistent challenge. As a result, software development cycles can be prolonged, hindering time-to-market and increasing costs.

In summary, traditional SDLC can be riddled with inefficiencies. Here are some common pain points:

  • Time-consuming tasks: Creating use cases, data models, Entity-Relationship Diagrams (ERDs), sequence diagrams, test scenarios and test cases often involves repetitive, manual work.
  • Inconsistent documentation: Documentation can be scattered and outdated, leading to confusion and rework.
  • Limited developer resources: Highly skilled developers are in high demand and repetitive tasks can drain their time and focus.

The new approach: IBM watsonx to the rescue


Tata Consultancy Services (TCS), in partnership with IBM®, developed a point of view that incorporates IBM watsonx™ to automate many tedious tasks and empower developers to focus on innovation. Features include the following (a brief illustrative sketch follows this list):

  • Use case creation: Users can describe a desired feature in natural language, then watsonx analyzes the input and drafts comprehensive use cases to save valuable time.
  • Data model creation: Based on use cases and user stories, watsonx can generate robust data models representing the software’s data structure.
  • ERD generation: The data model can be automatically translated into a visual ERD, providing a clear picture of the relationships between entities.
  • DDL script generation: Once the ERD is defined, watsonx can generate the DDL scripts for creating the database.
  • Sequence diagram generation: watsonx can automatically generate the visual representation of the process interactions of a use case and data models, providing a clear understanding of the business process.
  • Back-end code generation: watsonx can translate data models and use cases into functional back-end code, like Java Spring Boot. This doesn’t eliminate developers, but allows them to focus on complex logic and optimization.
  • Front-end UI mock-up generation: watsonx can analyze user stories and data models to generate mock-ups of the software’s user interface (UI). These mock-ups help visualize the application and gather early feedback.
  • Test case and script generation: watsonx can analyze code and use cases to create automated test cases and scripts, thereby boosting software quality.
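
As an illustration of what such prompt-driven generation can look like, here is a minimal sketch that asks a watsonx.ai foundation model to draft a use case from a natural-language feature request. It assumes the ibm-watsonx-ai Python SDK; the API key, project ID, model choice and prompt are placeholders, and class or parameter names may differ across SDK versions. This is a sketch of the general pattern, not TCS's or IBM's actual implementation.

```python
# Minimal sketch: prompting a foundation model on watsonx.ai to draft a use case
# from a natural-language feature request. Credentials, project ID and model ID
# below are placeholders.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",
    api_key="YOUR_API_KEY",  # placeholder
)

model = ModelInference(
    model_id="ibm/granite-13b-instruct-v2",  # placeholder model choice
    credentials=credentials,
    project_id="YOUR_PROJECT_ID",            # placeholder
)

feature_request = "As a customer, I want to reset my password from the login page."
prompt = (
    "Draft a software use case (actors, preconditions, main flow, alternate flows, "
    f"postconditions) for the following feature request:\n{feature_request}"
)

print(model.generate_text(prompt=prompt))
```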

Efficiency, speed, and cost savings


All of these watsonx automations lead to benefits, such as:

  • Increased developer productivity: By automating repetitive tasks, watsonx frees up developers’ time for creative problem-solving and innovation.
  • Accelerated time-to-market: With streamlined processes and automated tasks, businesses can get their software to market quicker, capitalizing on new opportunities.
  • Reduced costs: Less manual work translates to lower development costs. Additionally, catching bugs early with watsonx-powered testing saves time and resources.

Embracing the future of software development


TCS and IBM believe that generative AI is not here to replace developers, but to empower them. By automating the mundane tasks  and generating artifacts throughout the SDLC, watsonx paves the way for faster, more efficient and more cost-effective software development. Embracing platforms like IBM watsonx is not just about adopting new technology, it’s about unlocking the full potential of efficient software development in a digital age.

Source: ibm.com

Tuesday, 28 May 2024

Achieving cloud excellence and efficiency with cloud maturity models

  • Cloud maturity models (CMMs) are helpful tools for evaluating an organization’s cloud adoption readiness and cloud security posture.
  • Cloud adoption presents tremendous business opportunity—to the tune of USD 3 trillion—and more mature cloud postures drive greater cloud ROI and more successful digital transformations.
  • There are many CMMs in practice and organizations need to decide which are most appropriate for their business and their needs. CMMs can be used individually, or in conjunction with one another.

What are cloud maturity models?


Business leaders worldwide are asking their teams the same question: “Are we using the cloud effectively?” This quandary often comes with an accompanying worry: “Are we spending too much money on cloud computing?” Given the statistics—82% of surveyed respondents in a 2023 Statista study cited managing cloud spend as a significant challenge—it’s a legitimate concern.

Concerns around security, governance and lack of resources and expertise also top the list of respondents’ concerns. Cloud maturity models are a useful tool for addressing these concerns, grounding organizational cloud strategy and proceeding confidently in cloud adoption with a plan.

Cloud maturity models (or CMMs) are frameworks for evaluating an organization’s cloud adoption readiness on both a macro and individual service level. They help an organization assess how effectively it is using cloud services and resources and how cloud services and security can be improved.

Why move to cloud?


Organizations face increased pressure to move to the cloud in a world of real-time metrics, microservices and APIs, all of which benefit from the flexibility and scalability of cloud computing. An examination of cloud capabilities and maturity is a key component of this digital transformation, and cloud adoption presents tremendous upside. McKinsey believes it presents a USD 3 trillion opportunity, and nearly all responding cloud leaders (99%) view the cloud as the cornerstone of their digital strategy, according to a Deloitte study.

A successful cloud strategy requires a comprehensive assessment of cloud maturity. This assessment is used to identify the actions—such as upgrading legacy tech and adjusting organizational workflows—that the organization needs to take to fully realize cloud benefits and pinpoint current shortcomings. CMMs are a great tool for this assessment.

There are many CMMs in practice and organizations must decide what works best for their business needs. A good starting point for many organizations is to engage in a three-phase assessment of cloud maturity using the following models: a cloud adoption maturity model, a cloud security maturity model and a cloud-native maturity model.

Cloud adoption maturity model


This maturity model helps measure an organization’s cloud maturity in aggregate. It identifies the technologies and internal knowledge that an organization has, how suited its culture is to embrace managed services, the experience of its DevOps team, the initiatives it can begin to migrate to cloud and more. Progress along these levels is linear, so an organization must complete one stage before moving to the next stage.

  • Legacy: Organizations at the beginning of their journey will have no cloud-ready applications or workloads, cloud services or cloud infrastructure.
  • Ad hoc: Next is ad hoc maturity, which likely means the organization has begun its journey through cloud technologies like infrastructure as a service (IaaS), the lowest-level control of resources in the cloud. IaaS customers receive compute, network and storage resources on demand, over the internet, on a pay-as-you-go basis.
  • Repeatable: Organizations at this stage have begun to make more investments in the cloud. This might include establishing a Cloud Center of Excellence (CCoE) and examining the scalability of initial cloud investments. Most importantly, the organization has now created repeatable processes for moving apps, workstreams and data to the cloud.
  • Optimized: Cloud environments are now working efficiently and every new use case follows the same foundation set forth by the organization.
  • Cloud-advanced: The organization now has most, if not all, of its workstreams on the cloud. Everything runs seamlessly and efficiently and all stakeholders are aware of the cloud’s potential to drive business objectives.

Cloud security maturity model


The optimization of security is paramount for any organization that moves to the cloud. The cloud can be more secure than on-premises data centers, thanks to robust policies and postures used by cloud providers. Prioritizing cloud security is important considering that public cloud-based breaches often take months to correct and can have serious financial and reputational consequences.

Cloud security represents a partnership between the cloud service provider (CSP) and the client. CSPs provide certifications on the security inherent in their offerings, but clients that build in the cloud can introduce misconfigurations or other issues when they build on top of the cloud infrastructure. So CSPs and clients must work together to create and maintain secure environments.

The Cloud Security Alliance, of which IBM® is a member, has a widely adopted cloud security maturity model (CSMM). The model provides a good foundation for organizations looking to better embed security into their cloud environments.

Organizations may not want or need to adopt the entire model, but can use whichever components make sense. The model’s five stages revolve around the organization’s level of security automation.

  • No automation: Security professionals identify and address incidents and problems manually through dashboards.
  • Simple SecOps: This phase includes some infrastructure-as-code (IaC) deployments and federation on some accounts.
  • Manually executed scripts: This phase incorporates more federation and multi-factor authentication (MFA), although most automation is still executed manually.
  • Guardrails: This phase includes a larger library of automation that expands into guardrails across multiple accounts, which are high-level governance policies for the cloud environment (a small illustrative guardrail check follows this list).
  • Automation everywhere: This is when everything is integrated into IaC and MFA and federation usage is pervasive.
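
As a purely illustrative example of the kind of automated check a guardrail stage might run, the sketch below flags S3 buckets without default encryption configured. It assumes boto3 and AWS credentials; a mature environment would enforce this with managed policies, service control policies or cloud-provider config rules rather than ad hoc scripts.

```python
# Illustrative guardrail audit: flag S3 buckets with no default encryption.
# Requires boto3 and valid AWS credentials.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"Guardrail violation: bucket {name} has no default encryption")
        else:
            raise
```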

Cloud-native maturity models


The first two maturity models refer more to an organization’s overall readiness; the cloud-native maturity model (CNMM) is used to evaluate an organization’s ability to create apps (whether built internally or through open source tooling) and workloads that are cloud-native. According to Deloitte, 87% of cloud leaders embrace cloud-native development.

As with other models, business leaders should first understand their business goals before diving into this model. These objectives will help determine what stage of maturity is necessary for the organization. Business leaders also need to look at their existing enterprise applications and decide which cloud migration strategy is most appropriate.

Most “lifted and shifted” apps can operate in a cloud environment but might not reap the full benefits of cloud. Cloud-mature organizations often decide it’s most effective to build cloud-native applications for their most important tools and services.

The Cloud Native Computing Foundation has put forth its own model.

  1. Level 1 – Build: An organization is in pre-production related to one proof of concept (POC) application and currently has limited organizational support. Business leaders understand the benefits of cloud native and, though new to the technology, team members have basic technical understanding.
  2. Level 2 – Operate: Teams are investing in training and new skills and SMEs are emerging within the organization. A DevOps practice is being developed, bringing together cloud engineers and developer groups. With this organizational change, new teams are being defined, agile project groups created and feedback and testing loops established.
  3. Level 3 – Scale: Cloud-native strategy is now the preferred approach. Competency is growing, there is increased stakeholder buy-in and cloud-native has become a primary focus. The organization is beginning to implement shift-left policies and actively training all employees on security initiatives. This level is often characterized by a high degree of centralization and clear delineation of responsibilities; however, bottlenecks in the process can emerge and velocity might decrease.
  4. Level 4 – Improve: At level 4, the cloud is the default infrastructure for all services. There is full commitment from leadership and team focus revolves heavily around cloud cost optimization. The organization explores areas to improve and processes that can be made more efficient. Cloud expertise and responsibilities are shifting from developers to all employees through self-service tools. Multiple groups have adopted Kubernetes for deploying and managing containerized applications.  With a strong, established platform, the decentralization process can begin in earnest.
  5. Level 5 – Optimize: At this stage, the business has full trust in the technology team and employees company-wide are onboarded to the cloud-native environment. Service ownership is established and distributed to self-sufficient teams. DevOps and DevSecOps are operational, highly skilled and fully scaled. Teams are comfortable with experimentation and skilled in using data to inform business decisions. Accurate data practices boost optimization efforts and enable the organization to further adopt FinOps practices. Operations are smooth, goals outlined in the initial phase have been achieved and the organization has a flexible platform that suits its needs.

What’s best for my organization?


An organization’s cloud maturity level dictates which benefits it stands to gain from a move to the cloud, and to what degree. Not every organization will reach, or want to reach, the top level of maturity in each, or all, of the three models discussed here. However, it’s likely that organizations will find it difficult to compete without some level of cloud maturity, since 70% of workloads will be on the cloud by 2024, according to Gartner.

The more mature an organization’s cloud infrastructure, security and cloud-native application posture, the more the cloud becomes advantageous. With a thorough examination of current cloud capabilities and a plan to improve maturity moving forward, an organization can increase the efficiency of its cloud spend and maximize cloud benefits.

Advancing cloud maturity with IBM


Cloud migration with IBM® Instana® Observability helps set organizations up for success at each phase of the migration process (plan, migrate, run) to make sure that applications and infrastructure run smoothly and efficiently. From setting performance baselines and right-sizing infrastructure to identifying bottlenecks and monitoring the end-user experience, Instana provides several solutions that help organizations create more mature cloud environments and processes. 

However, migrating applications, infrastructure and services to cloud is not enough to drive a successful digital transformation. Organizations need an effective cloud monitoring strategy that uses robust tools to track key performance metrics—such as response time, resource utilization and error rates—to identify potential issues that could impact cloud resources and application performance.
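
For illustration, the sketch below computes two of the metrics mentioned above (a nearest-rank p95 response time and an error rate) from a small batch of request records. The field names and sample values are hypothetical; in practice a monitoring platform such as Instana collects and aggregates these continuously.

```python
# Toy sketch: computing response-time and error-rate metrics from request records.
import statistics

requests = [
    {"latency_ms": 120, "status": 200},
    {"latency_ms": 340, "status": 200},
    {"latency_ms": 95,  "status": 500},
    {"latency_ms": 410, "status": 200},
]

latencies = sorted(r["latency_ms"] for r in requests)
p95 = latencies[min(len(latencies) - 1, int(0.95 * len(latencies)))]  # nearest-rank p95
error_rate = sum(r["status"] >= 500 for r in requests) / len(requests)

print(f"p95 latency: {p95} ms, error rate: {error_rate:.0%}, "
      f"mean latency: {statistics.mean(latencies):.0f} ms")
```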

Instana provides comprehensive, real-time visibility into the overall status of cloud environments. It enables IT teams to proactively monitor and manage cloud resources across multiple platforms, such as AWS, Microsoft Azure and Google Cloud Platform.

The IBM Turbonomic® platform proactively optimizes the delivery of compute, storage and network resources across stacks to avoid overprovisioning and increase ROI. Whether your organization is pursuing a cloud-first, hybrid cloud or multicloud strategy, the Turbonomic platform’s AI-powered automation can help contain costs while preserving performance with automatic, continuous cloud optimization.

Source: ibm.com

Monday, 27 May 2024

How will quantum impact the biotech industry?

The physics of atoms and the technology behind treating disease might sound like disparate fields. However, in the past few decades, advances in artificial intelligence, sensing, simulation and more have driven enormous impacts within the biotech industry.

Quantum computing provides an opportunity to extend these advancements with computational speedups and/or accuracy in each of those areas. Now is the time for enterprises, commercial organizations and research institutions to begin exploring how to use quantum to solve problems in their respective domains.

As a Partner in IBM’s Quantum practice, I’ve had the pleasure of working alongside Wade Davis, Vice President of Computational Science & Head of Digital for Research at Moderna, to drive quantum innovation in healthcare. Below, you’ll find some of the perspectives we share on the future of quantum computing in biotech.

What is quantum computing?


Quantum computing is a new kind of computer processing technology that relies on the science that governs the behavior of atoms to solve problems that are too complex or not practical for today’s fastest supercomputers. We don’t expect quantum to replace classical computing. Rather, quantum computers will serve as a highly specialized and complementary computing resource for running specific tasks.

A classical computer is how you’re reading this blog. These computers represent information in strings of zeros and ones and manipulate these strings by using a set of logical operations. The result is a computer that behaves deterministically—these operations have well-defined effects, and a given sequence of operations results in a single outcome. Quantum computers, however, are probabilistic—the same sequence of operations can have different outcomes, allowing these computers to explore and calculate multiple scenarios simultaneously. But this alone does not explain the full power of quantum computing. Quantum mechanics offers us access to a tweaked and counterintuitive version of probability that allows us to run computations inaccessible to classical computers.

Therefore, quantum computers enable us to evaluate new dimensions for existing problems and explore entirely new frontiers that are not accessible today. And they perform computations in a way that more closely mirrors nature itself.

As mentioned, we don’t expect quantum computers to replace classical computers. Each one has its strengths and weaknesses: while quantum will excel at running certain algorithms or simulating nature, classical will still take on much of the work. We anticipate a future wherein programs weave quantum and classical computation together, relying on each one where they’re more appropriate. Quantum will extend the power of classical. 
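
To make that probabilistic behavior concrete, the short sketch below uses Qiskit, IBM's open-source quantum SDK, with the Aer simulator to prepare an entangled two-qubit state. Running the identical circuit 1,000 times yields a distribution of outcomes rather than one deterministic answer. It assumes the qiskit and qiskit-aer packages are installed.

```python
# Sketch: the same sequence of quantum operations produces a distribution of
# outcomes, not one deterministic answer. Requires qiskit and qiskit-aer.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2)
qc.h(0)          # put qubit 0 into superposition
qc.cx(0, 1)      # entangle qubit 1 with qubit 0
qc.measure_all()

simulator = AerSimulator()
counts = simulator.run(transpile(qc, simulator), shots=1000).result().get_counts()
print(counts)    # e.g. roughly {'00': ~500, '11': ~500}, varying from run to run
```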

Unlocking new potential


A set of core enterprise applications has crystallized from an environment of rapidly maturing quantum hardware and software. These problems share many variables, a structure that seems to map well to the rules of quantum mechanics, and resistance to solution with today’s high-performance computing (HPC) resources. They broadly fall into three buckets:

  • Advanced mathematics and complex data structures. The multidimensional nature of quantum mechanics offers a new way to approach problems with many moving parts, enabling better analytic performance for computationally complex problems. Even with recent and transformative advancements in AI and generative AI, quantum compute promises the ability to identify and recognize patterns that are not detectable for classical-trained AI, especially where data is sparse and imbalanced. For biotech, this might be beneficial for combing through datasets to find trends that might identify and personalize interventions that target disease at the cellular level.
  • Search and optimization. Enterprises have a large appetite for tackling complex combinatorial and black-box problems to generate more robust insights for strategic planning and investments. Though further on the horizon, quantum systems are being intensely studied for their ability to consider a broad set of computations concurrently, by generating statistical distributions, unlocking a host of promising opportunities including the ability to rapidly identify protein folding structures and optimize sequencing to advance mRNA-based therapeutics.
  • Simulating nature. Quantum computers naturally re-create the behavior of atoms and even subatomic particles—making them valuable for simulating how matter interacts with its environment. This opens up new possibilities to design new drugs to fight emerging diseases within the biotech industry—and more broadly, to discover new materials that can enable carbon capture and optimize energy storage to help industries fight climate change.

At IBM, we recognize that our role is not only to provide world-leading hardware and software, but also to connect quantum experts with nonquantum domain experts across these areas to bring useful quantum computing sooner. To that end, we convened five working groups covering healthcare/life sciences, materials science, high-energy physics, optimization and sustainability. Each of these working groups gathers in person to generate ideas and foster collaborations—and then these collaborations work together to produce new research and domain-specific implementations of quantum algorithms.

As algorithm discovery and development matures and we expand our focus to real-world applications, commercial entities, too, are shifting from experimental proof-of-concepts toward utility-scale prototypes that will be integrated into their workflows. Over the next few years, enterprises across the world will be investing to upskill talent and prepare their organizations for the arrival of quantum computing.

Today, an organization’s quantum computing readiness score is most influenced by its operating model: if an organization invests in a team and a process to govern their quantum innovation, they are better positioned than peers that focus just on the technology without corresponding investment in their talent and innovation process. (IBM Institute for Business Value, Research Insights: Making Quantum Readiness Real)

Among industries that are making the pivot to useful quantum computing, the biotech industry is moving rapidly to explore how quantum compute can help reduce the cost and speed up the time required to discover, create and distribute therapeutic treatments that will improve the health, well-being and quality of life of individuals suffering from chronic disease. According to BCG’s Quantum Computing Is Becoming Business Ready report: “eight of the top ten biopharma companies are piloting quantum computing, and five have partnered with quantum providers.”

Partnering with IBM


Recent advancements in quantum computing have opened new avenues for tackling complex combinatorial problems that are intractable for classical computers. Among these challenges, the prediction of mRNA secondary structure is a critical task in molecular biology, impacting our understanding of gene expression, regulation and the design of RNA-based therapeutics.

For example, Moderna has been pioneering the development of quantum for biotechnology. Emerging from the pandemic, Moderna established itself as a game-changing innovator in biotech when a decade of extensive R&D allowed them to use their technology platform to deliver a COVID-19 vaccine with record speed. 

Given the value of their platform approach, perhaps quantum might further push their ability to perform mRNA research, providing a host of novel mRNA vaccines more efficiently than ever before. This is where IBM can help. 

As an initial step, Moderna is working with IBM to benchmark the application of quantum computing against a classical CPLEX protein analysis solver. They’re evaluating the performance of a quantum algorithm called CVaR VQE on randomly generated mRNA nucleotide sequences to accurately predict stable mRNA structures as compared to the current state of the art. Their findings demonstrate the potential of quantum computing to provide insights into mRNA dynamics and offer a promising direction for advancing computational biology through quantum algorithms. As a next step, they hope to push quantum to sequence lengths beyond what CPLEX can handle.
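
The article does not disclose Moderna's actual formulation, but the sketch below shows what a CVaR-style variational run can look like with the qiskit-algorithms package on a toy diagonal Hamiltonian standing in for a tiny base-pairing objective. The Hamiltonian, ansatz and optimizer choices are assumptions, and primitive APIs vary across Qiskit versions; the aggregation parameter is what selects the CVaR objective.

```python
# Toy sketch only: CVaR-flavored SamplingVQE on a stand-in diagonal Hamiltonian.
# Requires qiskit and qiskit-algorithms; not Moderna's or IBM's actual workload.
from qiskit.circuit.library import TwoLocal
from qiskit.primitives import Sampler
from qiskit.quantum_info import SparsePauliOp
from qiskit_algorithms import SamplingVQE
from qiskit_algorithms.optimizers import COBYLA

# Two-variable Ising objective standing in for a tiny pairing problem
hamiltonian = SparsePauliOp.from_list([("ZZ", 1.0), ("ZI", -0.5), ("IZ", -0.5)])

ansatz = TwoLocal(2, "ry", "cz", reps=1)
vqe = SamplingVQE(Sampler(), ansatz, COBYLA(), aggregation=0.25)  # aggregation = CVaR alpha

result = vqe.compute_minimum_eigenvalue(hamiltonian)
print("Estimated minimum objective value:", result.eigenvalue)
```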

This is just one of many collaborations that are transforming biotech processes with the help of quantum computation. Biotech enterprises are using IBM Quantum Systems to run their workloads on real utility-scale quantum hardware, while leveraging the IBM Quantum Network to share expertise across domains. And with our updated IBM Quantum Accelerator program, enterprises can now prepare their organizations with hands-on guidance to identify use cases, design workflows and develop utility-scale prototypes that use quantum computation for business impact. 

The time has never been better to begin your quantum journey—get started today.

Source: ibm.com

Saturday, 25 May 2024

Enhancing triparty repo transactions with IBM MQ for efficiency, security and scalability

The exchange of securities between parties is a critical aspect of the financial industry that demands high levels of security and efficiency. Triparty repo dealing systems, central to these exchanges, require seamless and secure communication across different platforms. The Clearing Corporation of India Limited (CCIL) recently recommended IBM® MQ as the messaging software requirement for all its members to manage the triparty repo dealing system.

Read on to learn more about the impact of IBM MQ on triparty repo dealing systems and how you can use IBM MQ effectively for smooth and safe transactions.

IBM MQ and its effect on triparty repo dealing system


IBM MQ is a messaging system that allows parties to communicate with each other in a protected and reliable manner. In a triparty repo dealing system, IBM MQ acts as the backbone of communication, enabling the parties to exchange information and instructions related to the transaction. IBM MQ enhances the efficiency of a triparty repo dealing system across various factors:

  • Efficient communication: IBM MQ enables efficient communication between parties, allowing them to exchange information and instructions in real-time. This reduces the risk of errors and miscommunications, which can lead to significant losses in the financial industry. With IBM MQ, parties can make sure that transactions are executed accurately and efficiently. IBM MQ makes sure that the messages are delivered exactly once, and this aspect is particularly important in the financial industry.
  • Scalable and can handle more messages: IBM MQ is designed to handle a large volume of messages, making it an ideal solution for triparty repo dealing systems. As the system grows, IBM MQ can scale up to meet the increasing demands of communication, helping the system remain efficient and reliable.
  • Robust security: IBM MQ provides a safe communication channel between parties, protecting sensitive information from unauthorized access. This is critical in the financial industry, where security is paramount. IBM MQ uses encryption and other security measures to protect data, so that transactions are conducted safely and securely.
  • Flexible and easy to integrate: IBM MQ is a flexible messaging system that can be seamlessly integrated with other systems and applications. This makes it easy to incorporate new features and functionalities into the triparty repo dealing system, allowing it to adapt to changing market conditions and customer needs.

How to use IBM MQ effectively in triparty repo dealing systems


Follow these guidelines to use IBM MQ effectively in a triparty repo dealing system (a minimal messaging sketch follows the list):

  • Define clear message formats for different types of communications, such as trade capture, confirmation and settlement. This will make sure that parties understand the structure and content of messages, reducing errors and miscommunications.
  • Implement strong security measures to protect sensitive information, such as encryption and access controls. This will protect the data from unauthorized access and tampering.
  • Monitor message queues to verify that messages are being processed efficiently and that there are no errors or bottlenecks. This will help identify issues early, reducing the risk of disruptions to the system.
  • Use message queue management tools to manage and monitor message queues. These tools can help optimize message processing, reduce latency and improve system performance.
  • Test and validate messages regularly to ensure that they are formatted correctly and that the information is accurate. This will help reduce errors and miscommunications, enabling transactions to be executed correctly.
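
As an illustration of the queue-based exchange and clearly defined message formats described above, here is a minimal sketch that puts a JSON-formatted trade confirmation on a queue using pymqi, the Python bindings for IBM MQ. The queue manager, channel, host and queue names are placeholders, and a production deployment would add TLS, authentication and error handling.

```python
# Minimal sketch: putting a formatted trade-confirmation message on an IBM MQ
# queue with pymqi (requires the IBM MQ client libraries). Names are placeholders.
import pymqi

queue_manager = "QM1"                   # hypothetical queue manager
channel = "DEV.APP.SVRCONN"             # hypothetical server-connection channel
conn_info = "mq.example.com(1414)"      # hypothetical host(port)
queue_name = "TRIPARTY.TRADE.CONFIRM"   # hypothetical queue

qmgr = pymqi.connect_tcp_client(queue_manager, pymqi.CD(), channel, conn_info)
queue = pymqi.Queue(qmgr, queue_name)
queue.put(b'{"msg_type": "TRADE_CONFIRMATION", "trade_id": "T123", "amount": "1000000"}')
queue.close()
qmgr.disconnect()
```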

CCIL as triparty repo dealing system and IBM MQ


The Clearing Corporation of India Ltd. (CCIL) is a central counterparty (CCP) that was set up in April 2001 to provide clearing and settlement for transactions in government securities, foreign exchange and money markets in the country. CCIL acts as a central counterparty in various segments of the financial markets regulated by the Reserve Bank of India (RBI), namely the government securities segment (outright, market repo and triparty repo), the USD-INR segment and the forex forward segment.

As recommended by CCIL, all members are required to use IBM MQ as the messaging software for the triparty repo dealing system. The IBM MQ v9.3 Long Term Support (LTS) release and above is the recommended software to have in the members’ software environment.

IBM MQ plays a critical role in triparty repo dealing systems, enabling efficient, secure, and reliable communication between parties. By following the guidelines outlined above, parties can effectively use IBM MQ to facilitate smooth and secure transactions. As the financial industry continues to evolve, the importance of IBM MQ in triparty repo dealing systems will only continue to grow, making it an essential component of the system.

Source: ibm.com

Friday, 24 May 2024

An integrated asset management data platform

Part 2 of this four-part series discusses the complex tasks energy utility companies face as they shift to holistic grid asset management to manage through the energy transition. The first post of this series addressed the challenges of the energy transition with holistic grid asset management. In this part, we discuss the integrated asset management platform and data exchange that unite business disciplines in different domains in one network.

The asset management ecosystem


The asset management network is complex. No single system can manage all the required information views to enable end-to-end optimization. The following figure demonstrates how a platform approach can integrate data flows.

Figure: An integrated asset management data platform

Asset data is the basis for the network. Enterprise asset management (EAM) systems, geographic information systems and enterprise resource planning systems share technical, geographic and financial asset data, each with their respective primary data responsibility. The EAM system is the center for maintenance planning and execution via work orders. The maintenance, repair and overhaul (MRO) system provides necessary spare parts to carry out work and maintains an optimum stock level with a balance of stock out risk and part holding costs.

The health, safety and environment (HSE) system manages work permits for safe work execution and tracks and investigates incidents. The process safety management (PSM) system controls hazardous operations through safety practices, uses bow-tie analysis to define and monitor risk barriers, and manages safety and environmental critical elements (SECE) to prevent primary containment loss. Monitoring energy efficiency and greenhouse gas or fugitive emissions can directly contribute to environmental, social and governance (ESG) reporting, helping to manage and reduce the carbon footprint.

Asset performance management (APM) strategy defines the balance between proactive and reactive maintenance tasks. Asset criticality defines whether a preventive or predictive task is justified in terms of cost and risk. The process of defining the optimum maintenance strategy is called reliability-centered maintenance. The mechanical integrity of hazardous process assets, such as vessels, reactors or pipelines, requires a deeper approach to define the optimum risk-based inspection intervals. For process safety devices, a safety instrumented system approach determines the test frequency and safety integrity level for alarm functions.

For asset data, APM collects real-time process data. Asset health monitoring and predictive maintenance functions receive data via distributed control systems or supervisory control and data acquisition (SCADA) systems. Asset health monitoring defines asset health indexes to rank the asset conditions based on degradation models, failures, overdue preventive work and any other relevant parameters that reflect the health of the assets. Predict functionality builds predictive models to predict imminent failures and calculate assets’ remaining useful life. These models often incorporate machine learning and AI algorithms to detect the onset of degradation mechanisms at an early stage.
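
As a purely illustrative example of how such a health or degradation signal might be derived from condition data, the sketch below scores a handful of sensor readings with an anomaly-detection model. The column names, sample values and model choice are assumptions rather than part of any specific APM product.

```python
# Toy sketch: a crude asset health signal from sensor features using anomaly scores.
# Column names and values are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

readings = pd.DataFrame({
    "vibration_mm_s": [1.2, 1.3, 1.1, 4.8, 1.2],
    "temperature_c":  [61, 62, 60, 95, 61],
    "pressure_bar":   [7.1, 7.0, 7.2, 6.1, 7.1],
})

model = IsolationForest(contamination=0.2, random_state=0).fit(readings)
readings["health_score"] = model.score_samples(readings)  # more negative = more anomalous
print(readings)
```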

In the asset performance management and optimization (APMO) domain, the team collects and prioritizes asset needs resulting from asset strategies based on asset criticality. They optimize maintenance and replacement planning against the constraints of available budget and resource capacity. This method is useful for regulated industries such as energy transmission and distribution, as it allows companies to remain within the assigned budget for an arbitrage period of several years. The asset replacement requirements enter the asset investment planning (AIP) process, combining with new asset requests and expansion or upgrade projects. Market drivers, regulatory requirements, sustainability goals and resource constraints define the project portfolio and priorities for execution. The project portfolio management function manages the project management aspects of new build and replacement projects to stay within budget and on time. Product lifecycle management covers the stage-gated engineering process to optimize the design of the assets against the lowest total cost of ownership within the boundaries of all other stakeholders.

An industry-standard data model

A uniform data model is necessary to get a full view of combined systems with information flowing across the ecosystem. Technical, financial, geographical, operational and transactional data attributes are all parts of a data structure. In the utilities industry, the Common Information Model (CIM) offers a useful framework to integrate and orchestrate the ecosystem to generate optimum business value.

The integration of diverse asset management disciplines in one network provides a full 360° view of assets. This integration allows companies to target the full range of business objectives and track performance across the lifecycle and against each stakeholder goal.

Source: ibm.com

Thursday, 23 May 2024

How AI-powered recruiting helps Spain’s leading soccer team score

Phrases like “striking the post” and “direct free kick outside the 18” may seem foreign if you’re not a fan of football (for Americans, see: soccer). But for a football scout, it’s the daily lexicon of the job, representing crucial language that helps assess a player’s value to a team. And now, it’s also the language spoken and understood by Scout Advisor—an innovative tool using natural language processing (NLP) and built on the IBM® watsonx™ platform especially for Spain’s Sevilla Fútbol Club. 

On any given day, a scout has several responsibilities: observing practices, talking to families of young players, taking notes on games and recording lots of follow-up paperwork. In fact, paperwork is a much more significant part of the job than one might imagine. 

As Victor Orta, Sevilla FC Sporting Director, explained at his conference during the World Football Summit in 2023: “We are never going to sign a player with data alone, but we will never do it without resorting to data either. In the end, the good player will always have good data, but then there is always the human eye, which is the one that must evaluate everything and decide.” 

Read on to learn more about IBM and Sevilla FC’s high-scoring partnership. 

Benched by paperwork 


Back in 2021, an avalanche of paperwork plagued Sevilla FC, a top-flight team based in Andalusia, Spain. With an elite scouting team featuring 20-to-25 scouts, a single player can accumulate up to 40 scout reports, requiring 200-to-300 hours of review. Overall, Sevilla FC was tasked with organizing more than 200,000 total reports on potential players—an immensely time-consuming job. 

Combining expert observation alongside the value of data remained key for the club. Scout reports look at quantitative data on game-time minutiae, like scoring attempts, accurate pass percentages and assists, as well as qualitative data like a player’s attitude and alignment with team philosophy. At the time, Sevilla FC could efficiently access and use quantitative player data in a matter of seconds, but the process of extracting qualitative information from the database was much slower in comparison.

In the case of Sevilla FC, using big data to recruit players had the potential to change the core business. Instead of scouts choosing players based on intuition and bias alone, they could also use statistics, and confidently make better business decisions on multi-million-dollar investments (that is, players). Not to mention, when, where and how to use said players. But harnessing that data was no easy task. 

Getting the IBM assist


Sevilla FC takes data almost as seriously as scoring goals. In 2021, the club created a dedicated data department specifically to help management make better business decisions. It has now grown to be the largest data department in European football, developing its own AI tool to help track player movements through news coverage, as well as internal ticketing solutions.  

But when it came to the massive amount of data collected by scouters, the department knew it had a challenge that would take a reliable partner. Initially, the department consulted with data scientists at the University of Sevilla to develop models to organize all their data. But soon, the club realized it would need more advanced technology. A cold call from an IBM representative was fortuitous. 

“I was contacted by [IBM Client Engineering Manager] Arturo Guerrero to know more about us and our data projects,” says Elias Zamora, Sevilla FC chief data officer. “We quickly understood there were ways to cooperate. Sevilla FC has one of the biggest scouting databases in professional football, ready to be used in the framework of generative AI technologies. IBM had just released watsonx, its commercial generative AI and scientific data platform based on cloud. Therefore, a partnership to extract the most value from our scouting reports using AI was the right initiative.”

Coordinating the play 


Sevilla FC connected with the IBM Client Engineering team to talk through its challenges and a plan was devised.  

Because Sevilla FC was able to clearly explain its challenges and goals—and IBM asked the right questions—the technology soon followed. The partnership determined that IBM watsonx.ai™ would be the best solution to quickly and easily sift through a massive player database using foundation models and generative AI to process prompts in natural language. Using semantic language for search provided richer results: for instance, a search for “talented winger” translated to “a talented winger is capable of taking on defenders with dribbling to create space and penetrate the opposition’s defense.”  
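
To illustrate the idea of semantic search over scout reports, here is a small sketch using open-source sentence embeddings and cosine similarity. Scout Advisor itself is built on watsonx.ai foundation models, so the library, model name and example reports below are stand-in assumptions rather than a description of its implementation.

```python
# Illustrative sketch: semantic retrieval over scout reports with open-source
# sentence embeddings. Reports and model choice are hypothetical.
from sentence_transformers import SentenceTransformer, util

reports = [
    "Winger takes on defenders with dribbling, creates space on the left flank.",
    "Centre-back, strong in the air, struggles when pressed on the ball.",
    "Young forward, clinical finisher, limited link-up play.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
query_vec = model.encode("talented winger", convert_to_tensor=True)
report_vecs = model.encode(reports, convert_to_tensor=True)

scores = util.cos_sim(query_vec, report_vecs)[0]
best = int(scores.argmax())
print(f"Best match (score {float(scores[best]):.2f}): {reports[best]}")
```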

The solution—titled Scout Advisor—presents a curated list of players matching search criteria in a well-designed, user-friendly interface. Its technology helps unlock the entire potential of the Sevilla FC’s database, from the intangible impressions of a scout to specific data assets. 

Sevilla FC Scout Advisor UI

Scoring the goal 


Scout Advisor’s pilot program went into production in January 2024, and is currently training with 200,000 existing reports. The club’s plan is to use the tool during the summer 2024 recruiting season and see results in September. So far, the reviews have been positive.   
 
“Scout Advisor has the capability to revolutionize the way we approach player recruitment,” Zamora says. “It permits the identification of players based on the opinion of football experts embedded in the scouting reports and expressed in natural language. That is, we use the technology to fully extract the value and knowledge of our scouting department.”  

And with the time saved, scouts can now concentrate on human tasks: connecting with recruits, watching games and making decisions backed by data. 

When considering the high functionality of Scout Advisor’s NLP technology, it’s natural to think about how the same technology can be applied to other sports recruiting and other functions. But one thing is certain: making better decisions about who, when and why to play a footballer has transformed the way Sevilla FC recruits.  

Says Zamora: “This is the most revolutionary technology I have seen in football.” 

Source: ibm.com

Tuesday, 21 May 2024

How to establish lineage transparency for your machine learning initiatives

Machine learning (ML) has become a critical component of many organizations’ digital transformation strategy. From predicting customer behavior to optimizing business processes, ML algorithms are increasingly being used to make decisions that impact business outcomes.

Have you ever wondered how these algorithms arrive at their conclusions? The answer lies in the data used to train these models and how that data is derived. In this blog post, we will explore the importance of lineage transparency for machine learning data sets and how it can help establish and ensure trust and reliability in ML conclusions.

Trust in data is a critical factor for the success of any machine learning initiative. Executives evaluating decisions made by ML algorithms need to have faith in the conclusions they produce. After all, these decisions can have a significant impact on business operations, customer satisfaction and revenue. But trust isn’t important only for executives; before executive trust can be established, data scientists and citizen data scientists who create and work with ML models must have faith in the data they’re using. Understanding the meaning, quality and origins of data is key to establishing trust. In this discussion we are focused on data origins and lineage.

Lineage describes the ability to track the origin, history, movement and transformation of data throughout its lifecycle. In the context of ML, lineage transparency means tracing the source of the data used to train any model, understanding how that data is being transformed and identifying any potential biases or errors that may have been introduced along the way.

The benefits of lineage transparency


There are several benefits to implementing lineage transparency in ML data sets. Here are a few:

  • Improved model performance: By understanding the origin and history of the data used to train ML models, data scientists can identify potential biases or errors that may impact model performance. This can lead to more accurate predictions and better decision-making.
  • Increased trust: Lineage transparency can help establish trust in ML conclusions by providing a clear understanding of how the data was sourced, transformed and used to train models. This can be particularly important in industries where data privacy and security are paramount, such as healthcare and finance. Lineage details are also required for meeting regulatory guidelines.
  • Faster troubleshooting: When issues arise with ML models, lineage transparency can help data scientists quickly identify the source of the problem. This can save time and resources by reducing the need for extensive testing and debugging.
  • Improved collaboration: Lineage transparency facilitates collaboration and cooperation between data scientists and other stakeholders by providing a clear understanding of how data is being utilized. This leads to better communication, improved model performance and increased trust in the overall ML process. 

So how can organizations implement lineage transparency for their ML data sets? Let’s look at several strategies (a small illustrative sketch follows the list):

  • Take advantage of data catalogs: Data catalogs are centralized repositories that provide a list of available data assets and their associated metadata. This can help data scientists understand the origin, format and structure of the data used to train ML models. Equally important is the fact that catalogs are also designed to identify data stewards—subject matter experts on particular data items—and also enable enterprises to define data in ways that everyone in the business can understand.
  • Employ solid code management strategies: Version control systems like Git can help track changes to data and code over time. This code is often the true source of record for how data has been transformed as it weaves its way into ML training data sets.
  • Make it a required practice to document all data sources: Documenting data sources and providing clear descriptions of how data has been transformed can help establish trust in ML conclusions. This can also make it easier for data scientists to understand how data is being used and identify potential biases or errors. This is critical for source data that is provided ad hoc or is managed by nonstandard or customized systems.
  • Implement data lineage tooling and methodologies: Tools are available that help organizations track the lineage of their data sets from ultimate source to target by parsing code, ETL (extract, transform, load) solutions and more. These tools provide a visual representation of how data has been transformed and used to train models and also facilitate deep inspection of data pipelines.
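
As a small illustration of the kind of record these strategies produce, the sketch below captures minimal lineage metadata for a training file: a content hash, the declared source, the transformations applied and the Git commit of the producing code. The field names and helper function are hypothetical rather than any particular tool's API.

```python
# Illustrative sketch: recording minimal lineage metadata alongside a training set.
import datetime
import hashlib
import json
import subprocess

def dataset_lineage(path: str, source: str, transformations: list) -> dict:
    """Capture where a training file came from, how it was transformed,
    and which code version produced it."""
    with open(path, "rb") as f:
        content_hash = hashlib.sha256(f.read()).hexdigest()
    commit = subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True
    ).stdout.strip()
    return {
        "dataset": path,
        "sha256": content_hash,
        "source": source,
        "transformations": transformations,
        "git_commit": commit,
        "recorded_at": datetime.datetime.utcnow().isoformat() + "Z",
    }

record = dataset_lineage(
    "train.csv",  # assumes this file exists in the working directory
    source="crm_exports/2024-05",
    transformations=["dropped rows with null customer_id", "one-hot encoded region"],
)
print(json.dumps(record, indent=2))
```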

In conclusion, lineage transparency is a critical component of successful machine learning initiatives. By providing a clear understanding of how data is sourced, transformed and used to train models, organizations can establish trust in their ML results and ensure the performance of their models. Implementing lineage transparency can seem daunting, but there are several strategies and tools available to help organizations achieve this goal. By leveraging code management, data catalogs, data documentation and lineage tools, organizations can create a transparent and trustworthy data environment that supports their ML initiatives. With lineage transparency in place, data scientists can collaborate more effectively, troubleshoot issues more efficiently and improve model performance. 

Ultimately, lineage transparency is not just a nice-to-have, it’s a must-have for organizations that want to realize the full potential of their ML initiatives. If you are looking to take your ML initiatives to the next level, start by implementing data lineage for all your data pipelines. Your data scientists, executives and customers will thank you!

Source: ibm.com

Saturday, 18 May 2024

A new era in BI: Overcoming low adoption to make smart decisions accessible for all

Organizations today are both empowered and overwhelmed by data. This paradox lies at the heart of modern business strategy: while there’s an unprecedented amount of data available, unlocking actionable insights requires more than access to numbers.

The push to enhance productivity, use resources wisely, and boost sustainability through data-driven decision-making is stronger than ever. Yet, the low adoption rates of business intelligence (BI) tools present a significant hurdle.

According to Gartner, although the number of employees that use analytics and business intelligence (ABI) has increased in 87% of surveyed organizations, ABI is still used by only 29% of employees on average. Despite the clear benefits of BI, the percentage of employees actively using ABI tools has seen minimal growth over the past 7 years. So why aren’t more people using BI tools?

Understanding the low adoption rate


The low adoption rate of traditional BI tools, particularly dashboards, is a multifaceted issue rooted in both the inherent limitations of these tools and the evolving needs of modern businesses. Here’s a deeper look into why these challenges might persist and what it means for users across an organization:

1. Complexity and lack of accessibility

While excellent for displaying consolidated data views, dashboards often present a steep learning curve. This complexity makes them less accessible to nontechnical users, who might find these tools intimidating or overly complex for their needs. Moreover, the static nature of traditional dashboards means they are not built to adapt quickly to changes in data or business conditions without manual updates or redesigns.

2. Limited scope for actionable insights

Dashboards typically provide high-level summaries or snapshots of data, which are useful for quick status checks but often insufficient for making business decisions. They tend to offer limited guidance on what actions to take next, lacking the context needed to derive actionable, decision-ready insights. This can leave decision-makers feeling unsupported, as they need more than just data; they need insights that directly inform action.

3. The “unknown unknowns”

A significant barrier to BI adoption is the challenge of not knowing what questions to ask or what data might be relevant. Dashboards are static and require users to come with specific queries or metrics in mind. Without knowing what to look for, business analysts can miss critical insights, making dashboards less effective for exploratory data analysis and real-time decision-making.

Moving beyond one-size-fits-all: The evolution of dashboards


While traditional dashboards have served us well, they are no longer sufficient on their own. The world of BI is shifting toward integrated and personalized tools that understand what each user needs. This isn’t just about being user-friendly; it’s about making these tools vital parts of daily decision-making processes for everyone, not just for those with technical expertise.

Emerging technologies such as generative AI (gen AI) are enhancing BI tools with capabilities that were once only available to data professionals. These new tools are more adaptive, providing personalized BI experiences that deliver contextually relevant insights users can trust and act upon immediately. We’re moving away from the one-size-fits-all approach of traditional dashboards to more dynamic, customized analytics experiences. These tools are designed to guide users effortlessly from data discovery to actionable decision-making, enhancing their ability to act on insights with confidence.

The future of BI: Making advanced analytics accessible to all


As we look toward the future, ease of use and personalization are set to redefine the trajectory of BI.

1. Emphasizing ease of use

The new generation of BI tools breaks down the barriers that once made powerful data analytics accessible only to data scientists. With simpler interfaces that include conversational interfaces, these tools make interacting with data as easy as having a chat. This integration into daily workflows means that advanced data analysis can be as straightforward as checking your email. This shift democratizes data access and empowers all team members to derive insights from data, regardless of their technical skills.

For example, imagine a sales manager who wants to quickly check the latest performance figures before a meeting. Instead of navigating through complex software, they ask the BI tool, “What were our total sales last month?” or “How are we performing compared to the same period last year?”

The system understands the questions and provides accurate answers in seconds, just like a conversation. This ease of use helps to ensure that every team member, not just data experts, can engage with data effectively and make informed decisions swiftly.
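
As a purely illustrative sketch (the table, the figures and the question patterns below are invented and do not come from any specific BI product), the flow behind such a conversation is: match the question, run a governed query, return the answer.

    import re
    import sqlite3

    # Toy data store standing in for a governed sales mart (schema and figures are hypothetical).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (month TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("2024-03", 120000.0), ("2024-04", 135000.0)])

    # Minimal question-to-query routing; a real assistant would use a language model or semantic layer.
    PATTERNS = [
        (re.compile(r"total sales last month", re.I),
         "SELECT SUM(amount) FROM sales WHERE month = '2024-04'"),
        (re.compile(r"total sales", re.I),
         "SELECT SUM(amount) FROM sales"),
    ]

    def answer(question: str) -> str:
        for pattern, sql in PATTERNS:
            if pattern.search(question):
                (value,) = conn.execute(sql).fetchone()
                return f"{value:,.0f}"
        return "Sorry, I can't answer that yet."

    print(answer("What were our total sales last month?"))  # 135,000

In a real assistant the question would be interpreted by a language model or a semantic layer rather than regular expressions, but the shape of the interaction, a question in and a governed answer back in seconds, stays the same.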

2. Driving personalization

Personalization is transforming how BI platforms present and interact with data. It means that the system learns from how users work with it, adapting to suit individual preferences and meeting the specific needs of their business.

For example, a dashboard might display the most important metrics for a marketing manager differently than for a production supervisor. It’s not just about the user’s role; it’s also about what’s happening in the market and what historical data shows.

Alerts in these systems are also smarter. Rather than notifying users about all changes, the systems focus on the most critical changes based on past importance. These alerts can even adapt when business conditions change, helping to ensure that users get the most relevant information without having to look for it themselves.

By integrating a deep understanding of both the user and their business environment, BI tools can offer insights that are exactly what’s needed at the right time. This makes these tools incredibly effective for making informed decisions quickly and confidently.
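
One way to picture the adaptive alerting described above is as a baseline comparison. The following is a minimal sketch under assumed numbers (the metric, window size and threshold are placeholders), where an alert fires only when a new value deviates sharply from recent history:

    from statistics import mean, stdev

    def should_alert(history: list[float], new_value: float, z_threshold: float = 3.0) -> bool:
        """Alert only when the new value deviates sharply from the recent baseline."""
        if len(history) < 5:
            return False  # not enough history to know what "normal" looks like
        baseline, spread = mean(history), stdev(history)
        if spread == 0:
            return new_value != baseline
        return abs(new_value - baseline) / spread > z_threshold

    daily_signups = [410, 395, 402, 388, 420, 405, 398]  # placeholder metric history
    print(should_alert(daily_signups, 407))  # False: within the normal range, no notification
    print(should_alert(daily_signups, 280))  # True: an unusual drop worth surfacing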

Navigating the future: Overcoming adoption challenges


While the advantages of integrating advanced BI technologies are clear, organizations often encounter significant challenges that can hinder their adoption. Understanding these challenges is crucial for businesses looking to use the full potential of these innovative tools.

1. Cultural resistance to change

One of the biggest hurdles is overcoming ingrained habits and resistance within the organization. Employees used to traditional methods of data analysis might be skeptical about moving to new systems, fearing the learning curve or potential disruptions to their routine workflows. Promoting a culture that values continuous learning and technological adaptability is key to overcoming this resistance.

2. Complexity of integration

Integrating new BI technologies with existing IT infrastructure can be complex and costly. Organizations must help ensure that new tools are compatible with their current systems, which often involves significant time and technical expertise. The complexity increases when trying to maintain data consistency and security across multiple platforms.

3. Data governance and security

Gen AI, by its nature, creates new content based on existing data sets. The outputs generated by AI can sometimes introduce biases or inaccuracies if not properly monitored and managed.

With the increased use of AI and machine learning in BI tools, managing data privacy and security becomes more complex. Organizations must help ensure that their data governance policies are robust enough to handle new types of data interactions and comply with regulations such as GDPR. This often requires updating security protocols and continuously monitoring data access and usage.
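
As a small illustration of what continuously monitoring data access can look like in code (the data set name, user identifier and log destination below are placeholders, not part of any specific governance product), an audit trail can be wrapped around every read of a governed data set:

    import json
    from datetime import datetime, timezone
    from functools import wraps

    AUDIT_LOG = "data_access_audit.jsonl"  # placeholder destination; real systems feed a SIEM or log service

    def audited(dataset_name: str):
        """Record who read a governed data set, and when, before returning the data."""
        def decorator(func):
            @wraps(func)
            def wrapper(user: str, *args, **kwargs):
                event = {
                    "timestamp": datetime.now(timezone.utc).isoformat(),
                    "user": user,
                    "dataset": dataset_name,
                    "action": "read",
                }
                with open(AUDIT_LOG, "a") as log:
                    log.write(json.dumps(event) + "\n")
                return func(user, *args, **kwargs)
            return wrapper
        return decorator

    @audited("customer_contacts")
    def load_customer_contacts(user: str):
        # Stand-in for a real governed query; nothing sensitive is returned here.
        return [{"id": 1, "region": "EMEA"}]

    rows = load_customer_contacts("analyst_42")  # the read is now captured in the audit trail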

According to Gartner, by 2025, augmented consumerization functions will drive the adoption of ABI capabilities beyond 50% for the first time, influencing more business processes and decisions.

As we stand on the brink of this new era in BI, we must focus on adopting new technologies and managing them wisely. By fostering a culture that embraces continuous learning and innovation, organizations can fully harness the potential of gen AI and augmented analytics to make smarter, faster and more informed decisions.

Source: ibm.com

Friday, 17 May 2024

Enhancing data security and compliance in the XaaS Era

Enhancing data security and compliance in the XaaS Era

Recent research from IDC found that 85% of CEOs who were surveyed cited digital capabilities as strategic differentiators that are crucial to accelerating revenue growth. However, IT decision makers remain concerned about the risks associated with their digital infrastructure and the impact they might have on business outcomes, with data breaches and security concerns being the biggest threats.

With the rapid growth of XaaS consumption models and the integration of AI and data at the forefront of every business plan, we believe that protecting data security is pivotal to success. It can also help clients simplify their data compliance requirements as they fuel their AI and data-intensive workloads.

Automation for efficiency and security 


Data is central to all AI applications. The ability to access and process the necessary data yields optimal results from AI models. IBM® remains committed to working diligently with partners and clients to introduce a set of automation blueprints called deployable architectures. 

These blueprints are designed to streamline the deployment process for customers. We aim to allow organizations to effortlessly select and deploy their cloud workloads in a way that is tailor-made to align with preset, reviewable security requirements and to help enable a seamless integration of AI and XaaS. This commitment to the fusion of AI and XaaS is further exemplified by the introduction of watsonx this past year, a platform designed to enable enterprises to effectively train, validate, fine-tune and deploy AI models while scaling workloads and building responsible data and AI workflows.

Protecting data in multicloud environments 


Business leaders need to take note of the importance of hybrid cloud support, while acknowledging the reality that modern enterprises often require a mix of cloud and on-premises environments to support their data storage and applications. The fact is that different workloads have different needs to operate efficiently.

This means that you cannot have all your workloads in one place, whether it’s on premises, in public or private cloud or at the edge. One example is our work with CrushBank. The company uses watsonx to streamline help desk operations with AI by arming its IT staff with improved information. This has led to improved productivity, which ultimately enhances the customer experience. A custom hybrid cloud strategy manages security, data latency and performance, so your people can get out of the business of IT and into their business.

This all begins with building a hybrid cloud XaaS environment by increasing your data protection capabilities to support the privacy and security of application data, without the need to modify the application itself. At IBM, security and compliance are at the heart of everything we do.

We recently expanded the IBM Cloud Security and Compliance Center, a suite of modernized cloud security and compliance solutions designed to help enterprises mitigate risk and protect data across their hybrid, multicloud environments and workloads. In this XaaS era, where data is the lifeblood of digital transformation, investing in robust data protection is paramount for success. 

XaaS calls for strong data security


IBM continues to demonstrate its dedication to meeting the highest standards of security in an increasingly interconnected and data-dependent world. We can help support mission-critical workloads because our software, infrastructure and services offerings are designed to support our clients as they address their evolving security and data compliance requirements. Amidst the rise of XaaS and AI, prioritizing data security can help you protect your customers’ sensitive information. 

Source: ibm.com

Thursday, 16 May 2024

A clear path to value: Overcome challenges on your FinOps journey

A clear path to value: Overcome challenges on your FinOps journey

In recent years, cloud adoption has accelerated, with companies increasingly moving from traditional on-premises hosting to public cloud solutions. However, the rise of hybrid and multicloud patterns has led to challenges in optimizing value and controlling cloud expenditure, resulting in a shift from capital to operational expenses.

According to a Gartner report, cloud operational expenses are expected to surpass traditional IT spending by 2025, reflecting the ongoing transformation in expenditure patterns. FinOps is an evolving cloud financial management discipline and cultural practice that aims to maximize business value in hybrid and multicloud environments. But without a thorough understanding, adopting FinOps can be challenging. To maximize benefits and realize the potential of FinOps, organizations must forge a clear path and avoid common mistakes.

Enhanced capabilities to drive growth 


FinOps is closely intertwined with DevOps and can represent a radical transformation for many organizations. It necessitates a revised approach to cost and value management, challenging organizations to move beyond their comfort zones and embrace continuous innovation. To achieve this, development teams, product owners, finance, and commercial departments must come together to rethink and reimagine how they collaborate and operate. This collective effort is essential for fostering a culture of innovation and driving meaningful change throughout the organization. 

FinOps enables your organization to control costs and enhance consistency by managing average compute costs per hour, reducing licensing fees, decreasing total ownership costs, and tracking idle instances. It also drives improved outcomes and performance through enhanced visibility and planning, which includes comparing actual spending against forecasts, ensuring that architecture aligns with business and technological objectives, and increasing automation.

These improvements lead to faster decision-making, quicker demand forecasting, and more efficient “go” or “no-go” decision processes for business cases. Also, FinOps helps align business and IT goals, fostering an environment where enterprise goals are interconnected, and business cases are developed with clear, quantifiable benefits. This alignment ensures that both existing and new capabilities are enhanced, supporting strategic growth and innovation. 
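
To illustrate the kind of tracking this involves, here is a deliberately simplified sketch (the instance records, fields and idle threshold are made up for the example) that computes an average compute cost per hour and flags idle-instance candidates:

    from dataclasses import dataclass

    @dataclass
    class InstanceUsage:
        name: str
        hours_running: float
        cost: float             # total cost for the period, in your billing currency
        avg_cpu_percent: float  # average utilization over the period

    usage = [
        InstanceUsage("app-prod-1", 720, 612.0, 63.0),
        InstanceUsage("batch-dev-2", 720, 298.0, 2.5),
        InstanceUsage("reporting-3", 200, 150.0, 41.0),
    ]

    # KPI 1: average compute cost per hour across the estate.
    total_cost = sum(i.cost for i in usage)
    total_hours = sum(i.hours_running for i in usage)
    print(f"Average compute cost per hour: {total_cost / total_hours:.2f}")

    # KPI 2: idle-instance candidates, defined here as sustained utilization below 5%.
    IDLE_CPU_THRESHOLD = 5.0
    idle = [i.name for i in usage if i.avg_cpu_percent < IDLE_CPU_THRESHOLD]
    print(f"Idle instance candidates: {idle}")

In practice these figures come from cloud billing exports and monitoring data, and the KPIs are tracked continuously rather than computed ad hoc.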

Challenges and common mistakes when adopting FinOps


Organizations should develop a phased approach over time instead of attempting to implement everything from day one. Having the right people, processes, and technology in place is essential for validating changes and understanding their impact on the consumption model and usability. 

It’s crucial to lay out a clear journey path by defining the current state, establishing the future state, and devising a transition plan from the current to the future state with a clear execution strategy. To ensure repeatability across different organizations or business units within your organization, it’s essential to establish well-defined design principles and maintain consistency in adoption. Monitoring key performance indicators (KPIs) is essential to track progress effectively.

Many organizations are already considering FinOps approaches today, although often not in the most cost-effective manner. Rather than addressing root causes, they apply temporary fixes that result in ongoing challenges. These temporary fixes include: 

  • Periodic reviews: IT teams convene periodically to address performance issues stemming from sizing or overspending, often in response to complaints from finance teams. However, this reactive approach perpetuates firefighting rather than proactive self-optimization.
  • Architecture patterns: Regular updates to architectural patterns based on new features and native services from hyperscalers may inadvertently introduce complexity without clear metrics for success.
  • External SMEs: Bringing in external subject matter experts for reviews incurs significant costs and requires effort to bring them up to speed. Relying on this approach contributes to ongoing expenses without sustainable improvements.

To avoid these pitfalls, it’s crucial to establish well-defined KPIs, benchmarking, and processes for real-time insights and measurable outcomes. 

Some organizations assign FinOps responsibility to a centralized team for monitoring spending and selecting cloud services. However, this approach can create silos and hinder visibility into planned changes, leading to dissatisfaction and downstream impacts on service delivery. Federating FinOps activities across the organization ensures broader participation and diverse skills, promoting collaboration and avoiding silos.

The next steps in your FinOps journey


Regardless of where you are in your cloud journey, it is never too late to adopt best practices to make your cloud consumption more predictable. IBM Consulting®, together with Apptio, can help you adopt the right architectural patterns for your unique journey.

Source: ibm.com

Tuesday, 14 May 2024

Scaling generative AI with flexible model choices

Scaling generative AI with flexible model choices

This blog series demystifies enterprise generative AI (gen AI) for business and technology leaders. It provides simple frameworks and guiding principles for your transformative artificial intelligence (AI) journey. In the previous post, we discussed IBM's differentiated approach to delivering enterprise-grade models. In this blog, we delve into why foundation model choices matter and how they empower businesses to scale gen AI with confidence.

Why are model choices important?


In the dynamic world of gen AI, one-size-fits-all approaches are inadequate. As businesses strive to harness the power of AI, having a spectrum of model choices at their disposal is necessary to:

  • Spur innovation: A diverse palette of models not only fosters innovation by bringing distinct strengths to tackle a wide array of problems but also enables teams to adapt to evolving business needs and customer expectations.
  • Customize for competitive advantage: A range of models allows companies to tailor AI applications for niche requirements, providing a competitive edge. Gen AI can be fine-tuned to specific tasks, whether it’s question-answering chat applications or writing code to generate quick summaries.
  • Accelerate time to market: In today’s fast-paced business environment, time is of the essence. A diverse portfolio of models can expedite the development process, allowing companies to introduce AI-powered offerings rapidly. This is especially crucial in gen AI, where access to the latest innovations provides a pivotal competitive advantage.
  • Stay flexible in the face of change: Market conditions and business strategies constantly evolve. Various model choices allow businesses to pivot quickly and effectively. Access to multiple options enables rapid adaptation when new trends or strategic shifts occur, maintaining agility and resilience.
  • Optimize costs across use cases: Different models have varying cost implications. By accessing a range of models, businesses can select the most cost-effective option for each application. While some tasks might require the precision of high-cost models, others can be addressed with more affordable alternatives without sacrificing quality. For instance, in customer care, throughput and latency might be more critical than accuracy, whereas in research and development, accuracy matters more.
  • Mitigate risks: Relying on a single model or a limited selection can be risky. A diverse portfolio of models helps mitigate concentration risks, helping to ensure that businesses remain resilient to the shortcomings or failure of one specific approach. This strategy allows for risk distribution and provides alternative solutions if challenges arise.
  • Comply with regulations: The regulatory landscape for AI is still evolving, with ethical considerations at the forefront. Different models can have varied implications for fairness, privacy and compliance. A broad selection allows businesses to navigate this complex terrain and choose models that meet legal and ethical standards.

Selecting the right AI models


Now that we understand the importance of model selection, how do we address the choice overload problem when selecting the right model for a specific use case? We can break down this complex problem into a set of simple steps that you can apply today (a toy scoring sketch follows the steps):

1. Identify a clear use case: Determine the specific needs and requirements of your business application. This involves crafting detailed prompts that consider subtleties within your industry and business to help ensure that the model aligns closely with your objectives.

2. List all model options: Evaluate various models based on size, accuracy, latency and associated risks. This includes understanding each model’s strengths and weaknesses, such as the tradeoffs between accuracy, latency and throughput.

3. Evaluate model attributes: Assess the appropriateness of the model’s size relative to your needs, considering how the model’s scale might affect its performance and the risks involved. This step focuses on right-sizing the model to fit the use case optimally as bigger is not necessarily better. Smaller models can outperform larger ones in targeted domains and use cases.

4. Test model options: Conduct tests to see if the model performs as expected under conditions that mimic real-world scenarios. This involves using academic benchmarks and domain-specific data sets to evaluate output quality and tweaking the model, for example, through prompt engineering or model tuning to optimize its performance.

5. Refine your selection based on cost and deployment needs: After testing, refine your choice by considering factors such as return on investment, cost-effectiveness and the practicalities of deploying the model within your existing systems and infrastructure. Adjust the choice based on other benefits such as lower latency or higher transparency.

6. Choose the model that provides the most value: Make the final selection of an AI model that offers the best balance between performance, cost and associated risks, tailored to the specific demands of your use case.
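
The steps above can be condensed into a toy comparison. The candidate models, metric values and weights below are invented purely for illustration; the point is the shape of the trade-off, not the numbers.

    # Hypothetical candidates: (name, accuracy score 0-1, latency in ms, cost per million tokens in USD).
    candidates = [
        ("large-general-model", 0.86, 900, 12.00),
        ("mid-size-tuned-model", 0.83, 350, 3.50),
        ("small-domain-model", 0.81, 120, 0.60),
    ]

    # Weights reflect the use case: a customer care assistant that values latency and cost over raw accuracy.
    weights = {"accuracy": 0.4, "latency": 0.3, "cost": 0.3}

    def normalize(values, higher_is_better):
        """Scale a list of metric values to 0-1, flipping direction when lower is better."""
        lo, hi = min(values), max(values)
        if hi == lo:
            return [1.0] * len(values)
        scaled = [(v - lo) / (hi - lo) for v in values]
        return scaled if higher_is_better else [1.0 - s for s in scaled]

    acc = normalize([c[1] for c in candidates], higher_is_better=True)
    lat = normalize([c[2] for c in candidates], higher_is_better=False)
    cost = normalize([c[3] for c in candidates], higher_is_better=False)

    scores = {
        name: weights["accuracy"] * a + weights["latency"] * l + weights["cost"] * co
        for (name, *_), a, l, co in zip(candidates, acc, lat, cost)
    }
    best = max(scores, key=scores.get)
    print(scores)
    print(f"Best fit for this use case: {best}")

With different weights, for example an R&D assistant where accuracy dominates, the same comparison would favor a larger model, which is why identifying the use case has to come first.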

IBM watsonx model library


By pursuing a multimodel strategy, the IBM watsonx library offers proprietary, open source and third-party models, as shown in the image:

[Image: List of watsonx foundation models as of 8 May 2024]

This provides clients with a range of choices, allowing them to select the model that best fits their unique business, regional and risk preferences.

Also, watsonx enables clients to deploy models on the infrastructure of their choice, with hybrid, multicloud and on-premises options, to avoid vendor lock-in and reduce the total cost of ownership.

IBM® Granite™: Enterprise-grade foundation models from IBM


The characteristics of foundation models can be grouped into three main attributes. Organizations must understand that overly emphasizing one attribute might compromise the others. Balancing these attributes is key to customizing the model for an organization’s specific needs:

1. Trusted: Models that are clear, explainable and harmless.
2. Performant: The right level of performance for targeted business domains and use cases.
3. Cost-effective: Models that offer gen AI at a lower total cost of ownership and reduced risk.

IBM Granite is a flagship series of enterprise-grade models developed by IBM Research®. These models feature an optimal mix of these attributes, with a focus on trust and reliability, enabling businesses to succeed in their gen AI initiatives. Remember, businesses cannot scale gen AI with foundation models they cannot trust.

IBM watsonx offers enterprise-grade AI models resulting from a rigorous refinement process. This process begins with model innovation led by IBM Research, involving open collaborations and training on enterprise-relevant content under the IBM AI Ethics Code to promote data transparency.

IBM Research has developed an instruction-tuning technique that enhances both IBM-developed and select open-source models with capabilities essential for enterprise use. Beyond academic benchmarks, our ‘FM_EVAL’ data set simulates real-world enterprise AI applications. The most robust models from this pipeline are made available on IBM® watsonx.ai™, providing clients with reliable, enterprise-grade gen AI foundation models, as shown in the image:

[Image: Enterprise-grade foundation models available on watsonx.ai]

Latest model announcements:


  • Granite code models: A family of models trained in 116 programming languages and ranging in size from 3 to 34 billion parameters, in both base and instruction-following variants.
  • Granite-7b-lab: Supports general-purpose tasks and is tuned using IBM’s large-scale alignment of chatbots (LAB) methodology to incorporate new skills and knowledge.

Try our enterprise-grade foundation models on watsonx with our new watsonx.ai chat demo. Discover their capabilities in summarization, content generation and document processing through a simple and intuitive chat interface.

Source: ibm.com