
Thursday, 30 May 2024

Empower developers to focus on innovation with IBM watsonx

In the realm of software development, efficiency and innovation are of paramount importance. As businesses strive to deliver cutting-edge solutions at an unprecedented pace, generative AI is poised to transform every stage of the software development lifecycle (SDLC).

A McKinsey study shows that software developers can complete coding tasks up to twice as fast with generative AI. From use case creation to test script generation, generative AI offers a streamlined approach that accelerates development while maintaining quality. This groundbreaking technology is revolutionizing software development and offering tangible benefits for businesses and enterprises.

Bottlenecks in the software development lifecycle


Traditionally, software development involves a series of time-consuming and resource-intensive tasks. For instance, creating use cases requires meticulous planning and documentation, often involving multiple stakeholders and iterations. Designing data models and generating Entity-Relationship Diagrams (ERDs) demand significant effort and expertise. Moreover, techno-functional consultants with specialized expertise need to be onboarded to translate business requirements (for example, converting use cases into process interactions in the form of sequence diagrams).

Once the architecture is defined, translating it into backend Java Spring Boot code adds another layer of complexity. Developers must write and debug code, a process that is prone to errors and delays. Crafting frontend UI mock-ups involves extensive design work, often requiring specialized skills and tools.

Testing further compounds these challenges. Writing test cases and scripts manually is laborious and maintaining test coverage across evolving codebases is a persistent challenge. As a result, software development cycles can be prolonged, hindering time-to-market and increasing costs.

In summary, traditional SDLC can be riddled with inefficiencies. Here are some common pain points:

  • Time-consuming tasks: Creating use cases, data models, Entity Relationship Diagrams (ERDs), sequence diagrams, test scenarios and test cases often involves repetitive, manual work.
  • Inconsistent documentation: Documentation can be scattered and outdated, leading to confusion and rework.
  • Limited developer resources: Highly skilled developers are in high demand and repetitive tasks can drain their time and focus.

The new approach: IBM watsonx to the rescue


Tata Consultancy Services, in partnership with IBM®, developed a point of view that incorporates IBM watsonx™. It can automate many tedious tasks and empower developers to focus on innovation. Features include:

  • Use case creation: Users can describe a desired feature in natural language, then watsonx analyzes the input and drafts comprehensive use cases to save valuable time.
  • Data model creation: Based on use cases and user stories, watsonx can generate robust data models representing the software’s data structure.
  • ERD generation: The data model can be automatically translated into a visual ERD, providing a clear picture of the relationships between entities.
  • DDL script generation: Once the ERD is defined, watsonx can generate the DDL scripts for creating the database.
  • Sequence diagram generation: watsonx can automatically generate the visual representation of the process interactions of a use case and data models, providing a clear understanding of the business process.
  • Back-end code generation: watsonx can translate data models and use cases into functional back-end code, such as Java Spring Boot services. This doesn’t eliminate developers, but allows them to focus on complex logic and optimization.
  • Front-end UI mock-up generation: watsonx can analyze user stories and data models to generate mock-ups of the software’s user interface (UI). These mock-ups help visualize the application and gather early feedback.
  • Test case and script generation: watsonx can analyze code and use cases to create automated test cases and scripts, thereby boosting software quality (see the sketch after this list).
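
To make the test case generation step concrete, here is a minimal sketch of how a team might prompt a watsonx foundation model to draft test cases from a use case written in natural language. It assumes the ibm-watsonx-ai Python SDK's ModelInference interface; the model ID, credentials, endpoint URL and prompt wording are placeholders rather than a recommended configuration, so check the current SDK documentation before reusing it.

```python
# Minimal sketch, assuming the ibm-watsonx-ai Python SDK's ModelInference
# interface; the model ID, credentials and prompt wording are placeholders.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

credentials = Credentials(url="https://us-south.ml.cloud.ibm.com", api_key="<API_KEY>")

model = ModelInference(
    model_id="ibm/granite-13b-instruct-v2",   # placeholder model ID
    credentials=credentials,
    project_id="<PROJECT_ID>",
)

use_case = (
    "As a customer, I want to reset my password via an emailed link "
    "so that I can regain access to my account."
)

prompt = (
    "Generate test cases for the following use case. "
    "Cover the happy path, invalid input and expired-link scenarios.\n\n"
    f"Use case: {use_case}"
)

# The model returns draft test cases that developers can review and refine.
print(model.generate_text(prompt=prompt))
```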

Efficiency, speed, and cost savings


All of these watsonx automations lead to benefits, such as:

  • Increased developer productivity: By automating repetitive tasks, watsonx frees up developers’ time for creative problem-solving and innovation.
  • Accelerated time-to-market: With streamlined processes and automated tasks, businesses can get their software to market quicker, capitalizing on new opportunities.
  • Reduced costs: Less manual work translates to lower development costs. Additionally, catching bugs early with watsonx-powered testing saves time and resources.

Embracing the future of software development


TCS and IBM believe that generative AI is not here to replace developers, but to empower them. By automating mundane tasks and generating artifacts throughout the SDLC, watsonx paves the way for faster, more efficient and more cost-effective software development. Embracing platforms like IBM watsonx is not just about adopting new technology; it’s about unlocking the full potential of efficient software development in a digital age.

Source: ibm.com

Tuesday, 14 May 2024

Scaling generative AI with flexible model choices

This blog series demystifies enterprise generative AI (gen AI) for business and technology leaders, providing simple frameworks and guiding principles for your transformative artificial intelligence (AI) journey. Previously, we discussed IBM’s differentiated approach to delivering enterprise-grade models. In this blog, we delve into why foundation model choices matter and how they empower businesses to scale gen AI with confidence.

Why are model choices important?


In the dynamic world of gen AI, one-size-fits-all approaches are inadequate. As businesses strive to harness the power of AI, having a spectrum of model choices at their disposal is necessary to:

  • Spur innovation: A diverse palette of models not only fosters innovation by bringing distinct strengths to tackle a wide array of problems but also enables teams to adapt to evolving business needs and customer expectations.
  • Customize for competitive advantage: A range of models allows companies to tailor AI applications for niche requirements, providing a competitive edge. Gen AI can be fine-tuned to specific tasks, whether for question-answering chat applications, writing code or generating quick summaries.
  • Accelerate time to market: In today’s fast-paced business environment, time is of the essence. A diverse portfolio of models can expedite the development process, allowing companies to introduce AI-powered offerings rapidly. This is especially crucial in gen AI, where access to the latest innovations provides a pivotal competitive advantage.
  • Stay flexible in the face of change: Market conditions and business strategies constantly evolve. Various model choices allow businesses to pivot quickly and effectively. Access to multiple options enables rapid adaptation when new trends or strategic shifts occur, maintaining agility and resilience.
  • Optimize costs across use cases: Different models have varying cost implications. By accessing a range of models, businesses can select the most cost-effective option for each application. While some tasks might require the precision of high-cost models, others can be addressed with more affordable alternatives without sacrificing quality. For instance, in customer care, throughput and latency might be more critical than accuracy, whereas in research and development, accuracy matters more.
  • Mitigate risks: Relying on a single model or a limited selection can be risky. A diverse portfolio of models helps mitigate concentration risks, helping to ensure that businesses remain resilient to the shortcomings or failure of one specific approach. This strategy allows for risk distribution and provides alternative solutions if challenges arise.
  • Comply with regulations: The regulatory landscape for AI is still evolving, with ethical considerations at the forefront. Different models can have varied implications for fairness, privacy and compliance. A broad selection allows businesses to navigate this complex terrain and choose models that meet legal and ethical standards.

Selecting the right AI models


Now that we understand the importance of model selection, how do we address the choice overload problem when selecting the right model for a specific use case? We can break down this complex problem into a set of simple steps that you can apply today:

1. Identify a clear use case: Determine the specific needs and requirements of your business application. This involves crafting detailed prompts that consider subtleties within your industry and business to help ensure that the model aligns closely with your objectives.

2. List all model options: Evaluate various models based on size, accuracy, latency and associated risks. This includes understanding each model’s strengths and weaknesses, such as the tradeoffs between accuracy, latency and throughput.

3. Evaluate model attributes: Assess the appropriateness of the model’s size relative to your needs, considering how the model’s scale might affect its performance and the risks involved. This step focuses on right-sizing the model to fit the use case optimally, as bigger is not necessarily better. Smaller models can outperform larger ones in targeted domains and use cases.

4. Test model options: Conduct tests to see if the model performs as expected under conditions that mimic real-world scenarios. This involves using academic benchmarks and domain-specific data sets to evaluate output quality and tweaking the model, for example, through prompt engineering or model tuning to optimize its performance.

5. Refine your selection based on cost and deployment needs: After testing, refine your choice by considering factors such as return on investment, cost-effectiveness and the practicalities of deploying the model within your existing systems and infrastructure. Adjust the choice based on other benefits such as lower latency or higher transparency.

6. Choose the model that provides the most value: Make the final selection of an AI model that offers the best balance between performance, cost and associated risks, tailored to the specific demands of your use case.
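
As a rough illustration of steps 2 through 6, the sketch below scores a few hypothetical candidate models against weighted attributes. The candidate names, attribute scores and weights are invented for illustration; in practice they would come from your own benchmarks, pricing and risk assessments.

```python
# Illustrative only: candidate models, scores (0-1, higher is better) and
# weights are hypothetical and should come from your own evaluations.
candidates = {
    "large-general-model":  {"accuracy": 0.92, "latency": 0.40, "cost": 0.30, "risk": 0.70},
    "mid-size-tuned-model": {"accuracy": 0.88, "latency": 0.75, "cost": 0.70, "risk": 0.80},
    "small-domain-model":   {"accuracy": 0.84, "latency": 0.95, "cost": 0.95, "risk": 0.85},
}

# A customer-care chat use case might weight latency and cost over raw accuracy.
weights = {"accuracy": 0.3, "latency": 0.3, "cost": 0.25, "risk": 0.15}

def score(attrs: dict) -> float:
    """Weighted sum of attribute scores for one candidate model."""
    return sum(weights[k] * attrs[k] for k in weights)

ranked = sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True)
for name, attrs in ranked:
    print(f"{name}: {score(attrs):.3f}")
```

Changing the weights, for example to favor accuracy for a research use case, reorders the ranking, which mirrors the refinement described in step 5.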

IBM watsonx model library


With its multimodel strategy, IBM offers proprietary, open-source and third-party models in the watsonx library, as shown in the image below:

List of watsonx foundation models as of 8 May 2024.

This provides clients with a range of choices, allowing them to select the model that best fits their unique business, regional and risk preferences.

Also, watsonx enables clients to deploy models on the infrastructure of their choice, with hybrid, multicloud and on-premises options, to avoid vendor lock-in and reduce the total cost of ownership.

IBM® Granite™: Enterprise-grade foundation models from IBM


The characteristics of foundation models can be grouped into three main attributes. Organizations must understand that overly emphasizing one attribute might compromise the others. Balancing these attributes is key to customizing the model for an organization’s specific needs:

1. Trusted: Models that are clear, explainable and harmless.
2. Performant: The right level of performance for targeted business domains and use cases.
3. Cost-effective: Models that offer gen AI at a lower total cost of ownership and reduced risk.

IBM Granite is a flagship series of enterprise-grade models developed by IBM Research®. These models feature an optimal mix of these attributes, with a focus on trust and reliability, enabling businesses to succeed in their gen AI initiatives. Remember, businesses cannot scale gen AI with foundation models they cannot trust.

IBM watsonx offers enterprise-grade AI models resulting from a rigorous refinement process. This process begins with model innovation led by IBM Research, involving open collaborations and training on enterprise-relevant content under the IBM AI Ethics Code to promote data transparency.

IBM Research has developed an instruction-tuning technique that enhances both IBM-developed and select open-source models with capabilities essential for enterprise use. Beyond academic benchmarks, our ‘FM_EVAL’ data set simulates real-world enterprise AI applications. The most robust models from this pipeline are made available on IBM® watsonx.ai™, providing clients with reliable, enterprise-grade gen AI foundation models.


Latest model announcements:


  • Granite code models: A family of models trained on 116 programming languages, ranging in size from 3 to 34 billion parameters and available in both base and instruction-following variants.
  • Granite-7b-lab: Supports general-purpose tasks and is tuned using IBM’s large-scale alignment of chatbots (LAB) methodology to incorporate new skills and knowledge.

Try our enterprise-grade foundation models on watsonx with our new watsonx.ai chat demo. Discover their capabilities in summarization, content generation and document processing through a simple and intuitive chat interface.

Source: ibm.com

Tuesday, 5 March 2024

Empowering the digital-first business professional in the foundation model era

In the fast-paced digital age, business professionals constantly seek innovative ways to streamline processes, enhance productivity and drive growth. Today’s professionals, regardless of their fields, must fluently use advanced artificial intelligence (AI) tools. This is especially important given the application of foundation models and large language models (LLMs) in OpenAI’s ChatGPT and IBM’s advances with IBM watsonx.
 
Professionals must keep up with rapid technological changes such as cloud computing and AI, recognizing the integrative power of foundation models, which are increasingly central to AI-based automation. The transition to the foundation model era signifies a substantial change in how professionals use technology to enhance their digital strategies. By using cutting-edge technology, professionals can optimize decision-making processes and enhance operational efficiency. 

For instance, business analysts now play a crucial role not only in bridging the gap between business and IT but also in integrating these foundational AI models into business strategies, further augmenting and optimizing operations. They translate business needs into solution requirements and propose ways to optimize business operations.

The next leap: Beyond low-code platforms 


While the rise of low-code platforms has marked a significant evolution in bridging business requirements with IT implementation, the current market trend is veering toward more intuitive, AI-driven solutions. With their inherent ability to understand, generate and process human-like text, foundation models urge businesses to look beyond conventional limitations, allowing non-technical professionals to interact with and build applications using natural language, a shift away from conventional programming. By transcending the constraints of low-code platforms, businesses can build more robust, tailored solutions that align closely with their evolving digital strategies.

Deploying LLMs effectively, like any tool, requires professionals to understand their capabilities and potential biases. Blending creativity and domain-specific expertise with AI’s computational prowess helps ensure technologically sound and contextually relevant solutions. 

From digital assistants to AI assistants


The narrative surrounding digital assistants is evolving. While computer-aided (or computer-assisted) instruction represented a previous breakthrough, AI platforms like watsonx are elevating the concept. Instead of mere assistants, these AI-based automation platforms act as collaborators, offering insights, handling routine tasks with precision and accuracy, and enhancing decision-making processes for knowledge workers.

The distinction between traditional robotic process automation (RPA) robots and AI-driven digital collaborators is paramount. The latter not only automates but also comprehends, reasons and learns, providing richer, more dynamic interactions. More importantly, they enable systems and tools to conform to the needs of the users, responding intelligently to users’ natural language requests.  

IBM’s vision: watsonx, watsonx Orchestrate, foundation models and beyond 


IBM strategically innovates by venturing into the world of foundation models with watsonx, demonstrating its dedication to revolutionizing businesses through AI. Its IBM watsonx™ Orchestrate platform equips digital assistants with essential tools to deliver unparalleled value. Simultaneously, IBM complements this ecosystem with its RPA and Process Mining tools, offering a low-code interface for business analysts to unearth and enhance business processes.

In essence, IBM’s comprehensive suite, centered on watsonx, aims to usher in a new era by empowering businesses to use foundation models. This supercharges their operations, helping to ensure a harmonized dance between human expertise and AI-driven automation. 

Source: ibm.com

Thursday, 11 January 2024

Breaking down the advantages and disadvantages of artificial intelligence

Breaking down the advantages and disadvantages of artificial intelligence

Artificial intelligence (AI) refers to the convergent fields of computer and data science focused on building machines with human intelligence to perform tasks that would previously have required a human being: for example, learning, reasoning, problem-solving, perception, language understanding and more. Instead of relying on explicit instructions from a programmer, AI systems can learn from data, allowing them to handle complex problems (as well as simple but repetitive tasks) and improve over time.

Today’s AI technology has a range of use cases across various industries; businesses use AI to minimize human error, reduce high costs of operations, provide real-time data insights and improve the customer experience, among many other applications. As such, it represents a significant shift in the way we approach computing, creating systems that can improve workflows and enhance elements of everyday life.

But even with the myriad benefits of AI, it does have noteworthy disadvantages when compared to traditional programming methods. AI development and deployment can come with data privacy concerns, job displacements and cybersecurity risks, not to mention the massive technical undertaking of ensuring AI systems behave as intended.

In this article, we’ll discuss how AI technology functions and lay out the advantages and disadvantages of artificial intelligence as they compare to traditional computing methods.

What is artificial intelligence and how does it work?


AI operates on three fundamental components: data, algorithms and computing power. 

  • Data: AI systems learn and make decisions based on data, and they require large quantities of data to train effectively, especially in the case of machine learning (ML) models. Data is often divided into three categories: training data (helps the model learn), validation data (tunes the model) and test data (assesses the model’s performance); see the sketch after this list. For optimal performance, AI models should receive data from diverse datasets (e.g., text, images, audio and more), which enables the system to generalize its learning to new, unseen data.
  • Algorithms: Algorithms are the sets of rules AI systems use to process data and make decisions. The category of AI algorithms includes ML algorithms, which learn and make predictions and decisions without explicit programming. AI can also work from deep learning algorithms, a subset of ML that uses multi-layered artificial neural networks (ANNs)—hence the “deep” descriptor—to model high-level abstractions within big data infrastructures. And reinforcement learning algorithms enable an agent to learn behavior by performing actions and receiving rewards or penalties based on their correctness, iteratively adjusting the model until it’s fully trained.
  • Computing power: AI algorithms often necessitate significant computing resources to process such large quantities of data and run complex algorithms, especially in the case of deep learning. Many organizations rely on specialized hardware, like graphics processing units (GPUs), to streamline these processes.
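
The sketch below illustrates the three-way data split described in the first bullet, using scikit-learn's train_test_split. The 70/15/15 ratio and the toy data are arbitrary choices for illustration, not a recommendation.

```python
# Minimal sketch of splitting data into training, validation and test sets.
from sklearn.model_selection import train_test_split

X = list(range(1000))          # stand-in feature records
y = [i % 2 for i in X]         # stand-in labels

# First carve out the training set, then split the remainder in half.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.30, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.50, random_state=42)

print(len(X_train), len(X_val), len(X_test))   # 700 150 150
```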

AI systems also tend to fall into two broad categories:

  • Artificial Narrow Intelligence, also called narrow AI or weak AI, performs specific tasks like image or voice recognition. Virtual assistants like Apple’s Siri, Amazon’s Alexa, IBM watsonx and even OpenAI’s ChatGPT are examples of narrow AI systems.
  • Artificial General Intelligence (AGI), or Strong AI, can perform any intellectual task a human can perform; it can understand, learn, adapt and work from knowledge across domains. AGI, however, is still just a theoretical concept.

How does traditional programming work?


Unlike AI programming, traditional programming requires the programmer to write explicit instructions for the computer to follow in every possible scenario; the computer then executes the instructions to solve a problem or perform a task. It’s a deterministic approach, akin to a recipe, where the computer executes step-by-step instructions to achieve the desired result.

The traditional approach is well-suited for clearly defined problems with a limited number of possible outcomes, but it’s often impossible to write rules for every single scenario when tasks are complex or demand human-like perception (as in image recognition, natural language processing, etc.). This is where AI programming offers a clear edge over rules-based programming methods.
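
The toy sketch below illustrates the rules-based approach described above: every accepted phrasing must be anticipated and coded explicitly by the programmer, and anything unforeseen falls through. It is a generic illustration, not a reference to any particular product.

```python
# Toy illustration of deterministic, rules-based routing: it only works for
# phrasings the programmer anticipated in advance.
RULES = {
    "reset password": "Send password-reset link",
    "cancel order":   "Route to order-cancellation workflow",
    "refund":         "Route to refunds team",
}

def route_request(message: str) -> str:
    """Deterministic routing based on keyword rules."""
    text = message.lower()
    for keyword, action in RULES.items():
        if keyword in text:
            return action
    return "Unhandled request"   # anything unforeseen falls through

print(route_request("I need to reset password"))       # matches a rule
print(route_request("I'm locked out of my account"))   # same intent, no rule -> unhandled
```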

What are the pros and cons of AI (compared to traditional computing)?


The real-world potential of AI is immense. Applications of AI include diagnosing diseases, personalizing social media feeds, executing sophisticated data analyses for weather modeling and powering the chatbots that handle our customer support requests. AI-powered robots can even assemble cars and minimize radiation from wildfires.

As with any technology, there are advantages and disadvantages of AI when compared to traditional programming technologies. Aside from foundational differences in how they function, AI and traditional programming also differ significantly in terms of programmer control, data handling, scalability and availability.

  • Control and transparency: Traditional programming offers developers full control over the logic and behavior of software, allowing for precise customization and predictable, consistent outcomes. And if a program doesn’t behave as expected, developers can trace back through the codebase to identify and correct the issue. AI systems, particularly complex models like deep neural networks, can be hard to control and interpret. They often work like “black boxes,” where the input and output are known, but the process the model uses to get from one to the other is unclear. This lack of transparency can be problematic in industries that prioritize process and decision-making explainability (like healthcare and finance).
  • Learning and data handling: Traditional programming is rigid; it relies on structured data to execute programs and typically struggles to process unstructured data. In order to “teach” a program new information, the programmer must manually add new data or adjust processes. Traditionally coded programs also struggle with independent iteration. In other words, they may not be able to accommodate unforeseen scenarios without explicit programming for those cases. Because AI systems learn from vast amounts of data, they’re better suited for processing unstructured data like images, videos and natural language text. AI systems can also learn continually from new data and experiences (as in machine learning), allowing them to improve their performance over time and making them especially useful in dynamic environments where the best possible solution can evolve over time.
  • Stability and scalability: Traditional programming is stable. Once a program is written and debugged, it will perform operations the exact same way, every single time. However, the stability of rules-based programs comes at the expense of scalability. Because traditional programs can only learn through explicit programming interventions, they require programmers to write code at scale in order to scale up operations. This process can prove unmanageable, if not impossible, for many organizations. AI programs offer more scalability than traditional programs but with less stability. The automation and continuous learning features of AI-based programs enable developers to scale processes quickly and with relative ease, representing one of the key advantages of AI. However, the improvisational nature of AI systems means that programs may not always provide consistent, appropriate responses.
  • Efficiency and availability: Rules-based computer programs can provide 24/7 availability, but sometimes only if human workers are available to operate them around the clock.

AI technologies can run 24/7 without human intervention so that business operations can run continuously. Another of the benefits of artificial intelligence is that AI systems can automate boring or repetitive jobs (like data entry), freeing up employees’ bandwidth for higher-value work tasks and lowering the company’s payroll costs. It’s worth mentioning, however, that automation can have significant job loss implications for the workforce. For instance, some companies have transitioned to using digital assistants to triage employee reports, instead of delegating such tasks to a human resources department. Organizations will need to find ways to incorporate their existing workforce into new workflows enabled by productivity gains from the incorporation of AI into operations.

Maximize the advantages of artificial intelligence with IBM Watson


Omdia projects that the global AI market will be worth USD 200 billion by 2028.¹ That means businesses should expect dependency on AI technologies to increase, with the complexity of enterprise IT systems increasing in kind. But with the IBM watsonx™ AI and data platform, organizations have a powerful tool in their toolbox for scaling AI.

IBM watsonx enables teams to manage data sources, accelerate responsible AI workflows, and easily deploy and embed AI across the business—all in one place. watsonx offers a range of advanced features, including comprehensive workload management and real-time data monitoring, designed to help you scale and accelerate AI-powered IT infrastructures with trusted data across the enterprise.

Though not without its complications, the use of AI represents an opportunity for businesses to keep pace with an increasingly complex and dynamic world by meeting it with sophisticated technologies that can handle that complexity.

Source: ibm.com

Thursday, 19 October 2023

Watsonx Orders helps restaurant operators maximize revenue with AI-powered order taker for drive-thrus

We’re pleased to announce IBM watsonx Orders, an AI-powered voice agent for drive-thrus of quick-service restaurants. Powered by the latest technology from IBM Research, watsonx Orders is designed to help restaurant owners solve persistent labor challenges by handling almost all orders and interactions without the help of human cashiers, while delighting restaurant guests with quick service and accurate orders. 

Watsonx Orders joins IBM’s watsonx family of AI Assistants that help you easily deploy and scale conversational AI to maximize revenue and lower costs. It’s finely tuned to understand fast food menu items, customization options, limited time offers and promotions. Integrated into the corporate and restaurant-level points of sale, watsonx Orders can tell customers what’s available—and what’s not—at that moment. And it routes final orders to the cash register and screens and printers in the kitchen.

Across the industry, more than 70 percent of quick-service restaurant revenue comes through drive-thrus. Operators consider the drive-thru the most important sales channel by far, even more so than sales inside the restaurant. What’s more, the order-taker position in the drive-thru is one of the most difficult and stressful among the crew—which is why it’s also one of the hardest positions to fill for every shift. So when restaurant owners can’t fill the order-taker position, whether because of an unfilled job opening or absenteeism, they face disastrous revenue drops for shifts or even whole days.

That’s why we designed watsonx Orders to be a valuable crew member: accurate, polite, speedy and reliable 24/7.

How watsonx Orders works


Watsonx Orders is a combination of hardware and software installed in the restaurant. It understands the restaurant chain’s full menu and knows which items are available at that specific restaurant by season and time of day. It knows the nicknames of every item and how customers can customize them. And we continuously update the software to be aware of limited-time offers, whether nationwide or local.

When guests pull up to the outdoor menu board, watsonx Orders greets them. Customers can start the conversation by saying, “I’m picking up a mobile order,” or “I’m a rewards member.” Or they can start ordering: “I want a cheeseburger, chocolate shake, large fries, no onions.” Thanks to watsonx’s natural language processing capabilities, the agent asks the customer to specify the size of the shake and knows that “no onions” is a customization of the cheeseburger.

Unlike many AI voice agents, watsonx Orders doesn’t convert each spoken word into text for analysis. Instead, it converts audio into phonemes—the individual sounds that make up words, such as the “ha” in “hamburger”—then into orders, vastly speeding up speech recognition. We’ve also used generative AI techniques to think of the hundreds of millions of ways that humans can express the thought “I’d like to order a hamburger.” We use the latest in neural net technology to block noise from freeways, airports, railroads, stereos and cars in the other lane.
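
As a heavily simplified illustration of the idea of matching sounds directly to menu items, the sketch below fuzzily matches a heard phoneme sequence against a small phoneme lexicon. The phoneme strings, menu and matching logic are invented for illustration; the actual watsonx Orders pipeline is far more sophisticated and is not public.

```python
# Simplified, hypothetical sketch of phoneme-to-item matching; the phoneme
# strings and menu below are made up for illustration only.
from difflib import SequenceMatcher

MENU_PHONEMES = {
    "HH AE M B ER G ER": "hamburger",
    "CH IY Z B ER G ER": "cheeseburger",
    "CH AA K L AH T SH EY K": "chocolate shake",
    "L AA R JH F R AY Z": "large fries",
}

def match_item(heard_phonemes: str) -> str:
    """Return the menu item whose phoneme sequence best matches what was heard."""
    best_item, best_score = "unknown", 0.0
    for phonemes, item in MENU_PHONEMES.items():
        score = SequenceMatcher(None, heard_phonemes, phonemes).ratio()
        if score > best_score:
            best_item, best_score = item, score
    return best_item

# Noisy audio might drop sounds; fuzzy matching can still recover the item.
print(match_item("CH IY Z B ER G"))   # -> cheeseburger
```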

On the backend, we connect watsonx Orders to the restaurant’s technology stack. Once we’ve gathered the order ticket, we convert it into commands that the point of sale (cash register) understands, and we send the order to the kitchen staff through displays and printers.

IBM makes and installs all the hardware and software itself for a smooth delivery to the restaurant owner.

The benefits of watsonx Orders


In the field, watsonx Orders is as accurate as a human order taker. But it can work every shift, every day, and every holiday. As such, the restaurant owner gains the equivalent of more than eight hours of labor every day, freeing up the crew to focus on delivering even better customer service. 

Just as importantly, watsonx Orders takes orders quickly—it speaks in a cheerful, polite voice, knows how to keep the conversation short, and helps the customer toward a decision without being pushy. Restaurant owners have told us speed of service and car counts per hour are among the most important measures of operations. So our team is constantly experimenting to shave split-seconds off each order without sacrificing accuracy. 

The system also knows how to upsell, suggesting additional items that add revenue to each order.

Should something go wrong, or if the guest insists on speaking with a human, the AI agent can gracefully hand the conversation over without any need for customers to repeat themselves. 

Get started with watsonx Orders


IBM continues to drive innovation for watsonx Orders to achieve the highest level of accuracy and fast service at every restaurant. We are interested in working with the corporate offices of quick-service restaurant brands to understand their pain points and customize the product to meet their specific needs.

Source: ibm.com

Tuesday, 17 October 2023

3 reasons why business and data analysts need to work with real-time events


In a research brief defining “5 trends for 2023,” the IBM Institute for Business Value reports, “In 2023, rapid response is the new baseline. Uncertainty is expected and complexity is compounding. As threats materialize on multiple fronts, organizations must reduce the time from insight to action.”

Business and data analysts are intimately familiar with the growing business need for precise, real-time intelligence. They are being increasingly challenged to improve efficiency and cost savings, embrace automation, and engage in data-driven decision making that helps their organization stand out from the competition. To meet these objectives, business and data professionals need to go beyond cookie-cutter business intelligence, data visualization dashboards and data analytics tools. They need to uncover the right digital signals and maintain constant awareness so they can detect and respond to critical business scenarios in real time.

Advantages of event-driven solutions


This is where event-driven solutions excel. Working with “business events” is essential for unlocking real-time insights that enable intelligent decision making and automated responses.

A business event can describe anything that happens which is significant to an enterprise’s operation. A business event is represented by a change in state of the data flowing between your applications, systems and databases and, most importantly, the time it occurred.

Some simple examples of business events include:

  • Receiving a new product order
  • A decrease in the stock level of a best-selling item
  • A new customer support ticket
  • A surge in demand levels
  • Customer behavior in an e-commerce platform, e.g., placing an item in the checkout cart
  • A drop in the pricing of raw materials in your supply chain
  • A customer’s social media post about your company

By working directly with streams of business events, users can define critical scenarios, detect them in real-time and respond intelligently. For example, events that describe changes in demand can help make better business decisions on inventory optimization. Events that represent customer satisfaction can drive more proactive responses, quickly turning negative customer experiences into positive ones. Or events identifying repeated component failures in a production line can enable predictive maintenance and earlier resolution of issues to help minimize the costs of rework and downtime.

These advantages are only possible if companies can quickly capture and connect the dots between events in time to impact outcomes and influence KPIs, proactively responding to new opportunities or emerging threats.

3 reasons to take advantage of event-driven solutions


1. They deliver accurate, real-time data you can depend on.

Businesses can’t afford to rely solely on historical data. Streams of business events provide users with a persistent, continuously updated record of their data, as it is being generated. Instead of waiting for data sets to accumulate over time or processing large amounts of data in batches, events provide the ability to perform data analysis in motion. This ensures that key events can be detected and acted upon before their usefulness expires.

IBM worked with Norsk Tipping to build a modern event-driven architecture that accelerated their data processing and delivered more responsive user services. By processing data in motion with IBM Event Automation, Norsk Tipping was able to analyze thousands of transactions from up to a million users at a rate 6x faster than before.

2. They are highly configurable, allowing you to define and detect scenarios unique to your business operations. 

By processing multiple streams of events from different data sources, users can define and detect both simple and complex business scenarios. Joining, aggregating, filtering and transforming streams of events allows users to effectively paint a picture—describing patterns of events that make up a time-sensitive threat or opportunity. IBM has created an intuitive, low-code tool designed to empower less technical stakeholders to work with business events and make better decisions, without having to write code or be an expert in SQL. This helps businesses quickly expand into new use cases and derive actionable insights that maximize revenue potential.
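
The sketch below illustrates the filter-and-aggregate pattern described above on a toy stream of order events, deriving a "demand surge" event when orders for an item cross a threshold within a time window. IBM Event Automation exposes this kind of logic through a low-code interface; the event shape, window size and threshold here are hypothetical.

```python
# Illustrative sketch of aggregating an event stream and emitting a derived
# event; the event shape, window and threshold are hypothetical.
from collections import defaultdict
from datetime import datetime, timedelta

orders = [  # a stand-in for a stream of "new product order" events
    {"sku": "A100", "ts": datetime(2024, 1, 1, 10, 0)},
    {"sku": "A100", "ts": datetime(2024, 1, 1, 10, 2)},
    {"sku": "A100", "ts": datetime(2024, 1, 1, 10, 4)},
    {"sku": "B200", "ts": datetime(2024, 1, 1, 10, 5)},
]

WINDOW = timedelta(minutes=10)
SURGE_THRESHOLD = 3
window_start = datetime(2024, 1, 1, 10, 0)

# Aggregate: count orders per SKU within the time window.
counts = defaultdict(int)
for event in orders:
    if window_start <= event["ts"] < window_start + WINDOW:
        counts[event["sku"]] += 1

# Filter: emit a derived "demand surge" event when the count crosses the threshold.
for sku, count in counts.items():
    if count >= SURGE_THRESHOLD:
        print({"event": "demand_surge", "sku": sku, "orders_in_window": count})
```

A derived event like this is exactly the kind of trigger that can feed the automations described in the next point.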


3. They go beyond insights, providing key triggers to drive intelligent automation.   

Once users define a critical business situation, they can generate a new stream of events that reports anytime a particular scenario is detected. While this intelligence is already helpful to streamline operations and fuel smarter decisions, organizations can go a step further by activating automation. This could mean triggering process workflows, making decisions based on business rules or prompting digital workers, giving teams more time back for the things that matter. Event-driven solutions allow businesses to build more intelligent automations that respond to new trends, customer issues or competitive threats in the moments that matter most.

IBM provides a comprehensive suite of automation capabilities that help you act on and benefit from event-driven intelligence. With products such as IBM Business Automation Workflow, IBM Operational Decision Manager and IBM watsonx Orchestrate, there are a wealth of possibilities on your journey to becoming an autonomous and event-driven enterprise.

Source: ibm.com

Saturday, 19 August 2023

US Open heralds new era of fan engagement with watsonx and generative AI


As the tournament’s official digital innovation partner, IBM has helped the US Open attract and engage viewers for more than three decades. Year after year, IBM Consulting works with the United States Tennis Association (USTA) to transform massive amounts of data into meaningful insight for tennis fans.


This year, the USTA is using watsonx, IBM’s new AI and data platform for business. Bringing together traditional machine learning and generative AI with a family of enterprise-grade, IBM-trained foundation models, watsonx allows the USTA to deliver fan-pleasing, AI-driven features much more quickly. With watsonx, users can collaborate to specialize and deploy models for a wide variety of use cases, or build their own—making massive AI scalability possible.

Watsonx powers AI-generated tennis commentary


This year the US Open is using the generative AI capabilities of watsonx to deliver audio commentary and text captions on video highlight reels of every men’s and women’s singles match. Fans can hear play-by-play narration at the start and end of each reel, and for key points within. The AI commentary feature will be available through the US Open app and the US Open website.

The process to create the commentary began by populating a data store on watsonx.data, which connects and governs trusted data from disparate sources (such as player rankings going into the match, head-to-head records, match details and statistics).

Next, the teams trained a foundation model using watsonx.ai, a powerful studio for training, validating, tuning and deploying generative AI models for business. The US Open’s model was trained on the unique language of tennis, incorporating a wide variety of contextual description (such as adjectives like brilliant, dominant or impressive) based on lengths of rallies, number of aces, first-serve percentages, relative rankings and other key stats.
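
As an illustration of how match statistics might be turned into the kind of contextual description mentioned above, the sketch below maps a few stats to an adjective and assembles a commentary prompt. The thresholds and wording are invented, not IBM's actual rules or pipeline.

```python
# Hypothetical illustration of mapping match statistics to contextual
# adjectives for a commentary prompt; thresholds and wording are made up.
def describe_serving(aces: int, first_serve_pct: float) -> str:
    if aces >= 15 and first_serve_pct >= 0.70:
        return "dominant"
    if aces >= 8 or first_serve_pct >= 0.65:
        return "impressive"
    return "steady"

stats = {"player": "Player A", "aces": 17, "first_serve_pct": 0.72, "longest_rally": 23}

adjective = describe_serving(stats["aces"], stats["first_serve_pct"])
prompt = (
    f"Write two sentences of play-by-play commentary. {stats['player']} produced a "
    f"{adjective} serving display with {stats['aces']} aces and won a "
    f"{stats['longest_rally']}-shot rally at a key moment."
)
print(prompt)
```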

Beyond helping enterprise clients embed AI in their daily workflows, watsonx helps them manage the entire AI lifecycle. That’s why the US Open will also use watsonx.governance to direct, manage and monitor its AI activities. It will help them operationalize and automate governance of their models to ensure responsible, transparent and explainable AI workflows, identify and mitigate bias and drift, capture and document model metadata and foster a collaborative environment.

Using watsonx to provide wide-ranging Match Insights


The US Open also relies on watsonx to provide Match Insights, an engaging variety of tennis statistics and predictions delivered through the US Open app and website.

For example, the IBM Power Index is a measure of momentum that melds performance and punditry. Structured historical data about every player is combined with an analysis of unstructured data (language and sentiment derived from millions of news articles about athletes in the tournament), using watsonx.data and watsonx.ai. As play progresses, a further 2.7 million data points are captured, drawn from every shot of every match. This creates a rich, up-to-the-minute data set on which to run predictive AI, project winners and identify keys to match success. The Power Index provides nuanced and timely predictions that spark lively engagement and debate.
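
The sketch below illustrates only the general idea of blending structured performance data with media sentiment into a single momentum score. The inputs and weights are made up; the real Power Index methodology is IBM's own and is not public.

```python
# Made-up illustration of blending performance data with media sentiment;
# the weights are arbitrary and not IBM's actual Power Index formula.
def power_index(win_rate: float, recent_form: float, media_sentiment: float) -> float:
    """All inputs normalized to 0-1; weights here are purely illustrative."""
    return round(0.5 * win_rate + 0.3 * recent_form + 0.2 * media_sentiment, 3)

players = {
    "Player A": {"win_rate": 0.78, "recent_form": 0.85, "media_sentiment": 0.66},
    "Player B": {"win_rate": 0.81, "recent_form": 0.60, "media_sentiment": 0.74},
}

for name, p in players.items():
    print(name, power_index(**p))
```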

When a tournament draw is released, pundits and fans often assess each player’s luck and path through the field: do they have a “good draw” or a “bad draw”? This year, IBM AI Draw Analysis helps them make more data-informed predictions by providing a statistical factor (a draw ranking) for each player in the men’s and women’s singles events. The analysis, derived from structured and unstructured data using watsonx, determines the level of advantage or disadvantage for each player and is updated throughout the day as the tournament progresses and players are eliminated. Every player has their draw ranked from 1 (most favorable) to 128 (most difficult). Fans can also click on individual matches to see a projected difficulty for that round.

Based on the AI Draw Analysis, users of the US Open app can explore a player’s road to the final and the difficulty of a player’s draw. The AI Draw Analysis feature shows potential matchups, informed by player data derived from the Power Index. For matches in progress, fans can also follow the live scores and stats provided by IBM SlamTracker.

A new era of scalable enterprise AI


Through their longstanding partnership, IBM and the USTA collaborate to explore new ways to use automation and AI to deliver compelling fan experiences at the US Open. This year’s innovations demonstrate how watsonx can help organizations quickly and effectively implement both predictive and generative AI technologies. Through a collaborative, centrally governed environment that empowers non-technical users to make the most of their organization’s high-quality data and leverage foundation models trained on IBM-curated datasets, watsonx opens the door to true AI scalability for the enterprise.

Source: ibm.com

Tuesday, 27 June 2023

Enhancing the Wimbledon fan experience with AI from watsonx


IBM’s partnership with the All England Lawn Tennis Club (AELTC) has driven digital transformation at Wimbledon for more than 30 years. And this year, Wimbledon is tapping into the power of generative AI, producing new digital experiences on the Wimbledon app and website using IBM’s new trusted AI and data platform, watsonx.

Automated AI commentary built from foundation models


IBM first pioneered the use of AI to curate video highlight reels in 2017, work that earned the IBM Consulting team a 2023 Emmy® Award. The solution uses gesture recognition (such as fist pumps and players’ reactions), crowd noise and game analytics (such as break points) to identify highlight-worthy videos in both golf and tennis.

This year, fans can add AI-generated spoken commentary to Wimbledon highlight reels, hearing play-by-play narration for the start and end of each reel, along with key points. Fans can also turn on closed captions to further enhance accessibility, a key consideration for AELTC.

The solution is built from a foundation model developed using watsonx, IBM’s enterprise-grade AI platform designed to manage the entire lifecycle of AI models, from curating trusted data sources to governing responsible, trusted AI. Work began with watsonx.data, a data store that connects disparate data sources and allows developers to filter the data for things like profanity, hate speech or personally identifiable information. For AI Commentary, the team drew source material from nearly 130 million documents.

The data was then used to train a large language model chosen from watsonx.ai, a next-generation studio for building and training generative AI models for business use cases. The IBM team then fine-tuned the model, adding the specific domain expertise of Wimbledon, including the use of unique Wimbledon nomenclature, such as “gentlemen’s draw” rather than “men’s draw.” The final model boasts 3 billion parameters, and the team will continue to monitor its performance using governance tools, ensuring the model performs as expected.
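
As a small illustration of adding domain nomenclature to training text, the sketch below normalizes general tennis phrasing into the Club's preferred terms before it would be used for fine-tuning. The term list and approach are hypothetical, not a description of the IBM team's actual data preparation.

```python
# Hypothetical sketch of normalizing training text into Wimbledon nomenclature.
# The "women's ..." entries come first so they are replaced before the
# "men's ..." substrings they contain.
WIMBLEDON_TERMS = {
    "women's draw": "ladies' draw",
    "women's singles": "ladies' singles",
    "men's draw": "gentlemen's draw",
    "men's singles": "gentlemen's singles",
}

def apply_nomenclature(text: str) -> str:
    """Replace general tennis phrasing with the Club's preferred terminology."""
    for general, preferred in WIMBLEDON_TERMS.items():
        text = text.replace(general, preferred)
    return text

sample = "A thrilling five-setter in the men's singles reshapes the men's draw."
print(apply_nomenclature(sample))
```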

“The watsonx platform has allowed us to quickly leverage the power of generative AI without sacrificing trust or transparency,” says Aaron Baughman, a Distinguished Engineer and Master Inventor at IBM. “This is what we mean when we say ‘AI for Business.’ To use AI in a commercial setting, you need to have confidence that a model is scalable, reliable and trusted.”   

Co-creation with IBM iX


IBM iX, the experience design arm of IBM Consulting, works with the Club year-round to design, develop, maintain and secure the tournament’s website and mobile apps. The goal of this work is to continually enhance the digital experience with new, innovative features, while maintaining the tradition, beauty and design simplicity of Wimbledon itself.

Twice a year, the IBM iX and Wimbledon teams meet for workshops guided by the IBM Garage™ methodology, an enterprise design thinking collaboration that sets a roadmap for building and iterating on the next generation of features that drive fan engagement. They use personas and journey maps to guide the design process, and agile development techniques to quickly iterate and build new features. In addition to the AI Commentary feature, the team is introducing several other enhancements, including:

AI draw analysis


As soon as any tournament draw is released, players and fans alike intuitively assess each player’s luck and path through the field: do they have a “good draw” or a “bad draw”? This year, IBM AI Draw Analysis helps them make more data-informed predictions by providing a statistical factor (a draw ranking) for each player in the singles draw.

The analysis leverages two previous innovations IBM built for the Club: the IBM Power Index, an AI-powered analysis of recent player performance and momentum (plus sentiment gleaned from natural language processing of media discussion by IBM Watson Discovery), and Likelihood to Win, a prediction of who will win a singles match.

The draw analysis, derived from structured and unstructured data, determines the level of advantage or disadvantage for each player and is updated throughout the day as the tournament progresses and players are eliminated. Every player has their draw ranked from 1 (most favorable) to 128 (most difficult). Fans can also click on individual matches to see a projected difficulty for that round.

Path to the final


Based on the AI Draw Analysis, users of the Wimbledon app can also see which opponent each player is most likely to play. It lists all potential matchups in the draw, ranked by the IBM Likely to Play. For each match in progress, fans can also follow the live scores and stats provided by IBM SlamTracker.

Backed by a proven consultative process that taps into enterprise-scale automation and AI toolsets, the partnership between IBM and Wimbledon continues to deliver an innovative fan experience to millions around the world.

Source: ibm.com

Saturday, 3 June 2023

Accelerating AI & Innovation: the future of banking depends on core modernization


In the rapidly evolving landscape of financial services, embracing AI and digital innovation at scale has become imperative for banks to stay competitive. With the power of AI and machine learning, financial institutions can leverage predictive analytics, anomaly detection and shared learning models to enhance system stability, detect fraud and drive superior customer-centric experiences. As we step into 2023, the focus has shifted to digital financial services, encompassing embedded finance, generative AI and the expansion of super apps from China into a global phenomenon, all while balancing the adoption of a hybrid multicloud strategy. For banks to stay relevant and competitive in this new world, they must adjust to new trends, understand the importance of open finance and transform their core systems. Ultimately, banks must start by modernizing their core through technologies like hybrid multicloud and AI.

Generative AI: unleashing new opportunities 


Generative AI, exemplified by the explosion of advanced large language model solutions on the market and seen most recently in the launch of IBM watsonx, offers exciting possibilities in financial advisory and data analysis. While the future of generative AI in deterministic financial environments is still being explored, properly configured models can simplify complex financial concepts and make them easier for customers to understand. Financial institutions must carefully leverage generative AI to strike the right balance between innovation and ethical usage. This is why IBM puts all of its AI technologies through rigorous processes and protocols to offer trustworthy solutions.


In a highly regulated industry like banking, it is that much more important for clients to have access to the toolset, technology, infrastructure and consulting expertise to build their own AI models, or to fine-tune and adapt available models on their own data, and to deploy them at scale in a trustworthy and open environment to drive business success. Competitive differentiation and unique business value will increasingly derive from how adaptable an AI model is to an enterprise’s unique data and domain knowledge.

Embedded finance: redefining customer experiences 


Embedded finance has emerged as a rapidly growing trend, revolutionizing the way customers interact with financial products and services. Banks now have the opportunity to seamlessly integrate financial capabilities into various contexts, such as online commerce, car buying and emerging digital ecosystems, without disrupting customer workflows. By embedding financial services into everyday activities, banks can deliver hyper-personalized and convenient experiences, enhancing customer satisfaction and loyalty.

The rise of super apps: transforming digital ecosystems 


Super apps, popular in China, have the potential to reshape the financial services landscape globally. By consolidating multiple applications and services under a single entity, super apps offer customers a comprehensive ecosystem that seamlessly integrates digital identity, instant payment, and data-driven capabilities. As embedded finance gains traction and open banking APIs become more prevalent, the vision of super apps is becoming a reality. Financial institutions need to adapt to this emerging trend and actively participate in the evolving digital ecosystems to deliver enhanced value and cater to evolving customer expectations. 

Open finance: accelerating the API-driven economy 


Open banking has been a topic of discussion for several years, with PSD2 regulations driving initial progress. Now open finance, an extension of PSD2, is set to open up even more services and foster an API-driven economy. With open finance, banks are compelled to open up additional APIs beyond payment accounts, enabling greater innovation and competition in the financial sector. This shift toward data-driven economies places embedded finance at the core of financial services. Forward-thinking banks are not only complying with regulatory requirements but also proactively leveraging open finance to distribute their services efficiently and reach customers wherever they are. 

The critical need for modernizing core systems and the role of hybrid cloud


In this new paradigm of AI-powered digital finance, modernizing core systems becomes imperative for banks to deliver seamless experiences, leverage emerging technologies, and remain competitive. Traditional legacy systems often lack the flexibility, scalability and agility required to support the integration of embedded finance, generative AI and open finance. By transforming core systems, banks can create a solid foundation that enables the seamless integration of new technologies, facilitates efficient API-driven ecosystems and enhances the overall customer experience. 

Hybrid multicloud plays a crucial role in facilitating the shift. It allows banks to leverage the scalability and flexibility of public cloud services while maintaining control over sensitive data through private cloud and on-premises infrastructure. By adopting a hybrid multicloud approach, banks can transform their core systems, leverage AI and machine learning capabilities, ensure data security and compliance and seamlessly integrate with third-party services and APIs. The hybrid cloud provides the agility and scalability necessary to support the rapid deployment of new digital services, while also offering the control and customization required by financial institutions. 

Modernization starts at the core


However, transforming core systems and transitioning to a hybrid cloud infrastructure is not a one-size-fits-all solution. Each bank has unique requirements, existing technology landscapes and strategic goals. It is crucial to align the technology roadmap of fintech solutions with the overall bank strategy, including the digital strategy. This alignment ensures a competitive advantage, sustainability and a seamless convergence between the two roadmaps. Collaboration between banks, fintech providers and IBM can facilitate this alignment and help banks navigate the complexities of digital transformation. 

The financial services industry is undergoing a profound transformation driven by AI, digital innovation and the shift toward digital financial services. Embedded finance, generative AI, the rise of super apps, and open finance are reshaping customer experiences and creating new opportunities for financial institutions. To fully leverage these transformative trends, banks must transform their core systems and adopt a hybrid multicloud infrastructure. This transformation not only enables seamless integration of new technologies but also enhances operational efficiency, agility and data security. As banks embark on this journey, strategic alignment between the technology roadmap and the overall bank strategy is paramount. 

Source: ibm.com