Tuesday 31 October 2023

Don’t get another surprise bill from your observability vendor

Businesses rely heavily on monitoring solutions to ensure the optimal performance and availability of their applications. While features and capabilities are important to evaluate, it’s also important to consider pricing to ensure you choose a solution that meets your needs.

Over the years, many legacy APM providers have developed complicated pricing structures that make it difficult to understand exactly what the solution will end up costing, and that discourage broad adoption by charging per user seat. That approach may have worked well in the past, but it is poorly suited to today’s modern cloud-native environments.

Remember the issue last year when a company got a USD 65 million surprise bill from their observability solution? While that specific pricing policy may have been changed, many legacy APM vendors still employ complicated pricing structures that produce unexpected charges and fees. Let’s take a look at some key pricing features to consider when evaluating an APM or observability platform.

Transparent and predictable pricing


Instana’s pricing structure is transparent and predictable. Instana follows a per-host pricing model, where customers are charged based on the number of hosts — physical or virtual — that need to be monitored. This straightforward approach eliminates confusion and simplifies budgeting, making it easier to estimate and control monitoring costs. In contrast, legacy APM tools like New Relic employ a more complex pricing framework, including charges for a combination of hosts, user seats, throughput and data retention, leading to potential surprises in monthly bills.
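To illustrate how simple budgeting becomes under a per-host model, here is a minimal sketch; the per-host price below is a made-up placeholder, not an actual Instana list price.

```python
# Hypothetical illustration of per-host budgeting; the price below is a
# made-up placeholder, not an actual Instana list price.

PER_HOST_PRICE = 75.00  # assumed USD per host per month

def estimate_monthly_cost(num_hosts: int, price_per_host: float = PER_HOST_PRICE) -> float:
    """Under per-host pricing, the bill scales only with the number of monitored hosts."""
    return num_hosts * price_per_host

print(estimate_monthly_cost(120))  # 120 hosts -> 9000.0: budgeting is one multiplication
```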

All-inclusive monitoring


Be careful with solutions that offer a low entry price but have additional charges for different features. With Instana, customers get access to all features and capabilities — all included in the base price. This means that you don’t have to worry about paying extra for essential capabilities such as distributed tracing, root cause analysis, service mapping, synthetic monitoring or anomaly detection.

Pricing built for microservices and containers


As the industry shifts towards microservices and containerized environments, Instana’s pricing structure aligns perfectly with these modern architectures. Instana offers granular pricing that allows you to monitor individual containers or microservices without having to pay for an entire container cluster or host. This level of flexibility allows you to only pay for what you use, helping to optimize costs and meet the specific needs of your application architecture. Most organizations monitoring cloud-native applications want to extend observability and monitoring information to all application stakeholders. When legacy APM providers employ usage-based pricing models, it creates a quandary for customers, forcing them to choose between providing the tool to everyone who needs it and keeping costs down.

Easier scalability and growth


For growing businesses, Instana’s pricing model provides a more scalable and cost-effective path when compared to New Relic. As new hosts or containers are added to the infrastructure, you only pay for the additional resources being monitored, not the users monitoring them. This scalability aligns with your organization’s growth trajectory, allowing you to avoid unnecessary costs for infrastructure that is not yet deployed. And since Instana does not charge per user, it’s easy to onboard new users as you grow your business. In contrast, many legacy APM vendors, like New Relic, have complicated pricing structures that can become a significant cost burden as your business expands, as each new host, throughput tier or data retention tier comes with additional charges.

Pricing considerations are a critical component when evaluating a monitoring solution. Having the right set of capabilities won’t do much good if the pricing structure inhibits you from using them when needed. Instana’s pricing structure offers organizations a more transparent, predictable, and cost-effective solution. Its per-host pricing, all-inclusive features, granular pricing for microservices, and scalability accommodate businesses of all sizes, so you only pay for what you need.

When considering a monitoring solution, it’s vital to evaluate not only the features but also the financial implications, making Instana a compelling choice for optimizing monitoring costs. If you have a legacy APM tool that produces surprise bills based on usage, it’s time to move to Instana.

Source: ibm.com

Saturday 28 October 2023

Unified Endpoint Management vs. device lifecycle management: what do they have in common?

It is a new day for James, a new IT administrator. Today, he has to put together an order for a whole batch of mobile devices for his colleagues, who have chosen both iOS and Android smartphones. He needs to activate the device lifecycle program and then handle all the deployment and endpoint security tasks afterward, most probably in another tool. He also knows that Rich from Sales and Alyssa from Finance will leave the company on Friday, so he needs to wipe their devices. One is under the BYOD program and the other is company-owned, and everything needs to be done as soon as possible. Can’t he use just one management tool?

Unified Endpoint Management (UEM) is a technology that provides a single platform to manage and protect all types of devices, such as smartphones, tablets, laptops, desktops and IoT devices, running multiple operating systems, from one console and throughout their lifecycle. UEM solutions encompass earlier technologies such as MDM (Mobile Device Management), EMM (Enterprise Mobility Management), MAM (Mobile Application Management) and laptop management. Using a UEM tool, James can manage both personal and corporate devices and gain content management capabilities as well as cybersecurity capabilities such as security policies, authentication, identity and access management, data security and data protection, patch management, threat detection and response, and many more.

Device lifecycle management is the end-to-end process of managing all the devices in a company, from the moment they leave the provider to the moment they are sunset by the organization’s IT teams. These can be Apple devices, Android devices, IoT devices, macOS machines, laptops or desktops running Microsoft Windows, purpose-built devices and many more. James and other IT admins need to handle device enrollment, maintain the devices, service them when needed, and retire or repurpose them according to company policies and end users’ actions.

Top 5 commonalities:


UEM and device lifecycle management share several commonalities, as they both play essential roles in the management and optimization of endpoints within an organization. Here are five of the most common aspects they share:

1. Device Inventory: UEM tools and device lifecycle management processes include inventories that give IT admins real-time access to detailed information about each endpoint, such as the device model, its specifications and its owner.

2. Security and Compliance: Both UEM technologies and device lifecycle management help organizations meet security and compliance requirements. With UEM, IT teams make sure that devices are protected, have the patches they need and are in sync with compliance policies. By managing device lifecycle processes, IT departments make sure endpoints are properly decommissioned, avoiding the loss of sensitive data and reducing security risks.

3. Configuration Management: IT administrators use UEM to configure endpoints and manage them during their use. With device lifecycle management, James and his colleagues configure endpoints during procurement and provisioning.

4. Integrations and automation: Both UEM and device lifecycle management integrate with other apps and offer many automated features that streamline IT teams’ efforts. Modern UEM tools and lifecycle processes give IT admins plenty of self-service opportunities thanks to a high degree of task automation.

5. Reporting and Analytics: Both UEM and device lifecycle management have rich reporting and analytics capabilities. UEM provides real-time data on endpoints, patches and updates, and user and device security, while device lifecycle management offers data on device lifecycles and usage, helping decision-makers take action on budgets. Both generate valuable data used in audits.

UEM and device lifecycle management have a lot in common, and when done properly they protect corporate data, ensure a great user experience, enhance mobile security and cybersecurity overall, and create a great digital workspace. They differ in scope, however: UEM focuses on managing and securing devices while they are in use, whereas device lifecycle management covers the entire lifecycle, from the day devices are purchased until the day they are sunset.

IBM Security MaaS360 is a modern, advanced UEM platform that offers a single console to manage all types of endpoints, from smartphones to laptops, and protects them with built-in threat management capabilities. This way, IT teams can be both efficient and effective while keeping the total cost of ownership under control.

Source: ibm.com

Thursday 26 October 2023

IBM Consulting accelerates the future of FinOps collaboration with Apptio

Making the right technology investment decisions today is critical to building competitive advantage, fueling innovation and driving ROI. However, dispersed, unreliable data and time-consuming, error-prone processes can lead to bloated budgets, ineffective planning and missed opportunities. Organizations need simplified, integrated and automated solutions to help optimize IT spend, improve operations and drive greater financial returns.

IBM Consulting is uniquely positioned to provide exceptional FinOps and TBM services, from strategic planning to operating model implementation and managed services. Supported by an all-encompassing end-to-end toolchain, IBM Consulting distinguishes itself as an industry leader in the marketplace. And by acquiring Apptio Inc. — a family of technology financial management, cloud financial management and enterprise agile planning software products that allow you to tie your tech investments to clear business value — IBM has empowered clients to unlock additional value through the seamless integration of Apptio and IBM.

IBM Consulting’s vision for the future of FinOps


We at IBM Consulting understand that it is hard to manage, predict and optimize cloud spending. By leveraging our internal knowledge and technology stack to address your cloud spending issues, you gain access to a comprehensive suite of tools and expertise that will enable your organization to make data-driven decisions, optimize costs and maximize the return on your cloud investments, all while driving innovation and growth for your business.

Navigating the complexity of today’s digital landscape


IBM Consulting’s FinOps and TBM solutions drive success through the four methods below:

  1. Cost Management: Controlling organizational costs and ensuring consistency.
  2. Value Optimization: Pursuing operational excellence by focusing on value rather than just cost.
  3. Organizational Transparency: Enhancing performance and driving better outcomes through leadership visibility and planning.
  4. Enterprise Agility: Aligning business and IT goals for increased adaptability and the development of new capabilities.

By leveraging IBM Consulting’s exceptional FinOps and TBM services, organizations can confidently navigate the complexities of today’s digital landscape and ensure long-term success in an increasingly competitive market.

Proven FinOps success: three case studies


The below three case studies showcase the success of IBM Consulting’s FinOps methods:

  1. An American manufacturer needed optimization for its multi-cloud environment and DevOps activities and was in the initial stages of its FinOps journey and maturity. The implemented solution enabled a FinOps practice, integrated tooling with ServiceNow and gave leadership simple transparency. This led to 16% cloud spend savings.
  2. An international airline needed a major end-to-end hyperscaler migration and was also challenged by a fast-growing cloud ecosystem. The implemented solutions increased involvement in the FinOps practice, improved multi-cloud management and tooling, and helped the airline regain control of cloud activities and industry compliance. This led to 22% cloud spend savings.
  3. An international manufacturing company needed to improve its asset management and cost allocation processes and reported configuration challenges with its multi-cloud technologies. The implemented solutions increased automation, improved budgeting, simplified cost reporting processes and equipped the company with self-service implementation tooling. This led to 25% cloud spend savings.

Unlock your organization’s full potential with IBM Consulting


Embrace IBM’s FinOps and TBM services to overcome complex cloud challenges and unlock your organization’s full potential. Join the ranks of successful companies across industries who have optimized cloud investments, fueled innovation and driven growth.

Source: ibm.com

Monday 23 October 2023

How foundation models can help make steel and cement production more sustainable

Heavy industries, particularly cement, steel and chemicals, are the top greenhouse gas emitting industries, contributing 25% of global CO2 emissions. They use high-temperature heat in many of their processes, which is primarily generated from fossil fuels. Fighting climate change requires lowering heavy industry emissions, yet these industries face tremendous challenges in doing so. Replacing equipment is not a viable route to reducing emissions, as these industries are capital intensive, with asset lifecycles of over 40 years. They are also trying alternative fuels, which come with their own challenges of availability and the ability to manage processes with fuel mixes. The Paris Agreement on climate change also means these industries will need to reduce annual emissions by 12-16% by 2030. Generative AI, when applied to industrial processes, can improve production yield, reduce quality variability and lower specific energy consumption (thereby reducing operational costs and emissions).

Higher variability in processes and operations results in higher specific energy consumption (SEC) and higher emissions. This variability comes from material inconsistency (raw material comes from the earth), varying weather conditions, machine conditions and the human inability to operate processes at top efficiency 24 hours a day, every day of the week. AI technology can predict future variability in the processes and the resulting impact on yield, quality and energy consumption. For example, if we can predict the quality of the clinker in advance, we can optimize heat energy and combustion in the cement kiln so that quality clinker is produced with minimum energy. Such process optimization reduces energy consumption, which in turn reduces both energy emissions and process emissions.
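As a rough sketch of that optimization idea, the snippet below searches candidate kiln set points for the lowest-energy one that still meets a quality target. The quality model, variable names and numbers are illustrative placeholders, not a real kiln model.

```python
# Illustrative sketch only: a toy "predictive model" and a brute-force search
# for the lowest-energy kiln set point that still meets a quality target.
# The model, variables and numbers are placeholders, not a real kiln.

def predict_quality(fuel_rate: float, kiln_speed: float) -> float:
    """Stand-in for a trained model that predicts clinker quality from set points."""
    return 0.6 * fuel_rate + 0.4 * kiln_speed  # toy relationship

QUALITY_TARGET = 95.0  # minimum acceptable predicted clinker quality

best = None
for fuel_rate in range(80, 121):       # candidate fuel rates (proxy for heat energy)
    for kiln_speed in range(30, 61):   # candidate kiln speeds
        if predict_quality(fuel_rate, kiln_speed) >= QUALITY_TARGET:
            if best is None or fuel_rate < best[0]:
                best = (fuel_rate, kiln_speed)

print("Lowest-energy set point meeting the quality target:", best)
```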

Foundation models make AI more scalable by consolidating model training, reducing its cost and effort by up to 70%. The most common use of foundation models is in natural-language processing (NLP) applications. However, when adapted accordingly, foundation models enable organizations to model complex industrial processes accurately, creating a digital twin of the process. These digital twins capture multivariate relationships between process variables, material characteristics, energy requirements, weather conditions, operator actions and product quality. With these digital twins, we can simulate complex operating conditions to find accurate operating set points for process “sweet spots.” For example, a cement kiln digital twin would recommend the optimal fuel, air, kiln speed and feed that minimize heat energy consumption while still producing the right quality of clinker. When these optimized set points are applied to the process, we see efficiency improvements and energy reductions that have not been realized before. The improved efficiency and SEC not only translate to EBITDA value but also reduce energy emissions and process emissions.

Optimize industrial production with Foundation Models


Heavy industry has been optimizing processes with AI models for the last few years. Typically, regression models are used to capture process behavior; each regression model captures the behavior of one part of the process. When stitched together with an optimizer, this group of models represents the overall behavior of the process. These groups of 10-20 models are orchestrated by an optimizer, like an orchestra, to generate optimized operating-point recommendations for plants. However, this approach cannot capture process dynamics such as ramp-ups and ramp-downs, especially during disruptions. And training and maintaining dozens of regression models is not easy, making the approach a bottleneck for accelerated scaling.

Today, foundation models are used mostly in natural language processing. They use the transformer architecture to capture longer-term relationships between words (tokens, in generative AI terminology) in a body of text. These relationships are encoded as vectors, which are then used to generate content for a specific context (say, a rental agreement). The accuracy of the content generated from these vectors is impressive, as demonstrated by ChatGPT. What if we could represent time series data as a sequence of tokens? What if we could use the parallelized transformer architecture to encode multivariate time series data and capture long- and short-term relationships between variables?
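To make the token analogy concrete, here is a minimal sketch of one common way to tokenize multivariate time series: slicing it into fixed-length patches that a transformer can attend over. The shapes and sensor counts are illustrative assumptions, not IBM’s implementation.

```python
import numpy as np

# Illustrative sketch: one common way to turn multivariate time series into
# "tokens" is patching -- each token is a short window of consecutive readings
# across all sensor channels, flattened into a single vector. Shapes are
# illustrative; this is not IBM's implementation.

def patch_tokenize(series: np.ndarray, patch_len: int = 16, stride: int = 8) -> np.ndarray:
    """series: (timesteps, channels) -> tokens: (num_patches, patch_len * channels)."""
    timesteps, _channels = series.shape
    patches = [
        series[start:start + patch_len].reshape(-1)  # flatten one window into a token
        for start in range(0, timesteps - patch_len + 1, stride)
    ]
    return np.stack(patches)

sensors = np.random.randn(256, 8)  # e.g., 8 process sensors over 256 timesteps
tokens = patch_tokenize(sensors)
print(tokens.shape)  # (31, 128): a token sequence a transformer can attend over
```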

IBM Research, in collaboration with IBM Consulting, has adapted the transformer architecture for time series data and found promising results. Using this technology, we can model an entire industrial process, say a cement kiln, with just one foundation model. The foundation models are trained for a process domain and can capture the behavior of an entire asset and process class. For instance, a cement mill foundation model can capture the behavior of cement mills of several capacities. Therefore, every subsequent mill we deploy to needs only fine-tuning of the “Cement Mill Foundation Model” rather than a top-down training process. This cuts model training and deployment time by half, making it a viable technology for large-scale rollouts. We have observed that these foundation models are seven times as accurate as regression models. And to top it all off, we can capture process dynamics, as these models do multivariate forecasting with good accuracy.

Generative AI powered future of heavy industry


Generative AI technology is bound to transform industrial production to an unforeseen level. This is the solution to rein in industrial emissions and increase productivity with minimal CAPEX impact and positive EBITDA impact. IBM is engaging with several clients to bring this technology to the production floor and seeing up to a 5% increase in productivity and up to a 4% reduction in specific energy consumption and emissions. We form a joint innovation team with the client teams and together train and deploy these models for several use cases, ranging from supply chain, production, asset and quality optimization to planning optimization. We have started deploying this technology in a large steel plant in India, a cement plant in Latin America and a CPG manufacturing plant in North America.

Ultimately, it’s about people: the operators in the plant must embrace it, the process engineers should love it, and the plant management must value it. That can only be achieved with effective collaboration and change management, which we focus on throughout the engagement. Let’s partner on ushering in an era where we can grow our production capacities without compromising our sustainability ambitions and create a better, healthier world for generations to come.

Source: ibm.com

Saturday 21 October 2023

Empowering farmers across the digital divide in Malawi with OpenHarvest


The landlocked country of Malawi, located in southeastern Africa, is home to rich, arable land and a subtropical climate suitable for farming. As a result, over 80% of the population is employed in agriculture, and their livelihood revolves around alternating rainy and dry seasons that dictate how the year’s planting, growing and harvesting will unfold. But the once predictable seasons that smallholder farmers rely on are steadily shifting due to climate change.

When the rainy season arrives later than expected, many Malawian farmers still follow outdated agronomy practices that may lead them to plant too early or too late. Smallholder farmers lack access to hyperlocal weather forecasting and data that can help increase their crops’ chances of success, which jeopardizes the productivity and profitability of their season. Their challenges are compounded further by inherent and unavoidable farming risks, such as pests, contamination and natural disasters.

But with access to advanced technology, smart farming recommendations and specialized weather forecasts, farmers can build resilient and flexible operations that can help maximize their fields’ productive potential. That’s why IBM and global nonprofit Heifer International collaborated through the IBM Sustainability Accelerator to develop OpenHarvest—a digital tool to empower Malawi’s smallholder farmers through technology and a community ecosystem.

OpenHarvest sets out to close a digital divide


OpenHarvest is an open source platform with a mobile application that expands access to visual agricultural data, delivers specialized recommendations to farmers through AI and climate modeling, and enables better farm and field management.

The OpenHarvest model assigns each participating farmer’s field a set of latitude-longitude points that trigger comprehensive recommendations according to local weather and crop growth stages. Additionally, it monitors soil composition data (nitrogen, phosphorous and other nutrient levels) to identify how fertilizers should be applied.
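A minimal sketch of this kind of rule-based trigger is shown below. The field record, thresholds and recommendation text are hypothetical illustrations, not the actual OpenHarvest model.

```python
# Hypothetical sketch of a rule-based trigger of the kind described above.
# The field record, thresholds and recommendation text are illustrative
# placeholders, not the actual OpenHarvest model.

field = {
    "lat": -13.8, "lon": 32.9,  # a latitude-longitude point assigned to a field
    "growth_stage": "vegetative",
    "soil": {"nitrogen_ppm": 9.0, "phosphorus_ppm": 14.0},
    "forecast_rain_mm_next_7d": 4.0,  # hyperlocal forecast for this point
}

def recommend(field: dict) -> list[str]:
    recs = []
    if field["growth_stage"] == "vegetative" and field["soil"]["nitrogen_ppm"] < 10:
        recs.append("Apply nitrogen fertilizer before the next rainfall.")
    if field["forecast_rain_mm_next_7d"] < 5:
        recs.append("Little rain is forecast this week; delay top-dressing.")
    return recs

for message in recommend(field):
    print(message)  # in the pilot, recommendations reached farmers via SMS
```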

From the beginning, Heifer International and IBM sought to develop a low-cost tool that maximizes output. A serverless architecture was ideal to keep infrastructure costs to a minimum under a “pay-per-use” model. IBM Cloud Code Engine allowed IBM developers to reduce time to deployment and focus on core objectives for Heifer International and the farms at the heart of the project—namely, being cost-effective, scalable and reliable.

Historically, Malawian farmers have relied on generalized weather information transmitted via radio to make operational decisions. Most farmers do not own smartphones, so Heifer International and IBM had to find an information-sharing method that could transmit precise crop and soil management recommendations generated by the OpenHarvest model, while remaining accessible and affordable to the end user. The solution was an SMS text message.

IBM Consulting also brought their sustainability experience to the pilot deployment of the OpenHarvest solution, joining a project ecosystem that included Heifer International’s community facilitators, volunteers from a local university in Malawi and smallholder farmers. It was crucial to support farmers not only with smart technology, but with a network of hands-on experts to help build trust and implement solutions.

Creating a profitable future


Climate change is not the only risk that smallholder farmers encounter in Malawi. Though the economy relies on agriculture, farmers have limited access to affordable credit or competitive markets. The cycle of poverty and lack of access to capital have historically pushed farmers in Malawi to purchase cheaper supplies (like recycled seed) which can result in low yields and subpar crops. For this reason, access to affordable capital can be an essential component to promote environmentally resilient practices and drive behavioral change.

IBM and Heifer International saw an opportunity to incentivize farmers to adopt best agricultural practices through a digital extension solution, while simultaneously facilitating connections to access finance and the formal market. Ultimately, the OpenHarvest platform is differentiated by this structure, which encourages farmers to embrace digital technology and retain new farming practices. This leads to long-term profitability and success in a changing environment and economy.

Expanding deployment for greater impact


OpenHarvest has now reached 200 users in the district of Mchinji in western Malawi. The application’s impact translates to about 1,000 direct beneficiaries, as Malawi has an average family size of about 5 people. The pilot deployment has now concluded with the sale of the year’s crops. Compared to previous years, most farmers saw increased yields, with some participants even doubling or tripling their output for the season.

As a next step, Heifer International plans to onboard around 300 additional farmers and expand the project into Kasungu, a district in the central region of Malawi. Looking ahead, the program is also evaluating other innovations, such as building out robust AI models and AI integrations based on a roadmap developed with IBM.

IBM and Heifer International are proud to help to change lives in Malawi and build sustainable farming solutions alongside farmers and their communities.

Source: ibm.com

Thursday 19 October 2023

Watsonx Orders helps restaurant operators maximize revenue with AI-powered order taker for drive-thrus

We’re pleased to announce IBM watsonx Orders, an AI-powered voice agent for drive-thrus of quick-service restaurants. Powered by the latest technology from IBM Research, watsonx Orders is designed to help restaurant owners solve persistent labor challenges by handling almost all orders and interactions without the help of human cashiers, while delighting restaurant guests with quick service and accurate orders. 

Watsonx Orders joins IBM’s watsonx family of AI Assistants that help you easily deploy and scale conversational AI to maximize revenue and lower costs. It’s finely tuned to understand fast food menu items, customization options, limited time offers and promotions. Integrated into the corporate and restaurant-level points of sale, watsonx Orders can tell customers what’s available—and what’s not—at that moment. And it routes final orders to the cash register and screens and printers in the kitchen.

Across the industry, more than 70 percent of quick-service restaurant revenue comes through drive-thrus. Operators consider the drive-thru their most important sales channel by far, even more so than sales inside the restaurant. What’s more, the order-taker position in the drive-thru is one of the most difficult and stressful on the crew, which is why it’s also one of the hardest positions to fill for every shift. So when restaurant owners can’t fill the order-taker position, whether because of unfilled job openings or absenteeism, they face disastrous revenue drops for shifts or whole days.

That’s why we designed watsonx Orders to be considered a valuable crew member: accurate, polite, speedy and reliable 24/7.

How watsonx Orders works


Watsonx Orders is a combination of hardware and software installed in the restaurant. It understands the restaurant chain’s full menu and knows which items are available at that specific restaurant by season and time of day. It knows the nicknames of every item and how customers can customize them. And we continuously update the software to be aware of limited-time offers nationwide or locally.

When guests pull up to the outdoor menu board, watsonx Orders greets them. Customers can start the conversation by saying, “I’m picking up a mobile order,” or “I’m a rewards member.” Or they can start ordering: “I want a cheeseburger, chocolate shake, large fries, no onions.” Thanks to watsonx’s natural language processing capabilities, the agent asks the customer to specify the size of the shake and knows that “no onions” is a customization of the cheeseburger.

Unlike many AI voice agents, watsonx Orders doesn’t convert each spoken word into text for analysis. Instead, it converts audio into phonemes—the individual sounds that make up words, such as the “ha” in “hamburger”—then into orders, vastly speeding up speech recognition. We’ve also used generative AI techniques to think of the hundreds of millions of ways that humans can express the thought “I’d like to order a hamburger.” We use the latest in neural net technology to block noise from freeways, airports, railroads, stereos and cars in the other lane.
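As a toy illustration of the phoneme-matching idea (the actual watsonx Orders pipeline is proprietary and far more sophisticated), the sketch below maps a recognized phoneme sequence directly to a menu item without an intermediate text transcript. The phoneme spellings and the menu are simplified assumptions.

```python
# Toy illustration of phoneme matching; the actual watsonx Orders pipeline is
# proprietary and far more sophisticated. Phoneme spellings are simplified,
# and the menu is hypothetical.

MENU_PHONEMES = {
    ("HH", "AE", "M", "B", "ER", "G", "ER"): "hamburger",
    ("CH", "IY", "Z", "B", "ER", "G", "ER"): "cheeseburger",
    ("F", "R", "AY", "Z"): "fries",
}

def match_item(phonemes: tuple[str, ...]) -> str | None:
    """Scan the utterance for a menu item's phoneme sequence, skipping any
    intermediate text transcript."""
    for pattern, item in MENU_PHONEMES.items():
        for start in range(len(phonemes) - len(pattern) + 1):
            if phonemes[start:start + len(pattern)] == pattern:
                return item
    return None

utterance = ("AY", "W", "AA", "N", "T", "AH", "CH", "IY", "Z", "B", "ER", "G", "ER")
print(match_item(utterance))  # cheeseburger
```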

On the backend, we connect watsonx Orders to the restaurant’s technology stack. Once we’ve gathered the order ticket, we convert it into commands that the point of sale (cash register) understands, and we send the order to the kitchen staff through displays and printers.

IBM makes and installs all the hardware and software itself for a smooth delivery to the restaurant owner.

The benefits of watsonx Orders


In the field, watsonx Orders is as accurate as a human order taker. But it can work every shift, every day, and every holiday. As such, the restaurant owner gains the equivalent of more than eight hours of labor every day, freeing up the crew to focus on delivering even better customer service. 

Just as importantly, watsonx Orders takes orders quickly—it speaks in a cheerful, polite voice, knows how to keep the conversation short, and helps the customer toward a decision without being pushy. Restaurant owners have told us speed of service and car counts per hour are among the most important measures of operations. So our team is constantly experimenting to shave split-seconds off each order without sacrificing accuracy. 

The system also knows how to upsell, suggesting additional items that add revenue to each order.

Should something go wrong, or if the guest insists on speaking with a human, the AI agent can gracefully hand the conversation over without any need for customers to repeat themselves. 

Get started with watsonx Orders


IBM continues to drive innovation for watsonx Orders to achieve the highest level of accuracy and fast service at every restaurant. We are interested in working with the corporate offices of quick-service restaurant brands to understand their pain points and customize the product to meet their specific needs.

Source: ibm.com

Tuesday 17 October 2023

3 reasons why business and data analysts need to work with real-time events


In a research brief defining “5 trends for 2023,” the IBM Institute for Business Value reports, “In 2023, rapid response is the new baseline. Uncertainty is expected and complexity is compounding. As threats materialize on multiple fronts, organizations must reduce the time from insight to action.”

Business and data analysts are intimately familiar with the growing business need for precise, real-time intelligence. They are being increasingly challenged to improve efficiency and cost savings, embrace automation, and engage in data-driven decision making that helps their organization stand out from the competition. To meet these objectives, business and data professionals need to go beyond cookie-cutter business intelligence, data visualization dashboards and data analytics tools. They need to uncover the right digital signals and constantly be aware, so they can detect and respond to critical business scenarios in real-time.

Advantages of event-driven solutions


This is where event-driven solutions excel. Working with “business events” is essential for unlocking real-time insights that enable intelligent decision making and automated responses.

A business event can describe anything that happens that is significant to an enterprise’s operation. A business event is represented by a change in the state of the data flowing between your applications, systems and databases and, most importantly, the time at which it occurred.

Some simple examples of business events include:

  • Receiving a new product order
  • A decrease in the stock level of a best-selling item
  • A new customer support ticket
  • A surge in demand levels
  • Customer behavior in an e-commerce platform, e.g., placing an item in the checkout cart
  • A drop in the pricing of raw materials in your supply chain
  • A customer’s social media post about your company
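As a minimal sketch, one such event might travel between systems as a timestamped payload like the one below; the field names are illustrative assumptions, not a prescribed schema.

```python
import json
from datetime import datetime, timezone

# Minimal sketch of one such event as a timestamped payload. Field names are
# illustrative, not a prescribed schema; the essentials are the state change
# itself and, most importantly, when it occurred.

event = {
    "type": "stock.level.decreased",
    "occurred_at": datetime.now(timezone.utc).isoformat(),
    "payload": {"sku": "BEST-SELLER-42", "previous_level": 120, "new_level": 35},
}
print(json.dumps(event, indent=2))
```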

By working directly with streams of business events, users can define critical scenarios, detect them in real-time and respond intelligently. For example, events that describe changes in demand can help make better business decisions on inventory optimization. Events that represent customer satisfaction can drive more proactive responses, quickly turning negative customer experiences into positive ones. Or events identifying repeated component failures in a production line can enable predictive maintenance and earlier resolution of issues to help minimize the costs of rework and downtime.

These advantages are only possible if companies can quickly capture and connect the dots between events in time to impact outcomes and influence KPIs, proactively responding to new opportunities or emerging threats.

3 reasons to take advantage of event-driven solutions


1. They deliver accurate, real-time data you can depend on.

Businesses can’t afford to rely solely on historical data. Streams of business events provide users with a persistent, continuously updated record of their data, as it is being generated. Instead of waiting for data sets to accumulate over time or processing large amounts of data in batches, events provide the ability to perform data analysis in motion. This ensures that key events can be detected and acted upon before their usefulness expires.

IBM worked with Norsk Tipping to build a modern event-driven architecture that accelerated their data processing and delivered more responsive user services. By processing data in motion with IBM Event Automation, Norsk Tipping was able to analyze thousands of transactions from up to a million users at a rate 6x faster than before.

2. They are highly configurable, allowing you to define and detect scenarios unique to your business operations. 

By processing multiple streams of events from different data sources, users can define and detect both simple and complex business scenarios. Joining, aggregating, filtering and transforming streams of events allows users to effectively paint a picture—describing patterns of events that make up a time-sensitive threat or opportunity. IBM has created an intuitive, low-code tool designed to empower less technical stakeholders to work with business events and make better decisions, without having to write code or be an expert in SQL. This helps businesses quickly expand into new use cases and derive actionable insights that maximize revenue potential.
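The sketch below illustrates the filter-and-aggregate idea in plain Python, standing in for the no-code tooling described above; the events, fields and threshold are hypothetical.

```python
from collections import defaultdict

# Plain-Python stand-in for the filter/aggregate operations described above;
# the events, fields and threshold are hypothetical.

events = [
    {"type": "order.created", "region": "EU", "value": 120.0},
    {"type": "order.created", "region": "US", "value": 80.0},
    {"type": "support.ticket", "region": "EU", "severity": "high"},
    {"type": "order.created", "region": "EU", "value": 310.0},
]

# Filter: keep only new orders. Aggregate: total order value per region.
totals: dict[str, float] = defaultdict(float)
for event in (e for e in events if e["type"] == "order.created"):
    totals[event["region"]] += event["value"]

# A business "scenario" is then a predicate over the aggregate, for example
# a surge in EU demand past a threshold.
if totals["EU"] > 400:
    print("Detected: surge in EU demand, total =", totals["EU"])
```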


3. They go beyond insights, providing key triggers to drive intelligent automation.   

Once users define a critical business situation, they can generate a new stream of events that reports anytime a particular scenario is detected. While this intelligence is already helpful to streamline operations and fuel smarter decisions, organizations can go a step further by activating automation. This could mean triggering process workflows, making decisions based on business rules or prompting digital workers to get more time back for things that matter. Event-driven solutions allow businesses to build more intelligent automations that respond to new trends, customer issues or competitive threats in the moments that matter most.

IBM provides a comprehensive suite of automation capabilities that help you benefit and act on event-driven intelligence. With products such as IBM Business Automation Workflow, IBM Operational Decision Manager and IBM watsonx Orchestrate, there are a wealth of possibilities on your journey to becoming an autonomous and event-driven enterprise.

Source: ibm.com

Saturday 14 October 2023

Responding in real time to changing market dynamics


Keeping up with the volatility of the market is no easy task. When you feel like you understand the changes, you might wake up the next morning to encounter something drastically different. These shifting dynamics bring about unexpected disruptions—like changes in levels of demand—that impact your ability to manage your inventory effectively, satisfy your customer needs and, ultimately, your bottom line.

IDC’s 2022 Global Supply Chain Survey identified “lack of visibility and resiliency to see necessary changes in time to react effectively” as the most problematic and unaddressed deficiency in modern supply chain management and inventory optimization. Changes in customer needs and customer demand are happening as you read this, but a lack of visible data might prevent you from realizing these changes in time to act on them.

To stay on top of these frequent changes in demand, organizations must evaluate their business model and develop the ability to adapt in real time. Specifically, organizations that implement more continuously aware, dynamic and automated inventory management systems gain a significant competitive advantage and can increase market share.

Lack of visibility fuels siloed data


Lack of visibility into how changing market conditions interact with your inventory has a clear and distinct impact on your supply chain management. It prevents you from noticing changes in demand across multiple sales channels, and how well your current stock levels and locations are suited to satisfy them. Moreover, it can have a significant effect on your e-commerce models, hindering your ability to stay on top of supply changes and reflect them in your online inventory.

Supply chain teams often have visibility into global supply chain shifts, but the data is often siloed and inaccessible to other team members. Consider financial analysts and product managers who must keep up with the changes in the price of their raw materials to ensure overall profitability and optimal decision-making. Individuals in these roles often have delayed awareness of changes and have no easy way to evaluate the resulting impact on their margins, whether positive or negative. This creates a domino effect: if your product teams are unable to access this pricing information in real time, marketing strategy teams might then struggle to provide up-to-date messaging to potential customers.

Even if business teams could access real-time events, little to no training in coding becomes yet another wall to climb in their efforts to perform data analysis. New technologies are constantly emerging, but business team skill levels often lag behind the speed of these changes.

Benefit from events, no matter your role


Instead of waiting for supply chain teams to forward reports with operational data (which often takes too long), what if your financial analysts could directly gain insight into what is going on so they can make more immediate operational decisions?

Consider a situation where the price of a material used in production dropped after you closed negotiations with your supplier. You were unaware of this since the supply chain teams only provide this information in quarterly reports. If you had access to real-time pricing data, you could receive immediate notification of drops in prices and take earlier action. For example, you could renegotiate a supply contract to keep business inventory costs low.

With IBM Event Automation, an analyst can use real-time events to identify business situations in a user interface that doesn’t require any coding. This brings operational visibility to the forefront; even line of business teams without a technical background can detect when a business is overpaying for materials used in production. For example, you could build an event-driven flow that detects whenever a supplier’s real-time pricing drops 10% below the price paid.
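Expressed in code rather than the Event Automation interface, that detection rule might look like the following sketch; the contract price and the stream of pricing events are hypothetical.

```python
# Sketch of the detection rule described above, expressed in plain Python
# rather than the IBM Event Automation interface; the contract price and the
# stream of pricing events are hypothetical.

CONTRACT_PRICE = 50.00  # per-unit price agreed with the supplier

def is_overpaying(market_price: float, contract_price: float = CONTRACT_PRICE) -> bool:
    """True when the supplier's real-time price drops 10% below what we pay."""
    return market_price <= contract_price * 0.90

for market_price in [49.00, 46.50, 44.90]:  # real-time pricing events
    if is_overpaying(market_price):
        print(f"Alert: market price {market_price} is 10%+ below contract; renegotiate.")
```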

In addition, teams can perform cost/price analyses in real time, turning these changes into insights. Data that was previously siloed can now be used to optimize the organization’s pricing strategy, improve customer experience and customer satisfaction, and positively impact profit margins.

In an instant, these analysts can have a full picture of what is going on within the organization, something that would not have been possible without simple access to real-time data. It can provide important metrics and customer insights, paint a strong picture of the customer journey, and help analysts stay on top of changing market dynamics, just to name a few possibilities.

Act in the “now economy”


Knowledge is power, but it must be used to your advantage and faster than ever before. Our digitized economy requires speed: leveraging the right information at the right time. The inability to act fast might result in lost revenue, lost opportunities, and damaged customer relationships.

The now economy requires us to focus on what is going on with our businesses in the present moment. The present moment brings opportunities, which unfortunately often get lost in the sheer amount of data organizations generate. Unforeseen economic disruptions, shifting market trends and rapidly changing customer behavior all have the potential to drastically affect your business initiatives, unless you can stay on top of them. In addition, digital transformation initiatives have driven a proliferation of applications, creating data silos.

Our increasingly digitized world has skyrocketed customer expectations; our customers know what they want, and they want it now. To address these expectations, it’s important to put this information in the hands of those who need it, such as market research and data analytics teams, who can leverage data in the moment to build a truly customer-centric organization.

Put automation to work for your business


IBM Event Automation is designed to help organizations and stakeholders become continuously aware by making business events accessible directly to those who need to use them. It allows you to empower users from all areas of your business to identify and act on situations in the moment, helping them avoid getting lost in algorithms, heavy code or disparate data sources.

Source: ibm.com

Thursday 12 October 2023

IBM watsonx Assistant: Driving generative AI innovation with Conversational Search


Generative AI has taken the business world by storm. Organizations around the world are trying to understand the best way to harness these exciting new developments in AI while balancing the inherent risks of using these models in an enterprise context at scale. Whether it’s concerns over hallucination, traceability, training data, IP rights, skills or costs, enterprises must grapple with a wide variety of risks in putting these models into production. However, the promise of transforming customer and employee experiences with AI is too great to ignore, while the pressure to implement these models has become unrelenting.

Paving the way: Large language models


The current focus of generative AI has centered on large language models (LLMs). These language-based models are ushering in a new paradigm for discovering knowledge, both in how we access knowledge and how we interact with it. Traditionally, enterprises have relied on enterprise search engines to harness corporate and customer-facing knowledge to support customers and employees alike. These search engines rely on keywords and human feedback. Search played a key role in the initial rollout of chatbots in the enterprise by covering the “long tail” of questions that did not have a pre-defined path or answer. In fact, IBM watsonx Assistant has been successfully enabling this pattern for close to four years. Now, we are excited to take this pattern even further with large language models and generative AI.

Introducing Conversational Search for watsonx Assistant  


Today, we are excited to announce the beta release of Conversational Search in watsonx Assistant. Powered by our IBM Granite large language model and our enterprise search engine Watson Discovery, Conversational Search is designed to scale conversational answers grounded in business content so your AI Assistants can drive outcome-oriented interactions, and deliver faster, more accurate answers to your customers and employees.

Conversational Search is seamlessly integrated into our augmented conversation builder to enable customers and employees to automate answers and actions: from helping your customers understand credit card rewards and apply for a card, to offering your employees information about time-off policies and the ability to book their vacation time seamlessly.

Last month, IBM announced the general availability of Granite, IBM Research’s latest foundation model series designed to accelerate the adoption of generative AI into business applications and workflows with trust and transparency. Now, with this beta release, users can leverage a Granite LLM pre-trained on enterprise-specialized datasets and apply it in watsonx Assistant to quickly power compelling, comprehensive question-and-answer assistants. Conversational Search expands the range of user queries handled by your AI assistant, so you can spend less time training and more time delivering knowledge to those who need it.

Users of the Plus or Enterprise plans of watsonx Assistant can now request early access to Conversational Search. Contact your IBM Representative to get exclusive access to Conversational Search Beta or schedule a demo with one of our experts.

How does Conversational Search work behind the scenes?


When a user asks an assistant a question, watsonx Assistant first determines how to help the user – whether to trigger a prebuilt conversation, conversational search, or escalate to a human agent. This is done using our new transformer model, achieving higher accuracy with dramatically less training needed.

Once conversational search is triggered, it relies on two fundamental steps to succeed: the retrieval portion (how to find the most relevant information possible) and the generation portion (how best to structure that information to get the richest responses from the LLM). For both portions, IBM watsonx Assistant leverages the Retrieval Augmented Generation (RAG) framework, packaged as a no-code, out-of-the-box solution that reduces the need to feed and retrain the LLM. Users can simply upload the latest business documentation or policies, and the model will retrieve the information and return an updated response.
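For intuition, here is a minimal sketch of the retrieval-augmented generation pattern with stand-in functions; it is not the watsonx Assistant implementation, and the toy retriever simply ranks passages by word overlap rather than by meaning.

```python
# Illustrative sketch of the retrieval-augmented generation pattern, with
# stand-in functions; this is not the watsonx Assistant implementation.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query.
    (A semantic search engine would rank by meaning instead.)"""
    query_words = set(query.lower().split())
    ranked = sorted(documents, key=lambda d: -len(query_words & set(d.lower().split())))
    return ranked[:top_k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Ground the generation step: the LLM (e.g., a Granite model) is asked to
    answer only from the retrieved passages, which keeps the answer traceable."""
    return "Answer using only these passages:\n" + "\n".join(passages) + f"\n\nQ: {query}\nA:"

docs = [
    "Platinum Card holders earn 3 points per dollar on travel.",
    "Vacation requests must be submitted two weeks in advance.",
]
prompt = build_prompt("What rewards does the Platinum Card offer?",
                      retrieve("Platinum Card rewards", docs))
print(prompt)  # a real system would now send this prompt to the LLM
```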

For the retrieval portion, watsonx Assistant leverages search capabilities to retrieve relevant content from business documents. IBM watsonx Discovery enables semantic searches that understand context and meaning. And because these models understand language so well, business users can improve the quantity of topics and the quality of answers their AI assistant can cover with no training. Semantic search is available today on IBM Cloud Pak for Data and will be available as a configurable option for software and SaaS deployments in the upcoming months.

Once the retrieval is done and the search results have been ranked by relevance, the information is passed along to an LLM (in this case, IBM’s Granite model) to synthesize and generate a conversational answer grounded in that content. The answer is provided with traceability, so businesses and their users can see its source. The result: a trusted, contextual response based on your company’s content.

At IBM, we understand the importance of using AI responsibly, and we enable our clients to do the same with conversational search. Organizations can enable the functionality only when certain topics are recognized, or use conversational search as a general fallback for long-tail questions. Enterprises can adjust their preference for using search based on their corporate policies for generative AI. We also offer “trigger words” that automatically escalate to a human agent when certain topics are recognized, ensuring conversational search is not used for them.

Conversational Search in action


Let’s look at a real-life scenario and how watsonx Assistant leverages Conversational Search to help a customer of a bank apply for a credit card.

Let’s say a customer opens the bank’s assistant and asks what sort of welcome offer they would be eligible for if they apply for the Platinum Card. Watsonx Assistant leverages its transformer model to examine the user’s message and route it to a pre-built conversation flow that can handle this topic. The assistant can seamlessly and naturally extract the relevant information from the user’s messages to gather the necessary details, call the appropriate backend service and return the welcome offer details to the user.

Before the user applies, they have a couple of questions. They start by asking for some more details on what sort of rewards the card offers. Again, watsonx Assistant utilizes its transformer model, but this time it decides to route to Conversational Search because there are no suitable pre-built conversations. Conversational Search looks through the bank’s knowledge documents and answers the user’s question.

The user is now ready to apply but wants to make sure applying won’t affect their credit score. When they ask this question to the assistant, the assistant recognizes this as a special topic and escalates to a human agent. Watsonx Assistant can condense the conversation into a concise summary and send it to the human agent, who can quickly understand the user’s question and resolve it for them.

From there, the user is satisfied and applies for their new credit card.

Conversational AI that drives open innovation


IBM has been, and will continue to be, committed to an open strategy, offering deployment options to clients in the way that best suits their enterprise needs. IBM watsonx Assistant Conversational Search provides a flexible platform that can deliver accurate answers across different channels and touchpoints by bringing together enterprise search capabilities and IBM base LLM models built on watsonx. Today, we offer this Conversational Search beta on IBM Cloud, as well as a self-managed Cloud Pak for Data deployment option for semantic search with watsonx Discovery. In the coming months, we will offer semantic search as a configurable option for Conversational Search for both software and SaaS deployments—ensuring enterprises can run and deploy where they want.

For greater flexibility in model-building, organizations can also bring their proprietary data to IBM LLM models and customize these using watsonx.ai or leverage third-party models like Meta’s Llama and others from the Hugging Face community for use with conversational search or other use cases.

Source: ibm.com

Tuesday 10 October 2023

Fertility care provider Ovum Health gives patients information using chat and scheduling tools with IBM watsonx Assistant


As a healthcare activist, a mom to a fertility preservation miracle, a business owner and a cancer survivor, Alice Crisci has dedicated her life to ending the spread of health misinformation. She founded MedAnswers and its telemedicine spinout, Ovum Health, with the hopes of providing increased access to family-building solutions like pre-pregnancy, prenatal and postnatal healthcare.

Ovum Health’s platform provides medical software, clinical decision support, advanced lab testing and analytics to deliver a personalized approach to a healthy pregnancy. With the generative AI boom in full effect, Crisci decided it was time to enlist a partner that could scale Ovum’s web and mobile app-based chat and scheduling solution with AI.

Embedding IBM AI capabilities to drive the digital front door to Ovum’s network


After evaluating many products and partners, Ovum turned to the IBM Ecosystem and IBM watsonx Assistant to build a transformative digital service wherever patients might be. IBM helped Ovum embed watsonx Assistant, in the form of the FertilityAnswers Bot, in its website and mobile application.

The bot addresses personal and private fertility questions vetted by a panel of board-certified healthcare experts engaged by Ovum. The solution also features a scheduling component that allows patients to schedule medical appointments in available states.

With over 67,000 registered FertilityAnswers iOS and Android users on Ovum Health’s free question-and-answer platform, the bot can interact with patients at scale while maintaining the care and empathy of a healthcare professional. Since 2017, the FertilityAnswers network of more than 400 medical professionals has been providing responses to anonymous user questions, but the company was concerned about scaling its volunteer-based system.

Through the collaboration with IBM, Ovum Health’s FertilityAnswers App in iOS has been updated with the IBM-powered bot now driving the first question-and-answer interaction for users. The “Ask a Question” button in the app, newly integrated with IBM watsonx Assistant capabilities, includes the first 150 questions and answers from Ovum Health’s content library of 18,000 user-generated questions with clinically validated responses provided by multidisciplinary reproductive experts engaged by Ovum.

The FertilityAnswers Bot is a powerful tool that helps people with fertility concerns get the information they need to make decisions about their healthcare. It is a confidential and convenient way to get clinically validated responses to questions about fertility, and it can also connect people with the right medical care providers in the Ovum Health network, which accepts both Medicaid and commercial insurance in certain states.

A no-code solution with long-term technical benefits


When IBM works with partners like Ovum, it helps stakeholders identify the return on investment (ROI) of embedding IBM’s AI technology into their products and services, and then helps those partners take that technology to market. IBM assisted Ovum Health in the creation of a no-code platform for an AI assistant that leverages natural language models—which became the FertilityAnswers Bot.

Using a no-code platform as the foundation of the FertilityAnswers Bot allowed Ovum to fully integrate watsonx Assistant into its web interface and iOS app in less than two months. The company now expects to scale its user base in a way that was not possible before. Ovum also expects to drive appointment setting for its telemedicine clinics.

Ovum continues to work with IBM to develop new AI-powered solutions. The IBM Build Fund provides financial support to Ovum as well as other early-stage startups that are developing innovative AI solutions.

Source: ibm.com

Saturday 7 October 2023

Seven key insights on GraphQL trends


GraphQL has emerged as a key technology in the API space, with a growing number of organizations adopting this new API structure into their ecosystems. GraphQL is often seen as an alternative to REST APIs, which have been around for a long time. Compared to REST APIs (or other traditional API specifications), GraphQL provides more flexibility to API consumers (like app developers) and delivers many benefits, along with a few new challenges to API development and delivery.

I recently attended GraphQLConf 2023, the GraphQL conference in San Francisco where GraphQL experts and users from all over the world came together to discuss the future of the technology. This very first GraphQLConf was organized by the GraphQL Foundation, of which IBM is a proud sponsor. Based on learnings from the event, I will highlight seven key insights on GraphQL trends for the coming years.

1. GraphQL at scale


GraphQL adoption among enterprises has been growing rapidly. A report from Gartner predicted that by 2025, more than 50% of enterprises will use GraphQL in production, up from less than 10% in 2021. At GraphQLConf, it became clear that the technology is well on its way to fulfilling this prediction. The conference included speakers and attendees from companies like Pinterest, AWS, Meta, Salesforce, Netflix, Coinbase and Atlassian.

2. API management for GraphQL


Similar to other API specifications, GraphQL should be paired with API management software to get the most benefits. GraphQL is often implemented as a gateway or middleware in front of different data sources, which means that API performance and security depend on these downstream sources. To optimize GraphQL API performance, you should use query cost analysis to implement rate limiting based on the connected data sources. Presentations at GraphQLConf discussed how observability and rate limiting play important roles in API management for GraphQL.
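
As a hedged illustration of what query cost analysis can look like, the sketch below uses the graphql-js parser to measure how deeply a query nests fields and rejects queries over a budget. The limit and the query are hypothetical; production gateways typically use richer cost models (type weights, list-size multipliers, per-source limits).

import { parse, visit } from "graphql";

// Walk the query's AST and record the deepest field nesting.
function queryDepth(source: string): number {
  let maxDepth = 0;
  let current = 0;
  visit(parse(source), {
    Field: {
      enter() {
        current += 1;
        maxDepth = Math.max(maxDepth, current);
      },
      leave() {
        current -= 1;
      },
    },
  });
  return maxDepth;
}

const MAX_DEPTH = 5; // hypothetical budget tuned to downstream sources

const query = `{
  user(id: "42") {
    orders { items { product { reviews { author { name } } } } }
  }
}`;

if (queryDepth(query) > MAX_DEPTH) {
  console.log("Rejected: query exceeds the allowed depth budget");
}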

3. GraphQL security


Security for GraphQL APIs is becoming even more critical now that enterprises have started running GraphQL at scale. As the structure of GraphQL is different from other API specifications, it has its own needs in terms of security. During the conference, GraphQL-specific vulnerabilities like complexity issues and schema leaks were highlighted. Of course, security threats that apply to standard API specifications—such as injections and server errors—also apply to GraphQL APIs and can often be mitigated by API management solutions.
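
As one concrete, hedged example of mitigating schema leaks, graphql-js ships a validation rule that blocks introspection queries; the sketch below applies it to a hypothetical probe. This is only one layer of defense, and complexity limits and API management policies still apply on top of it.

import {
  buildSchema,
  parse,
  validate,
  NoSchemaIntrospectionCustomRule,
} from "graphql";

const schema = buildSchema(`type Query { ping: String }`);

// A typical introspection probe that would reveal the full schema.
const probe = parse(`{ __schema { types { name } } }`);

// Validate with the built-in rule that forbids introspection fields.
const errors = validate(schema, probe, [NoSchemaIntrospectionCustomRule]);
if (errors.length > 0) {
  console.log("Blocked:", errors[0].message);
}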

4. Declarative, SDL-first GraphQL API development


There are two distinct approaches to building GraphQL APIs: “code-first” and “schema-first.” At the core of every GraphQL API is a schema that serves as its type system.

  • In a “code-first” approach, the schema is generated from the business logic implemented in the framework that’s used to build the GraphQL API.
  • In a “schema-first” approach, you start by defining the schema and then map it to your business logic separately.

A newly emerging approach is called “SDL-first” (Schema Definition Language first), where instead of separating the schema and business logic, you define both directly inside the GraphQL schema. I discussed this declarative, SDL-first approach in my talk at GraphQLConf.
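
To make the contrast concrete, here is a minimal sketch of the two classic approaches using graphql-js; the field names are hypothetical. SDL-first tooling goes a step further by attaching the logic (for example, through declarative directives) inside the SDL itself.

import {
  buildSchema,
  GraphQLObjectType,
  GraphQLSchema,
  GraphQLString,
} from "graphql";

// Schema-first: write the SDL, then wire business logic to it separately.
const schemaFirst = buildSchema(`
  type Query { greeting: String }
`);
const rootValue = { greeting: () => "hello" };

// Code-first: the schema is generated from code, with resolvers inline.
const codeFirst = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: "Query",
    fields: {
      greeting: { type: GraphQLString, resolve: () => "hello" },
    },
  }),
});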

5. Incremental delivery of streaming data


Streaming data in GraphQL has long been neglected, but it is becoming more relevant with the increased adoption of GraphQL at scale. Real-time data in GraphQL is implemented through an operation type called “Subscription,” but streaming data has different needs. To support it, two new built-in directives, “@defer” and “@stream,” will be added to the GraphQL specification. With these directives, GraphQL will be able to handle more complex situations where incremental delivery of data is needed, which is expected to make GraphQL more compatible with asynchronous and event-driven data sources.
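
The sketch below shows what the proposed syntax looks like in an operation; the field names are hypothetical, and server support is still rolling out across implementations.

// @defer lets the server send cheap fields first and expensive fragments
// later; @stream delivers list items incrementally as they become ready.
const query = `
  query ProductPage {
    product(id: "42") {
      name                                # arrives in the first payload
      ... @defer {
        reviewsSummary                    # expensive field, delivered later
      }
      reviews @stream(initialCount: 2) {  # first 2 items, then increments
        body
      }
    }
  }
`;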

6. Open specification for GraphQL federation


GraphQL federation brings together multiple GraphQL APIs so that consumers can access all their data through a single API. This improves the usability and discoverability of services across the organization. Often, federation requires every downstream service to be a GraphQL API, although some GraphQL solutions allow any data source to be federated into a single GraphQL API. So far, GraphQL federation has depended on vendor-specific requirements, which has led to many different implementations.

At GraphQLConf it was announced that IBM has joined efforts with other leading companies in the API space to develop an open specification for GraphQL federation under the GraphQL Foundation.
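
As a hedged illustration, the sketch below uses the Apollo-style federation dialect, one of the vendor-specific implementations such an open specification would unify; the types and fields are hypothetical. Each team owns a subgraph, and a gateway composes them into one supergraph.

// Subgraph owned by the users team.
const usersSubgraph = `
  type User @key(fields: "id") {
    id: ID!
    name: String!
  }
`;

// Subgraph owned by the orders team; it contributes order data to User.
const ordersSubgraph = `
  type Order {
    id: ID!
    total: Float!
  }
  type User @key(fields: "id") {
    id: ID!
    orders: [Order!]!
  }
`;

// After composition, a consumer queries the supergraph as one API:
// { user(id: "42") { name orders { total } } }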

7. GraphQL and AI


As artificial intelligence (AI) transforms how developers write and interact with code, it presents challenges and opportunities for GraphQL, too. For example, how will developers build GraphQL APIs in a world dominated by AI? How can AI help find and prevent security vulnerabilities for GraphQL?

Both at GraphQLConf and IBM TechXchange, IBM Fellow and CTO Anant Jhingran presented on the role GraphQL plays in AI and API integration. His keynote from IBM TechXchange shows what the combination of GraphQL and AI looks like.

Source: ibm.com

Thursday 5 October 2023

IBM and ESPN use AI models built with watsonx to transform fantasy football data into insight


If you play fantasy football, you are no stranger to data-driven decision-making. Every week during football season, an estimated 60 million Americans pore over player statistics, point projections and trade proposals, looking for those elusive insights to guide their roster decisions and lead them to victory. But numbers only tell half the story.

For the past seven years, ESPN has worked closely with IBM to help tell the whole tale. And this year, ESPN Fantasy Football is using AI models built with watsonx to provide 11 million fantasy managers with a data-rich, AI-infused experience that transcends traditional statistics.

In fantasy football, success hinges on decisions fueled by information and insights. Each football season, millions of articles, blog posts, podcasts and videos are produced by the media, offering expert analysis on everything from player performance to injury reports. Every week, dedicated fans analyze player statistics, projections and trade options, all in pursuit of that elusive edge. However, the challenge lies in harnessing the wealth of “unstructured” data that permeates the sports media landscape. For decades, this treasure trove of expertise went largely untapped by fantasy footballers, who could only consume a tiny fraction of this precious content. Not anymore.

To identify and distill the insights locked inside this sea of data, ESPN and IBM tapped into the power of watsonx—IBM’s new AI and data platform for business—to build AI models that understand the language of football. The models are expected to produce more than 48 billion insights for fantasy managers this year—everything from recommending mutually beneficial trade opportunities to identifying waiver wire players that are best suited to meet a team’s specific needs.

Serious business


Fantasy sports are more than fun and games; they’re also a $9 billion industry. And for ESPN, fantasy football is a critical driver of digital engagement. To keep its experience fresh and competitive, ESPN needs to introduce new features and enhancements that drive customer satisfaction and new membership.

“We want ESPN to be the destination for all fans playing Fantasy Football, whether it’s their first time or they’ve been managing a league for 20 years,” says Chris Jason, Executive Director, Product Management at ESPN. “To meet that bar, we have to continuously improve the game and find ways to enhance the experience with new innovations.”

To help, ESPN partnered with IBM Consulting using the IBM Garage methodology to better understand the kinds of data-driven insights fantasy players want. Together, they created unique Player Insights now integrated into the ESPN fantasy football app: Waiver Grades and Trade Grades.

Waiver Grades and Trade Grades use containerized applications—software packages that include everything needed to run the application—built on Red Hat OpenShift, a platform for managing and orchestrating containerized applications. These applications are all hosted on the IBM Cloud to ensure uninterrupted availability.

Encouraging trades and transactions


These new features are powered by AI models built with watsonx, and are designed to provide users with more information to help them make the best roster decisions possible.

Using neural networks and advanced natural language processing, Waiver Grades give a personalized rating for the value a player would add to your team. This nuanced algorithm delves deep into your roster to make realistic projections based on your team’s fluctuating strengths and weaknesses. For example, if you have an excellent quarterback on a bye week, your waiver grade might not be as high for a replacement quarterback—given you already have a strong one.

“An active league is a fun league,” says Jason. “So, we want to encourage roster moves and trading between teams. These features help us do just that.”

Trade Grades are another new feature that helps managers assess the value of potential trades. When managers initiate transactions with each other, the AI models serve up trade insights, featuring a grade for each athlete involved in the trade and a grade for the trade’s overall value. With one look, managers can tell if their trade is a good deal. Once the managers have these insights, they can move ahead with the trade, cancel it or edit the trade package.

Managers can also use the AI models to analyze structured and unstructured data to compare players, estimate the potential upside and downside of starting a particular player, and assess the impact of an injury. These “boom-and-bust” analyses allow fantasy owners to see risk-and-reward scenarios and trends over time, and to field a more competitive team.

“Because we’re incorporating insight from media experts, it presents a more comprehensive analysis of a player’s potential on any given week,” says Aaron Baughman, Distinguished Engineer and Master Inventor with IBM Consulting.

The AI models built with watsonx ingest and analyze millions of news stories, opinion pieces by fantasy experts and reports on player injuries. The resulting insights are correlated with traditional statistical data on more than 1,900 players across all 32 teams to help fantasy managers decide who to start each week.

A dynamic league with personalized insights


Throughout this seven-year partnership, IBM’s AI models have produced hundreds of billions of AI-generated insights for ESPN’s fantasy football platform. Waiver Grades, Trade Grades, and Player Insights with Watson use AI to spawn fresh insights from available data, breathing new life into the user experience and encouraging better decisions by fantasy managers. But they also make ESPN Fantasy Football more fun and engaging. And the partnership with ESPN allows IBM to demonstrate AI’s ability to transform massive quantities of data into meaningful insights, something business leaders seek in every industry.

Source: ibm.com

Tuesday 3 October 2023

What can AI and generative AI do for governments?


Few technologies have taken the world by storm the way artificial intelligence (AI) has over the past few years. AI and its many use cases have become a topic of public discussion no longer relegated to tech experts. AI—generative AI, in particular—has tremendous potential to transform society as we know it for good, boost productivity and unlock trillions in economic value in the coming years.

AI’s value is not limited to advances in industry and consumer products alone. When implemented in a responsible way—where the technology is fully governed, privacy is protected and decision making is transparent and explainable—AI has the power to usher in a new era of government services. Such services can empower citizens and help restore trust in public entities by improving workforce efficiency and reducing operational costs in the public sector. On the backend, AI likewise has the potential to supercharge digital modernization by, for example, automating the migration of legacy software to more flexible cloud-based applications, or accelerating mainframe application modernization.

Despite the many potential advantages, many government agencies are still grappling with how to implement AI, and generative AI in particular. In many cases, government agencies around the globe face a choice. They can either embrace AI and its advantages, tapping into the technology’s potential to help improve the lives of the citizens they serve. Or they can stay on the sidelines and risk missing out on AI’s ability to help agencies more effectively meet their objectives.

Government agencies that were early to adopt solutions leveraging AI and automation offer concrete insights into the technology’s public sector benefits—whether modernizing tax return processing at the US Internal Revenue Service (IRS) or using automation to greatly improve the efficiency of the U.S. Agency for International Development’s Global Health Supply Chain Program. Other successful AI deployments reach citizens directly, including virtual assistants like the one created by the Ukrainian Embassy in the Czech Republic to provide information to Ukrainian citizens. The new wave of AI, with foundation models provided by generative AI, could represent the next major opportunity to put AI to work for governments.

Three main areas of focus 


Getting there, however, requires government agencies to focus on the areas where AI use cases can most benefit the agencies themselves and the citizens they serve. In our view, there are three main areas.

The first is workforce transformation, or digital labor. At all levels of government, from national entities to local agencies, public employees must be ready for this new AI era. While that can mean hiring new talent like data scientists and software programmers, it should also mean providing existing workers with the training they need to manage AI-related projects. With this can come improved productivity, as technologies such as natural language processing (NLP) hold the promise of relieving workers of heavy text reading and analysis. The goal is to free up time for public employees to engage in high-value meetings, creative thinking and meaningful work.

The second major focus must be citizen support. For AI to truly benefit society, the public sector needs to prioritize use cases that directly benefit citizens. There is potential for a variety of uses in the future—whether providing information in real time, personalizing services based on a citizen’s particular needs, or hastening processes that have a reputation for being slow. For example, anyone who has ever had to file paperwork or a claim knows the feeling all too well: sitting in an office for hours, waiting while employees click through endless screens, hunting and pecking for information stored in different databases. What if AI’s ability to access, organize and leverage data could create new possibilities for improving government offerings, even those already available online, by unlocking data across agencies to deliver information and services more intuitively and proactively?

Third, AI is also becoming a crucial component of the public sector’s digital transformation efforts. Governments are regularly held back from true transformation by legacy systems with tightly coupled workflow rules that require substantial effort and significant cost to modernize. For example, public sector agencies can make better use of data by migrating certain technology systems to the cloud and infusing them with AI. AI-powered tools hold the potential to help with pattern detection in large stores of data and can even help write computer programs. This could support cost optimization and strengthen cybersecurity, as AI can help detect threats quickly. In this way, instead of searching for hard-to-find skills, agencies can reduce their skills gap and tap into evolving talent.

Commitment to responsible AI 


Last but not least, in IBM’s view, no discussion of responsible AI in the public sector is complete without emphasizing the importance of the ethical use of the technology throughout its lifecycle of design, development, use and maintenance—something IBM has promoted in the industry for years. Along with healthcare organizations and financial services entities, government and public sector entities must strive to be seen as the most trusted institutions. That means humans should remain at the heart of the services delivered by government, while agencies monitor for responsible deployment by relying on the five fundamental properties of trustworthy AI: explainability, fairness, transparency, robustness and privacy.

  • Explainability: An AI system’s ability to provide a human-interpretable explanation for its predictions and insights to the public in a way that does not hide behind technical jargon.
  • Fairness: An AI system’s ability to treat individuals or groups equitably, depending on the context in which the AI system is used, countering biases and addressing discrimination related to protected characteristics, such as gender, race, age, and veteran status.
  • Transparency: An AI system’s ability to include and share information on how it has been designed and developed and what data from which sources have fed the system.
  • Robustness: An AI system’s ability to effectively handle exceptional conditions, such as abnormalities in input, to ensure consistent outputs.
  • Privacy: An AI system’s ability to prioritize and safeguard consumers’ privacy and data rights, and to comply with existing regulations on data collection, storage, access and disclosure.

As long as AI is implemented in a way that reflects all the traits mentioned above, it can help governments and citizens alike in new ways. Perhaps the biggest benefit of AI and foundation models is their range: they can extend to even the smallest of agencies. They can be used in state and local government projects, such as models that improve how employees and citizens search databases to learn more about policies or government-issued benefits. By staying informed, responsible and well-equipped on AI, the public sector can help shape a brighter and better future for all.

IBM is committed to unleashing the transformative potential of foundation models and generative AI to help address high-stakes challenges. We provide open, targeted, value-creating AI solutions for businesses and public sector institutions. IBM watsonx, our integrated AI and data platform, embodies these principles, offering a seamless, efficient and responsible approach to AI deployment across a variety of environments. IBM stands ready to empower governmental organizations in the age of AI. Let’s embrace the age of AI value creation together.

Source: ibm.com