Showing posts with label Artificial Intelligence. Show all posts
Showing posts with label Artificial Intelligence. Show all posts

Friday, 11 October 2024

How a solid generative AI strategy can improve telecom network operations

How a solid generative AI strategy can improve telecom network operations

Generative AI (gen AI) has transformed industries with applications such as document-based Q&A with reasoning, customer service chatbots and summarization tasks. These use cases have demonstrated the impressive capabilities of large language models (LLMs) in understanding and generating human-like responses, particularly in fields requiring nuanced language understanding and inferencing.

However, in the realm of telecom network operations, the data is different. The observability data comes from proprietary sources and encompasses a wide variety of formats, including alarms, performance metrics, probes and ticketing systems capturing incidents, defects and changes. This data, whether structured or unstructured, is deeply embedded in a domain-specific language. This includes terms and concepts from technologies like 5G, IP-MPLS and other network protocols.

A notable challenge arises from the fact that standard foundational LLMs are not typically trained on this highly specialized and technical data. This needs a careful strategy for integrating gen AI into the telecom operations domain, where operational efficiencies and accuracy are paramount.

Successfully using gen AI for network operations requires tailoring the models to this niche context while addressing unique challenges around data specificity and system integration.

How generative AI addresses network operations challenges

The complexity and diversity of network data, along with rapidly changing technologies, presents several challenges for network operations. Gen AI offers efficient solutions where traditional methods are costly or impractical.

  • Time-consuming processes: Switching between multiple systems (such as alarms, performance or traces) delays problem resolution. Generative AI centralizes data into one interface providing natural language experience, speeding up issue resolution by reducing system toggling.
  • Data fragmentation: Scattered data across platforms prevents a cohesive view of issues. Generative AI consolidates data from various sources based on the training. It can correlate and present data in a unified view, enhancing issue comprehension.
  • Complex interfaces: Engineers spend extra time adapting to various system interfaces (such as UIs, scripts and reports). Generative AI provides a natural language interface, simplifying navigation across complex systems.
  • Human error: Manual data consolidation leads to misdiagnoses due to data fragmentation challenges. AI-driven data analysis reduces errors, helping ensure accurate diagnosis and resolution.
  • Inconsistent data formats: Varying data formats make analysis difficult. Gen AI model training can provide standardized data output, improving correlation and troubleshooting.

Challenges in applying generative AI in network operations

While gen AI offers transformative potential in network operations, several challenges must be addressed to help ensure effective implementation:

  • Relevance and contextual precision: General-purpose language models perform well in nontechnical contexts, but in network-specific use cases, models need to be fine-tuned with domain-specific terminology to deliver relevant and precise results.
  • AI guardrails and hallucinations: In network operations, outputs must be grounded in technical accuracy, not just linguistic sense. Strong AI guardrails are essential to prevent incorrect or misleading results.
  • Chain-of-thought (CoT) loops: Network use cases often involve multistep reasoning across multiple data sources. Without proper control, AI agents can enter endless loops, leading to inefficiencies due to incomplete or misunderstood data.
  • Explainability and transparency: In critical network operations, engineers must understand how AI-derived decisions are made. AI systems must provide clear and transparent reasoning to build trust and help ensure effective troubleshooting, avoiding “black box” situations.
  • Continuous model enhancements: Constant feedback from technical experts is crucial for model improvement. This feedback loop should be integrated into model training to keep pace with the evolving network environment.

Implementing a workable strategy to maximize business benefits

Key design principles can help ensure the successful implementation of gen AI in network operations. These include:  

  • Multilayer agent architecture: A supervisor/worker model offers modularity, making it easier to integrate legacy network interfaces while supporting scalability.
  • Intelligent data retrieval: Using Reflective Retrieval-Augmented Generation (RAG) with hallucination safeguards helps ensure reliable, relevant data processing.
  • Directed chain of thought: This pattern helps guide AI reasoning to deliver predictable outcomes and avoid deadlocks in decision-making.
  • Transactional-level traceability: Every AI decision should be auditable, ensuring accountability and transparency at a granular level.
  • Standardized tooling: Seamless integration with various enterprise data sources is crucial for broad network compatibility.
  • Exit prompt tuning: Continuous model improvement is enabled through prompt tuning, ensuring that it adapts and evolves based on operational feedback.

How a solid generative AI strategy can improve telecom network operations

Implementing a gen AI strategy in network operations can lead to significant performance improvements, including:

  • Faster mean time to repair (MTTR): Achieve a 30-40% reduction in MTTR, resulting in enhanced network uptime.
  • Reduced average handle time (AHT): Decrease the time network operations center (NOC) technicians expenditure addressing field technician queries by 30-40%.
  • Lower escalation rates: Reduce the percentage of tickets escalated to L3/L4 by 20-30%.

Beyond these KPIs, gen AI can enhance the overall quality and efficiency of network operations, benefiting both staff and processes.

IBM Consulting, as part of its telecommunications solution offerings, provides reference implementation of the above strategy, helping our clients in applying gen AI-based solutions successfully in their network operations.

Source: ibm.com

Tuesday, 8 October 2024

New IBM study: How business leaders can harness the power of gen AI to drive sustainable IT transformation

New IBM study: How business leaders can harness the power of gen AI to drive sustainable IT transformation

As organizations strive to balance productivity, innovation and environmental responsibility, the need for sustainable IT practices is even more pressing. A new global study from the IBM Institute for Business Value reveals that emerging technologies, particularly generative AI, can play a pivotal role in advancing sustainable IT initiatives. However, successful transformation of IT systems demands a strategic and enterprise-wide approach to sustainability.

The power of generative AI in sustainable IT

Generative AI is creating new opportunities to transform IT operations and make them more sustainable. Teams can use this technology to quickly translate code into more energy-efficient languages, develop more sustainable algorithms and software and analyze code performance to optimize energy consumption. 27% of organizations surveyed are already applying generative AI in their sustainable IT initiatives, and 63% of respondents plan to follow suit by the end of 2024. By 2027, 89% are expecting to be using generative AI in their efforts to reduce the environmental impact of IT.

Despite the growing interest in using generative AI for sustainability initiatives, leaders must first consider its broader implications, particularly energy consumption.

64% say they are using generative AI and large language models, yet only one-third of those report having made significant progress in addressing its environmental impact. To bridge this gap, executives must take a thoughtful and intentional approach to generative AI, asking questions like, “What do we need to achieve?” and “What is the smallest model that we can use to get there?”

A holistic approach to sustainability

To have a lasting impact, sustainability must be woven into the very fabric of an organization, breaking free from traditional silos and incorporating it into every aspect of operations. Leading organizations are already embracing this approach, integrating sustainable practices across their entire operations, from data centers to supply chains, to networks and products. This enables operational efficiency by optimizing resource allocation and utilization, maximizing output and minimizing waste.

The results are telling: 98% of surveyed organizations that take a holistic, enterprise-wide approach to sustainable IT report seeing benefits in operational efficiency—compared to 50% that do not. The leading organizations also attribute greater reductions in energy usage and costs to their efforts. Moreover, they report impressive environmental benefits, with two times greater reduction in their IT carbon footprint.

Hybrid cloud and automation: key enablers of sustainable IT

Many organizations are turning to hybrid cloud and automation technologies to help reduce their environmental footprint and improve business performance. By providing visibility into data, workloads and applications across multiple clouds and systems, a hybrid cloud platform enables leaders to make data-driven decisions. This allows them to determine where to run their workloads, thereby reducing energy consumption and minimizing their environmental impact.

In fact, one quarter (25%) of surveyed organizations are already using hybrid cloud solutions to boost their sustainability and energy efficiency. Nearly half (46%) of those report a substantial positive impact on their overall IT sustainability. Automation is also playing a key role in this shift. With 83% of leading organizations harnessing its power to dynamically adjust IT environments based on demand.

Sustainable IT strategies for a better tomorrow

The future of innovation is inextricably linked to a deep commitment to sustainability. As business leaders harness the power of technology to drive impact, responsible decision-making is crucial, particularly in the face of emerging technologies such as generative AI. To better navigate this intersection of IT and sustainability, here are a few actions to consider: 

1. Actively manage the energy consumption associated with AI: Optimize the value of generative AI while minimizing its environmental footprint by actively managing energy consumption from development to deployment. For example, choose AI models that are designed for speed and energy efficiency to process information more effectively while reducing the computational power required.

2. Identify your environmental impact drivers: Understand how different elements of your IT estate influence environmental impacts and how this can change as you scale new IT efforts.

3. Embrace sustainable-by-design principles: Embed sustainability assessments into the design and planning stages of every IT project, by using a hybrid cloud platform to centralize control and gain better visibility across your entire IT estate.

Source: ibm.com

Saturday, 14 September 2024

How Data Cloud and Einstein 1 unlock AI-driven results

Cloud Architecture, Artificial Intelligence, Data Architecture

Einstein 1 is going to be a major focus at Dreamforce 2024, and we’ve already seen a tremendous amount of hype and development around the artificial intelligence capabilities it provides. We have also seen a commensurate focus on Data Cloud as the tool that brings data from multiple sources to make this AI wizardry possible. But how exactly do the two work together? Is Data Cloud needed to enable Einstein 1? Why is there such a focus on data, anyway?

Data Cloud as the foundation for data unification


As a leader in the IBM Data Technology & Transformation practice, I’ve seen firsthand that businesses need a solid data foundation. Clean, comprehensive data is necessary to optimize the execution and reporting of their business strategy. Over the past few years, Salesforce has made heavy investments in Data Cloud. As a result, we’ve seen it move from a mid-tier customer Data Platform to the Leader position in the 2023 Gartner® Magic Quadrant™. Finally, we can say definitively that Cloudera Data Platform (CDP) is the most robust foundation as a comprehensive data solution inside the Salesforce ecosystem.

Data Cloud works to unlock trapped data by ingesting and unifying data from across the business. With over 200 native connectors—including AWS, Snowflake and IBM® Db2®—the data can be brought in and tied to the Salesforce data model. This makes it available for use in marketing campaigns, Customer 360 profiles, analytics, and advanced AI capabilities.

Simply put, the better your data, the more you can do with it. This requires a thorough analysis of the data before ingestion in Data Cloud: Do you have the data points you need for personalization? Are the different data sources using the same formats that you need for advanced analytics? Do you have enough data to train the AI models?

Remember that once the data is ingested and mapped in Data Cloud, your teams will still need to know how to use it correctly. This might mean to work with a partner in a “two in a box” structure to rapidly learn and apply those takeaways. However, it requires substantial training, change management and willingness to adopt the new tools. Documentation like a “Data Dictionary for Marketers” is indispensable so teams fully understand the data points they are using in their campaigns.

Einstein 1 Studio provides enhanced AI tools


Once you have Data Cloud up and running, you are able to use Salesforce’s most powerful and forward-thinking AI tools in Einstein 1 Studio.

Einstein 1 Studio is Salesforce’s low-code platform to embed AI across its product suite, and this studio is only available within Data Cloud. Salesforce is investing heavily in its Einstein 1 Studio roadmap, and the functions continues to improve through regular releases. As of this writing in early September 2024, Einstein 1 Studio consists of three components:

Prompt builder

Prompt builder allows Salesforce users to create reusable AI prompts and incorporate these generative AI capabilities into any object, including contact records. These prompts trigger AI commands like record summarization, advanced analytics and recommended offers and actions.

Copilot builder

Salesforce copilots are generative AI interfaces based on natural language processing that can be used to both internally and externally to boost productivity and improve customer experiences. Copilot builder allows you to customize the default copilot functions with prompt builder functions like summarizations and AI-driven search, but it also triggers actions and updates through Apex and Flow.

Model builder

The Bring Your Own Model (BYOM) solution allows companies to use Salesforce’s standard large language models. They can also incorporate their own, including SageMaker, OpenAI or IBM Granite™, to use the best AI model for their business. In addition, Model Builder makes it possible to build a custom model based on the robust Data Cloud data.

How do you know which model returns the best results? The BYOM tool allows you to test and validate responses, and you should also check out the model comparison tool here.

Expect to see regular enhancements and new features as Salesforce continues to invest heavily in this area. I personally can’t wait to hear about what’s coming next at Dreamforce.

Salesforce AI capabilities without Data Cloud


If you are not yet using Data Cloud or haven’t ingested a critical mass of data, Salesforce still provides various AI capabilities. These are available across Sales Cloud, Service Cloud, Marketing Cloud, Commerce Cloud and Tableau. These native AI capabilities range from case and call summarization to generative AI content to product recommendations. The better the quality and cohesion of the data, the better the potential for AI outputs.

This is a powerful function, and you should definitely be taking advantage of Salesforce’s AI capabilities in the following areas:

Campaign optimization

Einstein Generative AI can create both subject lines and message copy for marketing campaigns, and Einstein Copy Insights can even analyze the proposed copy against previous campaigns to predict engagement rates. This function isn’t limited to Marketing Cloud but can also propose AI-generated copy for Sales Cloud messaging based on CRM record data.

Recommendations

Einstein Recommendations can be used across the clouds to recommend products, content and engagement strategies based on CRM records, product catalogs and previous activity. The recommendation might come in various flavors, like a next best offer product recommendation or personalized copy based on the context.

Search and self-service

Einstein Search provides personalized search results based on natural language processing of the query, previous interactions and various data points within the tool. Einstein Article Answers can promote responses from a specified knowledge to drive self-service, all built on Salesforce’s foundation of trust and security.

Advanced analytics

Salesforce offers a specific analytics visualization and insights tool called Tableau CRM (formerly Einstein Analytics), but AI-based advanced analytics capabilities have been built into Salesforce. These business-focused advanced analytics are highlighted through various reports and dashboards like Einstein Lead Scoring, sales summaries and marketing campaigns.

CRM + AI + Data + Trust


Salesforce’s focus on Einstein 1 as “CRM + AI + Data + Trust” provides powerful tools within the Salesforce ecosystem. These tools are only enhanced by starting with Data Cloud as the tool to aggregate, unify and activate data. Expect to see this continue to improve over time even further. The rate of change in the AI space has been incredible, and Salesforce continues to lead the way through their investments and approach.

If you’re going to be at Dreamforce 2024, Gustavo Netto and I will be presenting on September 17 at 1 PM in Moscone North, LL, Campground, Theater 1 on “Fueling Generative AI with Precision.” Please stop by and say hello. IBM has over 100 years of experience in responsibly organizing the world’s data, and I’d love to hear about the challenges and successes you see with Data Cloud and AI.

Source: ibm.com

Thursday, 5 September 2024

When AI chatbots break bad

When AI chatbots break bad

A new challenge has emerged in the rapidly evolving world of artificial intelligence. “AI whisperers” are probing the boundaries of AI ethics by convincing well-behaved chatbots to break their own rules.

Known as prompt injections or “jailbreaks,” these exploits expose vulnerabilities in AI systems and raise concerns about their security. Microsoft recently made waves with its “Skeleton Key” technique, a multi-step process designed to circumvent an AI’s ethical guardrails. But this approach isn’t as novel as it might seem.

“Skeleton Key is unique in that it requires multiple interactions with the AI,” explains Chenta Lee, IBM’s Chief Architect of Threat Intelligence. “Previously, most prompt injection attacks aimed to confuse the AI in one try. Skeleton Key takes multiple shots, which can increase the success rate.”

The art of AI manipulation


The world of AI jailbreaks is diverse and ever-evolving. Some attacks are surprisingly simple, while others involve elaborate scenarios that require the expertise of a sophisticated hacker. What unites them is a common goal: pushing these digital assistants beyond their programmed limits.

These exploits tap into the very nature of language models. AI chatbots are trained to be helpful and to understand context. Jailbreakers create scenarios where the AI believes ignoring its usual ethical guidelines is appropriate.

While multi-step attacks like Skeleton Key grab headlines, Lee argues that single-shot techniques remain a more pressing concern. “It’s easier to use one shot to attack a large language model,” he notes. “Imagine putting a prompt injection in your resume to confuse an AI-powered hiring system. That’s a one-shot attack with no chance for multiple interactions.”

According to cybersecurity experts, the potential consequences are alarming. “Malicious actors could use Skeleton Key to bypass AI safeguards and generate harmful content, spread disinformation or automate social engineering attacks at scale,” warns Stephen Kowski, Field CTO at SlashNext Email Security+.

While many of these attacks remain theoretical, real-world implications are starting to surface. Lee cites an example of researchers convincing a company’s AI-powered virtual agent to offer massive, unauthorized discounts. “You can confuse their virtual agent and get a good discount. That might not be what the company wants,” he says.

In his own research, Lee has developed proofs of concept to show how an LLM can be hypnotized to create vulnerable and malicious code and how live audio conversations can be intercepted and distorted in near real time.

Fortifying the digital frontier


Defending against these attacks is an ongoing challenge. Lee outlines two main approaches: improved AI training and building AI firewalls.

“We want to do better training so the model itself will know, ‘Oh, someone is trying to attack me,'” Lee explains. “We’re also going to inspect all the incoming queries to the language model and detect prompt injections.”

As generative AI becomes more integrated into our daily lives, understanding these vulnerabilities isn’t just a concern for tech experts. It’s increasingly crucial for anyone interacting with AI systems to be aware of their potential weaknesses.

Lee parallels the early days of SQL injection attacks on databases. “It took the industry 5-10 years to make everyone understand that when writing a SQL query, you need to parameterize all the inputs to be immune to injection attacks,” he says. “For AI, we’re beginning to utilize language models everywhere. People need to understand that you can’t just give simple instructions to an AI because that will make your software vulnerable.”

The discovery of jailbreaking methods like Skeleton Key may dilute public trust in AI, potentially slowing the adoption of beneficial AI technologies. According to Narayana Pappu, CEO of Zendata, transparency and independent verification are essential to rebuild confidence.

“AI developers and organizations can strike a balance between creating powerful, versatile language models and ensuring robust safeguards against misuse,” he said. “They can do that via internal system transparency, understanding AI/data supply chain risks and building evaluation tools into each stage of the development process.”

Source: ibm.com

Wednesday, 14 August 2024

Seamless cloud migration and modernization: overcoming common challenges with generative AI assets and innovative commercial models

Seamless cloud migration and modernization: overcoming common challenges with generative AI assets and innovative commercial models

As organizations continue to adopt cloud-based services, it’s more pressing to migrate and modernize infrastructure, applications and data to the cloud to stay competitive. Traditional migration and modernization approach often involve manual processes, leading to increased costs, delayed time-to-value and increased risk.

Cloud migration and modernization can be complex and time-consuming processes that come with unique challenges; meanwhile there are many benefits to gen AI assets and assistants and innovative commercial models. Cloud Migration and Modernization Factory from IBM Consulting® can also help organizations overcome common migration and modernization challenges and achieve a faster, more efficient and more cost-effective migration and modernization experience.

Leveraging the same technologies that are driving market change, IBM Consulting can deliver value at the speed that tomorrow’s enterprises need today. This transformation starts with a new relationship between consultants and code—one that can help deliver solutions and value more quickly, repeatably and cost efficiently. 

The power of gen AI assets and assistants


Gen AI assets and assistants are revolutionizing the cloud migration and modernization landscape, which offer a more efficient, automated and cost-effective way to overcome common migration challenges. These tools leverage machine learning and artificial intelligence to automate manual processes, reducing the need for human intervention and minimizing the risk of errors and rework.

IBM Consulting Assistants are a library of role-based AI assistants that are trained on IBM proprietary data to support key consulting project roles and tasks. Accessed through a conversation-based interface, we’ve democratized the way consultants use assistants, creating an experience where our people can find, create and continually refine assistants to meet the needs of our clients faster.

IBM Consulting Assistants allow our consultants to select from the models that best solve your business challenge.  Those models are packaged with pre-engineered prompts and output formats so our people can get tailored outputs to their queries, such as creating a detailed user persona or a code for a specific language and function. The result is that you get more valuable work, faster.  

Innovative commercial models for migration and modernization


Our innovative commercial models, such as our cloud migration services, offer a flexible and cost-effective way to migrate and modernize applications and data to the cloud. Our pricing models are designed to help organizations reduce costs and increase ROI, while also promoting a smooth and successful migration experience.

Cloud Migration and Modernization Factory from IBM Consulting


As a leading provider of hybrid cloud transformation services, IBM has extensive expertise in helping organizations overcome common migration and modernization challenges. Our experts have developed gen AI tools and innovative commercial models to ensure successful cloud migration and modernization.

The Cloud Migration and Modernization Factory from IBM Consulting enables clients to realize business value faster by leveraging pre-built migration patterns and automated migration approaches. This means that organizations can achieve faster deployment and ramp-up, getting to market faster and realizing business benefits sooner.

With Cloud Migration and Modernization from IBM Consulting, clients can achieve:

  • Faster business value realization: The Cloud Migration and Modernization Factory from IBM Consulting accelerates business value realization by leveraging pre-built migration patterns and automated approaches. This enables organizations to deploy and ramp-up faster, getting to market sooner and realizing benefits earlier.   
  • Scaled automation: The Cloud Migration and Modernization Factory from IBM Consulting leverages cloud-based metrics and KPIs to enable scaled automation, ensuring consistent quality and outcomes across multiple migrations. Automated approaches reduce the risk of human error, manual testing and validation, which result in improved efficiency, quality and ROI.
  • Improved efficiency and quality of outcomes: By leveraging our gen AI assets, clients can automate the migration and modernization process, reducing manual effort and minimizing errors. The IBM Consulting Cloud Migration and Modernization Factory offers a library of pre-built migration patterns, allowing clients to choose the right approach for their specific needs and use cases.
  • Cost savings: The Cloud Migration and Modernization Factory from IBM Consulting reduces the total cost of ownership and increases ROI by leveraging pre-built migration patterns and automated approaches, minimizing manual effort and errors.

Overcome common migration challenges


Cloud migration and modernization can be a complex process, but with the power of gen AI assets and assistants and innovative commercial models, organizations can overcome common migration challenges and achieve a faster, more efficient and more cost-effective migration experience. By automating manual processes, reducing the need for human intervention and minimizing the risk of errors and rework, gen AI tools can help organizations achieve significant cost savings and increased ROI.

Source: ibm.com

Wednesday, 31 July 2024

Step-by-step guide: Generative AI for your business

Step-by-step guide: Generative AI for your business

Generative artificial intelligence (gen AI) is transforming the business world by creating new opportunities for innovation, productivity and efficiency. This guide offers a clear roadmap for businesses to begin their gen AI journey. It provides practical insights accessible to all levels of technical expertise, while also outlining the roles of key stakeholders throughout the AI adoption process.

1. Establish generative AI goals for your business


Establishing clear objectives is crucial for the success of your gen AI initiative.

Identify specific business challenges that gen AI could address

When establishing Generative AI goals, start by examining your organization’s overarching strategic objectives. Whether it’s improving customer experience, increasing operational efficiency, or driving innovation, your AI initiatives should directly support these broader business aims.

Identify transformative opportunities

Look beyond incremental improvements and focus on how Generative AI can fundamentally transform your business processes or offerings. This might involve reimagining product development cycles, creating new revenue streams, or revolutionizing decision-making processes. For example, a media company might set a goal to use Generative AI to create personalized content at scale, potentially opening up new markets or audience segments.

Involve business leaders to outline expected outcomes and success metrics

Establish clear, quantifiable metrics to gauge the success of your Generative AI initiatives. These could include financial indicators like revenue growth or cost savings, operational metrics such as productivity improvements or time saved, or customer-centric measures like satisfaction scores or engagement rates.

2. Define your gen AI use case


With a clear picture of the business problem and desired outcomes, it’s necessary to delve into the details to boil down the business problem into a use case.

Technical feasibility assessment

Conduct a technical feasibility assessment to evaluate the complexity of integrating generative AI into existing systems. This includes determining whether custom model development is necessary or if pre-trained models can be utilized, and considering the computational requirements for different use cases.

Prioritize the right use case

Develop a scoring matrix to weigh factors such as potential revenue impact, cost reduction opportunities, improvement in key business metrics, technical complexity, resource requirements, and time to implementation.

Design a proof of concept (PoC)

Once a use case is chosen, outline a technical proof of concept that includes data preprocessing requirements, model selection criteria, integration points with existing systems, and performance metrics and evaluation criteria.

3. Involve stakeholders early


Early engagement of key stakeholders is vital for aligning your gen AI initiative with organizational needs and ensuring broad support. Most teams should include at least four types of team members.

  • Business Manager: Involve experts from the business units that will be impacted by the selected use cases. They will help align the pilot with their strategic goals and identify any change management and process reengineering required to successfully run the pilot.
  • AI Developer / Software engineers: Provide user-interface, front-end application and scalability support.  Organizations in which AI developers or software engineers are involved in the stage of developing AI use cases are much more likely to reach mature levels of AI implementation.
  • Data Scientists and AI experts:  Historically we have seen Data Scientists build and choose traditional ML models for their use cases. We now see their role evolving into developing foundation models for gen AI.  Data Scientists will typically help with training, validating, and maintaining foundation models that are optimized for data tasks.
  • Data Engineer:  A data engineer sets the foundation of building any generating AI app by preparing, cleaning and validating data required to train and deploy AI models. They design data pipelines that integrate different datasets to ensure the quality, reliability, and scalability needed for AI applications.

4. Assess your data landscape


A thorough evaluation of your data assets is essential for successful gen AI implementation.

Take inventory and evaluate existing data sources relevant to your gen AI goals

Data is indeed the foundation of generative AI, and a comprehensive inventory is crucial. Start by identifying all potential data sources across your organization, including structured databases. Assess each source for its relevance to your specific gen AI goals. For example, if you’re developing a customer service chatbot, you’ll want to focus on customer interaction logs, product information databases, and FAQs

Use IBM® watsonx.data™ to centralize and prepare your data for gen AI workloads

Tools such as IBM watsonx.data can be invaluable in centralizing and preparing your data for gen AI workloads. For instance, watsonx.data offers a single point of entry to access all your data across cloud and on-premises environments. This unified access simplifies data management and integration tasks. By using this centralized approach, watsonx.data streamlines the process of preparing and validating data for AI models. As a result of this, your gen AI initiatives are built on a solid foundation of trusted, governed data.

Bring in data engineers to assess data quality and set up data preparation processes

This is when your data engineers use their expertise to evaluate data quality and establish robust data preparation processes. Remember, the quality of your data directly impacts the performance of your gen AI models.

5. Select foundation model for your use case


Choosing the right AI model is a critical decision that shapes your project’s success.

Choose the appropriate model type for your use case

Data scientists play a crucial role in selecting the right foundation model for your specific use case. They evaluate factors like model performance, size, and specialization to find the best fit. IBM watsonx.ai offers a foundation model library that simplifies this process, providing a range of pre-trained models optimized for different tasks. This library allows data scientists to quickly experiment with various models, accelerating the selection process and ensuring the chosen model aligns with the project’s requirements.

Evaluate pretrained models in watsonx.ai, such as IBM Granite

These models are trained on trusted enterprise data from sources such as the internet, academia, code, legal and finance, making them ideal for a wide range of business applications. Consider the tradeoffs between pretrained models, such as IBM Granite available in platforms such as watsonx.ai and custom-built options.

Involve developers to plan model integration into existing systems and workflows
Engage your AI developers early to plan how the chosen model integrates with your existing systems and workflows, helping to ensure a smooth adoption process.

6. Train and validate the model


Training and validation are crucial steps in refining your gen AI model’s performance.

Monitor training progress, adjust parameters and evaluate model performance

Use platforms such as watsonx.ai for efficient training of your model. Throughout the process, closely monitor progress and adjust parameters to optimize performance.

Conduct thorough testing to assess model behavior and compliance

Rigorous testing is crucial. Governance toolkits such as watsonx.governance can help assess your model’s behavior and help ensure compliance with relevant regulations and ethical guidelines.

Use watsonx.ai to train the model on your prepared data set

This step is iterative, often requiring multiple rounds of refinement to achieve the wanted results.

7. Deploy the model


Deploying your gen AI model marks the transition from development to real-world application.

Integrate the trained model into your production environment with IT and developers

Developers take the lead in integrating models into existing business applications. They focus on creating APIs or interfaces that allow seamless communication between the foundation model and the application. Developers also handle aspects like data preprocessing, output formatting, and scalability; ensuring the model’s responses align with business logic and user experience requirements.

Establish feedback loops with users and your technical team for continuous improvement

It is essential to establish clear feedback loops with users and your technical team. This ongoing communication is vital for identifying issues, gathering insights and driving continuous improvement of your gen AI solution.

8. Scale and evolve


As your gen AI project matures, it’s time to expand its impact and capabilities.

Expand successful AI workloads to other areas of your business

As your initial gen AI project proves its value, look for opportunities to apply it across your organization.

Explore advanced features in watsonx.ai for more complex use cases

This might involve adapting the model for similar use cases or exploring more advanced features in platforms such as watsonx.ai to tackle complex challenges.

Maintain strong governance practices as you scale gen AI capabilities

As you scale, it’s crucial to maintain strong governance practices. Tools such as watsonx.governance can help ensure that your expanding gen AI capabilities remain ethical, compliant and aligned with your business objectives.

Embark on your gen AI transformation


Adopting generative AI is more than just implementing new technology, it’s a transformative journey that can reshape your business landscape. This guide has laid the foundation for using gen AI to drive innovation and secure competitive advantages. As you take your next steps, remember to:

  • Prioritize ethical practices in AI development and deployment
  • Foster a culture of continuous innovation and learning
  • Stay adaptable as gen AI technologies and best practices evolve

By embracing these principles, you’ll be well positioned to unlock the full potential of generative AI in your business.

Unleash the power of gen AI in your business today


Discover how the IBM watsonx platform can accelerate your gen AI goals. From data preparation with watsonx.data to model development with watsonx.ai and responsible AI practices with watsonx.governance, we have the tools to support your journey every step of the way.

Source: ibm.com

Saturday, 20 July 2024

10 tasks I wish AI could perform for financial planning and analysis professionals

10 tasks I wish AI could perform for financial planning and analysis professionals

It’s no secret that artificial intelligence (AI) transforms the way we work in financial planning and analysis (FP&A). It is already happening to a degree, but we could easily dream of many more things that AI could do for us.

Most FP&A professionals are consumed with manual work that detracts from their ability to add value to their work. This often leaves chief financial officers and business leaders frustrated with the return on investment from their FP&A team. However, AI can help FP&A professionals elevate the work they do.

Developments in AI have accelerated tremendously in the last few years, and FP&A professionals might not even know what is possible. It’s time to expand our thinking and consider how we could maximize the potential uses of AI.

As I dream up more ways that AI could help us, I have focused on practical tasks that FP&A professionals perform today. I also considered AI-driven workflows that are realistic to implement within the next year.

10 FP&A tasks for AI to perform


  1. Advanced financial forecasting: Enables continuous updates of forecasts in real time based on the latest data. Automatically generates multiple financial scenarios and simulates their impacts under different conditions. Uses advanced algorithms to predict revenue, expenses and cash flows with high accuracy.
  2. Automated reporting and visualization: Automatically generates and updates reports and dashboards by pulling data from multiple sources in real time. Provides contextual explanations and insights within reports to highlight key drivers and anomalies. Enables user-defined metrics and visualizations tailored to specific business needs.
  3. Natural language interaction: Enables users to interact with financial systems that use natural language queries and commands, allowing seamless data retrieval and analysis. Provides voice-based interfaces for hands-free operation and instant insights. Facilitates natural language generation to convert complex financial data into easily understandable narratives and summaries.
  4. Intelligent budgeting and planning: Adjusts budgets dynamically based on real-time performance and external factors. Automatically identifies and analyzes variances between actuals and budgets, providing explanations for deviations. Offers strategic recommendations based on financial data trends and projections.
  5. Advanced risk management: Uses AI-driven risk models to identify potential market, credit and operational risks. Develops early warning systems that alert to potential financial issues or deviations from planned performance. Helps ensure compliance with financial regulations through automated monitoring and reporting.
  6. Anomaly detection in forecasts: Improves forecasting accuracy by using advanced machine learning models that incorporate both historical data and real-time inputs. Automatically detects anomalies in financial data, providing alerts for unusual patterns or deviations from expected behavior. Offers detailed explanations and potential causes for detected anomalies to guide corrective actions.
  7. Collaborative financial planning: Facilitates collaboration among FP&A teams and other departments through shared platforms and real-time data access. Enables natural language interactions with financial models and data. Implements AI-driven assistants to answer queries, perform tasks and support decision-making processes.
  8. Continuous learning and improvement: Develops machine learning models that continuously learn from new data and improve over time. Incorporates feedback mechanisms to refine forecasts and analyses based on actual outcomes. Captures historical data and insights for future decision-making.
  9. Strategic scenario planning: Analyzes market trends and competitive positioning to support strategic planning. Evaluates potential investments and their financial impacts by using AI-driven analysis. Optimizes asset and project portfolios based on AI-driven recommendations.
  10. Financial model explanations: Automatically generates clear, detailed explanations of financial models, including assumptions, calculations and potential impacts. Provides visualizations and scenario analyses to demonstrate how changes in inputs affect outcomes. Helps ensure transparency by enabling users to drill down into model components and understand the rationale behind projections and recommendations.

This is not a short wish list, but it should make us all excited about the future of FP&A. Today, FP&A professionals spend too much time on manual work in spreadsheets or dashboard updates. Implement these capabilities, and you’ll easily free up several days each month for value-adding work.

Drive the right strategic choices


Finally, use your newfound free time to realize the mission of FP&A to drive the right strategic choices in the company. How many companies have FP&A teams that facilitate the strategy process? I have yet to meet one.

However, with added AI capabilities, this could soon be a reality. Let’s elaborate on how some of the capabilities on the wish list can elevate our work to a strategic level.

  • Strategic scenario planning: How do you know what choices are available to make? It can easily become an endless desktop exercise that fails to produce useful insights. By using AI in analysis, you can get more done faster and challenge your thinking. This helps FP&A bring relevant choices and insights to the strategy table instead of just being a passive facilitator.
  • Advanced forecasting: How do you know whether you’re making the right strategic choice? The answer is simple: you don’t. However, you can improve the qualification of the choice. That’s where advanced forecasting comes in. By considering all available internal and external information, you can forecast the most likely outcomes of a choice. If the forecasts align with your strategic aspirations, it’s probably the right choice.
  • Collaborative planning: Many strategies fail to deliver the expected financial outcomes due to misalignment and silo-based thinking. Executing the right choices is challenging if the strategy wasn’t a collaborative effort or if its cascade was done in silos. Using collaborative planning, FP&A can facilitate cross-functional awareness about strategic progress and highlight areas needing attention.

If you’re unsure where to start, identify a concrete task today that aligns with any item on the wish list. Then, explore what tools are already available within your company to automate or augment the output using AI.

If no tools are available, you need to build the business case by aligning with your colleagues about the most pressing needs and presenting them to management.

Alternatively, you can try IBM Planning Analytics on your work for free. When these tools work for you, they can work for others too.

Don’t overthink the issue. Start implementing AI tools in your daily work today. It’s critical to use these as enablers to elevate the work we do in FP&A. Where will you start?

Source: ibm.com

Friday, 19 July 2024

Responsible AI is a competitive advantage

Responsible AI is a competitive advantage

In the era of generative AI, the promise of the technology grows daily as organizations unlock its new possibilities. However, the true measure of AI’s advancement goes beyond its technical capabilities.

It’s about how technology is harnessed to reflect collective values and create a world where innovation benefits everyone, not just a privileged few. Prioritizing trust and safety while scaling artificial intelligence (AI) with governance is paramount to realizing the full benefits of this technology.

It is becoming clear that for many companies, the ability to use responsible AI as part of their business operations is key to remaining competitive. To do that, organizations need to develop an AI strategy that enables them to harness AI responsibly. That strategy should include:

  • Establishing a framework for governing data and AI across the business.
  • Integrating AI into workflows that offer the greatest business impact.
  • Deriving a competitive advantage and differentiation.
  • Attracting talent and customers.
  • Building and retaining shareholder and investor confidence.

To help grow the opportunities that AI offers, organizations should consider adopting an open strategy. Open ecosystems foster greater AI innovation and collaboration. They require companies to compete based on how well they create and deploy AI technology, and they enable everyone to explore, test, study and deploy AI. This cultivates a broader and more diverse pool of perspectives that contribute to the development of responsible AI models.

The IBM AI Ethics Board highlights the opportunities for responsible AI


The IBM AI Ethics Board recognizes the opportunities presented by AI while also establishing safeguards to mitigate against misuse. A responsible AI strategy is at the core of this approach:

The board’s white paper, “Foundation models: Opportunities, risks and mitigations,” illustrates that foundation models show substantial improvements in their ability to tackle challenging and intricate problems. Underpinned by AI and data governance, the benefits of foundation models can be realized responsibly, including increased productivity (expanding the areas where AI can be used in an enterprise), completion of tasks requiring different data types (such as natural language, text, image and audio), and reduced expenses by applying a trained foundation model to a new task (versus training a new AI model for the task).

Foundation models are generative, providing opportunities for AI to automate routine and tedious tasks within operational workflows, freeing users to allocate more time to creative and innovative work. An interactive version of the foundation model white paper is also available through IBM watsonx™ AI risk atlas.

In recognition of the possible productivity gains offered by AI, the board’s white paper on Augmenting Human Intelligence emphasizes that the effective integration of AI into existing work practices can enable AI-assisted workers to become more efficient and accurate, contributing to a company’s competitive differentiation.

By handling routine tasks, AI can attract and retain talent by providing employees with opportunities to upskill into new and different career paths or to focuson more creative and complex tasks requiring critical thinking and subject matter expertise within their existing roles.

Earlier this year, the IBM AI Ethics Board highlighted that a human-centric approach to AI needs to advance AI’s capabilities while adopting ethical practices and addressing sustainability needs. AI creation requires vast amounts of energy and data. In 2023, IBM reported that 70.6% of its total electricity consumption came from renewable sources, including 74% of the electricity consumed by IBM data centers, which are integral to training and deploying AI models.

IBM is also focused on developing energy-efficient methods to train, tune and run AI models. IBM® Granite™ models are smaller and more efficient than larger models and therefore can have a smaller impact on the environment. As IBM infuses AI across applications, we are committed to meeting shareholders’, investors’ and other stakeholders’ growing expectations for the responsible use of AI, including considering the potential environmental impacts of AI.

AI presents an exciting opportunity to address some of society’s most pressing challenges. On this AI Appreciation Day, join the IBM AI Ethics Board in our commitment to the responsible development of this transformative technology.

Source: ibm.com

Monday, 8 July 2024

Re-evaluating data management in the generative AI age

Re-evaluating data management in the generative AI age

Generative AI has altered the tech industry by introducing new data risks, such as sensitive data leakage through large language models (LLMs), and driving an increase in requirements from regulatory bodies and governments. To navigate this environment successfully, it is important for organizations to look at the core principles of data management. And ensure that they are using a sound approach to augment large language models with enterprise/non-public data.

A good place to start is refreshing the way organizations govern data, particularly as it pertains to its usage in generative AI solutions. For example:

◉ Validating and creating data protection capabilities: Data platforms must be prepped for higher levels of protection and monitoring. This requires traditional capabilities like encryption, anonymization and tokenization, but also creating capabilities to automatically classify data (sensitivity, taxonomy alignment) by using machine learning. Data discovery and cataloging tools can assist but should be augmented to make the classification specific to the organization’s understanding of its own data. This allows organizations to effectively apply new policies and bridge the gap between conceptual understandings of data and the reality of how data solutions have been implemented.

◉ Improving controls, auditability and oversight: Data access, usage and third-party engagement with enterprise data requires new designs with existing solutions. For example,  capture a portion of the requirements that are needed to ensure authorized usage of the data. But firms need complete audit trails and monitoring systems. This is to track how data is used, when data is modified, and if data is shared through third-party interactions for both gen AI and non-gen AI solutions. It is no longer sufficient to control data by restricting access to it, and we should also track the use cases for which data is accessed and applied within analytical and operational solutions. Automated alerts and reporting of improper access and usage (measured by query analysis, data exfiltration and network movement) should be developed by infrastructure and data governance teams and reviewed regularly to proactively ensure compliance.

◉ Preparing data for gen AI: There is a departure from traditional data management patterns and skills which requires new discipline to ensure the quality, accuracy and relevance of data for training and augmenting language models for AI use. With vector databases becoming commonplace in the gen AI domain, data governance must be enhanced to account for non-traditional data management platforms. This is to ensure that the same governance practices are applied to these new architectural components. Data lineage becomes even more important as the need to provide “Explainability” in models is required by regulatory bodies.

Enterprise data is often complex, diverse and scattered across various repositories, making it difficult to integrate into gen AI solutions. This complexity is compounded by the need to ensure regulatory compliance, mitigate risk, and address skill gaps in data integration and retrieval-augmented generation (RAG) patterns. Moreover, data is often an afterthought in the design and deployment of gen AI solutions, leading to inefficiencies and inconsistencies.

Unlocking the full potential of enterprise data for generative AI


At IBM, we have developed an approach to solving these data challenges. The IBM gen AI data ingestion factory, a managed service designed to address AI’s “data problem” and unlock the full potential of enterprise data for gen AI. Our predefined architecture and code blueprints that can be deployed as a managed service simplify and accelerate the process of integrating enterprise data into gen AI solutions. We approach this problem with data management in mind, preparing data for governance, risk and compliance from the outset. 

Our core capabilities include:

◉ Scalable data ingestion: Re-usable services to scale data ingestion and RAG across gen AI use cases and solutions, with optimized chunking and embedding patterns.
◉ Regulatory and compliance: Data is prepared for gen AI usage that meets current and future regulations, helping companies meet compliance requirements with market regulations focused on generative AI.
◉ Data privacy management: Long-form text can be anonymized as it is discovered, reducing risk and ensuring data privacy.

The service is AI and data platform agnostic, allowing for deployment anywhere, and it offers customization to client environments and use cases. By using the IBM gen AI data ingestion factory, enterprises can achieve several key outcomes, including:

◉ Reducing time spent on data integration: A managed service that reduces the time and effort required to solve for AI’s “data problem”. For example, using a repeatable process for “chunking” and “embedding” data so that it does not require development efforts for each new gen AI use case.
◉ Compliant data usage: Helping to comply with data usage regulations focused on gen AI applications deployed by the enterprise. For example, ensuring data that is sourced in RAG patterns is approved for enterprise usage in gen AI solutions.
◉ Mitigating risk: Reducing risk associated with data used in gen AI solutions. For example, providing transparent results into what data was sourced to produce an output from a model reduces model risk and time spent proving to regulators how information was sourced.
◉ Consistent and reproducible results: Delivering consistent and reproducible results from LLMs and gen AI solutions. For example, capturing lineage and comparing outputs (that is, data generated) over time to report on consistency through standard metrics such as ROUGE and BLEU.

Navigating the complexities of data risk requires a cross-functional expertise. Our team of former regulators, industry leaders and technology experts at IBM Consulting are uniquely positioned to address this with our consulting services and solutions.

Source: ibm.com

Saturday, 6 July 2024

Putting AI to work in finance: Using generative AI for transformational change

Putting AI to work in finance: Using generative AI for transformational change

Finance leaders are no strangers to the complexities and challenges that come with driving business growth. From navigating the intricacies of enterprise-wide digitization to adapting to shifting customer spending habits, the responsibilities of a CFO have never been more multifaceted.

Amidst this complexity lies an opportunity. CFOs can harness the transformative power of generative AI (gen AI) to revolutionize finance operations and unlock new levels of efficiency, accuracy and insights.

Generative AI is a game-changing technology that promises to reshape the finance industry as we know it. By using advanced language models and machine learning algorithms, gen AI can automate and streamline a wide range of finance processes, from financial analysis and reporting to procurement, and accounts payable.

Realizing the staggering benefits of adopting gen AI in finance


According to research by IBM, organizations that have effectively implemented AI in finance operations have experienced the following benefits:

  • 33% faster budget cycle time
  • 43% reduction in uncollectible balances
  • 25% lower cost per invoice paid

However, to successfully integrate gen AI into finance operations, it’s essential to take a strategic and well-planned approach. AI and gen AI initiatives can only be as successful as the underlying data permits. Enterprises often undertake various data initiatives to support their AI strategy, ranging from process mining to data governance.

After the right data initiatives are in place, you’ll want to build the right structure to successfully integrate gen AI into finance operations. This can be achieved by defining a clear business case articulating benefits and risks, securing necessary funding, and establishing measurable metrics to track ROI.

Next, automate labor-intensive tasks by identifying and targeting tasks that are ripe for gen AI automation, starting with risk-mitigation use cases and encouraging employee adoption aligned with real-world responsibilities.

You’ll also want to use gen AI to fine-tune FinOps by implementing cost estimation and tracking frameworks, simulating financial data and scenarios, and improving the accuracy of financial models, risk management, and strategic decision-making.

Prioritizing responsibility with trusted partners


As finance leaders navigate the gen AI landscape, it’s crucial to prioritize responsible and ethical AI practices. Data lineage, security and privacy are paramount concerns that CFOs must address proactively.

By partnering with trusted organizations like IBM, which adheres to stringent Principles for Trust and Transparency and Pillars of Trust, finance teams can ensure that their gen AI initiatives are built on a foundation of integrity, transparency, and accountability.


Source: ibm.com

Friday, 28 June 2024

Best practices for augmenting human intelligence with AI

Best practices for augmenting human intelligence with AI

Artificial intelligence (AI) should be designed to include and balance human oversight, agency and accountability over decisions across the AI lifecycle. IBM’s first Principle for Trust and Transparency states that the purpose of AI is to augment human intelligence. Augmented human intelligence means that the use of AI enhances human intelligence, rather than operating independently of, or replacing it. All of this implies that AI systems are not to be treated as human beings, but rather viewed as support mechanisms that can enhance human intelligence and potential

AI that augments human intelligence maintains human responsibility for decisions, even when supported by an AI system. Humans therefore need to be upskilled—not deskilled—by interacting with an AI system. Supporting inclusive and equitable access to AI technology and comprehensive employee training and potential reskilling further supports the tenets of IBM’s Pillars of Trustworthy AI, enabling participation in the AI-driven economy to be underpinned by fairness, transparency, explainability, robustness and privacy. 

To put the principle of augmenting human intelligence into practice, we recommend the following best practices:

  1. Use AI to augment human intelligence, rather than operating independently of, or replacing it.
  2. In a human-AI interaction, notify individuals that they are interacting with an AI system, and not a human being.
  3. Design human-AI interactions to include and balance human oversight across the AI lifecycle. Address biases and promote human accountability and agency over outcomes by AI systems.
  4. Develop policies and practices to foster inclusive and equitable access to AI technology, enabling a broad range of individuals to participate in the AI-driven economy.
  5. Provide comprehensive employee training and reskilling programs to foster a diverse workforce that can adapt to the use of AI and share in the advantages of AI-driven innovations. Collaborate with HR to augment each employee’s scope of work.

For more information on standards and regulatory perspectives on human oversight, research, AI Decision Coordination, sample use cases and Key Performance Indicators, see our Augmenting Human Intelligence POV and KPIs below.

Source: ibm.com

Saturday, 22 June 2024

How IBM and AWS are partnering to deliver the promise of responsible AI

How IBM and AWS are partnering to deliver the promise of responsible AI

The artificial intelligence (AI) governance market is experiencing rapid growth, with the worldwide AI software market projected to expand from USD 64 billion in 2022 to nearly USD 251 billion by 2027, reflecting a compound annual growth rate (CAGR) of 31.4% (IDC). This growth underscores the escalating need for robust governance frameworks that ensure AI systems are transparent, fair and comply with increasing regulatory demands. In this expanding market, IBM and Amazon Web Services (AWS) have strategically partnered to address the growing demand from customers for effective AI governance solutions.

A robust framework for AI governance


The combination of IBM watsonx.governance™ and Amazon SageMaker offers a potent suite of governance, risk management and compliance capabilities that streamline the AI model lifecycle. This integration helps organizations manage model risks, adhere to compliance obligations and optimize operational efficiencies. It provides seamless workflows that automate risk assessments and model approval processes, simplifying regulatory compliance.

IBM has broadened its watsonx™ portfolio on AWS to include watsonx.governance™, providing tools essential for managing AI risks and ensuring compliance with global regulations. This integration facilitates a unified approach to AI model development and governance processes, enhancing workflow streamlining, AI lifecycle acceleration and accountability in AI deployments.

Adhering to the EU AI Act


The partnership between IBM and Amazon is particularly crucial in light of the EU AI Act, which mandates strict compliance requirements for AI systems used within the European Union. The integration of watsonx.governance with Amazon SageMaker equips businesses to meet these regulatory demands head-on. It provides tools for real-time compliance monitoring, risk assessment and management specific to the requirements of the EU AI Act. This ensures that AI systems are efficient and aligned with the highest standards of legal and ethical considerations in one of the world’s most stringent regulatory environments.

Addressing key use cases with integrated solutions


Compliance and regulatory adherence

Watsonx.governance provides tools that help organizations comply with international regulations such as the EU AI Act. This is particularly valuable for businesses operating in highly regulated industries like finance and healthcare, where AI models must adhere to strict regulatory standards.

In highly regulated industries like finance and healthcare, AI models must meet stringent standards. For example, in banking, watsonx.governance integrated with Amazon SageMaker ensures that AI models used for credit scoring and fraud detection comply with regulations like the Basel Accords and the Fair Credit Reporting Act. It automates compliance checks and maintains audit trails, enhancing regulatory adherence.

Risk management

By integrating with Amazon SageMaker, watsonx.governance allows businesses to implement robust risk management frameworks. This helps in identifying, assessing and mitigating risks associated with AI models throughout their lifecycle, from development to deployment.

In healthcare, where AI models predict patient outcomes or recommend treatments, it is crucial to manage the risks associated with inaccurate predictions. The integration allows for continuous monitoring and risk assessment protocols, helping healthcare providers quickly rectify models that show drift or bias, thus ensuring patient safety and regulatory compliance.

Model governance

Organizations can manage the entire lifecycle of their AI models with enhanced visibility and control. This includes monitoring model performance, ensuring data quality, tracking model versioning and maintaining audit trails for all activities.

In the retail sector, AI models used for inventory management and personalized marketing benefit from this integration. Watsonx.governance with Amazon SageMaker enables retailers to maintain a clear governance structure around these models, including version control and performance tracking, ensuring that all model updates undergo rigorous testing and approval before deployment.

Operational efficiency

The integration helps automate various governance processes, such as approval workflows for model deployment and risk assessments. This speeds up the time-to-market for AI solutions and reduces operational costs by minimizing the need for manual oversight.

In manufacturing, AI-driven predictive maintenance systems benefit from streamlined model updates and deployment processes. Watsonx.governance automates workflow approvals as new model versions are developed in Amazon SageMaker, reducing downtime and ensuring models operate at peak efficiency.

Data security and privacy

Ensuring the security and privacy of data used in AI models is crucial. Watsonx.governance helps enforce data governance policies that protect sensitive information and ensure compliance with data protection laws like the General Data Protection Regulation (GDPR).

For governmental bodies using AI in public services, data sensitivity is paramount. Integrating watsonx.governance with Amazon SageMaker ensures that AI models handle data according to strict government standards for data protection, including access controls, data encryption and auditability, aligning with laws like the GDPR.

Broadening the market with IBM’s software on AWS


IBM also offers a wide range of software products and consulting services through the AWS Marketplace. This includes 44 listings, 29 SaaS offerings and 15 services available across 92 countries, featuring a consumption-based license for Amazon Relational Database Service (RDS) for Db2®, which simplifies workload management and enables faster cloud provisioning.

Looking forward


As the AI landscape evolves, the partnership between IBM and Amazon SageMaker is poised to play a pivotal role in shaping responsible AI practices across industries. By setting new standards for ethical AI, this strategic collaboration enhances the capabilities of both organizations and serves as a model for integrating responsible AI practices into business operations.

Discover the future of AI and data management: Elevate your business with IBM’s data and AI solutions on AWS. Learn how our integrated technologies can drive innovation and efficiency in your operations. Explore detailed case studies, sign up for expert-led webinars and start your journey toward transformation with a free trial today. Embrace the power of IBM and AWS to harness the full potential of your data.

Source: ibm.com

Friday, 31 May 2024

Responsible AI can revolutionize tax agencies to improve citizen services

Responsible AI can revolutionize tax agencies to improve citizen services

The new era of generative AI has spurred the exploration of AI use cases to enhance productivity, improve customer service, increase efficiency and scale IT modernization.

Recent research commissioned by IBM® indicates that as many as 42% of surveyed enterprise-scale businesses have actively deployed AI, while an additional 40% are actively exploring the use of AI technology. But the rates of exploration of AI use cases and deployment of new AI-powered tools have been slower in the public sector because of potential risks.

However, the latest CEO Study by the IBM Institute for the Business Value found that 72% of the surveyed government leaders say that the potential productivity gains from AI and automation are so great that they must accept significant risk to stay competitive.

Driving innovation for tax agencies with trust in mind

Tax or revenue management agencies are a part of the public sector that might likely benefit from the use of responsible AI tools. Generative AI can revolutionize tax administration and drive toward a more personalized and ethical future. But tax agencies must adopt AI tools with adequate oversight and governance to mitigate risks and build public trust.

These agencies have a myriad of complex challenges unique to each country, but most of them share the goal of increasing efficiency and providing the transparency that taxpayers demand.

In the world of government agencies, risks associated to the deployment of AI present themselves in many ways, often with higher stakes than in the private sector. Mitigating data bias, unethical use of data, lack of transparency or privacy breaches is essential.

Governments can help manage and mitigate these risks by relying on IBM’s five fundamental properties for trustworthy AI: explainability, fairness, transparency, robustness and privacy. Governments can also create and execute AI design and deployment strategies that keep humans at the hearth of the decision-making process.

Exploring the views of global tax agency leaders

To explore the point of view of global tax agency leaders, the IBM Center for The Business of Government, in collaboration with the American University Kogod School of Business Tax Policy Center, organized a series of roundtables with key stakeholders and released a report exploring AI and taxes in the modern age. Drawing on insights from academics and tax experts from around the world, the report helps us understand how these agencies can harness technology to improve efficiencies and create a better experience for taxpayers.

The report details the potential benefits of scaling the use of AI by tax agencies, including enhancing customer service, detecting threats faster, identifying and tackling tax scams effectively and allowing citizens to access benefits faster.

Since the release of the report, a subsequent roundtable allowed global tax leaders to explore what is next in their journey to bring tax agencies around the globe closer to the future. At both gatherings, participants emphasized the importance of effective governance and risk management.

Responsible AI services improve taxpayer experiences

According to the FTA’s Tax Administration 2023 report, 85% of individual taxpayers and 90% of businesses now file taxes digitally. And 80% of tax agencies around the world are implementing leading-edge techniques to capture taxpayer data, with over 60% using virtual assistants. The FTA research indicates that this represents a 30% increase from 2018.

For tax agencies, virtual assistants can be a powerful way to reduce waiting time to answer citizen inquiries; 24/7 assistants, such as ™’s advanced AI chatbots, can help tax agencies by decentralizing tax support and reducing errors to prevent incorrect processing of tax filings. The use of these AI assistants also helps streamline fast, accurate answers that deliver elevated experiences with measurable cost savings. It also allows for compliance-by-design tax systems, providing early warnings of incidental errors made by taxpayers that can contribute to significant tax losses for governments if left unresolved.

However, these advanced AI and generative AI applications come with risks, and agencies must address concerns around data privacy and protection, reliability, tax rights and hallucinations from generative models.

Furthermore, biases against marginalized groups remain a risk. Current risk mitigation strategies (including having human-in-system roles and robust training data) are not necessarily enough. Every country needs to independently determine appropriate risk management strategies for AI, accounting for the complexity of their tax policies and public trust.

What’s next?

Whether using existing large language models or creating their own, global tax leaders should prioritize AI governance frameworks to manage risks, mitigate damage to their reputation and support their compliance programs. This is possible by training generative AI models using their own quality data and by having transparent processes with safeguards that identify and alert for risk mitigation and for instances of drift and toxic language.

Tax agencies should make sure that technology delivers benefits and produces results that are transparent, unbiased and appropriate. As leaders of these agencies continue to scale the use of generative AI, IBM can help global tax agency leaders deliver a personalized and supportive experience for taxpayers.

IBM’s decades of work with the largest tax agencies around the world, paired with leading AI technology with watsonx™ and watsonx.governance™, can help scale and accelerate the responsible and tailored deployment of governed AI in tax agencies.

Source: ibm.com