Showing posts with label AI Governance. Show all posts

Tuesday, 3 October 2023

What can AI and generative AI do for governments?


Few technologies have taken the world by storm the way artificial intelligence (AI) has over the past few years. AI and its many use cases have become a topic of public discussion no longer relegated to tech experts. AI—generative AI, in particular—has tremendous potential to transform society as we know it for good, boost productivity and unlock trillions in economic value in the coming years.

AI’s value is not limited to advances in industry and consumer products alone. When implemented in a responsible way—where the technology is fully governed, privacy is protected and decision making is transparent and explainable—AI has the power to usher in a new era of government services. Such services can empower citizens and help restore trust in public entities by improving workforce efficiency and reducing operational costs in the public sector. On the backend, AI likewise has the potential to supercharge digital modernization by, for example, automating the migration of legacy software to more flexible cloud-based applications, or accelerating mainframe application modernization.

Despite the many potential advantages, many government agencies are still grappling with how to implement AI, and generative AI in particular. In many cases, government agencies around the globe face a choice. They can either embrace AI and its advantages, tapping into the technology’s potential to help improve the lives of the citizens they serve. Or they can stay on the sidelines and risk missing out on AI’s ability to help agencies more effectively meet their objectives.

Government agencies early to adopt solutions leveraging AI and automation offer concrete insights into the technology’s public sector benefits—whether modernizing tax return processing at the US Internal Revenue Service (IRS) or using automation to greatly improve the efficiency of the U.S. Agency for International Development’s Global Health Supply Chain Program. Other successful AI deployments reach citizens directly, including virtual assistants like the one created by the Ukrainian Embassy in the Czech Republic to provide information to Ukrainian citizens. The new wave of AI, built on the foundation models behind generative AI, could represent the next major opportunity to put AI to work for governments.

Three main areas of focus 


Getting there, however, requires government agencies to focus on the main areas where AI use cases can benefit their agencies and their constituents the most. In our view, there are three main areas.

The first is workforce transformation, or digital labor. At all levels of government, from national entities to local agencies, public employees must be ready for this new AI era. While that can mean hiring new talent like data scientists and software programmers, it should also mean providing existing workers with the training they need to manage AI-related projects. With this can come improved productivity, as technologies such as natural language processing (NLP) hold the promise of relieving the burden of heavy text data reading and analysis. The goal is to free up time for public employees to engage in high-value meetings, creative thinking and meaningful work.

The second major focus must be citizen support. For AI to truly benefit society, the public sector needs to prioritize use cases that directly benefit citizens. There is potential for a variety of uses in the future—whether it’s providing information in real time, personalizing services based on a citizen’s particular needs, or hastening processes that have a reputation for being slow. For example, anyone who has ever had to file paperwork, or file a claim knows the feeling all too well: Sitting in an office for hours, waiting while employees click through endless screens, hunting and pecking for information stored in different databases. What if AI’s ability to access, organize and leverage data could create new possibilities for improving government offerings, even those already available online, by unlocking data across agencies to deliver information and services more intuitively and proactively?

Third, AI is also becoming a crucial component of the public sector’s digital transformation efforts. Governments are regularly held back from true transformation by legacy systems with tightly coupled workflow rules that require substantial effort and significant cost to modernize. For example, public sector agencies can make better use of data by migrating certain technology systems to the cloud and infusing them with AI. AI-powered tools hold the potential to help with pattern detection in large stores of data, and may even be able to write computer programs. This could benefit cost optimization and also strengthen cybersecurity, as AI can help detect threats quickly. In this way, instead of seeking hard-to-find skills, agencies can reduce their skills gap and tap into evolving talent.

Commitment to responsible AI 


Last but not least, in IBM’s view, no discussion of responsible AI in the public sector is complete without emphasizing the importance of the ethical use of the technology throughout its lifecycle of design, development, use, and maintenance—something IBM has promoted across the industry for years. Along with healthcare organizations and financial services entities, government and public sector entities must strive to be seen as the most trusted institutions. That means humans should continue to be at the heart of the services delivered by government while monitoring for responsible deployment by relying on the five fundamental properties for trustworthy AI: explainability, fairness, transparency, robustness and privacy.

  • Explainability: An AI system’s ability to provide a human-interpretable explanation for its predictions and insights to the public in a way that does not hide behind technical jargon.
  • Fairness: An AI system’s ability to treat individuals or groups equitably, depending on the context in which the AI system is used, countering biases and addressing discrimination related to protected characteristics, such as gender, race, age, and veteran status.
  • Transparency: An AI system’s ability to include and share information on how it has been designed and developed and what data from which sources have fed the system.
  • Robustness: An AI system’s ability to effectively handle exceptional conditions, such as abnormalities in input, to guarantee consistent outputs.
  • Privacy: An AI system’s ability to prioritize and safeguard consumers’ privacy and data rights, and to address existing regulations in data collection, storage, access and disclosure.
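As a minimal illustration of the fairness property above, one common (though simplistic) check is demographic parity: comparing the rate of positive outcomes across groups. The group labels and sample decisions below are hypothetical, and a real audit would use far richer metrics.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Largest difference in positive-outcome rate between any two groups.

    decisions: list of (group_label, approved: bool) pairs.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            positives[group] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical benefit decisions: (applicant group, approved)
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]
gap = demographic_parity_gap(sample)
print(f"parity gap: {gap:.2f}")  # group A approves 0.75, group B 0.25 -> 0.50
```

A gap near zero does not prove fairness on its own, but a large gap is a clear signal that a deployed system needs review.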

As long as AI is implemented in a way that includes all the traits mentioned above, it can help governments and citizens alike in new ways. Perhaps the biggest benefit of AI and foundation models is their range: they can extend to even the smallest of agencies, and can be used in state and local government projects, such as using models to improve how employees and citizens search databases to learn more about policies or government-issued benefits. By staying informed, responsible and well-equipped on AI, the public sector can help shape a brighter and better future for all.

IBM is committed to unleashing the transformative potential of foundation models and generative AI to help address high-stakes challenges. We provide open and targeted, value-creating AI solutions for businesses and public sector institutions. IBM watsonx, our integrated AI and data platform, embodies these principles, offering a seamless, efficient and responsible approach to AI deployment across a variety of environments. IBM stands ready to empower governmental organizations in the age of AI. Let’s embrace the age of AI value creation together.

Source: ibm.com

Wednesday, 16 August 2023

Take advantage of AI and use it to make your business better


Artificial intelligence (AI) adoption is here. Organizations are no longer asking whether to add AI capabilities, but how they plan to use this quickly emerging technology. In fact, the use of artificial intelligence in business is developing beyond small, use-case specific applications into a paradigm that places AI at the strategic core of business operations. By offering deeper insights and eliminating repetitive tasks, workers will have more time to fulfill uniquely human roles, such as collaborating on projects, developing innovative solutions and creating better experiences.

This advancement does not come without its challenges. While 42% of companies say they are exploring AI technology, the failure rate is high; on average, only 54% of AI projects make it from pilot to production. Overcoming these challenges will require a shift in many of the processes and models that businesses use today: changes in IT architecture, data management and culture. Here are some of the ways organizations today are making that shift and reaping the benefits of AI in a practical and ethical way.

How companies use artificial intelligence in business


Artificial intelligence in business leverages data from across the company as well as outside sources to gain insights and develop new business processes through the development of AI models. These models aim to reduce rote work and complicated, time-consuming tasks, as well as help companies make strategic changes to the way they do business for greater efficiency, improved decision-making and better business outcomes.

A common phrase you’ll hear around AI is that artificial intelligence is only as good as the data foundation that shapes it. Therefore, a well-built AI for business program must also have a good data governance framework, which ensures not only that the data and AI models are accurate, providing higher-quality outcomes, but also that the data is used in a safe and ethical way.

Why we’re all talking about AI for business


It’s hard to avoid conversations about artificial intelligence in business today. Healthcare, retail, financial services, manufacturing—whatever the industry, business leaders want to know how using data can give them a competitive advantage and help address the post-COVID challenges they face each day.

Much of the conversation has been focused on generative AI capabilities and for good reason. But while this groundbreaking AI technology has been the focus of media attention, it only tells part of the story. Diving deeper, the potential of AI systems is also challenging us to go beyond these tools and think bigger: How will the application of AI and machine learning models advance big-picture, strategic business goals?

Artificial intelligence in business is already driving organizational changes in how companies approach data analytics and cybersecurity threat detection. AI is being implemented in key workflows like talent acquisition and retention, customer service, and application modernization, especially paired with other technologies like virtual agents or chatbots.

Recent AI developments are also helping businesses automate and optimize HR recruiting and professional development, DevOps and cloud management, and biotech research and manufacturing. As these organizational changes develop, businesses will begin to shift from using AI to assist existing business processes to a model in which AI drives new process automation, reducing human error and providing deeper insights. It’s an approach known as AI first, or AI+.

Building blocks of AI first


What does building a process with an AI first approach look like? Like all systemic change, it is a step-by-step process—a ladder to AI—that lets companies create a clear business strategy and build out AI capabilities in a thoughtful, fully integrated way with three clear steps.  

Configuring data storage specifically for AI

The first step toward AI first is modernizing your data in a hybrid multicloud environment. AI capabilities require a highly elastic infrastructure to bring together various capabilities and workflows in a team platform. A hybrid multicloud environment offers this, giving you choice and flexibility across your enterprise.

Building and training foundation models

Creating foundation models starts with clean data. This includes building a process to integrate, cleanse and catalog data across its full lifecycle. Doing so allows your organization to scale with trust and transparency.

Adopting a governance framework to ensure safe, ethical use

Proper data governance helps organizations build trust and transparency, strengthening bias detection and decision-making. When data is accessible, trustworthy and accurate, it also enables companies to better implement AI throughout the organization.

What are foundation models and how are they changing the game for AI?


Foundation models are AI models trained with machine learning algorithms on a broad set of unlabeled data that can be used for different tasks with minimal fine-tuning. The model can apply information it’s learned about one situation to another using self-supervised learning and transfer learning. For example, ChatGPT is built upon the GPT-3.5 and GPT-4 foundation models created by OpenAI.
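The core idea—freeze a large pretrained model and adapt it to a new task with minimal fine-tuning—can be sketched in a few lines. Here the "backbone" is only a stand-in (a fixed random projection), not a real foundation model, and the dataset is synthetic; the point is that only the small task-specific head is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": stands in for a pretrained foundation model's encoder.
backbone = rng.normal(size=(16, 8))   # maps 16-dim inputs to 8-dim features

def encode(x):
    return np.tanh(x @ backbone)      # backbone weights are never updated

# Small labeled dataset for the downstream task.
X = rng.normal(size=(20, 16))
y = rng.integers(0, 2, size=20).astype(float)

# "Fine-tuning" trains only the lightweight head (a least-squares fit here).
features = encode(X)
head, *_ = np.linalg.lstsq(features, y, rcond=None)

preds = (encode(X) @ head > 0.5).astype(float)
accuracy = (preds == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Because the expensive encoder is reused as-is, each new task only pays the cost of fitting a small head on a handful of labeled examples—the economics that make foundation models attractive.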

Well-built foundation models offer significant benefits; the use of AI can save businesses countless hours building their own models. These time-saving advantages are what’s attracting many businesses to wider adoption. IBM expects that in two years, foundation models will power about a third of AI within enterprise environments.

From a cost perspective, building foundation models from scratch requires significant upfront investment. Adopting an existing foundation model, however, lets companies avoid that initial model-building cost: because foundation models are easily scaled to other uses, they deliver higher ROI and faster speed to market for AI investments.


To that end, IBM is building a set of domain-specific foundation models that go beyond natural language learning models and are trained on multiple types of business data, including code, time-series data, tabular data, geospatial data, semi-structured data, and mixed-modality data such as text combined with images. The first of these, Slate, was recently released.

AI starts with data

To launch a truly effective AI program for your business, you must have clean, quality datasets and an adequate data architecture for storing and accessing them. The digital transformation of your organization must be mature enough to ensure data is collected at the needed touchpoints across the organization, and the data must be accessible to whoever is doing the data analysis.

Building an effective hybrid multicloud model is essential for AI to manage the massive amounts of data that must be stored, processed and analyzed. Modern data architectures often employ a data fabric architectural approach, which simplifies data access and makes self-service data consumption easier. Adopting a data fabric architecture also creates an AI-ready composable architecture that offers consistent capabilities across hybrid cloud environments.

Governance and knowing where your data comes from

The importance of accuracy and the ethical use of data makes data governance an important piece in any organization’s AI strategy. This includes adopting governance tools and incorporating governance into workflows to maintain consistent standards. A data management platform also enables organizations to properly document the data used to build or fine-tune models, providing users insight into what data was used to shape outputs and regulatory oversight teams the information they need to ensure safety and privacy.
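A minimal sketch of that documentation idea, using nothing beyond the standard library: each model version records which datasets shaped it, so users and oversight teams can trace outputs back to sources. All dataset and model names below are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetRecord:
    name: str
    source: str
    collected: date
    contains_pii: bool   # flag data that carries privacy obligations

@dataclass
class ModelCard:
    model_name: str
    version: str
    training_data: list = field(default_factory=list)

    def datasets_with_pii(self):
        """Surface datasets that may need regulatory review."""
        return [d.name for d in self.training_data if d.contains_pii]

card = ModelCard("benefits-search", "1.2")
card.training_data.append(
    DatasetRecord("policy-faq", "intranet export", date(2023, 5, 1), False))
card.training_data.append(
    DatasetRecord("case-notes", "claims system", date(2023, 6, 15), True))
print(card.datasets_with_pii())  # ['case-notes']
```

In practice this record would live in a data management platform rather than in code, but the principle is the same: provenance is captured at training time, not reconstructed after the fact.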

Key considerations when building an AI strategy


Companies that adopt AI first to effectively and ethically use AI to drive revenue and improve operations will have the competitive advantage over those companies that fail to fully integrate AI into their processes. As you build your AI first strategy, here are some critical considerations:

How will AI deliver business value?

The first step when integrating AI into your organization is to identify the ways various AI platforms and types of AI align with key goals. Companies should not only discuss how AI will be implemented to achieve these goals, but also the desired outcomes.

For example, data opens opportunities for more personalized customer experiences and, in turn, a competitive edge. Companies can create automated customer service workflows with customized AI models built on customer data. More authentic chatbot interactions, product recommendations, personalized content and other AI functionality have the potential to give customers more of what they want. In addition, deeper insights on market and consumer trends can help teams develop new products.

For a better customer experience—and operational efficiency—focus on how AI can optimize critical workflows and systems, such as customer service, supply chain management and cybersecurity.

How will you empower teams to make use of your data?

One of the key elements in data democratization is the concept of data as a product. Your company data is spread across on-premises data centers, mainframes, private clouds, public clouds and edge infrastructure. To successfully scale your AI efforts, you will need to successfully use your data “product.”

A hybrid cloud architecture enables you to use data from disparate sources seamlessly and scale effectively throughout the business. Once you have a grasp on all your data and where it resides, decide which data is the most critical and which offers the strongest competitive advantage.

How will you ensure AI is trustworthy?

With the rapid acceleration of AI technology, many have begun to ask questions about ethics, privacy and bias. To ensure AI solutions are accurate, fair, transparent and protect customer privacy, companies must have well-structured data management and AI lifecycle systems in place.

Regulations to protect consumers are ever expanding. In July 2023, the EU Commission proposed new standards of GDPR enforcement and a data policy that would go into effect in September. Without proper governance and transparency, companies risk reputational damage, economic loss and regulatory violations.

Examples of AI being used in the workplace


Whether using AI technology to power chatbots or write code, there are countless ways deep learning, generative AI, natural language processing and other AI tools are being deployed to optimize business operations and customer experience. Here are some examples of business applications of artificial intelligence:

Coding and application modernization

Companies are using AI for application modernization and enterprise IT operations, putting AI to work automating coding, deploying and scaling. For example, Project Wisdom lets developers using Red Hat Ansible input a coding command as a straightforward English sentence through a natural-language interface and get automatically generated code. The project is the result of an IBM initiative called AI for Code and the release of IBM Project CodeNet, the largest dataset of its kind aimed at teaching AI to code.

Customer service

AI is effective for creating personalized experiences at scale through chatbots, digital assistants and other customer interfaces. McDonald’s, the world’s largest restaurant company, is building customer care solutions with IBM Watson AI technology and natural language processing (NLP) to accelerate the development of its automated order taking (AOT) technology. Not only will this help scale the AOT tech across markets, but it will also help tackle integrations including additional languages, dialects and menu variations.

Optimizing HR operations

When IBM implemented IBM watsonx Orchestrate as part of a pilot program for IBM Consulting in North America, the company saved 12,000 hours in one quarter on manual promotion assessment tasks, reducing a process that once took 10 weeks down to five. The pilot also made it easier to gain important HR insights. Using its digital worker tool, HiRo, IBM’s HR team now has a clearer view of each employee up for promotion and can more quickly assess whether key benchmarks have been met.

The future of AI in business


AI in business holds the potential to improve a wide range of business processes and domains, especially when the organization takes an AI first approach.

In the next five years, we will likely see businesses scale AI programs more quickly by looking to areas where AI has begun to make recent advancements, such as digital labor, IT automation, security, sustainability and application modernization.

Ultimately, success with new technologies in AI will rely on the quality of data, data management architecture, emerging foundation models and good governance. With these elements—and with business-driven, practical objectives—businesses can make the most out of AI opportunities.

Source: ibm.com

Tuesday, 9 May 2023

The risks and limitations of AI in insurance


Artificial intelligence (AI) is polarizing. It excites the futurist and engenders trepidation in the conservative. In my previous post, I described the different capabilities of both discriminative and generative AI, and sketched a world of opportunities where AI changes the way that insurers and insureds interact. This blog continues the discussion, investigating the risks of adopting AI and proposing measures for a safe and judicious adoption.

Risk and limitations of AI


The risk associated with the adoption of AI in insurance can be separated broadly into two categories—technological and usage.

Technological risk—data confidentiality

The chief technological risk is the matter of data confidentiality. AI development has enabled the collection, storage and processing of information on an unprecedented scale, making it extremely easy to identify, analyze and use personal data at low cost without the consent of others. The risk of privacy leakage from interaction with AI technologies is a major source of consumer concern and mistrust.

The advent of generative AI, where the AI manipulates your data to create new content, poses an additional risk to corporate data confidentiality. For example, feeding a generative AI system such as ChatGPT with corporate data to produce a summary of confidential corporate research could leave an indelible data footprint on the AI provider’s external cloud servers, accessible to queries from competitors.

Technological risk—security

An AI model’s parameters are optimized against its training data, and it is these parameters that give the AI its ability to generate insights. Should the parameters of an algorithm be leaked, a third party may be able to copy the model, causing economic and intellectual property loss to its owner. Additionally, should the parameters be modified illegally by a cyber attacker, the performance of the AI model will deteriorate, leading to undesirable consequences.
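One simple mitigation for illicit parameter modification is to verify a cryptographic checksum of the model artifact before loading it, comparing against a digest recorded at training time. A sketch using only the standard library; the filename and digest in the usage comment are hypothetical.

```python
import hashlib

def file_sha256(path):
    """Hash a model artifact in chunks so large weight files fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_model_safely(path, expected_digest):
    """Refuse to load weights whose hash does not match the recorded one."""
    actual = file_sha256(path)
    if actual != expected_digest:
        raise ValueError(f"model weights at {path} failed integrity check")
    return actual  # a real loader would deserialize the weights here

# Usage (hypothetical path and digest, recorded at training time):
# load_model_safely("claims_model.bin", "9f86d081884c7d65...")
```

This catches tampering at rest; protecting parameters against exfiltration additionally requires access controls and encryption on the storage itself.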

Technological risk—transparency

The black-box characteristic of AI systems, especially generative AI, renders the decision process of AI algorithms hard to understand. Crucially, the insurance sector is a financially regulated industry where the transparency, explainability and auditability of algorithms is of key importance to the regulator.

Usage risk—inaccuracy

The performance of an AI system heavily depends on the data from which it learns. If an AI system is trained on inaccurate, biased, or plagiarized data, it will provide undesirable results even if it is technically well-designed.

Usage risk—abuse

Though an AI system may be operating correctly in its analysis, decision-making, coordination and other activities, it still carries the risk of abuse. The operator’s purpose, method or scope of use could deviate from what was intended, causing adverse effects. One example of this is facial recognition being used for the illegal tracking of people’s movements.

Usage risk—over-reliance

Over-reliance on AI occurs when users start accepting incorrect AI recommendations—making errors of commission. Users have difficulty determining appropriate levels of trust because they lack awareness of what the AI can do, how well it can perform, or how it works. A corollary to this risk is the weakened skill development of the AI user. For instance, a claims adjuster’s ability to handle new situations, or to consider multiple perspectives, may deteriorate or become restricted to only those cases to which the AI also has access.

Mitigating the AI risks


The risks posed by AI adoption highlight the need to develop a governance approach to mitigate the technical and usage risks that come with adopting AI.

Human-centric governance

To mitigate the usage risk, a three-pronged approach is proposed:

1. Start with a training program to create mandatory awareness for staff involved in developing, selecting, or using AI tools to ensure alignment with expectations.

2. Then conduct a vendor assessment scheme to assess robustness of vendor controls and ensure appropriate transparency codified in contracts.

3. Finally, establish policy enforcement measures to set the norms, roles and accountabilities, approval processes, and maintenance guidelines across the AI development lifecycle.

Technology-centric governance

To mitigate the technological risk, the IT governance should be expanded to account for the following:

1. An expanded data and system taxonomy. This is to ensure the AI model captures data inputs and usage patterns, required validations and testing cycles, and expected outputs. You should host the model on internal servers.

2. A risk register, to quantify the magnitude of impact, level of vulnerability, and extent of monitoring protocols.


3. An enlarged analytics and testing strategy, to execute testing on a regular basis and monitor risk issues related to AI system inputs, outputs, and model components.
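The risk register in step 2 can start as something very simple: a structured table that scores each AI system on magnitude of impact, level of vulnerability, and extent of monitoring. A minimal sketch with hypothetical systems and illustrative 1-to-5 scales.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    system: str
    impact: int         # magnitude of impact, 1 (low) to 5 (severe)
    vulnerability: int  # level of vulnerability, 1 to 5
    monitoring: int     # extent of monitoring protocols, 1 (none) to 5 (continuous)

    def score(self):
        # Higher impact/vulnerability raise risk; stronger monitoring lowers it.
        return self.impact * self.vulnerability / self.monitoring

register = [
    RiskEntry("claims triage model", impact=4, vulnerability=3, monitoring=4),
    RiskEntry("fraud detection model", impact=5, vulnerability=4, monitoring=2),
]
for entry in sorted(register, key=lambda e: e.score(), reverse=True):
    print(f"{entry.system}: risk score {entry.score():.1f}")
```

The scoring formula here is only illustrative; the value of the register lies in forcing every AI system to be enumerated and re-scored on a regular cadence.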

AI in insurance—Exacting and inevitable


AI’s promise and potential in insurance lies in its ability to derive novel insights from ever larger and more complex actuarial and claims datasets. These datasets, combined with behavioral and ecological data, create the potential for AI systems to draw erroneous inferences when querying databases, with real-world insurance consequences.

Efficient and accurate AI requires fastidious data science. It requires careful curation of knowledge representations in databases, decomposition of data matrices to reduce dimensionality, and pre-processing of datasets to mitigate the confounding effects of missing, redundant and outlier data. Insurance AI users must be aware that input data quality limitations have insurance implications, potentially reducing the accuracy of actuarial analytic models.
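The pre-processing steps above can be sketched as: drop records with missing fields, then flag extreme values before any actuarial modeling. This sketch uses a median-absolute-deviation (MAD) outlier test, which is robust on small samples; the claim records and threshold are illustrative.

```python
from statistics import median

def preprocess(claims, threshold=3.5):
    """Remove incomplete records, then filter outliers by modified z-score (MAD)."""
    complete = [c for c in claims if c.get("amount") is not None]
    amounts = [c["amount"] for c in complete]
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)

    def modified_z(a):
        return 0.6745 * abs(a - med) / mad

    return [c for c in complete if modified_z(c["amount"]) <= threshold]

claims = [{"id": 1, "amount": 1200.0},
          {"id": 2, "amount": None},        # missing value: dropped
          {"id": 3, "amount": 1350.0},
          {"id": 4, "amount": 980.0},
          {"id": 5, "amount": 250000.0}]    # extreme value: flagged out
clean = preprocess(claims)
print([c["id"] for c in clean])  # [1, 3, 4]
```

Note that an "outlier" removed here may in fact be a legitimate catastrophic claim, which is exactly why insurers' domain expertise must inform what the pipeline filters rather than leaving it to default statistical rules.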

As AI technologies continue to mature and use cases expand, insurers should not shy away from the technology. But insurers should contribute their insurance domain expertise to the development of AI technologies. Their ability to inform input data provenance and ensure data quality will contribute towards a safe and controlled application of AI to the insurance industry.

As you embark on your journey to AI in insurance, explore and create insurance cases. Above all, put in a robust AI governance program.

Source: ibm.com