At the end of 2023, a survey conducted by the IBM Institute for Business Value (IBV) found that government leaders often overestimate the public's trust in them. It also found that, while the public remains wary of new technologies like artificial intelligence (AI), most people favor government adoption of generative AI.
The IBV surveyed a diverse group of more than 13,000 adults across nine countries, including the US, Canada, the UK, Australia and Japan. All respondents had at least a basic understanding of AI and generative AI.
The survey was designed to capture individual perspectives on generative AI and its use by companies and governments, along with respondents' expectations and intentions for using the technology at work and in their personal lives. Respondents answered questions about their levels of trust in government and their views on governments adopting and leveraging generative AI to deliver public services.
These findings reveal the complex nature of public trust in institutions and provide key insights for government decision-makers as they adopt generative AI on a global scale.
An overestimation of public trust: Discrepancies in perception
Trust is one of the main pillars of public institutions. According to Cristina Caballe Fuguet, Global Government Leader at IBM Consulting, “Trust is at the core of the government’s ability to perform their duties effectively. Citizens’ trust in governments, from local representatives to the highest posts in the national government, depends on multiple elements, including the delivery of public services.”
Trust is essential as governments take the lead on critical issues like climate change, public health and the safe and ethical integration of emerging technologies into societies. The digital age demands greater integrity, openness and security as the pillars on which that trust is built.
According to another recent study by the IBV, the IBM Institute for the Business of Government and the National Academy of Public Administration (NAPA), most government leaders understand that building trust requires focus and a commitment to collaboration, transparency and competence in execution. However, the most recent IBV research indicates that trust in governments among constituents is in decline.
Respondents say their trust in federal and central governments has declined the most since the start of the pandemic: 39% now rate their trust in their country's government organizations as very low or extremely low, compared with 29% before the pandemic.
This contrasts with the perceptions of government leaders surveyed in the same study, who report confidence that they have established and grown trust among constituents since the COVID-19 pandemic. This discrepancy indicates that government leaders must better understand their constituents and reconcile how public sector institutions are actually performing with how constituents perceive them.
The study also found that building trust in AI-powered tools and citizen services will be a challenge for governments. Nearly half of respondents say they place more trust in traditional human-assisted services, while only about 1 in 5 trust AI-powered services more.
Open and transparent AI implementation is the key to trust
This year, more than 60 countries and the EU (together representing almost half of the world's population) will head to the polls to elect their representatives. Government leaders face myriad challenges, including ensuring that technologies work for, and not against, democratic principles, institutions and societies.
According to David Zaharchuck, Research Director, Thought Leadership for the IBV, “Ensuring the safe and ethical integration of AI into our societies and the global economy will be one of the greatest challenges and opportunities for governments over the next quarter century.”
Most surveyed individuals have concerns about the potential negative impacts of generative AI. This suggests that much of the public is still wrapping its mind around this technology and how organizations can design and deploy it in a trusted and responsible way, adhering to strict security and regulatory requirements.
The IBV study revealed that people remain concerned about the adoption of this emerging technology and its potential impact on issues like decision-making, privacy, data security and job security.
Despite their general lack of trust in the government and in emerging technologies, most surveyed individuals agree with government use of generative AI for customer service and believe the rate of adoption for generative AI by governments is appropriate. Less than 30% of those surveyed believe the pace of adoption in the public and private sectors is too fast. Most believe it is just right, and some even think it is too slow.
When it comes to specific use cases of generative AI, survey respondents have mixed views about using generative AI for various citizen services; however, a majority agree with governments using generative AI for customer service, tax and legal advisory services, and for educational purposes.
These findings show that citizens see the value in governments leveraging AI and generative AI. However, trust remains an issue. If citizens don't trust governments now, they certainly won't if governments make mistakes as they adopt AI. Implementing generative AI in an open and transparent way enables governments to build trust and capability at the same time.
According to Casey Wreth, Global Government Industry Leader at IBM Technology, “The future of generative AI in the public sector is promising, but the technology brings new complexities and risks that must be proactively addressed. Government leaders need to implement AI governance to manage risks, support their compliance programs and most importantly gain public trust on its wider use.”
Integrated AI governance helps ensure trustworthy AI
“As the adoption of generative AI continues to increase this year, it’s vital that citizens have access to transparent and explainable AI workflows that bring light to the black box of what’s generated using AI with tools like watsonx.governance. In this way, governments can be stewards of the responsible implementation of this groundbreaking technology,” says Wreth.
IBM watsonx™, an integrated AI, data and governance platform, embodies five fundamental pillars to help ensure trustworthy AI: fairness, privacy, explainability, transparency and robustness.
This platform offers a seamless, efficient and responsible approach to AI development across various environments. More specifically, the recently launched IBM watsonx.governance helps public sector teams automate and address these areas, enabling them to direct, manage and monitor their organization's AI activities.
In essence, this tool opens the black box, showing where and how an AI model gets the information behind its outputs, much as a nutrition label does for food, and thereby facilitates government transparency. It also establishes clear processes so organizations can proactively detect and mitigate risks while supporting compliance with internal AI policies and industry standards.
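To make the nutrition-label idea concrete, the sketch below shows one way a public agency might structure such a record. This is a minimal, hypothetical Python example: the ModelFactsheet class and its fields are illustrative assumptions for this article, not the watsonx.governance API, which delivers this kind of capability as a managed product.

from dataclasses import dataclass, field

@dataclass
class ModelFactsheet:
    """A 'nutrition label' for an AI model: a plain record of where the
    model's knowledge comes from and how it was evaluated."""
    model_name: str
    version: str
    training_data_sources: list[str]  # datasets used to train the base model
    fine_tuning_datasets: list[str] = field(default_factory=list)
    intended_use: str = ""
    evaluation_metrics: dict[str, float] = field(default_factory=dict)
    known_risks: list[str] = field(default_factory=list)

    def render(self) -> str:
        """Render the factsheet as a human-readable label for publication."""
        lines = [
            f"Model: {self.model_name} (v{self.version})",
            f"Intended use: {self.intended_use}",
            "Trained on: " + ", ".join(self.training_data_sources),
            "Fine-tuned on: " + (", ".join(self.fine_tuning_datasets) or "n/a"),
            "Evaluation: " + ", ".join(
                f"{name}={score:.2f}" for name, score in self.evaluation_metrics.items()
            ),
            "Known risks: " + (", ".join(self.known_risks) or "none documented"),
        ]
        return "\n".join(lines)

# Example: a factsheet an agency might publish alongside a citizen-service chatbot.
# All values here are invented for illustration.
factsheet = ModelFactsheet(
    model_name="citizen-service-assistant",
    version="1.0",
    training_data_sources=["public service FAQs", "published agency policies"],
    fine_tuning_datasets=["anonymized help-desk transcripts"],
    intended_use="Answering routine questions about government services",
    evaluation_metrics={"answer_accuracy": 0.92, "toxicity_rate": 0.01},
    known_risks=["may cite outdated policy", "limited coverage of edge cases"],
)
print(factsheet.render())

Publishing a record like this alongside each deployed model gives constituents a plain statement of what the model was trained on, how it was evaluated and what its known limitations are, which is precisely the kind of transparency the survey suggests trust depends on.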
As the public sector continues to embrace AI and automation to solve problems and improve efficiency, it is crucial to maintain trust and transparency in any AI solution. Governments must manage the full AI lifecycle effectively, and leaders should be able to easily explain what data was used to train and fine-tune models, as well as how the models reached their outcomes. Proactively adopting responsible AI practices is an opportunity for all of us to improve, and for governments to lead with transparency as they harness AI for good.
Source: ibm.com