Monday, 28 October 2024

CIOs must prepare their organizations today for quantum-safe cryptography

Quantum computers are emerging from the pure research phase and becoming useful tools. They are used across industries and organizations to explore the frontiers of challenges in healthcare and life sciences, high energy physics, materials development, optimization and sustainability. However, as quantum computers scale, they will also be able to solve certain hard mathematical problems on which today’s public key cryptography relies. A future cryptographically relevant quantum computer (CRQC) might break globally used asymmetric cryptography algorithms that currently help ensure the confidentiality and integrity of data and the authenticity of systems access.

The risks posed by a CRQC are far-reaching: possible data breaches, disruption of digital infrastructure and even wide-scale global manipulation. These future quantum computers will be among the biggest threats to the digital economy and a significant cyber risk to businesses.

There is already an active risk today: cybercriminals are collecting encrypted data now with the goal of decrypting it later, once a CRQC is at their disposal, a threat known as “harvest now, decrypt later.” Data harvested today can then be retroactively decrypted, giving attackers unauthorized access to highly sensitive information.

Post-quantum cryptography to the rescue

Fortunately, post-quantum cryptography (PQC) algorithms, capable of protecting today’s systems and data, have been standardized. The National Institute of Standards and Technology (NIST) recently released the first set of three standards:

  • ML-KEM: a key encapsulation mechanism selected for general encryption, such as for accessing secured websites
  • ML-DSA: a lattice-based algorithm chosen for general-purpose digital signature protocols
  • SLH-DSA: a stateless hash-based digital signature scheme

Two of the standards (ML-KEM and ML-DSA) were developed by IBM® with external collaborators, and the third (SLH-DSA) was co-developed by a scientist who has since joined IBM.

These algorithms will be adopted by governments and industries around the world as part of security protocols such as Transport Layer Security (TLS) and many others.
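To make the mechanics concrete, here is a minimal sketch of ML-KEM key establishment in Python. It assumes the open-source liboqs-python binding (the oqs module) is installed and that your build exposes the algorithm name used below; treat the module, class and algorithm names as assumptions to verify against your own tooling.

    # Minimal ML-KEM key-establishment sketch (assumes the liboqs-python "oqs" binding;
    # verify the algorithm identifier, e.g. "ML-KEM-768", against your installed version).
    import oqs

    ALG = "ML-KEM-768"  # assumed algorithm identifier

    with oqs.KeyEncapsulation(ALG) as receiver:
        public_key = receiver.generate_keypair()  # receiver publishes this key

        with oqs.KeyEncapsulation(ALG) as sender:
            # Sender encapsulates a fresh shared secret against the receiver's public key.
            ciphertext, shared_secret_sender = sender.encap_secret(public_key)

        # Receiver decapsulates the ciphertext to recover the same shared secret.
        shared_secret_receiver = receiver.decap_secret(ciphertext)

    assert shared_secret_sender == shared_secret_receiver  # both sides now share a key

In a protocol such as TLS 1.3, a secret established this way feeds the key schedule in place of, or alongside, a classical key exchange.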

The good news is that these algorithms are at our disposal to protect against the quantum risk. The bad news is that enterprises must now migrate their cryptographic estate to these new PQC standards.

Previous cryptography algorithm migration programs took years to complete. Ask yourself as an organization: how long did your SHA-1 to SHA-2 migration program take? What about your public key infrastructure (PKI) upgrade program, in which you increased the PKI trust chain key size from 1024-bit to 2048-bit, 3072-bit or 4096-bit keys? How long did all that take to roll out across your complex enterprise environment? Several years?

The impact of quantum computing and of implementing the PQC standards is vast, covering a comprehensive portion of your organization’s estate. The quantum risk touches far more systems than those earlier migrations did, including security tools and services, applications and network infrastructure. Your organization needs to start transitioning to the PQC standards now to safeguard its assets and data.

Start adopting quantum-safe cryptography today

To protect your organization against “harvest now, decrypt later” risks, we advise you to run a quantum-safe transformation program. Start adopting tools and use services that allow you to roll out the recently announced PQC encryption standards.

IBM has developed a comprehensive quantum-safe program methodology, currently running with dozens of clients in key industries and dozens of countries, including national governments.

We advise clients to adopt a program with the following key phases:

  • Phase 1: Prepare your cyber teams by delivering quantum risk awareness and identifying your priorities across the organization.
  • Phase 2: Prepare and transform your organization for migration to PQC.
  • Phase 3: Run your organization’s migration to PQC.

Phase 1: Prepare your teams

In phase 1 of the program journey, focus on key areas, such as creating an awareness campaign across the organization to educate stakeholders and security subject matter experts (SMEs) on the quantum risk. Establish quantum-safe “ambassadors” or “champions” who stay on top of the quantum risk and the evolution of quantum-safe technology, act as central contacts for the program and help shape the enterprise strategy.

Next, conduct quantum risk assessments of your organization’s cryptographically relevant business assets, that is, any asset that uses or relies on cryptography. Your risk and impact assessment should evaluate the business relevance of each asset, the complexity of its environment and the difficulty of migration, among other factors. Identify vulnerabilities within the business assets, including any urgent actions, and produce a report highlighting the findings to key stakeholders, helping them understand the organization’s quantum risk posture. This can also serve as a baseline for developing your enterprise’s cryptography inventory.
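As an illustration only, the following sketch shows one hypothetical way to score cryptographically relevant assets during such an assessment. The fields and weights are invented for the example and are not an IBM methodology; long-lived data is weighted more heavily because of the “harvest now, decrypt later” threat.

    # Hypothetical asset risk-scoring sketch; field names and weights are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class CryptoAsset:
        name: str
        business_relevance: int      # 1 (low) .. 5 (critical)
        environment_complexity: int  # 1 (simple) .. 5 (highly complex)
        migration_difficulty: int    # 1 (easy) .. 5 (very hard)
        data_lifetime_years: int     # how long the protected data must stay confidential

    def quantum_risk_score(asset: CryptoAsset) -> float:
        """Weighted score; long-lived data raises exposure to harvest-now-decrypt-later."""
        harvest_factor = 2.0 if asset.data_lifetime_years >= 10 else 1.0
        return harvest_factor * (
            0.4 * asset.business_relevance
            + 0.3 * asset.environment_complexity
            + 0.3 * asset.migration_difficulty
        )

    assets = [
        CryptoAsset("customer-database-tls", 5, 3, 4, 25),
        CryptoAsset("internal-wiki", 2, 1, 1, 1),
    ]
    for asset in sorted(assets, key=quantum_risk_score, reverse=True):
        print(f"{asset.name}: {quantum_risk_score(asset):.1f}")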

Phase 2: Prepare your organization

In phase 2, guide your stakeholders on how to address the identified priority areas and potential cryptographic weaknesses and quantum risks. Then, detail remediation actions, such as highlighting systems that might not be able to support PQC algorithms. Finally, express the objectives of the migration program.

In this stage, IBM helps clients outline a quantum-safe migration roadmap that details the quantum-safe initiatives required for your organization to reach its objectives.

As we advise our clients: consider critical initiatives in your roadmap, such as developing a governance framework for cryptography and prioritizing systems and data for PQC migration. Update your secure software development practices and guidelines to use PQC by design and to produce Cryptography Bills of Materials (CBOMs). Work with your suppliers to understand third-party dependencies and cryptography artifacts. Update your procurement processes to favor solutions and services that already support PQC, preventing the creation of new cryptographic debt or legacy.

One of the key required capabilities is “cryptographic observability”: a cryptographic inventory that allows stakeholders to monitor the progress of PQC adoption throughout your quantum-safe journey. Such an inventory should be supported by automated data gathering, data analysis and risk and compliance posture management.
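As a rough illustration of what such an inventory record might capture, the sketch below defines a simplified, hypothetical entry. A real deployment would more likely adopt an established format, such as the CycloneDX approach to cryptographic bills of materials, rather than this ad hoc structure.

    # Simplified, hypothetical cryptographic-inventory entry; not a standard CBOM schema.
    import json

    inventory_entry = {
        "asset": "payments-api-gateway",
        "location": "prod-eu-cluster/payments-namespace",
        "algorithms": [
            {"name": "RSA-2048", "use": "TLS server certificate", "quantum_safe": False},
            {"name": "AES-256-GCM", "use": "session encryption", "quantum_safe": True},
        ],
        "certificates": [{"subject": "api.example.com", "expires": "2026-03-01"}],
        "pqc_migration_status": "planned",
        "last_scanned": "2024-10-20",
    }

    # Observability means stakeholders can query progress, e.g. what is not yet quantum safe.
    remaining = [a["name"] for a in inventory_entry["algorithms"] if not a["quantum_safe"]]
    print(json.dumps({"asset": inventory_entry["asset"], "not_quantum_safe": remaining}, indent=2))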

Phase 3: Run your migration

In phase 3, your organization runs the quantum-safe migration program by implementing initiatives based on system priority, risk, cost, strategic objectives and delivery capacity. Develop a quantum-safe strategy enforced through your organizational information security standards and policies.

Run the technology migration by using standardized, tested and proven reference architectures and migration patterns, journeys and blueprints.

Enable cryptographic agility within your development and migration solutions, and implement cryptographic decoupling by abstracting local cryptography processing into centralized, governed and easily adaptable platform services.
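One way to picture cryptographic decoupling is a thin provider interface that applications call instead of hardcoding an algorithm. The sketch below is a conceptual, hypothetical example with stubbed backends, not a production design; the point is that the active algorithm is selected by central policy rather than inside each application.

    # Conceptual crypto-agility sketch: applications ask a facade for a capability, so the
    # algorithm behind "signing" can be swapped (e.g. classical -> ML-DSA) by configuration.
    from typing import Dict, Protocol

    class Signer(Protocol):
        def sign(self, message: bytes) -> bytes: ...

    class ClassicalSigner:  # stand-in for today's RSA/ECDSA backend
        def sign(self, message: bytes) -> bytes:
            return b"classical-signature-over-" + message

    class PQCSigner:  # stand-in for an ML-DSA backend
        def sign(self, message: bytes) -> bytes:
            return b"pqc-signature-over-" + message

    REGISTRY: Dict[str, Signer] = {"classical": ClassicalSigner(), "pqc": PQCSigner()}
    ACTIVE_POLICY = "pqc"  # flipped centrally by the governed platform, not per application

    def sign(message: bytes) -> bytes:
        """Applications call this facade; the platform policy chooses the algorithm."""
        return REGISTRY[ACTIVE_POLICY].sign(message)

    print(sign(b"release-manifest"))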

Include in your program a feedback loop with lessons learned. Allow for the innovation and rapid testing of new approaches and solutions to support the migration program in the years ahead.

Challenges to expect during your PQC transition

Many elements are challenging to migrate. Fundamental components of internet infrastructure, such as wide area networks (WANs), local area networks (LANs), VPN concentrators and site-to-site links, will be more complex to migrate and therefore require more attention than elements with limited use within the organization. Core cryptography services such as PKI, key management systems, secure payment systems and cryptography applications or backends such as hardware security modules (HSMs), link encryptors and mainframes are all complex to migrate. You need to consider dependencies across applications and hardware, including technology interoperability issues.

You should also performance test the PQC standards against your in-house systems and data workflows to help ensure compatibility, confirm acceptable performance and identify any concerns. For example, PQC algorithms often have larger key, ciphertext or signature sizes than currently used algorithms, which must be accounted for in integration and performance testing. Some organization-critical technologies still rely on legacy cryptography and might find it difficult, or even impossible, to migrate to PQC standards; application refactoring and redesign might be required.
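To give a feel for those size differences, the sketch below compares approximate public key and signature or ciphertext sizes. The byte counts are approximate values recalled from the published parameter sets; check FIPS 203 and FIPS 204, and your TLS stack's encodings, for the exact figures at your chosen security levels.

    # Approximate artifact sizes in bytes (verify against FIPS 203/204 and your TLS stack).
    APPROX_SIZES = {
        "ECDSA P-256": {"public_key": 65, "signature_or_ciphertext": 72},
        "RSA-2048": {"public_key": 270, "signature_or_ciphertext": 256},
        "ML-DSA-65": {"public_key": 1952, "signature_or_ciphertext": 3309},
        "ML-KEM-768": {"public_key": 1184, "signature_or_ciphertext": 1088},
    }

    baseline = sum(APPROX_SIZES["ECDSA P-256"].values())
    for name, sizes in APPROX_SIZES.items():
        total = sum(sizes.values())
        print(f"{name:12s} ~{total} bytes on the wire (+{total - baseline} vs ECDSA P-256)")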

Other challenges include a lack of skills or documentation, which creates knowledge gaps within your enterprise. Cryptographic details hardcoded in systems, configuration files and scripts make the migration even more complex.

Make sure that your encryption keys and digital certificates are accurately tracked and managed. Poor management will further complicate the migration.

Not all use cases will be tested by international PQC working groups. There will be many combinations and configurations of technologies unique to your organization, and you need to thoroughly test your systems from an end-to-end workflow perspective.

Don’t wait for regulations to catch up

Now that NIST has released a first set of PQC standards, we need to anticipate that regulation outside of the US will follow quickly. Examples in the context of the financial industry are:

  • In the EU, the Digital Operational Resilience Act (DORA) explicitly mentions quantum risks in a regulatory technical standard in the context of ICT risk management.
  • The Monetary Authority of Singapore (MAS) has called out a need that “senior management and relevant third-party vendors understand the potential threats of quantum technology.” It also mentions the need for “identifying and maintaining an inventory of cryptographic solutions.”
  • The Payment Card Industry Data Security Standard (PCI DSS) v4.0.1 now contains a control point that requires “an up-to-date inventory of all cryptographic cipher suites and protocols in use, including purpose and where used” (see the sketch after this list).
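As a small illustrative starting point for such an inventory (not a substitute for a full discovery tool), the following sketch records the TLS protocol version and cipher suite actually negotiated with a set of endpoints, using Python's standard ssl module; the hostnames are placeholders.

    # Minimal TLS inventory sketch using Python's standard library; hostnames are placeholders.
    import socket
    import ssl

    HOSTS = ["intranet.example.com", "api.example.com"]  # placeholder endpoints

    context = ssl.create_default_context()
    for host in HOSTS:
        try:
            with socket.create_connection((host, 443), timeout=5) as raw:
                with context.wrap_socket(raw, server_hostname=host) as tls:
                    name, version, bits = tls.cipher()  # negotiated suite for this connection
                    print(f"{host}: {version} {name} ({bits}-bit secret)")
        except OSError as err:
            print(f"{host}: unreachable ({err})")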

Therefore, we advise you to focus on developing your cryptography governance framework, which includes developing a quantum-safe strategy for your organization, aligned to your strategic business goals, vision and target timescales. A center of excellence should support and advise as part of the transformation program. The governance framework should focus on core pillars such as regulatory oversight, cryptographic assurance and risk management, delivery capacity building and PQC education. It should support the adoption of best practices within application development and supply security architecture patterns and technical design review boards.

The transformation program is going to be long and complex. It requires extensive cross-departmental engagement and a wide range of skills. Manage and monitor team morale, and consider your organization’s working culture and change management practices to help ensure program cohesion across the many years of delivery.

Also consider partnership development, as most organizations depend on many vendors specific to their industry and ecosystem. Collaborate with others within your industry, through working groups and user groups, to learn and share ideas for addressing the quantum risk and the PQC migration together.

From an operational perspective, help ensure you have a traceability catalog of key enterprise and business services mapped to regulations and laws, and start planning a transition timeline around each.

How IBM helps organizations with their quantum-safe journey

IBM helps implement quantum-safe migrations for clients in financial services, insurance, telecommunications, retail, energy and other industries. We help clients understand their quantum risks, improve their cryptographic maturity and agility, define their quantum-safe targets and implement transformation initiatives, supported by a broad set of technology assets.

At the same time, we are helping to start industry consortia to drive adoption of quantum-safe cryptography.

Now that the first set of PQC standards have been released, organizations are expected to have a proper quantum-safe migration program in place. A solid program should include thorough risk and impact assessments, quantum-safe objectives and the right level of stakeholder attention. Prepare now for the adoption of quantum-safe standards and use technology to accelerate your journey.

Source: ibm.com

Wednesday, 16 October 2024

How well do you know your hypervisor and firmware?

IBM Cloud Virtual Private Cloud (VPC) is designed for secured cloud computing, and several features of our platform planning, development and operations help ensure that design. However, because security in the cloud is typically a shared responsibility between the cloud service provider and the customer, it’s essential for you to fully understand the layers of security that your workloads run on with us. That’s why we detail here a few key security components of IBM Cloud VPC that aim to provide secured computing for our virtual server customers.

Let’s start with the hypervisor


The hypervisor, a critical component of any virtual server infrastructure, is designed to provide a secure environment on which customer workloads and a cloud’s native services can run. The entirety of its stack—from hardware and firmware to system software and configuration—must be protected from external tampering. Firmware and hypervisor software are the lowest layers of modifiable code and are prime targets of supply chain attacks and other privileged threats. Kernel-mode rootkits (also known as bootkits) are one such privileged threat and are difficult for endpoint protection systems, such as antivirus and endpoint detection and response (EDR) software, to uncover. They run before any protection system and can obscure their presence, effectively hiding themselves. In short, securing the supply chain itself is crucial.

IBM Cloud VPC implements a range of controls to help address the quality, integrity and supply chain of the hardware, firmware and software we deploy, including qualification and testing before deployment.

IBM Cloud VPC’s 3rd generation solutions leverage pervasive code signing to protect the integrity of the platform. Through this process, firmware is digitally signed at the point of origin and signatures are authenticated before installation. At system start-up, a platform security module then verifies the integrity of the system firmware image before initialization of the system processor. The firmware, in turn, authenticates the hypervisor, including device software, thus establishing the system’s root of trust in the platform security module hardware.
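To illustrate the kind of check this implies (this is not IBM's actual implementation), the sketch below verifies a firmware image against a detached signature using the widely used Python cryptography package. The file names, and the choice of RSA with SHA-256, are assumptions made for the example.

    # Illustrative firmware signature check; file names and RSA/SHA-256 are assumed for the example.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    firmware = open("firmware.img", "rb").read()          # image to be installed
    signature = open("firmware.img.sig", "rb").read()     # detached signature from the supplier
    public_key = serialization.load_pem_public_key(open("supplier_pub.pem", "rb").read())

    try:
        public_key.verify(signature, firmware, padding.PKCS1v15(), hashes.SHA256())
        print("Firmware signature valid: eligible for installation.")
    except InvalidSignature:
        print("Firmware signature INVALID: reject the image.")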

Device configuration and verification


IBM Cloud Virtual Servers for VPC provide a wide variety of profile options (vCPU + RAM + bandwidth provisioning bundles) to help meet customers’ different workload requirements. Each profile type is managed through a set of product specifications. These product specifications outline the physical hardware’s composition, the firmware’s composition and the configuration for the server. The firmware in scope includes, but is not limited to, the host firmware and the firmware of component devices. These product profiles are developed and overseen by a hardware leadership team and are versioned for use across our fleet of servers.

As new hardware and software assets are brought into our IBM Cloud VPC environment, they’re also mapped to a product specification, which outlines their intended configuration. The intake verification process then validates that the server’s actual physical composition matches that of the specification before its entry into the fleet. If the physical composition doesn’t match the specification, the server is cordoned off for inspection and remediation.
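Conceptually, the intake check compares an observed inventory against the governed specification. The sketch below is a simplified, hypothetical rendering of that comparison; the field names and values are invented.

    # Hypothetical intake-verification sketch: compare observed server composition to its spec.
    product_spec = {          # governed specification (invented fields)
        "cpu_model": "vendor-cpu-gen3",
        "dimm_count": 16,
        "nic_firmware": "2.7.1",
    }

    observed = {              # what discovery automation actually found on the server
        "cpu_model": "vendor-cpu-gen3",
        "dimm_count": 12,     # mismatch: short on memory modules
        "nic_firmware": "2.7.1",
    }

    mismatches = {key: (expected, observed.get(key))
                  for key, expected in product_spec.items()
                  if observed.get(key) != expected}

    if mismatches:
        print("Cordon server for inspection and remediation:", mismatches)
    else:
        print("Server matches specification; admit to the fleet.")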

The intake verification process also verifies the firmware and hardware. 

There are two dimensions of this verification:

1. Firmware is signed by an approved supplier before it can be installed on an IBM Cloud Virtual Server for VPC system. This helps ensure only approved firmware is applied to the servers. IBM Cloud works with several suppliers to help ensure firmware is signed and components are configured to reject unauthorized firmware.

2. Only firmware that is approved through the IBM Cloud governed specification qualifies for installation. The governed specification is updated cyclically to add newly qualified firmware versions and remove obsolete versions. This type of firmware verification is also performed as part of the server intake process and before any firmware update.

Server configuration is also managed through the governed product specifications. Certain solutions might need custom unified extensible firmware interface (UEFI) configurations, certain features enabled or restrictions put in place. The product specification manages the configurations, which are applied through automation on the servers. Servers are scanned by IBM Cloud’s monitoring and compliance framework at run time.

Specification versioning and promotion


As mentioned earlier, the core components of the IBM Cloud VPC virtual server management process are the product specifications. Product specifications are definition files that contain the configurations for all maintained server profiles and are reviewed by the corresponding IBM Cloud product leader and a governance-focused leadership team. Together, they control and manage the server’s approved components, configuration and firmware levels. The governance-focused leadership team strives for commonality where needed, whereas the product leaders focus on providing value and market differentiation.

It’s important to remember that specifications don’t stand still. These definition files are living documents that evolve as new firmware levels are released or the server hardware grows to support extra vendor devices. Because of this, the IBM Cloud VPC specification process is versioned to capture changes throughout the server’s lifecycle. Each server deployment captures the version of the specification that it was deployed with and provides identification of the intended versus actual state as well.

Promotion of specifications is also necessary. When a specification is updated, it doesn’t necessarily mean it’s immediately effective across the production environments. Instead, it moves through the appropriate development, integration and preproduction (staging) channels before moving to production. Depending on the types of devices or fixes being addressed, there might even be a varying schedule for how quickly the rollout occurs.

Figure 1: IBM Cloud VPC specification promotion process

Firmware on IBM Cloud VPC is typically updated in waves. Where possible, it might be updated live, although some updates require downtime. Generally, this is invisible to customers thanks to live migration. However, as firmware updates roll through production, moving customer workloads around can take time. So, when a specification update is promoted through the pipeline, it kicks off the update across the various runtime systems. The velocity of the update is generally dictated by the severity of the change.

How IBM Cloud VPC virtual servers set up a hardware root of trust


IBM Cloud Virtual Servers for VPC include root of trust hardware known as the platform security module. Among other functions, the platform security module hardware is designed to verify the authenticity and integrity of the platform firmware image before the main processor can boot. It verifies the image authenticity and signature using an approved certificate. The platform security module also stores copies of the platform firmware image. If the platform security module finds that the firmware image installed on the host was not signed with the approved certificate, the platform security module replaces it with one of its images before initializing the main processor.

Upon initialization of the main processor and loading of the system firmware, the firmware is then responsible for authenticating the hypervisor’s bootloader as part of a process known as secure boot, which aims to establish the next link in a chain of trust. The firmware verifies that the bootloader was signed using an authorized key before it was loaded. Keys are authorized when their corresponding public counterparts are enrolled in the server’s key database. Once the bootloader is cleared and loaded, it validates the kernel before the latter can run. Finally, the kernel validates all modules before they’re loaded onto the kernel. Any component that fails the validation is rejected, causing the system boot to halt.
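The chain of trust can be pictured as each stage validating the next before handing over control. The sketch below is purely conceptual: the verify() stub stands in for the real signature checks against the enrolled key databases.

    # Conceptual secure boot chain sketch: each stage must verify the next or the boot halts.
    BOOT_CHAIN = ["platform security module", "firmware", "bootloader", "kernel", "kernel modules"]
    SIGNED_COMPONENTS = {"firmware", "bootloader", "kernel", "kernel modules"}  # assumed signed

    def verify(component: str) -> bool:
        """Stub: in reality, check the component's signature against enrolled keys (db/MOK)."""
        return component in SIGNED_COMPONENTS

    def boot() -> None:
        for stage, next_stage in zip(BOOT_CHAIN, BOOT_CHAIN[1:]):
            if not verify(next_stage):
                raise SystemExit(f"{stage}: {next_stage} failed validation; halting boot.")
            print(f"{stage} verified {next_stage}; handing over control.")
        print("Audit chain established from the platform security module to kernel modules.")

    boot()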

The integration of secure boot with the platform security module aims to create a line of defense against the injection of unauthorized software through supply chain attacks or privileged operations on the server. Only approved firmware, bootloaders, kernels and kernel modules signed with IBM Cloud certificates and those of previously approved operating system suppliers can boot on IBM Cloud Virtual Servers for VPC.

The firmware configuration process described above includes verifying the firmware secure boot keys against the list of those initially approved. These consist of the keys in the authorized signature database (db), the forbidden signature database (dbx), the key exchange key (KEK) and the platform key (PK).

Secure boot also includes a provision to enroll additional kernel and kernel module signing keys into the first-stage bootloader (shim) through what is known as the machine owner key (MOK) facility. Therefore, IBM Cloud’s operating system configuration process is also designed so that only approved keys are enrolled in the MOK facility.

Once a server passes all qualifications and is approved to boot, an audit chain is established that’s rooted in the hardware of the platform security module and extends to modules loaded into the kernel.

Figure 2: IBM Cloud VPC secure boot audit chain

How do I use verified hypervisors on IBM Cloud VPC virtual servers?


Good question. Hypervisor verification is on by default for supported IBM Cloud Virtual Servers for VPC. Choose a generation 3 virtual server profile (such as bx3d, cx3d, mx3d or gx3), as shown below, to help ensure your virtual server instances run on servers with hypervisor verification. These capabilities are readily available as part of existing offerings, and customers can take advantage of them by deploying virtual servers with a generation 3 profile.

Figure 3: IBM Cloud Virtual Servers for VPC, Generation 3

IBM Cloud continues to evolve its security architecture and enhances it by introducing new features and capabilities to help support our customers.

Source: ibm.com

Friday, 11 October 2024

How a solid generative AI strategy can improve telecom network operations

Generative AI (gen AI) has transformed industries with applications such as document-based Q&A with reasoning, customer service chatbots and summarization tasks. These use cases have demonstrated the impressive capabilities of large language models (LLMs) in understanding and generating human-like responses, particularly in fields requiring nuanced language understanding and inferencing.

However, in the realm of telecom network operations, the data is different. The observability data comes from proprietary sources and encompasses a wide variety of formats, including alarms, performance metrics, probes and ticketing systems capturing incidents, defects and changes. This data, whether structured or unstructured, is deeply embedded in domain-specific language, including terms and concepts from technologies like 5G, IP-MPLS and other network protocols.

A notable challenge arises from the fact that standard foundational LLMs are not typically trained on this highly specialized and technical data. This calls for a careful strategy for integrating gen AI into the telecom operations domain, where operational efficiency and accuracy are paramount.

Successfully using gen AI for network operations requires tailoring the models to this niche context while addressing unique challenges around data specificity and system integration.

How generative AI addresses network operations challenges

The complexity and diversity of network data, along with rapidly changing technologies, present several challenges for network operations. Gen AI offers efficient solutions where traditional methods are costly or impractical.

  • Time-consuming processes: Switching between multiple systems (such as alarms, performance or traces) delays problem resolution. Generative AI centralizes data into one interface with a natural language experience, speeding up issue resolution by reducing system toggling.
  • Data fragmentation: Scattered data across platforms prevents a cohesive view of issues. Generative AI consolidates data from various sources based on its training. It can correlate and present data in a unified view, enhancing issue comprehension.
  • Complex interfaces: Engineers spend extra time adapting to various system interfaces (such as UIs, scripts and reports). Generative AI provides a natural language interface, simplifying navigation across complex systems.
  • Human error: Manual data consolidation leads to misdiagnoses due to data fragmentation challenges. AI-driven data analysis reduces errors, helping ensure accurate diagnosis and resolution.
  • Inconsistent data formats: Varying data formats make analysis difficult. Gen AI model training can provide standardized data output, improving correlation and troubleshooting.

Challenges in applying generative AI in network operations

While gen AI offers transformative potential in network operations, several challenges must be addressed to help ensure effective implementation:

  • Relevance and contextual precision: General-purpose language models perform well in nontechnical contexts, but in network-specific use cases, models need to be fine-tuned with domain-specific terminology to deliver relevant and precise results.
  • AI guardrails and hallucinations: In network operations, outputs must be grounded in technical accuracy, not just linguistic sense. Strong AI guardrails are essential to prevent incorrect or misleading results.
  • Chain-of-thought (CoT) loops: Network use cases often involve multistep reasoning across multiple data sources. Without proper control, AI agents can enter endless loops, leading to inefficiencies due to incomplete or misunderstood data.
  • Explainability and transparency: In critical network operations, engineers must understand how AI-derived decisions are made. AI systems must provide clear and transparent reasoning to build trust and help ensure effective troubleshooting, avoiding “black box” situations.
  • Continuous model enhancements: Constant feedback from technical experts is crucial for model improvement. This feedback loop should be integrated into model training to keep pace with the evolving network environment.

Implementing a workable strategy to maximize business benefits

Key design principles can help ensure the successful implementation of gen AI in network operations. These include:  

  • Multilayer agent architecture: A supervisor/worker model offers modularity, making it easier to integrate legacy network interfaces while supporting scalability (see the sketch after this list).
  • Intelligent data retrieval: Using Reflective Retrieval-Augmented Generation (RAG) with hallucination safeguards helps ensure reliable, relevant data processing.
  • Directed chain of thought: This pattern helps guide AI reasoning to deliver predictable outcomes and avoid deadlocks in decision-making.
  • Transactional-level traceability: Every AI decision should be auditable, ensuring accountability and transparency at a granular level.
  • Standardized tooling: Seamless integration with various enterprise data sources is crucial for broad network compatibility.
  • Exit prompt tuning: Continuous model improvement is enabled through prompt tuning, ensuring that the model adapts and evolves based on operational feedback.
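A minimal sketch of the supervisor/worker idea with a directed, bounded chain of thought follows. The worker names, routing keywords and step limit are invented for illustration and do not represent a specific IBM product.

    # Hypothetical supervisor/worker sketch with a bounded, directed chain of thought.
    from typing import Callable, Dict, List

    def alarms_worker(query: str) -> str:
        return "alarms: 2 critical alarms on router-7"       # stand-in for an alarm-system query

    def performance_worker(query: str) -> str:
        return "performance: packet loss 3% on link-12"      # stand-in for a KPI/metrics query

    WORKERS: Dict[str, Callable[[str], str]] = {
        "alarm": alarms_worker,
        "performance": performance_worker,
    }
    MAX_STEPS = 4  # directed chain of thought: a hard bound avoids endless reasoning loops

    def supervisor(query: str) -> List[str]:
        findings: List[str] = []
        for step, (keyword, worker) in enumerate(WORKERS.items()):
            if step >= MAX_STEPS:
                break                                         # stop even if data is incomplete
            if keyword in query.lower():
                findings.append(worker(query))                # route the sub-task, keep an audit trail
        return findings or ["no worker matched; escalate to a human engineer"]

    print(supervisor("Correlate alarm and performance data for router-7"))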

Implementing a gen AI strategy in network operations can lead to significant performance improvements, including:

  • Faster mean time to repair (MTTR): Achieve a 30-40% reduction in MTTR, resulting in enhanced network uptime.
  • Reduced average handle time (AHT): Decrease the time network operations center (NOC) technicians spend addressing field technician queries by 30-40%.
  • Lower escalation rates: Reduce the percentage of tickets escalated to L3/L4 by 20-30%.

Beyond these KPIs, gen AI can enhance the overall quality and efficiency of network operations, benefiting both staff and processes.

IBM Consulting, as part of its telecommunications solution offerings, provides a reference implementation of the above strategy, helping our clients apply gen AI-based solutions successfully in their network operations.

Source: ibm.com

Tuesday, 8 October 2024

New IBM study: How business leaders can harness the power of gen AI to drive sustainable IT transformation

As organizations strive to balance productivity, innovation and environmental responsibility, the need for sustainable IT practices is even more pressing. A new global study from the IBM Institute for Business Value reveals that emerging technologies, particularly generative AI, can play a pivotal role in advancing sustainable IT initiatives. However, successful transformation of IT systems demands a strategic and enterprise-wide approach to sustainability.

The power of generative AI in sustainable IT

Generative AI is creating new opportunities to transform IT operations and make them more sustainable. Teams can use this technology to quickly translate code into more energy-efficient languages, develop more sustainable algorithms and software and analyze code performance to optimize energy consumption. 27% of organizations surveyed are already applying generative AI in their sustainable IT initiatives, and 63% of respondents plan to follow suit by the end of 2024. By 2027, 89% are expecting to be using generative AI in their efforts to reduce the environmental impact of IT.

Despite the growing interest in using generative AI for sustainability initiatives, leaders must first consider its broader implications, particularly energy consumption.

64% say they are using generative AI and large language models, yet only one-third of those report having made significant progress in addressing its environmental impact. To bridge this gap, executives must take a thoughtful and intentional approach to generative AI, asking questions like, “What do we need to achieve?” and “What is the smallest model that we can use to get there?”

A holistic approach to sustainability

To have a lasting impact, sustainability must be woven into the very fabric of an organization, breaking free from traditional silos and incorporating it into every aspect of operations. Leading organizations are already embracing this approach, integrating sustainable practices across their entire operations, from data centers to supply chains, to networks and products. This enables operational efficiency by optimizing resource allocation and utilization, maximizing output and minimizing waste.

The results are telling: 98% of surveyed organizations that take a holistic, enterprise-wide approach to sustainable IT report seeing benefits in operational efficiency—compared to 50% that do not. The leading organizations also attribute greater reductions in energy usage and costs to their efforts. Moreover, they report impressive environmental benefits, with a twofold greater reduction in their IT carbon footprint.

Hybrid cloud and automation: key enablers of sustainable IT

Many organizations are turning to hybrid cloud and automation technologies to help reduce their environmental footprint and improve business performance. By providing visibility into data, workloads and applications across multiple clouds and systems, a hybrid cloud platform enables leaders to make data-driven decisions. This allows them to determine where to run their workloads, thereby reducing energy consumption and minimizing their environmental impact.

In fact, one quarter (25%) of surveyed organizations are already using hybrid cloud solutions to boost their sustainability and energy efficiency. Nearly half (46%) of those report a substantial positive impact on their overall IT sustainability. Automation is also playing a key role in this shift, with 83% of leading organizations harnessing its power to dynamically adjust IT environments based on demand.

Sustainable IT strategies for a better tomorrow

The future of innovation is inextricably linked to a deep commitment to sustainability. As business leaders harness the power of technology to drive impact, responsible decision-making is crucial, particularly in the face of emerging technologies such as generative AI. To better navigate this intersection of IT and sustainability, here are a few actions to consider: 

1. Actively manage the energy consumption associated with AI: Optimize the value of generative AI while minimizing its environmental footprint by actively managing energy consumption from development to deployment. For example, choose AI models that are designed for speed and energy efficiency to process information more effectively while reducing the computational power required.

2. Identify your environmental impact drivers: Understand how different elements of your IT estate influence environmental impacts and how this can change as you scale new IT efforts.

3. Embrace sustainable-by-design principles: Embed sustainability assessments into the design and planning stages of every IT project, by using a hybrid cloud platform to centralize control and gain better visibility across your entire IT estate.

Source: ibm.com

Saturday, 5 October 2024

Using AI to conserve the endangered African forest elephant

In the Congo Basin, the second-largest rainforest in the world, the African forest elephant population has been in drastic decline for decades. This decline is the result of habitat loss caused by deforestation and climate change, along with rampant poaching.

We can already observe the beneficial environmental effects of this species starting to disappear. Because the African forest elephant is a keystone species in its habitat, its dwindling presence has major implications you might not imagine. African forest elephants have been shown to increase carbon storage in their habitats. They are “ecosystem engineers,” according to the World Wide Fund for Nature, clearing out lesser vegetation and making room for stronger, more resilient flora to thrive.

While we know these changes will occur as the elephant population shrinks, actually seeing it happen presents challenges. The World Wide Fund for Nature-Germany aims to track and identify individual elephants to count them. With help from IBM, the WWF will be able to use a system of camera traps connected to software that enables automatic tracking as opposed to manual tracking.

Augmenting our vision with tech

That is where computer vision can serve as a fresh set of eyes. IBM announced earlier this year that it would team with WWF to pair camera traps with IBM Maximo® Visual Inspection (MVI) to help monitor and track individual elephants as they pass by.

“MVI’s AI-powered visual inspection and modeling capabilities allow for head- and tusk-related image recognition of individual elephants similar to the way we identify humans via fingerprints,” explained Kendra DeKeyrel, Vice President ESG and Asset Management Product Leader at IBM. 

These capabilities allow not only for counting or spotting individual elephants, but also for tracking some of their behaviors to better understand their movement patterns and impact on the ecosystem. In particular, MVI helps automate the process of identifying these elephants instead of having staff manually review the images. Additionally, the AI’s advanced visual recognition capabilities can pull the identity of an elephant from an image that is blurry or incomplete.

“Counting African forest elephants is both difficult and costly,” Dr. Thomas Breuer, WWF’s African Forest Elephant Coordinator, said. “The logistics are complex and the resulting population numbers are not precise. Being able to identify individual elephants from camera trap images with the help of AI has the potential to be a game-changer.”

Strengthening our connection to the natural world

As more is gleaned about the movement and migration of the African forest elephant, additional insight can be drawn from our growing understanding of how the species behaves and interacts with its environment. “IBM is exploring how to leverage IBM Environmental Intelligence above ground biomass estimates to better predict elephants’ future locations and migration patterns, as well as their impact on a specific forest,” DeKeyrel said.

That includes determining how much the African forest elephants can help with mitigating climate change. It’s understood that the presence of elephants helps to increase the carbon storage capacity of the forest. “African forest elephants play a crucial role in influencing the shape of the forest structure, including helping increase the diversity, density, and abundance of plant and tree species,” Oday Abbosh, IBM Global Sustainability Services Leader, explained. It’s estimated that one forest elephant can increase the net carbon capture capacity of the forest by almost 250 acres, the equivalent of removing a full year’s worth of emissions from 2,047 cars from the atmosphere.

Having a more accurate image of the elephant population allows for performance-based conservation payments, such as wildlife credits. In the future, this could help enable organizations to better assess the financial value of nature’s contributions to people (NCP) provided by African forest elephants, such as carbon sequestration services.

We know the animal kingdom is constantly shaping the planet and being affected by our own activity, even when we can’t see it. Thanks to continuing breakthroughs in technology, we’re increasingly getting a clearer picture of the world of wildlife that was previously difficult to capture. When we can see it, we can react to it, helping to protect species in need and strengthening our connection to the natural world.

“Our collaboration with WWF marks a significant step forward in this effort,” Abbosh said. “By combining our expertise in technology and sustainability with WWF’s conservation expertise, we aim to leverage the power of technology to create a more sustainable future.”

Source: ibm.com