Showing posts with label Sustainability strategy. Show all posts

Saturday, 5 October 2024

Using AI to conserve the endangered African forest elephant


In the Congo Basin, the second-largest rainforest in the world, the African forest elephant population has been in drastic decline for decades. This decline is the result of habitat loss caused by deforestation and climate change, along with rampant poaching.

As the elephants disappear, so do the environmental benefits they provide. As a keystone species in the habitat, the dwindling elephant population has implications you might not imagine. African forest elephants have been shown to increase carbon storage in their habitats. They are “ecosystem engineers,” according to the World Wide Fund for Nature, clearing out lesser vegetation and making room for stronger, more resilient flora to thrive.

While we know these changes will occur as the elephant population shrinks, actually seeing them happen presents challenges. The World Wide Fund for Nature-Germany aims to track and identify individual elephants in order to count them. With help from IBM, WWF will be able to use a system of camera traps connected to software that automates tracking that would otherwise be done manually.

Augmenting our vision with tech

That is where computer vision can serve as a fresh set of eyes. IBM announced earlier this year that it would team with WWF to pair camera traps with IBM Maximo® Visual Inspection (MVI) to help monitor and track individual elephants as they pass by the camera traps.

“MVI’s AI-powered visual inspection and modeling capabilities allow for head- and tusk-related image recognition of individual elephants similar to the way we identify humans via fingerprints,” explained Kendra DeKeyrel, Vice President ESG and Asset Management Product Leader at IBM. 

These capabilities allow for not only counting and spotting individual elephants, but also tracking some of their behaviors to better understand their movement patterns and impact on the ecosystem. In particular, MVI helps automate the process of identifying these elephants instead of having staff manually review the images. Additionally, the AI’s advanced visual recognition capabilities can pull the identity of an elephant from an image that is blurry or incomplete.
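Under the hood, individual identification typically reduces to comparing image embeddings against a gallery of known individuals. The sketch below is purely illustrative, not MVI’s actual pipeline: the gallery, the feature vectors and the 0.8 threshold are all invented for the example, but it shows the fingerprint-like matching idea.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def identify(query_emb, gallery, threshold=0.8):
    """Match a query embedding against known individuals by cosine
    similarity; below the threshold we treat it as a new elephant."""
    best_id, best_sim = None, -1.0
    for elephant_id, emb in gallery.items():
        sim = cosine(query_emb, emb)
        if sim > best_sim:
            best_id, best_sim = elephant_id, sim
    return best_id if best_sim >= threshold else None

# Toy gallery: feature vectors for two known elephants (hypothetical data)
gallery = {"elephant_A": [1.0, 0.0, 0.0], "elephant_B": [0.0, 1.0, 0.0]}
print(identify([0.9, 0.1, 0.0], gallery))  # close to elephant_A's features
```

A reading far from every gallery entry falls below the threshold and is treated as a previously unseen individual, which is how a count of distinct elephants can grow over time.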

“Counting African forest elephants is both difficult and costly,” Dr. Thomas Breuer, WWF’s African Forest Elephant Coordinator, said. “The logistics are complex and the resulting population numbers are not precise. Being able to identify individual elephants from camera trap images with the help of AI has the potential to be a game-changer.”

Strengthening our connection to the natural world

As more is learned about the movement and migration of the African forest elephant, that growing understanding can yield further insight into how the species behaves and interacts with its environment. “IBM is exploring how to leverage IBM Environmental Intelligence above ground biomass estimates to better predict elephants’ future locations and migration patterns, as well as their impact on a specific forest,” DeKeyrel said.

That includes determining how much the African forest elephants can help with mitigating climate change. It’s understood that the presence of elephants helps to increase the carbon storage capacity of the forest. “African forest elephants play a crucial role in influencing the shape of the forest structure, including helping increase the diversity, density, and abundance of plant and tree species,” Oday Abbosh, IBM Global Sustainability Services Leader, explained. It’s estimated that a single forest elephant can increase net carbon capture across almost 250 acres of forest, the equivalent of taking a full year’s worth of emissions from 2,047 cars out of the atmosphere.

Having a more accurate image of the elephant population allows for performance-based conservation payments, such as wildlife credits. In the future, this could help enable organizations to better assess the financial value of nature’s contributions to people (NCP) provided by African forest elephants, such as carbon sequestration services.

We know the animal kingdom is constantly shaping the planet, and being affected by our own activity even when we can’t see it. Due to continuing breakthroughs in technology, we’re increasingly getting a clearer picture of the world of wildlife that was previously difficult to capture. When we can see it, we can react to it, helping to protect species that need help and strengthening our connection to the natural world.

“Our collaboration with WWF marks a significant step forward in this effort,” Abbosh said. “By combining our expertise in technology and sustainability with WWF’s conservation expertise, we aim to leverage the power of technology to create a more sustainable future.”

Source: ibm.com

Tuesday, 16 July 2024

Global effort produces first-ever decline in harmful HCFC levels


As many of the world’s nations struggle to make sufficient progress on reducing carbon emissions, new research has emerged showing that global collaboration can in fact reverse some of the harmful effects of human activity. A study published in the journal Nature Climate Change documented the first significant drop in atmospheric levels of hydrochlorofluorocarbons (HCFCs), harmful gases known to deplete the planet’s ozone layer.


The study from researchers at the University of Bristol found a 1% drop in HCFC emissions between 2021 and 2023. While the drop-off might seem small, it marks the first detected decline in the compounds’ atmospheric presence. Even better, the findings suggest that HCFC usage peaked in 2021, nearly five years ahead of schedule.

A brief history on HCFCs


HCFCs are human-made compounds containing hydrogen, chlorine, fluorine and carbon, and are commonly used in refrigerants, aerosol sprays, and packaging foam. They were used as a replacement for chlorofluorocarbons (CFCs), more commonly known as Freon.

CFCs were widely believed to be harmless—they are nontoxic, nonflammable and don’t have any unstable reactions with other common chemicals. But, in the 1970s, scientists Mario Molina and F. Sherwood Rowland managed to link the depletion of the ozone layer to the use of these chemical compounds.

That discovery was foundational to the Montreal Protocol, an international treaty signed by 198 countries seeking to phase out the use of substances that harm the ozone layer, the planet’s shield against ultraviolet radiation from the Sun. The agreement set forth a number of goals that would lead to the reduction and eventual elimination of ozone-depleting substances.

The first stage of the Montreal Protocol was the elimination of CFCs, and proved to be wildly successful. A 2022 report from the United Nations found that nearly 99% of all CFCs had been phased out. The report estimates that ditching CFCs, which are also greenhouse gases that trap heat in the Earth’s atmosphere, managed to avoid an increase of 0.5 to 1 degrees Fahrenheit to the planet’s temperature by 2100.

Promising results in the fight against ozone depletion


The success of the treaty now appears to be extending to HCFCs. The Freon replacement took off as a sort of harm-reduction strategy because it provided similar functionality as CFCs while doing less damage to the ozone. But, like CFCs before them, HCFCs are greenhouse gases and contribute to planetary warming. The Montreal Protocol mandated a ban on these compounds by 2020 for developed nations, and the latest study suggests the restrictions are working.

“The results are very encouraging. They underscore the great importance of establishing and sticking to international protocols,”  Dr. Luke Western, Marie Curie Research Fellow at the University of Bristol School of Chemistry and lead author on the paper, said in a statement. “Without the Montreal Protocol, this success would not have been possible, so it’s a resounding endorsement of multilateral commitments to combat stratospheric ozone depletion, with additional benefits in tackling human-induced climate change.”

The success of the Montreal Protocol isn’t seen just in the dwindling levels of harmful chemicals in the atmosphere; it can also be seen in the slow but steady shrinking of the hole in the ozone layer. According to the UN, the ozone layer is expected to recover to 1980 levels for most of the world by 2040, a return to health for the protective part of the stratosphere, matching levels recorded before holes in the shield were first discovered.

As nations continue debating the best way to reduce carbon emissions and combat climate change, the Montreal Protocol offers a proof of concept for global cooperation. A concerted effort toward a common goal can make a difference.

Source: ibm.com

Friday, 24 May 2024

An integrated asset management data platform


Part 2 of this four-part series discusses the complex tasks energy utility companies face as they shift to holistic grid asset management to manage through the energy transition. The first post of this series addressed the challenges of the energy transition with holistic grid asset management. In this part, we discuss the integrated asset management platform and data exchange that unite business disciplines in different domains in one network.

The asset management ecosystem


The asset management network is complex. No single system can manage all the required information views to enable end-to-end optimization. The following figure demonstrates how a platform approach can integrate data flows.

An integrated asset management data platform

Asset data is the basis for the network. Enterprise asset management (EAM) systems, geographic information systems and enterprise resource planning systems share technical, geographic and financial asset data, each with their respective primary data responsibility. The EAM system is the center for maintenance planning and execution via work orders. The maintenance, repair and overhaul (MRO) system provides necessary spare parts to carry out work and maintains an optimum stock level with a balance of stock out risk and part holding costs.

The health, safety and environment (HSE) system manages work permits for safe work execution and tracks and investigates incidents. The process safety management (PSM) system controls hazardous operations through safety practices, uses bow-tie analysis to define and monitor risk barriers, and manages safety and environmental critical elements (SECE) to prevent primary containment loss. Monitoring energy efficiency and greenhouse gas or fugitive emissions can directly contribute to environmental, social and governance (ESG) reporting, helping to manage and reduce the carbon footprint.

Asset performance management (APM) strategy defines the balance between proactive and reactive maintenance tasks. Asset criticality defines whether a preventive or predictive task is justified in terms of cost and risk. The process of defining the optimum maintenance strategy is called reliability-centered maintenance. The mechanical integrity of hazardous process assets, such as vessels, reactors or pipelines, requires a deeper approach to define the optimum risk-based inspection intervals. For process safety devices, a safety instrumented system approach determines the test frequency and safety integrity level for alarm functions.

APM collects real-time process data. Asset health monitoring and predictive maintenance functions receive data via distributed control systems or supervisory control and data acquisition (SCADA) systems. Asset health monitoring defines asset health indexes to rank asset conditions based on degradation models, failures, overdue preventive work and any other relevant parameters that reflect the health of the assets. Predictive functionality builds models that flag imminent failures and calculate assets’ remaining useful life. These models often incorporate machine learning and AI algorithms to detect the onset of degradation mechanisms at an early stage.
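As a rough illustration of how an asset health index might combine these inputs, here is a minimal sketch. The weights and penalty values are invented for the example and are not taken from any particular APM product; real degradation models are far richer.

```python
def health_index(condition, failures_12mo, overdue_pm, weights=(0.6, 0.25, 0.15)):
    """Combine condition (0-100, higher is better), the recent failure count
    and overdue preventive work orders into a single 0-100 health index."""
    w_cond, w_fail, w_pm = weights
    failure_score = max(0.0, 100.0 - 20.0 * failures_12mo)  # each failure costs 20 points
    pm_score = max(0.0, 100.0 - 10.0 * overdue_pm)          # each overdue PM costs 10 points
    return round(w_cond * condition + w_fail * failure_score + w_pm * pm_score, 1)

# Hypothetical asset: fair condition, two failures last year, three overdue PMs
print(health_index(condition=80, failures_12mo=2, overdue_pm=3))
```

Ranking assets by such an index is what lets maintenance planners direct limited crews to the equipment in the worst shape first.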

In the asset performance management and optimization (APMO) domain, the team collects and prioritizes asset needs resulting from asset strategies based on asset criticality. They optimize maintenance and replacement planning against the constraints of available budget and resource capacity. This method is useful for regulated industries such as energy transmission and distribution, as it allows companies to remain within the assigned budget for an arbitrage period of several years. The asset replacement requirements enter the asset investment planning (AIP) process, combining with new asset requests and expansion or upgrade projects. Market drivers, regulatory requirements, sustainability goals and resource constraints define the project portfolio and priorities for execution. The project portfolio management function manages the project management aspects of new build and replacement projects to stay within budget and on time. Product lifecycle management covers the stage-gated engineering process to optimize the design of the assets against the lowest total cost of ownership within the boundaries of all other stakeholders.
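The budget-constrained prioritization described above can be sketched with a simple greedy heuristic that favors the highest risk reduction per unit of cost. The candidate projects, costs and risk-reduction scores below are hypothetical; production APMO tools use considerably more sophisticated optimization.

```python
def plan_portfolio(candidates, budget):
    """Greedy selection: highest risk-reduction per cost first,
    subject to a fixed budget (a common APMO heuristic)."""
    ranked = sorted(candidates, key=lambda c: c["risk_reduction"] / c["cost"], reverse=True)
    selected, spent = [], 0.0
    for c in ranked:
        if spent + c["cost"] <= budget:
            selected.append(c["name"])
            spent += c["cost"]
    return selected, spent

candidates = [
    {"name": "replace transformer T1", "cost": 400, "risk_reduction": 90},
    {"name": "refurbish breaker B7",   "cost": 100, "risk_reduction": 40},
    {"name": "recondition line L3",    "cost": 250, "risk_reduction": 50},
]
print(plan_portfolio(candidates, budget=500))
```

Projects that do not fit within the period’s budget simply roll forward into the next planning cycle, which mirrors the multi-year arbitrage described above for regulated utilities.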

An industry-standard data model

A uniform data model is necessary to get a full view of combined systems with information flowing across the ecosystem. Technical, financial, geographical, operational and transactional data attributes are all parts of a data structure. In the utilities industry, the common information model offers a useful framework to integrate and orchestrate the ecosystem to generate optimum business value.

The integration of diverse asset management disciplines in one platform provides a full 360° view of assets. This integration allows companies to target the full range of business objectives and track performance across the lifecycle and against each stakeholder goal.

Source: ibm.com

Saturday, 4 May 2024

How generative AI will revolutionize supply chain


Unlocking the full potential of supply chain management has long been a goal for businesses that seek efficiency, resilience and sustainability. In the age of digital transformation, the integration of advanced technologies like generative artificial intelligence brings a new era of innovation and optimization. AI tools help users address queries and resolve alerts by using supply chain data, and natural language processing helps analysts access inventory, order and shipment data for decision-making.

A recent IBM Institute for Business Value study, The CEO’s guide to generative AI: Supply chain, explains how the powerful combination of data and AI will transform businesses from reactive to proactive. Generative AI, with its ability to autonomously generate solutions to complex problems, will revolutionize every aspect of the supply chain landscape. From demand forecasting to route optimization, inventory management and risk mitigation, the applications of generative AI are limitless. 

Here are some ways generative AI is transforming supply chain management: 

Sustainability


Generative AI helps to optimize companies’ supply chains for sustainability by identifying opportunities to reduce carbon emissions, minimize waste and promote ethical sourcing practices through scenario analysis and optimization algorithms. For example, combining generative AI with technologies such as blockchain helps keep data about the material-to-product transformation immutable across different entities, providing clear visibility into products’ origin and carbon footprint. This gives companies proof of sustainability they can use to drive customer loyalty and comply with regulations. 

Inventory management


Generative AI models can continuously generate optimized replenishment plans based on real-time demand signals, supplier lead times and inventory levels. This helps maintain optimal stock levels that minimize carrying costs and can improve customer satisfaction through accurate available-to-promise (ATP) calculations and AI-driven fulfillment optimization. 
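A classical building block behind such replenishment logic is the reorder-point calculation: reorder when on-hand stock falls to the expected lead-time demand plus safety stock. The demand figures below are invented, and the z = 1.65 factor corresponds to roughly a 95% service level.

```python
import math

def reorder_point(daily_demand, lead_time_days, demand_std, service_z=1.65):
    """Reorder point = expected demand during the supplier lead time
    plus safety stock (z * daily demand std * sqrt(lead time))."""
    safety_stock = service_z * demand_std * math.sqrt(lead_time_days)
    return daily_demand * lead_time_days + safety_stock

# Hypothetical SKU: 40 units/day, 9-day lead time, demand std of 12 units/day
print(round(reorder_point(daily_demand=40, lead_time_days=9, demand_std=12)))
```

Where a static formula uses fixed averages, the generative approach described above continuously re-estimates demand, lead times and variability from live signals and regenerates the plan.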

Supplier relationship management


Generative AI can analyze supplier performance data and market conditions to identify potential risks and opportunities, recommend alternative suppliers and negotiate favorable terms, enhancing supplier relationship management. 

Risk management


Generative AI models can simulate various risk scenarios, such as supplier disruptions, natural disasters, weather events or even geopolitical events, allowing companies to proactively identify vulnerabilities or react to disruptions with agility. AI-supported what-if modeling helps develop contingency plans such as inventory, supplier or distribution center reallocation. 

Route optimization


Generative AI algorithms can dynamically optimize transportation routes based on factors like traffic conditions, weather forecasts and delivery deadlines, reducing transportation costs and improving delivery efficiency. 
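As a toy illustration of route optimization, here is the classic nearest-neighbour heuristic: from the depot, always drive to the closest unvisited stop. The distance matrix below is invented, and real systems layer traffic conditions, weather and delivery deadlines on top of far stronger solvers.

```python
def nearest_neighbour_route(depot, stops, dist):
    """Build a route by repeatedly visiting the closest unvisited stop."""
    route, current, remaining = [depot], depot, set(stops)
    while remaining:
        nxt = min(remaining, key=lambda s: dist[current][s])
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

# Hypothetical symmetric distances (e.g., kilometres) between a depot and 3 stops
dist = {
    "depot": {"A": 4, "B": 2, "C": 7},
    "A": {"B": 3, "C": 1, "depot": 4},
    "B": {"A": 3, "C": 5, "depot": 2},
    "C": {"A": 1, "B": 5, "depot": 7},
}
print(nearest_neighbour_route("depot", ["A", "B", "C"], dist))
```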

Demand forecasting


Generative AI can analyze historical data and market trends to generate accurate demand forecasts, which helps companies optimize inventory levels and minimize stockouts or overstock situations. Users can predict outcomes by quickly analyzing large-scale, fine-grain data for what-if scenarios in real time, allowing companies to pivot quickly. 
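A minimal statistical baseline that richer forecasting systems are measured against is simple exponential smoothing, which weights recent demand more heavily than old demand. The demand history and smoothing factor below are invented for the example.

```python
def exp_smooth_forecast(history, alpha=0.3):
    """One-step-ahead forecast via simple exponential smoothing:
    level = alpha * actual + (1 - alpha) * previous level."""
    level = history[0]
    for actual in history[1:]:
        level = alpha * actual + (1 - alpha) * level
    return level

# Four periods of hypothetical demand; the result forecasts period five
print(round(exp_smooth_forecast([100, 120, 110, 130]), 1))
```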

The integration of generative AI in supply chain management holds immense promise for businesses seeking to transform their operations. By using generative AI, companies can enhance efficiency, resilience and sustainability while staying ahead in today’s dynamic marketplace. 

Source: ibm.com

Thursday, 24 August 2023

Will generative AI make the digital twin promise real in the energy and utilities industry?


A digital twin is the digital representation of a physical asset. It uses real-world data (both real time and historical) combined with engineering, simulation or machine learning (ML) models to enhance operations and support human decision-making.

Overcome hurdles to optimize digital twin benefits


To realize the benefits of a digital twin, you need a data and logic integration layer, as well as role-based presentation. As illustrated in Figure 1, in any asset-intensive industry, such as energy and utilities, you must integrate various data sets, such as:

◉ OT (real-time equipment, sensor and IoT data)
◉ IT systems such as enterprise asset management (for example, Maximo or SAP)
◉ Plant lifecycle management systems
◉ ERP and various unstructured data sets, such as P&ID, visual images and acoustic data

Figure 1. Digital twins and integrated data

For the presentation layer, you can leverage various capabilities, such as 3D modeling, augmented reality and various predictive model-based health scores and criticality indices. At IBM, we strongly believe that open technologies are the required foundation of the digital twin.

When leveraging traditional ML and AI modeling technologies, you must carry out focused training for siloed AI models, which requires a great deal of human-supervised training. This has been a major hurdle to leveraging data—historical, current and predictive—that is generated and maintained in siloed processes and technologies.

As illustrated in Figure 2, the use of generative AI increases the power of the digital twin by simulating any number of physically possible and simultaneously reasonable object states and feeding them into the networks of the digital twin.

Figure 2. Traditional AI models versus foundation models

These capabilities can help to continuously determine the state of the physical object. For example, heat maps can show where in the electricity network bottlenecks may occur due to an expected heat wave caused by intensive air conditioning usage (and how these could be addressed by intelligent switching). Along with the open technology foundation, it is important that the models are trusted and targeted to the business domain.

Generative AI and digital twin use cases in asset-intensive industries


Various use cases become reality when you leverage generative AI for digital twin technologies in an asset-intensive industry such as energy and utilities. Consider some examples of use cases from our clients in the industry:

1. Visual insights. By creating a foundational model of various utility asset classes—such as towers, transformers and lines—training it on large-scale visual imagery and adapting it to the client’s setup, we can use neural network architectures to scale AI-driven identification of anomalies and damage on utility assets, rather than reviewing the images manually.

2. Asset performance management. We create large-scale foundational models based on time series data and its correlation with work orders, event predictions, health scores, criticality indexes, user manuals and other unstructured data for anomaly detection. We use the models to create individual twins of assets that contain all the historical information, accessible for current and future operation.

3. Field services. We leverage retrieval-augmented generation to create a question-answering feature or multilingual conversational chatbot (based on documents or dynamic content from a broad knowledge base) that provides field service assistance in real time. This functionality can dramatically improve field services crew performance and increase the reliability of energy services by answering asset-specific questions in real time, without the need to redirect the end user to documentation, links or a human operator.
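The retrieval step of such a retrieval-augmented generation pipeline can be sketched very simply. This toy version ranks passages by word overlap with the question; a real system would use vector embeddings and an LLM to compose the answer, and the manual snippets below are invented.

```python
def retrieve(question, documents, top_k=1):
    """Toy RAG retrieval: rank documents by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

# Hypothetical maintenance-manual passages for a field technician's knowledge base
manuals = [
    "Transformer T1: reset procedure requires opening breaker B2 first.",
    "Line L3 patrol schedule and clearance distances.",
]
context = retrieve("How do I reset transformer T1?", manuals)
print(context[0])  # the passage an LLM would ground its answer in
```

Grounding the model’s answer in retrieved passages like this is what keeps the chatbot’s responses asset-specific rather than generic.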

Generative AI and large language models (LLMs) introduce new hazards to the field of AI, and we do not claim to have all the answers to the questions that these new solutions introduce. IBM understands that driving trust and transparency in artificial intelligence is not a technological challenge, but a socio-technological challenge.

We see a large percentage of AI projects get stuck at the proof-of-concept stage, for reasons ranging from misalignment with business strategy to mistrust in the model’s results. IBM brings together vast transformation experience, industry expertise and proprietary and partner technologies. With this combination of skills and partnerships, IBM Consulting™ is uniquely suited to help businesses build the strategy and capabilities to operationalize and scale trusted AI to achieve their goals.

Currently, IBM is one of the few in the market that both provides AI solutions and has a consulting practice dedicated to helping clients with the safe and responsible use of AI. IBM’s Center of Excellence for Generative AI helps clients operationalize the full AI lifecycle and develop ethically responsible generative AI solutions.

The journey of leveraging generative AI should: a) be driven by open technologies; b) ensure AI is responsible and governed to create trust in the model; and c) empower those who use your platform. We believe that generative AI can make the digital twin promise real for energy and utilities companies as they modernize their digital infrastructure for the clean energy transition. By engaging with IBM Consulting, you can become an AI value creator, which allows you to train, deploy and govern data and AI models.


Source: ibm.com

Tuesday, 1 August 2023

10 ways the oil and gas industry can leverage digital twin technology


The oil and gas industry has been a backbone of the global economy for decades. However, market volatility, environmental concerns and operational inefficiencies have challenged the industry to adapt and innovate. The use of digital twins is one such innovation.

In the era of digital transformation, digital twins are emerging as a potent solution to energy production challenges. Digital twin technology, an advancement stemming from the Industrial Internet of Things (IIoT), is reshaping the oil and gas landscape by helping providers streamline asset management, optimize performance and reduce operating costs and unplanned downtime.

What is a digital twin?


A digital twin is a dynamic, virtual representation of a physical object or system that simulates its behavior in real-time. By integrating real-time operational data, historical information and advanced algorithms into a comprehensive digital model, a digital twin can predict future behavior, refine operational efficiency and enable unprecedented insights into the real-world counterpart’s behavior. 

Digital twins in oil and gas


Digital twins were first used by NASA to prepare space missions, but their use cases run the gamut, especially for oil and gas operators. Digital twin technology facilitates the following:

1. Predictive maintenance: One of the most beneficial applications of digital twins in the oil and gas industry is predictive maintenance (PdM). In this context, a maintenance team creates a digital twin of a piece of equipment or machinery. The twin continuously collects data from the physical asset and uses predictive analytics and machine learning (ML) algorithms to forecast future performance. By constantly monitoring equipment performance and comparing it to its virtual counterpart, operators can predict potential failures or breakdowns.

2. Efficient and safe operations: Digital twin technology can significantly improve operational efficiency. A digital twin can simulate various operational scenarios, helping teams understand how different operating parameters affect performance. They can then use the information to optimize operations, improve efficiency and boost productivity. For example, a digital twin of an oil extraction process can help operators identify bottlenecks and optimize extraction rates.     

3. Asset optimization: Digital twins allow operators to fully leverage critical oil and gas assets. A digital twin of an oil reservoir can help operators better understand reservoir behavior and plan extraction strategies more effectively. This approach results in higher extraction rates and increased profitability for businesses.

4. Safety and emergency preparedness: Safety is a significant concern in the oil and gas industry, and digital twins can enhance safety in myriad ways. Digital twins can simulate a range of scenarios to help operators optimize operational procedures and mitigate potential hazards. For example, a digital twin of an oil pipeline system can help foresee potential leaks or ruptures, enabling operators to repair the pipeline before a dangerous malfunction. Digital twins can also be used for employee training, realistically simulating dangerous situations in a risk-free environment so that staff can learn new skills and procedures and know how to respond to safety emergencies.

5. Sustainability: The oil and gas industry is under increasing pressure to reduce its environmental impact. Digital twins are an invaluable tool in this effort. By simulating operations and their environmental impact, businesses can develop strategies to reduce emissions, manage waste and comply with environmental regulations. Digital twins can also simulate the impact of new regulations and/or technologies, helping the industry continue to adapt as technology advances and proliferates.   

6. Drilling operations optimization: Drilling operations are complex and costly. Digital twins can help streamline these operations by simulating various drilling scenarios and providing insights into the best strategies. A digital twin of a drilling operation, for instance, can identify the optimal drilling speed and direction, improving overall drilling accuracy.

7. Reservoir management: By creating a digital twin of an oil reservoir, operators can visualize reservoir behavior, optimize drilling strategies and maximize extraction. This not only optimizes extraction rates but also prolongs the reservoir’s lifecycle.

8. Supply chain optimization: Oil and gas supply chains are very complex. Maintenance teams can use digital twins to simulate the entire supply chain, providing in-depth visibility into operations and logistics and identifying potential bottlenecks.

9. New system design and testing: Designing and testing new equipment and systems can be a costly, labor-intensive process. With digital twins, engineers can design, test and perfect new systems in a virtual environment before they spend time and money building them. Using digital twins in this way can significantly shorten the development cycle and improve the final product’s performance.

10. Training and skill development: Digital twins can serve as a practical training tool for industry personnel. For example, a digital twin of a complex oil refinery—when integrated with VR technology—can provide a realistic environment for personnel to train and hone their skills, enhancing safety protocols and the overall quality of products and processes.
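The predictive-maintenance pattern in item 1 above, comparing live sensor readings against the behavior the twin expects, can be sketched as a rolling-statistics anomaly detector. The vibration readings, window size and z-score threshold below are invented for the example; production systems use physics-based or learned models of the asset.

```python
import statistics

def detect_anomalies(readings, window=5, z_thresh=3.0):
    """Flag readings that deviate from recent behaviour by more than
    z_thresh standard deviations of a rolling window."""
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean, std = statistics.mean(recent), statistics.stdev(recent)
        if std and abs(readings[i] - mean) > z_thresh * std:
            flagged.append(i)
    return flagged

# Hypothetical pump vibration trace with one sudden spike at index 7
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 1.02, 0.98, 4.8, 1.0]
print(detect_anomalies(vibration))
```

Flagged indices would trigger a work order in the EAM system before the deviation grows into a breakdown.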

The future of digital twins in the oil and gas industry


As the industry continues to embrace digitalization, the role of digital twins is expected to grow exponentially. And the increasing adoption of technologies like artificial intelligence (AI), machine learning and IoT will only further enhance the capabilities of digital twins. Moreover, with the advent of cloud computing—which provides the benefits of digital twin technology without the substantial upfront investment in IT infrastructure—implementation of digital twins is becoming more feasible for a broader range of companies.

Looking forward, we know that digital twins will play a crucial role in process automation. Integrating digital twins with robotics and autonomous systems, as one example, could lead to fully automated drilling operations. Similarly, digital twins could enable the development of smart grids in gas distribution networks, resulting in more efficient and reliable supply chains.

As the industry grapples with increasingly urgent calls for sustainable environmental practices, digital twins could also underpin the transition to cleaner, more renewable sources of energy. Digital twins of wind turbines, solar panels or entire renewable energy grids have the potential to improve overall performance, making these energy sources more competitive with fossil fuels.

And in the realm of exploration, digital twins can revolutionize the way companies search for new oil and gas reserves. With a digital twin of the Earth’s subsurface, oil and gas companies could accurately predict the location of new natural gas and oil fields, reducing the costs and risks associated with exploration. 

To fully realize these possibilities, however, the industry must overcome several barriers. Companies will need to invest in digital skill development for their workforce and navigate complex issues around data ownership, privacy and security. They will also need to foster a culture of innovation and agility, as the digital revolution will bring significant changes to traditional ways of working.

Despite these challenges, the potential benefits of digital twins are too significant to ignore. As the oil and gas industry navigates the path toward digitalization and sustainability, digital twins promise to be a powerful tool in its arsenal. By providing a window into the future, they can help the industry anticipate, prepare for and shape the changes that lie ahead.

Use IBM Maximo Application Suite to help you manage digital twin technologies


The era of digital twins in the oil and gas industry is just beginning. By providing a real-time link between the physical and digital worlds, they enable operators to understand, predict and optimize systems like never before.

However, managing digital twins can be a complex process, requiring advanced software solutions like IBM Maximo Application Suite. IBM Maximo is an integrated platform that helps service providers optimize asset performance and streamline day-to-day operations. Using an integrated AI-powered, cloud-based platform, Maximo offers CMMS, EAM and APM capabilities that produce advanced data analytics and enable smarter, more data-driven decision-making in oil and gas production facilities.   

As exciting digital technologies continue to evolve, digital twins are set to redefine the industry’s future. With IBM Maximo, your business can leverage digital twin technology to build a more efficient, sustainable and prosperous tomorrow.

Source: ibm.com

Saturday, 22 July 2023

OEE vs. TEEP: What’s the difference?

Breakdowns, equipment failure, outages and other shop floor disruptions can result in big losses for an organization. Production managers are tasked with ensuring that factories and other production lines are getting the most value out of their equipment and systems.

Overall equipment effectiveness (OEE) and total effective equipment performance (TEEP) are two related KPIs that are used in manufacturing and production environments to help prevent losses by measuring and improving the performance of equipment and production lines.

What is overall equipment effectiveness (OEE)?


OEE is a metric used to measure the effectiveness and performance of manufacturing processes or any individual piece of equipment. It provides insights into how well equipment is utilized and how efficiently it operates in producing goods or delivering services.

OEE measures equipment efficiency and effectiveness based on three factors. The OEE calculation is simple: availability x performance x quality.

What is total effective equipment performance (TEEP)?


TEEP is also a metric used in manufacturing and production environments to measure the overall efficiency and effectiveness of equipment or a production line. It accounts for all potential production time, including planned and unplanned downtime.

TEEP is calculated by multiplying four factors: availability x performance x quality x utilization.
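Both formulas can be sketched in a few lines of Python. The sketch below assumes the standard factor definitions (availability = run time / planned production time, performance = ideal cycle time x total count / run time, quality = good count / total count, utilization = planned production time / calendar time); the sample shift figures are hypothetical:

```python
# Sketch of the OEE and TEEP calculations using the standard factor
# definitions; variable names and sample figures are illustrative.

def oee(availability: float, performance: float, quality: float) -> float:
    """OEE = availability x performance x quality."""
    return availability * performance * quality

def teep(availability: float, performance: float, quality: float,
         utilization: float) -> float:
    """TEEP = OEE x utilization (scheduled share of calendar time)."""
    return oee(availability, performance, quality) * utilization

# Example shift: 8 h scheduled, 7 h running, ideal cycle 1 min/unit,
# 380 units produced, 370 good, plant scheduled 16 of every 24 hours.
availability = 7 / 8                # run time / planned production time
performance = (1 * 380) / (7 * 60)  # (ideal cycle x total count) / run time
quality = 370 / 380                 # good count / total count
utilization = 16 / 24               # planned production time / calendar time

print(f"OEE:  {oee(availability, performance, quality):.1%}")
print(f"TEEP: {teep(availability, performance, quality, utilization):.1%}")
```

In this example OEE comes out to roughly 77% while TEEP is roughly 51%; the gap between the two is the productive potential lost to the hours the plant is not scheduled at all.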

How are OEE and TEEP different?


The main difference between these two metrics is that while OEE measures the percentage of planned production time that is productive, TEEP measures the percentage of all time that is productive. 

It’s important when making these time calculations to use the right terminology. Here are a few common ways to measure time within a production context:

  • Unscheduled time: Time when production is not scheduled to produce anything (as opposed to “scheduled time”).
  • Calendar time: All time in the period, 24 hours a day, 365 days a year; TEEP is measured against calendar time.
  • Total operations time: The total amount of time that a machine is available to manufacture products.
  • Ideal cycle time: The theoretical fastest possible time to manufacture one unit.
  • Run time: The time when the manufacturing process is scheduled for production and running.

OEE primarily focuses on the utilization of available time and identifies losses due to availability, performance and quality issues. It helps identify areas for improvement and efficiency optimization.

TEEP, on the other hand, provides a broader perspective by considering all potential production time, including planned downtime for preventive maintenance or changeovers. It aims to measure the maximum potential of the equipment or production line. 

OEE is typically used to measure the performance of a specific piece of equipment or a machine. It helps you understand how effectively equipment is being utilized during actual production time. OEE is commonly used as a benchmarking tool to track and improve equipment performance over time. It helps identify bottlenecks, areas for optimization and improvement initiatives.

TEEP is used to measure the overall performance of an entire production line or multiple pieces of equipment working together. It provides a holistic view of the effectiveness of the entire system. If you are interested in understanding the maximum potential performance of your production line, including planned downtime for maintenance, changeovers or other scheduled events, TEEP is the performance metric to use. TEEP can be helpful in production capacity planning and determining the capabilities of your equipment or production line.

How can OEE and TEEP be used together?


1. Start with OEE analysis: Begin by calculating the OEE for individual machines or equipment within your production line. OEE analysis helps pinpoint the causes of losses and inefficiencies at the equipment level. A digital asset management platform can provide real-time data to help with this calculation.

2. Identify bottlenecks: Use OEE data to identify bottlenecks or areas where equipment performance is suboptimal. Look for machines with lower OEE scores and investigate the underlying issues. This can help you prioritize improvement efforts and target specific machines or processes that have the most significant impact on overall performance.

3. Evaluate TEEP for the entire line: Once you have assessed the OEE for individual machines, calculate the TEEP for your entire production line. TEEP takes into account all potential operating time—including planned and unplanned downtime—providing a broader perspective on the overall performance of the line.

4. Compare OEE and TEEP: Compare the OEE and TEEP data to gain insights into the gap between actual performance and the maximum potential performance of the production line. Identify the factors contributing to the difference between the two metrics, such as scheduled maintenance, changeovers or other planned downtime. This comparison can help you understand the overall efficiency and effectiveness of the production line.

5. Address common issues: Analyze common issues identified through OEE and TEEP analysis and devise strategies to address them. This may involve improving machine reliability, procuring new equipment, integrating continuous improvement methodologies, reducing setup or changeover times, enhancing product quality or optimizing maintenance management. Implementing targeted improvement initiatives can help bridge the performance gap and maximize the overall equipment performance.

6. Track progress over time: Continuously monitor and track both OEE and TEEP metrics over time to assess the effectiveness of your improvement efforts. Regularly evaluating these metrics allows you to measure the impact of implemented changes and identify new areas for optimization.

By combining OEE and TEEP, you can conduct a comprehensive analysis of current equipment performance at both the individual-machine and production-line levels. This integrated approach provides a deeper understanding of performance factors, helps prioritize improvement efforts, and maximizes the overall effectiveness and efficiency of your manufacturing operations, allowing production managers to achieve higher throughput and maximum uptime.
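The workflow above can be sketched end to end: compute OEE per machine, flag the bottleneck, then scale by utilization to see the line-level TEEP gap. The machine names and factor values below are hypothetical:

```python
# Illustrative sketch of steps 1-4: per-machine OEE, bottleneck
# identification and the OEE-vs-TEEP gap for the line.

machines = {
    "press":     {"availability": 0.92, "performance": 0.88, "quality": 0.99},
    "lathe":     {"availability": 0.81, "performance": 0.90, "quality": 0.97},
    "assembler": {"availability": 0.95, "performance": 0.93, "quality": 0.98},
}

oee_scores = {
    name: f["availability"] * f["performance"] * f["quality"]
    for name, f in machines.items()
}

# Step 2: the machine with the lowest OEE is the likely bottleneck.
bottleneck = min(oee_scores, key=oee_scores.get)

# Steps 3-4: line TEEP scales line OEE by utilization (scheduled time
# as a share of calendar time), exposing the planned-downtime gap.
line_oee = min(oee_scores.values())   # the line is limited by its bottleneck
utilization = 0.65
line_teep = line_oee * utilization

for name, score in sorted(oee_scores.items(), key=lambda kv: kv[1]):
    print(f"{name:10s} OEE = {score:.1%}")
print(f"Bottleneck: {bottleneck}")
print(f"Line TEEP = {line_teep:.1%} (gap to OEE: {line_oee - line_teep:.1%})")
```

Sorting machines by OEE makes the bottleneck (and the next-worst candidates) immediately visible, which is where the improvement and tracking work of steps 5 and 6 should focus.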

World-class observability with IBM Maximo


IBM Maximo is enterprise asset management software that delivers a predictive solution for the maximization of equipment effectiveness. Maximo is a single, integrated cloud-based platform that uses AI, IoT and analytics to optimize performance, extend the lifecycle of assets and reduce the costs of outages. 

Take a tour to see how Maximo can achieve OEE improvement while reducing the operations costs of overtime, material waste, spare parts and emergency maintenance.

Source: ibm.com

Saturday, 1 July 2023

CMMS vs. EAM: Two asset management tools that work great together

Most organizations can’t run without physical assets. Machinery, equipment, facilities and vehicles provide economic value or benefit operations. In most cases, they are fundamental to the performance of the organization, regardless of whether they are small-scale laptop portfolios or vast transportation networks. Energy companies rely on uninterrupted power supplies, airlines aim to ensure passenger safety, hospitals must provide quality patient care, haulage companies need up-to-date data on spare parts to maintain service levels.

Organizations can’t work effectively if they don’t invest to keep their assets running cost-effectively throughout their lifecycle. To do that, technicians, facility managers, maintenance teams, reliability engineers and project managers need accurate, real-time information at their fingertips.

The growing complexity and scale of operations across industries, and the need to track, monitor and manage assets, have been driving the evolution of advanced asset management software. Organizations are modernizing their enterprise apps, deploying more modular, intelligent systems and AI-enhanced workflows as part of broader digital transformation.

Asset management is no exception—according to IDC, 30% of organizations are strategically addressing digital transformation in enterprise asset management (EAM) solutions with a view to longer-term change management.

Two commonly used asset management and maintenance solutions are computerized maintenance management systems (CMMS) and enterprise asset management (EAM):

◉ Computerized maintenance management system (CMMS)—also called CMMS software, platforms or solutions—focuses predominantly on maintenance, helping teams manage assets, schedule maintenance and track work orders.

◉ Enterprise asset management (EAM) is an asset lifecycle management solution focused on optimizing the overall lifetime performance of assets from acquisition to end-of-life.

Depending on variables like asset type, business size and the scale of operations, each solution provides different functionalities and benefits to match an organization’s maintenance requirements. Let’s explore these in more depth.

What is CMMS and what does it do?


A computerized maintenance management system (CMMS) is a type of maintenance management software that centralizes maintenance information and facilitates and documents maintenance operations. A CMMS automates critical asset management workflows and makes them accessible and auditable.

Core to CMMS is a central database that organizes and communicates information about assets and maintenance tasks to maintenance departments and teams to help them do their jobs more effectively. They typically include modules for tracking employees and equipment certifications (resource and labor management), data storage on individual assets like serial numbers and warranties (asset registry), and task-related activities like work order numbers and preventive maintenance schedules (work order management). Other features like vendor and inventory management, reporting, analysis (e.g., KPI dashboards or MRO inventory optimization) and audit trails are also included in CMMS software solutions.
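As a deliberately simplified illustration of that central database, the sketch below models just two of the modules named above, an asset registry and work order management, in plain Python. The classes and fields are hypothetical and not drawn from any particular CMMS product:

```python
# Minimal sketch of a CMMS-style central store: an asset registry
# plus work-order tracking. Field names are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str
    serial_number: str
    warranty_expires: str  # ISO date, e.g. "2026-01-31"

@dataclass
class WorkOrder:
    order_number: int
    asset_id: str
    task: str
    status: str = "open"   # open -> in_progress -> closed

@dataclass
class CMMS:
    assets: dict = field(default_factory=dict)        # asset registry
    work_orders: list = field(default_factory=list)   # work order management

    def register_asset(self, asset: Asset) -> None:
        self.assets[asset.asset_id] = asset

    def open_work_order(self, asset_id: str, task: str) -> WorkOrder:
        wo = WorkOrder(order_number=len(self.work_orders) + 1,
                       asset_id=asset_id, task=task)
        self.work_orders.append(wo)
        return wo

cmms = CMMS()
cmms.register_asset(Asset("PUMP-01", "SN-12345", "2026-01-31"))
wo = cmms.open_work_order("PUMP-01", "Quarterly preventive maintenance")
print(wo.order_number, wo.status)
```

A real CMMS layers scheduling, labor management, inventory and reporting on top of exactly this kind of shared store, which is what makes maintenance information accessible and auditable from a single place.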

CMMS evolved in the 1960s when the growing complexity of operations in large companies started to expose the limitations and inadequacies of manual and paper-based management. Data was siloed, hidden away in a multitude of spreadsheets and filing cabinets, and carrying out tasks manually was time-consuming.

During the 1980s, 1990s and 2000s, as technology became more affordable and connected, CMMS functionality expanded to include work order management, where companies assign, monitor and complete work orders and inspection checklists in one place. Other features, such as project management and spare parts purchasing, were also added as solutions advanced. Many industries depend on CMMS to improve asset and workflow visibility, streamline operations and maintenance, manage mobile field workforces, and ensure compliance in, for example, auditing and health, safety and environment reporting.

What is EAM and what does it do?


Enterprise asset management (EAM) software provides a holistic and comprehensive overview of a company’s physical assets and is used to maintain and control operational assets and equipment throughout their entire lifecycle, regardless of location.

Typically, EAM solutions cover work orders, contract and labor management, asset maintenance, planning and scheduling, condition monitoring, reliability analysis, asset performance optimization, supply chain management, and environmental, health and safety (EHS) applications. They store large amounts of data that can be analyzed and tracked, with organizations customizing their KPIs and metrics according to their specific needs. EAM solutions can also link into other enterprise management systems and workflows like enterprise resource planning (ERP), providing a single source of asset intelligence.

EAM emerged from CMMS early in the 1990s, bringing together maintenance planning and execution with skills, materials and other information spanning asset design through to decommissioning. This broadening of scope has especially benefitted industries heavily reliant on physical assets or with complex asset infrastructures where asset management effectiveness and ROI are major contributors to the bottom line.

In the oil and gas or mining industries, for example, there is a strong need to bring safety, reliability and compliance information into workflows. In defense, there are strict regulations around tracking potentially dangerous assets, and the safety of military operations depends on the operational readiness of multiple assets in disparate locations.

Organizations use EAM to avoid wasting money on preventable problems and unnecessary downtime and to enhance the efficiency, performance and lifespan of assets. Through a combination of maintenance strategies, automation and technologies like the Internet of Things (IoT) and artificial intelligence (AI), EAM can use preventive and predictive maintenance to monitor and resolve issues before they happen, maximize the use of assets, consolidate operational applications and provide in-depth cost analyses. The result is that asset management professionals make better decisions, work more efficiently and maximize investments in physical assets.

CMMS vs EAM: What’s the difference?


CMMS and EAM software have a similar purpose—to prolong and improve asset performance, boost operational efficiency and reliability, and reduce costs through more productive uptime, less downtime and longer asset lifespans. Despite some overlap, they are not the same and have key differences in functionality, approach and business context, offering different management tools and resources. In general, while most EAM systems have CMMS capabilities, only the more advanced CMMS solutions have some EAM functionality. Some of the main differences are outlined below but the extent of these varies by provider.

CMMS is dedicated to MRO (maintenance, repair and operations) for physical assets and equipment, tracking a company’s asset maintenance activities, scheduling and costs once an asset has been installed. EAM, on the other hand, provides a greater understanding of lifecycle cost and value of assets by managing the entire asset lifecycle from beginning to end. Being able to track assets, assess and monitor them, manage and optimize their quality and reliability, and gauge where inefficiencies are occurring means a business can get the most out of its assets and avoid unnecessary disruptions that could impact the smooth running of its operations.

EAM also provides data on lifetime costs—such as purchasing, maintenance, repair and servicing—which help businesses understand the total cost of ownership of individual assets. Although CMMS solutions are becoming more sophisticated, they don’t typically include additional features like high-level financial accounting or costs associated with procurement or decommissioning.

EAM also differs from CMMS in that EAM provides multi-site support across multiple worksites and geographies. Most CMMS solutions only provide single-site or limited multi-site support. That can be a substantial advantage for industries like power or mass transportation that manage vastly distributed asset portfolios.

EAM covers a wider variety of business functions than CMMS; features like contract management, fleet management, schematics, warranty tracking, energy monitoring and industry-specific apps are not typically covered in CMMS systems. EAM can also work with a broader range of other enterprise software, such as financial analysis, supply chain management and procurement, risk and compliance and sustainability. CMMS only tends to integrate with other systems to automate repetitive tasks, though a few do include purchasing capabilities.

That said, EAM can cost more to implement than CMMS in the first instance, largely because of its greater complexity and the additional setup costs stemming from integration with other business functions. SaaS models are changing this, bringing CMMS and EAM costs closer together, which, coupled with the additional benefits of EAM, is making it an increasingly cost-effective option.

Although modern CMMS systems can offer more than just maintenance and the line between CMMS and EAM is blurring, they remain distinct solutions. CMMS can be viewed as a subset of EAM and potentially an avenue to large-scale and more robust enterprise asset management. The two are often used together, or CMMS may suffice for companies with small asset portfolios and maintenance teams. When companies are looking to scale and consolidate systems across multiple departments, however, the limitations of CMMS can impact its overall value.

Ultimately, the choice of software depends on many factors, but in general, if you are looking to understand and manage high numbers of assets in multiple locations across their entire lifecycle and incorporate other business functions like HR and finance, EAM is likely to be the way to go.

IBM Maximo Application Suite


Look no further. IBM Maximo Application Suite combines the world’s leading enterprise asset management system with all the benefits of CMMS.

Get the most value from your enterprise assets with Maximo Application Suite. It’s a market-leading single, integrated cloud-based platform that uses AI, IoT and analytics to optimize performance, extend asset lifecycles and reduce operational downtime and costs. It gives you configurable CMMS and EAM applications that can help you manage your company assets, processes and people.

Source: ibm.com

Thursday, 22 June 2023

What is data center management?

To provide stakeholders with vital IT services, organizations need to keep their private data centers operational, secure and compliant. Data center management encompasses the tasks and management tools necessary for doing so. A person responsible for carrying out these tasks is known as a data center manager.

What is the role of a data center manager?


Either physically onsite or remotely, a data center manager performs general maintenance—such as software and hardware upgrades, general cleaning or deciding the physical arrangement of servers—and takes proactive or reactive measures against any threat or event that harms data center performance, security and compliance.

The typical responsibilities of a data center manager include the following:

  • Performing lifecycle tasks like installing and decommissioning equipment
  • Maintaining service level agreements (SLAs)
  • Ensuring licensing and contractual obligations are met
  • Identifying and resolving IT problems like connection issues between edge computing devices and the data center
  • Securing data center networks and ensuring backup systems and processes are in place for disaster recovery
  • Monitoring the data center environment’s energy efficiency (e.g., lighting, cooling, etc.)
  • Managing and allocating resources to maximize budgetary spending and performance
  • Determining optimal server arrangement and cabling organization
  • Planning emergency contingencies in case of natural disaster or other unplanned downtime
  • Making necessary updates and repairs to systems while minimizing downtime and impact to IT operations and business functions (also known as change management)

Certification programs exist for IT students and professionals who want to acquire or enhance the skills and knowledge necessary to succeed in data center management.

Common challenges of data center management


Navigating complexity

By nature, asset management within an enterprise data center is complex. A data center often comprises hardware and software from multiple vendors, including numerous applications and tools. A data center environment can also co-exist and interact with private cloud environments from multiple cloud service providers. Each hardware component, software instance and cloud-based environment can have its own contractual terms, warranty, user interface or licensing permissions. Every element of a data center also has unique processes and procedures to follow when implementing patches or upgrades. While a challenge in its own right, complexity is also a contributing factor (if not a direct cause) of many other challenges faced when managing a data center.

Meeting SLAs

Because of a data center’s complex multi-vendor environment, it can be difficult for data center managers to ensure all SLAs are being upheld. These SLAs can span:

  • Application availability
  • Data retention
  • Recovery speed
  • Network uptime and availability

Tracking warranties

Data center managers can struggle in a complex environment to track which warranties have expired or what each warranty covers. Without visibility over warranty information, money may be needlessly spent on components that may have otherwise been covered.

Costs

For private data centers, IT staff, energy and cooling costs can consume much of the limited budget allocated to what’s typically deemed a non-value-added cost to the organization.

Monitoring

Data center managers may be forced to use insufficient or outdated equipment to monitor their complex data center operations. This can result in gaps in performance visibility and inefficient workload distribution. Capacity planning is also negatively impacted, since data center managers reliant on disparate or outdated equipment may not have the accurate metrics needed to assess how well a data center is meeting current demands.

Limited resources

Data center managers often work with limited staff, power and space due to budgetary constraints. In many cases, they also lack the proper tools needed to manage these limited resources effectively. Limited resources can hinder service management, resulting in the delivery of delayed or inadequate IT resources to business users and other stakeholders across an organization.

Meeting sustainability goals

Many organizations are working to reduce their carbon footprint, which means finding ways to reduce the energy consumption of their data centers and transition to green energy sources. Data center managers are tasked with implementing the hardware and procedures that reduce their environment’s carbon footprint while simultaneously dealing with existing data center complexity and limited resources.

How to overcome the challenges of data center management


DCIM software

Data center managers can use a data center infrastructure management (DCIM) solution to simplify their management tasks and achieve IT performance optimization. DCIM software provides a centralized platform where data center managers can monitor, measure, manage and control all elements of a data center in real time—from on-premises IT components to data center facilities like heating, cooling and lighting.

With a DCIM solution, data center managers gain a single, streamlined view over their data center and can better understand what’s happening throughout the IT infrastructure.

A DCIM solution provides visibility into the following:

  • Power and cooling status
  • Which IT equipment and software components are ready for upgrading
  • Licensing/contractual terms and SLAs for all components
  • Device health and security status
  • Energy consumption and power management
  • Network bandwidth and server capacity
  • Use of floor space
  • Location of all physical data center assets

A DCIM solution can also help data center managers adopt virtualization to combine and better manage their data center’s IT resources. More advanced DCIM solutions can even automate tasks and eliminate manual steps, freeing up the data center manager’s time and reducing costs.

Colocation data centers

A colocation data center is a third-party service that provides an organization with physical space and facility management for their private servers and associated IT assets. While the organization is still responsible for staffing and for managing their data center components, a colocation service offloads the burden and costs associated with building, running and maintaining a physical space.

Hardware, hybrid cloud and AI solutions for sustainability

There are hardware, hybrid cloud and AI solutions available to help data center managers reach their organization’s sustainability goals while maximizing data center performance. For example, the right server can greatly reduce energy consumption and free up physical space—in some cases, up to 75% and 67%, respectively.

Data center management and IBM


With electricity, you want enough capacity to get the job done, but not so much that you’re wasting it when not using it. Use hybrid cloud and AI to streamline operations, save energy and increase performance, making sustainability a true business driver—while delivering a return on your investment.

Reduce your footprint:  IBM LinuxONE Rockhopper 4 servers can reduce energy consumption by 75% and space by 67% (when compared to the same workloads on x86 servers with similar conditions and location).  With energy-efficient data centers, consolidated workloads and improved infrastructure, you can save money and lessen your footprint.

Automate your energy use:  With IBM Turbonomic, automate your energy use to improve energy efficiency. Measure, analyze and intelligently manage resources to ensure applications always consume exactly what they need.

Simplify data management: Get market-leading performance and efficiency from the unified IBM Storage FlashSystem platform family, allowing you to streamline administration and operational complexity across on-premises, hybrid cloud and containerized environments.

Source: ibm.com

Tuesday, 30 May 2023

What is smart transportation?

Every day, people encounter multiple obstacles while traveling to their intended destinations. Sitting in traffic, waiting for the bus to arrive 15 minutes later than scheduled, driving around for 30 minutes to find a parking spot—the modern world is full of inconveniences due to underlying inefficiencies in our transportation systems.

However, stalled cars and harried people waiting for public transportation aren’t just an individual nuisance. A less-than-optimal transportation infrastructure affects the economy, accelerates environmental damage and lowers the overall quality of living. Making transportation faster and more accessible for more people is what keeps city planners up at night.

The good news is that new technologies and approaches to transportation management systems allow us to start addressing these inconveniences and make other downstream transportation improvements. The solution is smart transportation.

The rise of interconnected technologies like the Internet of Things (IoT), electric vehicles, geolocation and mobile technology has made it possible to orchestrate how people and goods flow from one place to another, especially in densely packed urban areas.

Several global cities, including London, Paris, Amsterdam and Rio de Janeiro, have invested in smart transportation as a key component of their smart-city initiatives. There are now universities that study smart transportation use cases (e.g., Carnegie Mellon, New York University, NJIT and many more). The whole world, it seems, is fixated on solving transportation issues and increasing mobility because doing so produces so many benefits for citizens and the economy.

The rise of smart cities


Many cities claim to be the first “smart city” in the world. While one can debate what exactly turns a mere city into a smart one, there’s no denying that the rise of the Internet and mobile technology has generated widespread interest in building the next generation of smart cities.

Every time a city improves upon its existing structures to incorporate more data-driven or connected technologies, it becomes more intelligent. Examples of smart-city enhancements beyond smart transportation include sensors to monitor air quality and temperature fluctuations, IoT technology in public buildings to conserve energy and data-driven waste pick-up management. But the crown jewel of any smart city is how smart transportation can revolutionize how cities operate and how people move within them.

A smart transportation overview


Smart transportation, also known as smart mobility, has been embraced by local governments on the strength of ubiquitous data collection and automation. It is made possible by the fact that virtually every citizen and commuter has a smartphone that can transmit and receive messages and data.

In addition, it’s never been easier or cheaper to create public Wi-Fi networks, creating many new opportunities for governments to implement smart transportation initiatives.

Smart transportation usually involves public-private partnerships that can address several transportation issues, such as pollution from car emissions, congestion and access to public transportation for the elderly and those in need.

Several smart transportation solutions have existed for some time—for example, a city department of transportation providing real-time arrival data of buses and trains, electronic toll collection, bike sharing, dynamic pricing on cars entering the city and public transportation smart cards. But several disparate technologies do not make an intelligent transportation system. It requires a comprehensive strategy and multiple smart technologies working in tandem.

Smart transportation helps better allocate resources so cities can do more with less and avoid unnecessary energy consumption and resource costs.

Cities and states prioritizing smart transportation provide a more inclusive and equitable living experience for all of their citizens.

Smart transportation benefits governments and citizens alike


The following are some examples of smart transportation and how they can benefit a city:

Parking

Every driver has had the experience of searching for parking for 30 minutes or more, convinced that every open spot is filled right before they get to it. It’s a vexing problem that has an obvious solution: adding sensors to parking spots. That way, drivers can find an open spot ahead of time and use their smartphones and/or dashboard consoles to go directly to the spot, instead of aimlessly wandering.

Intelligent transportation networks

Many local and national transportation departments are now broadcasting real-time mass transit schedule updates and maintenance interruptions through centralized control systems. Citizens and commuters can access this information on their smartphones, tablets and computers through applications, social media or browsers, but that should just be table stakes.

The next generation of smart transportation systems will be able to communicate when parts on trains or buses are likely to fail, allowing operators to take fleet vehicles out of service to fix them before they break down with passengers on board. Investing in transportation networks also includes the building of high-speed rails that can transport more people from destination to destination, ameliorating traffic and the environmental impact of individuals driving cars.
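The fix-before-failure idea above can be illustrated with a minimal condition-based maintenance check. This is a sketch under assumed metric names and thresholds, not a real fleet system; production predictive maintenance would use trained models over telemetry streams rather than fixed cutoffs.

```python
# Hypothetical sketch of a condition-based maintenance check for a transit
# fleet: flag vehicles whose readings cross alert thresholds so operators
# can pull them from service before a breakdown. Metric names and
# threshold values are illustrative assumptions.

THRESHOLDS = {"brake_pad_mm_min": 4.0, "motor_temp_c_max": 95.0}

def needs_service(vehicle):
    """Return the list of triggered alerts for one vehicle's telemetry."""
    alerts = []
    if vehicle["brake_pad_mm"] < THRESHOLDS["brake_pad_mm_min"]:
        alerts.append("brake pads below minimum thickness")
    if vehicle["motor_temp_c"] > THRESHOLDS["motor_temp_c_max"]:
        alerts.append("motor running hot")
    return alerts

fleet = [
    {"id": "bus-17", "brake_pad_mm": 3.2, "motor_temp_c": 80.0},
    {"id": "bus-42", "brake_pad_mm": 7.5, "motor_temp_c": 78.0},
]
for v in fleet:
    for alert in needs_service(v):
        print(v["id"], "->", alert)   # only bus-17 is flagged
```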

Better traffic management

Traffic congestion results from many separate issues, such as vehicle accidents, rigid traffic grids, poor weather, population growth and substandard infrastructure. While each has a fix (of varying levels of complexity), smart transportation can address them all:

◉ Vehicle accidents: Connected vehicles with sensors can prevent accidents from occurring, often before the driver even knows something bad is about to happen.

◉ Traffic control: Historically, traffic lights have changed on pre-determined time windows regardless of actual conditions. While those windows may differ by time of day (e.g., to account for rush hour on a busy street), they rarely adapt to the specific flow of traffic. In the rare major metropolitan areas where signal timing does change based on that data, it is usually adjusted manually. The future of traffic management involves smart traffic lights connected to real-time traffic flow data, using machine learning and artificial intelligence to change lights at intersections based on thousands of variables.

◉ Real-time information on road conditions and accidents: Like traffic, road conditions can create bottlenecks in travel patterns. While map applications accessible on smartphones increasingly provide real-time updates on traffic conditions, they’re often provided by citizen reporting. Public-private partnerships can boost this information by investing in vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) technology so that every car provides information automatically, identifying issues before they create traffic jams.
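The adaptive signal timing described in the traffic-control bullet above can be sketched as a simple proportional allocation. This is an illustrative toy, not a real controller: a deployed system would feed an optimization or ML model with live sensor data, and the flow numbers and cycle length here are assumptions.

```python
# Hypothetical sketch of adaptive signal timing: split a fixed cycle's
# green time across approaches in proportion to measured traffic flow,
# with a minimum green per approach. All figures are illustrative.

def allocate_green(cycle_s, flows, min_green_s=10):
    """Return green seconds per approach, proportional to measured flow."""
    total = sum(flows.values())
    if total == 0:  # no demand: split the cycle evenly
        return {a: cycle_s / len(flows) for a in flows}
    greens = {a: max(min_green_s, cycle_s * f / total) for a, f in flows.items()}
    # Rescale so the allocations still sum to the cycle length.
    scale = cycle_s / sum(greens.values())
    return {a: g * scale for a, g in greens.items()}

flows = {"north-south": 120, "east-west": 40}  # vehicles per hour
print(allocate_green(90, flows))
```

Even this crude rule adapts to demand in a way fixed time windows cannot; real adaptive systems extend the idea with many more variables.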

Smart public transportation

How many job seekers have lost out on the job of their dreams because of backed-up or stalled trains? How quickly do printed schedules on the bus stop become irrelevant every day? In every city, thousands to millions of people depend on public transportation daily; they are lifelines to the elderly, frontline workers and people with disabilities. It makes a world of difference when cities can connect those critical vehicles to a smart grid to ensure that citizens have real-time information about when bus services and other forms of public transit will pick them up and take them where they need to go.

Support for electric vehicles

Leaders who want to make their cities hospitable and attract electric car drivers must install electric charging stations in high-traffic areas, where drivers can stop for a bit and walk around or get some food while the car charges. Not only does that provide a service to the driver, but it also helps area businesses capture new customers. The important thing to remember about smart transportation is that you’re also building for the future. While autonomous vehicles are not ready for mass deployment yet, many expect they will become a reality. So any meaningful smart transportation plan includes future-proofing as vehicle technologies expand how we can move around without human intervention.

IBM Maximo is helping advance the journey to smart transportation


Virtually every major city has incorporated some smart transportation technologies into its overall offering to citizens and commuters, but now is the time to establish a holistic smart transportation strategy that helps people get to their destinations more quickly, more safely and with less environmental impact. As the hybrid work movement enables more employees to work in a city of their choosing, the local governments that offer a truly smart city with a comprehensive smart transportation system will be able to attract more residents at the expense of those cities that fail to adapt.

The good news is that solutions now exist to help governments create a more comprehensive smart transportation framework that uses a full suite of solutions for operations, maintenance, monitoring, quality and reliability. IBM Maximo helps metro services serving 4.7 billion riders, 73% of the busiest airports and 75% of the largest automotive companies transform the intelligence of their systems to improve customer satisfaction and increase efficiency.

Source: ibm.com

Thursday, 16 March 2023

CFOs are the new ESG storytellers


Ready or not, here they come. Mandatory regulatory disclosures regarding environmental, social and governance (ESG) factors are on the horizon for businesses everywhere. Governments worldwide have already adopted reporting requirements aligned with recommendations by the Task Force on Climate-Related Financial Disclosures (TCFD), including those in Canada, Brazil, the EU, Hong Kong, Japan, New Zealand, Singapore and Switzerland. Additionally, the German Supply Chain Due Diligence Act went into effect in January 2023. In the US, the Securities and Exchange Commission (SEC) is considering new requirements for public companies that could take effect as early as January 2024.

ESG disclosures require companies to analyze and report on a wide and evolving range of factors. The SEC regulations would require public companies to disclose greenhouse gas emissions (in the near term for their own operations and utilities, and in the future for their entire supply chain), as well as their targets and transition plans for reducing emissions. And companies would need to report potential exposure to extreme weather events, including hurricanes, heatwaves, wildfires and drought, and also identify how they’re assessing those risks.

In the EU, the Corporate Sustainability Reporting Directive (CSRD), which will take effect in 2024, demands even more detailed disclosures about how a company’s business model and activities affect sustainability factors like environmental justice and human rights. “You can’t wait for the regulations to kick in,” says Adam Thompson, Global Sustainable Finance and ESG Reporting Offerings Lead, IBM Consulting. “You need to be on this journey now.”

CFOs at the helm of transformation


CFOs are now responsible for transparent communication about how a company’s sustainability performance and sustainability metrics are tied to financial disclosures. That speaks to a larger evolution in the role of the finance function in business. “Finance leaders have gone from being bean counters to storytellers,” says Monica Proothi, Global Finance Transformation Lead for IBM Consulting. “They are not merely informing the business, but partnering with the business to transform data into insights that drive strategic ambitions, including leadership agenda around sustainability.”

The pressure is on for CFOs to build the capability for high-quality ESG reporting and communications into their finance departments. Underperforming in this capacity comes with material risk. “Access to capital is going to change,” Thompson says. A 2022 study by IBM’s Institute for Business Value found that the majority of CEOs surveyed were under intense pressure from investors to improve transparency around sustainability factors such as emissions, resource use, fair labor and ethical sourcing. As ESG reporting becomes more prevalent, details like these will be used to determine the quality of an investment. “You’re going to have an ESG risk rating, just like you have a credit rating,” Thompson says.

It’s important to keep sustainable finance and ESG reporting relevant to the CFO. As the focus on data transformation grows, sustainable finance efforts must be integrated into that process. The CFO will be at the helm of this transformation too, yet some CFOs still view sustainability narrowly, as only an operational or compliance issue.

How CFOs can keep pace with high-quality ESG reporting


Thompson warns that many companies are taking shortcuts, cherry-picking data for reports or relying on estimates and secondary sources. It amounts to greenwashing, which is a reputational risk, and it won’t prepare businesses for increasingly comprehensive disclosure requirements. High-quality ESG reporting requires real visibility, not only into a company’s own operations but into those of its suppliers.

Thompson calls this “getting under the hood,” and while it requires new technologies and capabilities, it will ultimately drive value and unlock opportunities. CFOs must rise to new challenges and expectations brought on by rapidly evolving regulations.

Take it step by step

Proothi advises CFOs to “think big, start small and act fast.” Quick wins will generate value that can be reinvested into larger initiatives, and breaking transformation into small steps helps employees adopt change. Proothi says it’s often helpful to set up a dedicated transformation office to orchestrate the process. That might involve managing upskilling, coordinating the cadence of new initiatives, evaluating employees’ experiences and ensuring progress isn’t derailed by “change fatigue.”

Untangle your processes

Process mining is one of the first steps Proothi and Thompson recommend in financial transformation. “It shows the optimal path, and then all of the variations that clients are seeing, which ends up being this big ball of spaghetti,” Proothi says. Optimizing processes gives your workforce the bandwidth it needs to tackle the high-value work of data orchestration and interpretation. One approach Proothi recommends is using solutions provided by Celonis, which can operationalize sustainability by giving real-time schematics of how a business actually works. This helps to identify supply chain bottlenecks, recalibrate workflows and spotlight hidden inefficiencies.
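The "ball of spaghetti" Proothi describes is usually surfaced by a first process-mining step: grouping an event log by case and counting the distinct activity sequences, or variants, that actually occur. The sketch below is a hypothetical illustration with made-up invoice events, not the Celonis product; dedicated tools do this at enterprise scale.

```python
# Hypothetical sketch of a basic process-mining step: reconstruct each
# case's ordered activity trace from an event log, then count how many
# distinct "variants" of the process actually occur. Event fields and
# activity names are illustrative assumptions.

from collections import Counter

def variant_counts(event_log):
    """Map each case to its ordered trace, then count distinct variants."""
    traces = {}
    for e in sorted(event_log, key=lambda e: (e["case"], e["time"])):
        traces.setdefault(e["case"], []).append(e["activity"])
    return Counter(tuple(t) for t in traces.values())

log = [
    {"case": 1, "time": 1, "activity": "create invoice"},
    {"case": 1, "time": 2, "activity": "approve"},
    {"case": 1, "time": 3, "activity": "pay"},
    {"case": 2, "time": 1, "activity": "create invoice"},
    {"case": 2, "time": 2, "activity": "pay"},          # approval skipped
    {"case": 3, "time": 1, "activity": "create invoice"},
    {"case": 3, "time": 2, "activity": "approve"},
    {"case": 3, "time": 3, "activity": "pay"},
]
for variant, n in variant_counts(log).most_common():
    print(" -> ".join(variant), ":", n)
```

Comparing the most common variants against the intended "optimal path" is what exposes skipped approvals, rework loops and other hidden inefficiencies.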

Build a data foundation

Enterprise Resource Planning (ERP) systems are a good tool for collecting quality data, “but they don’t have any de facto sustainability or data objects built in,” Thompson says. “Look at how you can extend, enhance or augment your ERP.” And legacy data storage will no longer suffice. “Data lakes are more like data swamps now,” Proothi says. Data mesh and data fabric solutions help ensure data is clean, current and accessible.

Build diverse teams

The evolving needs of the finance department require a range of skills that many finance professionals don’t have yet. “You’re never going to find a unicorn of a person who has all the skills required,” Proothi says. Combining individuals who have traditional accounting skills with people experienced in sustainability and communications will empower teams to handle new responsibilities as they help one another upskill.

Leverage technology to increase visibility

Integrated AI-powered platforms like IBM Envizi ESG Suite encompass asset management and supply chain management solutions to help organizations collect and compile a wide range of environmental data. Perhaps more importantly for CFOs, they transform that data into legible outputs that provide the visibility required to make effective decisions about sustainability risks and opportunities. IBM also partners with FRDM, a platform that uses data science, machine learning and AI to highlight ESG risks across global supply chains and provide real transparency, even when it comes to complex issues like fair labor.

Gone are the days when the finance department was just responsible for closing the books each quarter. “The role of the CFO has completely changed,” Proothi says. What was historically an accounting role now holds opportunities for strategic leadership and balancing sustainability and profitability. CFOs of the future can reshape financial functions to drive performance and tell compelling stories to align an enterprise’s operations with its high-level goals and values.

Source: ibm.com