Thursday 30 December 2021

Ford presents its prestigious IT Innovation Award to IBM


Over 100 years ago, Henry Ford said, “Quality means doing it right when no one is looking.” Ensuring quality by meticulously inspecting work and materials was less complicated when Ford produced a single vehicle model in one color — black. Flash forward to 2020: The Ford Motor Company produced 1.7 million vehicles across multiple models with dozens of options packages. You can imagine the challenges to “looking” at every facet of the manufacturing process.

That’s one of the reasons why Ford technical experts engaged with IBM to deploy the IBM Maximo Visual Inspection platform, an AI-powered computer-vision solution. The Maximo Visual Inspection platform can help reduce defects and downtime, as well as enable quick action and issue resolution. Ford deployed the solution at several plants and embedded it into multiple inspection points per plant. The goal was to help detect and correct automobile body defects during the production process. These defects are often hard to spot and represent risks to customer satisfaction.

Although computer vision for quality has been around for 30 years, the lightweight and portable nature of our solution — which is based on a standard iPhone and makes use of readily available hardware — really got Ford’s attention. Any of their employees can use the solution, anywhere, even while objects are in motion.

Ford found the system easy to train and deploy, without needing data scientists. The system learned quickly from images of acceptable and defective work, so it was up and running within weeks, and the implementation costs were lower than most alternatives. The ability to deliver AI-enabled automation using an intuitive process, in their plants, with approachable technology, will allow Ford to scale out rapidly to other facilities. Ford immediately saw measurable success in the reduction of defects.

IBM wins the IT Innovation Award from Ford

Voted on by Ford’s technical community leaders, the Ford IT Innovation award is granted once a year to the technology they believe has delivered the greatest breakthrough innovation driving value for the company. This year, Mike Amend, Ford Chief Digital and Information Officer, presented IBM with the award for its Maximo Visual Inspection solution.

In presenting the award, Ford executives said that a discussion with IBM about digital transformation led to discussions about improving quality with AI automation. The Maximo Visual Inspection solution supported Ford’s “no fault forward” initiative with in-station process control and quality remediation at point of installation or assembly. This is exactly the kind of continuous process improvement that is helping Ford lower repair and warranty costs, and improve customer satisfaction, while helping employees play a role in bringing technical innovation to the floor.

“By bringing the power of IBM’s deep AI capabilities, deployable on cost-effective edge infrastructure, and into the cloud to share across our plants, Maximo Visual Inspection has enabled higher quality for our vehicles and our customers. We are expanding in 2022 to additional plants and use cases, including our new electric vehicle plants,” said Scott King, Manager & Principal Technologist – Advanced Manufacturing IT at Ford Motor Company.

The collaboration between Ford and my team at IBM on this project was a great experience, allowing us to witness firsthand the potential of Maximo Visual Inspection when deployed by a team of innovators at Ford.

Our full team at IBM, including my co-lead Mal Pattiarachi and our development leaders, Abhi Singh and Carl Bender, is honored that Ford recognized IBM for technical and business excellence. We believe this reflects the IBM values of client dedication, innovation that matters, and personal responsibility. It’s also a direct acknowledgement of the value the IBM Maximo Visual Inspection solution provides to our clients. Our mission is to harness the power of data and AI to derive real-time, predictive business insights that help our clients make intelligent decisions. I’m excited to strengthen our collaboration with Ford as it implements the Maximo Visual Inspection solution in more locations and with more use cases.

Source: ibm.com

Tuesday 28 December 2021

IBM Business Partners, MSPs, and ISVs amplify growth with IBM Power Systems Virtual Server


To achieve our mission of helping enterprises across the globe succeed through innovation, our team works alongside IBM® Business Partners within our valued, growing ecosystem.

Our network consists of IBM Business Partner companies, independent software vendors (ISVs), managed service providers (MSPs), global system integrators (GSIs) and other integral organizations that share our goals. Together, our Business Partners have achieved new levels of growth themselves through innovative products, including our latest IBM Power® offering, IBM Power Systems Virtual Server.

In this third installment of our blog series about IBM Power Systems Virtual Server, we’re looking at real-world examples of how growth is possible through scalable and transformative software hosting and management solutions.

IBM Power Systems Virtual Servers at a glance

Our team is thrilled about the impact our new hybrid cloud solution is already making for our Business Partners. We’ve combined the reliability and sheer processing power of on-premises Power servers with the scalability and flexibility that comes with a hybrid cloud environment. The result gives end users true hybrid cloud solutions that are strategically designed to be a seamless extension of their on-premises Power servers into IBM Power Systems Virtual Server data centers. IBM Power Systems Virtual Server is designed to deliver low-latency capacity thanks to 14 data centers in 7 countries, with more on the way. It can run IBM AIX®, IBM i and Linux® workloads, including SAP SUSE Linux Enterprise Server (SLES) 12 and 15 Open Virtualization Alliance (OVA) boot images and Red Hat® Enterprise Linux 8.1 and 8.2.

ISVs and IBM Power Systems Virtual Server

Let’s look at some examples of how ISVs are currently using IBM Power Systems Virtual Servers to promote innovation, transformation and peace of mind.

Created as a global solution

While Iptor has over 1,000 clients across the globe, its data centers were located only in Denmark. Its goal was not only to globalize its operations, but also to automate and streamline its processes. The worldwide data centers of IBM Power Systems Virtual Server helped Iptor maximize uptime, while the image capture feature helped it meet the needs of clients more quickly and efficiently. Finally, our DevOps pipeline helped make installation and upgrades universally accessible so customers in its supply chain could stay up to date.

A security-rich environment for sensitive data

Silverlake, a provider of a software-as-a-service (SaaS) digital banking cloud platform, joined our ecosystem to support IBM Cloud® for Financial Services™. Our collaboration allowed Silverlake to create scalable and security-rich virtual digital banking solutions in the highly regulated financial industry. Silverlake uses IBM Power Systems Virtual Server as an efficient and effective way to virtualize its solutions while meeting compliance requirements and lowering risk.

Sandbox capabilities

Many of our ISVs that are already part of the IBM Power Systems ecosystem have found success by conducting proofs of concept (PoCs) in IBM Power Systems Virtual Server data centers. These vendors have containerized their applications, while also running demo versions of their applications in a client-accessible sandbox. With this structure in place, prospects can try out applications in an isolated environment. This offering is designed to help improve security and uptime while helping environments meet spikes in traffic.

MSPs, Business Partners and IBM Power Systems Virtual Server

It’s always vital for MSPs, resellers and Business Partners to grow and discover new opportunities and revenue paths. IBM Power Systems Virtual Server is an excellent way to extend Power Systems offerings in a hybrid cloud environment for current and new customers while being supported and backed by the IBM brand. Moreover, we take infrastructure-as-a-service (IaaS) management responsibilities off our MSPs’ hands, including maintenance, data center floor space, updates and the costs of running a data center. Handling infrastructure maintenance not only helps cut costs by removing data center expenses, but also helps increase revenue margins and maximize uptime and availability.

As an IBM partner, you now have the opportunity to build partner-driven solutions on top of IBM Power Systems Virtual Server. We work alongside MSPs looking to grow in the enterprise space and shift their focus to services, such as disaster recovery (DR), migration, end-to-end management or reselling the offering. IBM Power Systems Virtual Server helps the clients of our Business Partners reach new geographies — regardless of their location — to further achieve globalization. Clients also have quicker access to new Power Systems features and functions.

At the end of the day, it’s all about creating a better experience for the end user. Plus, as the only Power solution certified for SAP HANA and SAP NetWeaver platform workloads both on premises and off premises, IBM Power Systems Virtual Server can help enterprises meet the deadline to migrate to SAP S/4HANA by 2027.

Success through collaboration

A leader in hosting and managed services, Connectria has years of demonstrated and recognized experience serving IBM i customers. Connectria is now offering IBM Power Systems Virtual Server solutions to both its install base and new prospects after seeing strong demand for such a virtualized Power solution.

The Connectria team was excited to work with us, embracing opportunities to access a new market of high-end P30-tier licensing clients, as well as several other IBM i software tiers. We provided a variety of managed services and licensing specifically tailored to Connectria’s goals, growth objectives and current offerings. Now, Connectria is proud to offer our hybrid cloud infrastructure portfolio solutions bundled together alongside its management solutions. Connectria is also extending and shifting current clients’ infrastructure, including older on-premises Power Systems, to IBM Power Systems Virtual Server. As a result, Connectria clients can stay modernized by staying virtual.

Driven by the benefits of this latest offering combined with competitive pricing, Connectria’s sales team is proposing IBM Power Systems Virtual Server solutions worldwide. Clients can now enjoy a new level of performance, experience and peace of mind.

A look at our roadmap

We are always pushing to add new capacities and capabilities to IBM Power Systems Virtual Server. In addition to upgrading our storage to IBM FlashSystem® 9200, here’s a high-level look at what end-users are finding beneficial. We’ve recently implemented virtual private network as a service (VPNaaS) solutions into this offering, as well as new network automation features. Coming soon is a whole new way to manage your IBM Power Systems Virtual Server, with IBM Cloud credit management. This credit system creates a simplified and efficient way for you to get the most out of your virtualized solution.

Source: ibm.com

Saturday 25 December 2021

IBM Quantum’s Open Science Prize returns with a quantum simulation challenge

IBM Quantum is excited to announce the second annual Open Science Prize — an award for those who can present an open source solution to some of the most pressing problems in the field of quantum computing. Submissions are open now, and must be received by April 16, 2022.


This year, the challenge will feature one problem from the field of quantum simulation, solvable through one of two approaches. The best open source solution to each approach will receive a $40,000 prize, and the winner overall will receive another $20,000.

Simulating physical systems on quantum computers is a promising application of near-term quantum processors. This year’s problem asks participants to simulate a Heisenberg model Hamiltonian for a three-particle system on IBM Quantum’s 7-qubit Jakarta system. The goal is to simulate the evolution of a known quantum state with the highest fidelity possible using Trotterization.

Researchers use the Heisenberg model to study a variety of physical systems involving interacting particles with spins. Quantum computers are useful tools to simulate these models because you can represent the spin states of particles as the computational states of qubits.

But tackling this Hamiltonian can prove challenging, since the terms of the Hamiltonian acting on overlapping subsets of qubits don’t commute — that is, you can’t measure those parts of the problem simultaneously to high precision, due to the uncertainty principle.

Trotterization allows us to simulate these kinds of systems by breaking the evolution into many small steps that alternate between the non-commuting parts.
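As a rough illustration of what such a Trotterized circuit looks like, here is a minimal Qiskit sketch of one first-order Trotter step for a three-spin Heisenberg (XXX) chain with nearest-neighbor couplings. It is not the official challenge notebook; the coupling convention, total time and step count are illustrative assumptions.

```python
# Minimal sketch (not the official challenge notebook): one first-order
# Trotter step for H = sum over neighboring pairs of (XX + YY + ZZ) on a
# three-qubit chain. rxx(theta) implements exp(-i*theta/2 * XX), so a step
# of duration dt uses theta = 2*dt.
from qiskit import QuantumCircuit

def trotter_step(dt: float) -> QuantumCircuit:
    qc = QuantumCircuit(3)
    for pair in [(0, 1), (1, 2)]:       # nearest-neighbor couplings only
        qc.rxx(2 * dt, *pair)           # exp(-i*dt*XX)
        qc.ryy(2 * dt, *pair)           # exp(-i*dt*YY)
        qc.rzz(2 * dt, *pair)           # exp(-i*dt*ZZ)
    return qc

# Approximate evolution for total time t by repeating n small steps.
t, n = 3.14, 4                          # illustrative values
circuit = QuantumCircuit(3)
for _ in range(n):
    circuit = circuit.compose(trotter_step(t / n))
print(circuit.draw(output="text"))
```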

We picked this problem because the Heisenberg model is ubiquitous and relatively simple — therefore, it’s a great place to start for those just dipping a toe into quantum simulation. But also, given the model’s ubiquity, any solution that betters our ability to simulate it will have broad impact on the field of quantum simulation overall.

Team up to win

Participants can team up in groups of up to five, and can choose to solve the problem in one of two ways:

1. Use Qiskit Pulse, the Qiskit module that gives users pulse-level control over quantum gates, or

2. Solve the problem using Qiskit defaults.

We encourage each team to push outside of their members’ comfort zone and try whichever method they think is best suited to solve the problem. Although Qiskit Pulse offers more detailed control of the qubits, there are advantages and disadvantages to both approaches.

Closing the door on last year’s prize

Last year’s Open Science Prize was a hit, as participants used Qiskit Pulse to tackle two challenging outstanding problems whose solutions could help advance the field of quantum computation. We had more than 30 submissions to two cutting-edge research challenges that were, and still are, open questions in the field. Participants used Qiskit to run more than 6 billion circuits on IBM Quantum’s 7-qubit Casablanca system. We hope to see even more participation this year from across the quantum computing community.

Source: ibm.com

Thursday 23 December 2021

Creating a global ecosystem for the Quantum industry


Quantum computing will transform key sectors of many industries in the years ahead and help us tackle some of the world’s most complex challenges in energy, materials and chemistry, finance, and elsewhere — and perhaps areas that we haven’t considered yet. This is the Quantum Decade: the integration of quantum computing, AI, and high performance computing into hybrid multi-cloud workflows will drive the most significant computing revolution in decades and enterprises will evolve from analyzing data to discovering new ways to solve problems.

We are seeing a growing community and industry with real technology and enormous excitement from venture capital, major technology players, and governments to create the next wave of information technology. At IBM’s recent Chief Data and Technology Officer Summit, I had an exciting conversation with Darío Gil, Senior Vice President, IBM and Director of Research around quantum computing and its current stage, and how IBM continues to rapidly innovate quantum hardware and software design, building ways for quantum and classical workloads to empower each other.

Building quantum computers has been a dream in computer science for many decades. We are finally building systems that behave and operate according to the laws of quantum mechanics, and making them widely accessible to a broad community. We are building the future of quantum computing together. In May 2016, IBM was the first company with a small quantum computer on the cloud. Today, we have 27 systems on the cloud with a community of over 350,000 users worldwide, running more than 2 billion quantum circuits every day.

At this year’s Quantum Summit, we debuted our 127-qubit Eagle quantum processor, the first IBM quantum processor to contain more than 100 operational and connected qubits. The increased qubit count allows users to explore problems at a new level of complexity, such as optimizing machine learning or modeling new molecules and materials. What is essential is how we have done it: solving some very hard problems while outperforming in three attributes: scale, quality, and speed.

Engaging the Developer Community 

Allowing developers to be shielded from the complexity of the quantum hardware and to seamlessly integrate classical and quantum computing is another breakthrough. Quantum Serverless, a new programming model for leveraging quantum and classical resources, offers users the advantage of quantum resources at scale without worrying about the intricacies of the hardware.

Discover how you can be quantum-ready and how this bleeding-edge technology can help you and your business thrive the moment quantum computers come of age.

2021 CDO and CAO Award Winners

If you missed our CDO/CTO Summit, we invite you to watch the replay. In addition to discussing quantum computing and how IBM clients embrace it, we heard from leading industry experts their reflections on various topics around data, AI, privacy, ethics, governance, risk, and innovation. We also announced the winners of the U.S. 2021 “Chief Data Officer of the Year”: Eileen Vidrine, Chief Data Officer at The United States Department of the Air Force, and the U.S. 2021 “Chief Analytics Officer of the Year”: Sol Rashidi, Senior Vice President, Global Data & Analytics and Chief Analytics Officer at The Estée Lauder Companies Inc.

Source: ibm.com

Tuesday 21 December 2021

IBM is named a Leader in the 2021 Gartner® Magic Quadrant™ for Cloud Database Management Systems (DBMS)


For the second year in a row, Gartner named IBM a Leader in the 2021 Gartner Magic Quadrant for Cloud Database Management Systems based on its Ability to Execute and Completeness of Vision. With the emergence of a single cloud DBMS market, we believe our portfolio of feature-rich, enterprise-tested offerings, bold acquisitions and partnerships enables our clients to address the unique needs of their business, respond to the growing volume, velocity and variety of today’s data, and drive more accurate data-driven decisions.

Magic Quadrant for Cloud Database Management Systems


IBM Cloud offers choice: an extensive array of fully managed database-as-a-service (DBaaS) offerings that span the needs of every business, large or small, simple or complex. IBM’s extensive and ongoing investment in delivering rich AI capabilities via the Db2 database engine and our IBM Cloud Pak® for Data platform provides significant value to our clients. Inside the Db2 engine, we leverage an advanced machine learning (ML) optimizer to deliver faster query performance. Our Db2 managed services provide a secure, scalable data environment and a rich library of ML functions that users can leverage to quickly generate and evaluate ML models and run predictions right inside the engine. Db2 also runs on IBM Cloud, AWS and Azure, independent of Cloud Pak for Data.

IBM’s Cloud Pak for Data represents a cohesive ecosystem with a broad range of data management capabilities, including multiple DBMS offerings, data integration, analytics, data science, metadata, and governance. This platform simplifies administration and includes advanced governance and data privacy features to safeguard data while enhancing its use for analytics and AI. Cloud Pak for Data tools and capabilities are fully containerized and run on Red Hat OpenShift, enabling businesses to run their applications and workloads wherever they want, on whatever cloud.

It’s great to see Gartner recognize the value that our clients can derive from the Cloud Pak for Data offering, a transportable environment that can be deployed in the customer’s choice of public cloud, or on-premises in software or an appliance form factor. In addition to hosting IBM’s DBMS offerings (Db2, Netezza, Cloudant), Cloud Pak for Data also supports third-party database offerings like DataStax, MongoDB, Redis and others. Data virtualization is included in the platform, connecting distributed data across a hybrid cloud landscape without moving it. IBM Cloud Pak for Data is designed to give customers a choice of where to aggregate, store, manage and explore their data by enabling them to deploy IBM’s offerings on the vendors of their choice.

At IBM, our approach to hybrid cloud and AI is founded on the principle that there is no AI without information architecture; the integration of our database management system portfolio with our AI and hybrid cloud is the manifestation of this principle. Data is an integral element of today’s digital transformation. The connection of data across distributed landscapes, coupled with the use of active metadata, semantics and machine learning capabilities, is driving the future of data fabric as an emerging technology, allowing organizations to better integrate and engage all of their data for better business outcomes. Explore IBM client studies such as GEO corporation, Marriott International, Kone Corp, and Active International.



Gartner, Magic Quadrant for Cloud Database Management Systems, Henry Cook, Merv Adrian, Rick Greenwald, Adam Ronthal, Philip Russom, December 2021

Gartner and Magic Quadrant are registered trademarks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from IBM.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Source: ibm.com

Saturday 18 December 2021

The power of Automation and AI on API testing


Testing APIs is crucial. It helps identify errors in the code, improves code quality, and empowers developers to make changes more quickly with confidence that they haven’t broken existing behavior. Automation and artificial intelligence can have a significant impact on API testing. Automation in API testing can be found in many products, but the majority of companies have yet to tap into the potential of AI and machine learning to enhance testing. At IBM, we believe there are a few key capabilities to keep an eye on as the future of API testing incorporates more AI and automation.

Adding Intelligence to Automation

With basic automated testing, a developer might use code that generates random inputs for each field. A lot of those tests will end up being wasted as they are repetitive, or don’t match the business use of the application. In those cases, manually written tests are more valuable because the developer has a better understanding of the API usage.

Adding intelligence provides a great opportunity to enhance automated testing to work with business logic. For example, users place an item in their online shopping cart before they are taken to the page that requires an address, so testing an API with an address but no items is a waste of time. Intelligent automated testing could generate a dynamic set of input values that make sense and provide a broader test of the API’s design, with more confident results.
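As a rough sketch of that contrast, the following Python snippet compares naive random inputs with inputs constrained by a simple business rule. The payload fields and the rule itself are illustrative assumptions, not taken from any IBM product.

```python
# Illustrative contrast: purely random test inputs versus inputs that respect
# a simple business rule (an address only matters once the cart has items).
# The payload fields and the rule are made-up examples.
import random
import string

def random_checkout_payload():
    """Naive fuzzing: random values with no notion of how the API is used."""
    return {
        "items": random.randint(-5, 5),
        "address": "".join(random.choices(string.ascii_letters + " ", k=20)),
    }

def business_aware_checkout_payload():
    """Generate inputs that match the business flow: items first, then address."""
    return {
        "items": random.randint(1, 5),   # a real checkout always has items
        "address": random.choice(
            ["10 Downing St", "221B Baker St", "1600 Amphitheatre Pkwy"]
        ),
    }

print(random_checkout_payload())
print(business_aware_checkout_payload())
```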

Semantic and Syntactic Awareness

Creating new API test cases can be time-consuming when done manually. Generating tests can accelerate this, but developers can only rely on this if the generated tests are high quality.

One way to improve the quality of generated tests is semantic and syntactic awareness – that is, training an intelligent algorithm to understand key business or domain entities such as a ‘customer’, ’email’, or ‘invoice’ – and how to generate data from them. By pointing it at existing tests, APIs and business rules, it should be able to ‘learn’ from that and become better at generating tests with less developer input later on.

Automating Setup and Teardown

A tester’s workload can be significantly decreased by identifying and automating routine tasks. Using an algorithm to look at an API specification and see what the dependencies are allows the machine to conduct routine setup and teardown tasks. For example, if a bookshop has an API for orders, the AI can set up the scaffolding and create the prerequisites for the test. If a tester needs to create a book and a customer prior to creating an order, those tasks are handled by the AI, and the created records are cleaned up and deleted after the test. As an algorithm learns about the company’s API structures, it can generate more of the setup and teardown tasks.
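To make the bookshop example concrete, here is a hand-written pytest sketch of the kind of setup and teardown scaffolding such a tool might generate automatically. The base URL, endpoints, payloads and status codes are hypothetical.

```python
# Hypothetical bookshop API test: fixtures create the prerequisites (a book
# and a customer) before the order test runs, then delete them afterwards.
# The base URL, endpoints, payloads and status codes are illustrative only.
import pytest
import requests

BASE = "https://bookshop.example.com/api"   # hypothetical API

@pytest.fixture
def book():
    created = requests.post(f"{BASE}/books",
                            json={"title": "Moby Dick", "price": 9.99}).json()
    yield created                                      # test runs here
    requests.delete(f"{BASE}/books/{created['id']}")   # teardown

@pytest.fixture
def customer():
    created = requests.post(f"{BASE}/customers",
                            json={"name": "Ada", "email": "ada@example.com"}).json()
    yield created
    requests.delete(f"{BASE}/customers/{created['id']}")

def test_create_order(book, customer):
    response = requests.post(f"{BASE}/orders",
                             json={"book_id": book["id"], "customer_id": customer["id"]})
    assert response.status_code == 201
```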

Mining real world data

The effectiveness of API testing is greatly increased when tests use realistic data, representative of real-world production conditions. Generating tests from production data must be done with care due to the risk of exposing sensitive data. Without automation, creating real-world useful tests is difficult to achieve at scale because of the high labor cost of combing through mounds of data, determining what is relevant, and cleansing the data of sensitive values.

Using AI to identify gaps in test coverage

A recent addition to the IBM Cloud Pak for Integration Test and Monitor uses AI to analyze the API workloads in both production and test environments, identifying the ways that APIs are being invoked in each. This analysis allows it to identify real-world production API scenarios that aren’t adequately recreated in the existing test suite, and automatically generate tests that fill that gap.

Allowing an algorithm to efficiently examine millions of production API calls means that production personnel only need to review and approve the smartly generated tests. This is a very effective way of increasing test coverage in a way that will have the most impact – as it prioritizes closing testing gaps based on how users are interacting with APIs in the real world.
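A greatly simplified sketch of that gap analysis idea is shown below: compare the call patterns observed in production with those exercised by the test suite, and rank the untested ones by how often real users hit them. The endpoints and counts are made up, and real tooling works on full request payloads rather than just method and path.

```python
# Simplified gap analysis: which API call patterns appear in production but
# are never exercised by the test suite? Endpoints and counts are made up.
from collections import Counter

production_calls = Counter({
    ("GET", "/orders/{id}"): 5400,
    ("POST", "/orders"): 1200,
    ("GET", "/orders/{id}/invoice"): 800,   # not covered by the tests below
})
test_calls = {("GET", "/orders/{id}"), ("POST", "/orders")}

# Rank untested patterns by production frequency, so the biggest gaps surface first.
for call, count in production_calls.most_common():
    if call not in test_calls:
        print(f"untested: {call[0]} {call[1]} (seen {count} times in production)")
```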

Source: ibm.com

Thursday 16 December 2021


VTFET: The revolutionary new chip architecture that could keep Moore’s Law alive for years to come

IBM Research, in partnership with our Albany Research Alliance partner Samsung, has come up with a breakthrough in semiconductor design — called VTFET — that could help reshape the semiconductor industry for years to come.

Back in 1965, Gordon Moore first hypothesized that the number of transistors and other components in a dense integrated circuit would double roughly every two years, doubling the speed and capacity of computers along with it. But more than 55 years later, the number of transistors that can be crammed onto a single chip has just about reached its limit.

At the same time, the road ahead for computing systems isn’t slowing down. Dynamic AI systems are poised to supercharge so many aspects of our lives — from road safety to drug discovery and advanced manufacturing — which will require considerably more powerful chips in the future. In order to continue the advances in speed and computing power that Moore posited, we’ll need to build chips with as many as 100 billion transistors.

IBM Research, in collaboration with our Albany Research Alliance partner Samsung, has made a breakthrough in semiconductor design: Our new approach, called Vertical-Transport Nanosheet Field Effect Transistor, or VTFET, could help keep Moore’s Law alive for years to come.

Figure 1:
A VTFET (Vertical-Transport Nanosheet Field Effect Transistor) wafer

VTFET reimagines the boundaries of Moore’s Law — in a new dimension


Today’s dominant chip architectures are lateral-transport field effect transistors (FETs), such as the fin field effect transistor, or finFET (which got its name because its silicon body resembles the back fin of a fish), which layer transistors along a wafer’s surface. VTFET, on the other hand, layers transistors perpendicular to the silicon wafer and directs current flow vertically, perpendicular to the wafer surface. This new approach addresses scaling barriers by relaxing physical constraints on transistor gate length, spacer thickness and contact size, so that each of these features can be optimized, either for performance or for energy consumption.

Figure 2:
Brent Anderson, VTFET Architect and Program Manager, and Hemanth Jagannathan, VTFET Hardware Technologist and Principal Research Staff Member, holding a wafer outside their lab. Credit: Connie Zhou

With VTFET, we’ve successfully demonstrated that it’s possible to explore scaling beyond nanosheet technology in CMOS semiconductor design. At these advanced nodes, VTFET could be used to provide two times the performance, or up to an 85 percent reduction in energy use, compared to the scaled finFET alternative.

The new VTFET architecture demonstrates a path to continue scaling beyond nanosheet. In May, we announced a 2-nanometer node chip design that will allow a chip to fit up to 50 billion transistors in a space the size of a fingernail. VTFET continues the innovation journey and opens the door to new possibilities.

Figure 3:
A side-by-side comparison of how a VTFET (left) and lateral FET (right) transistor are arranged, with current flowing through them.

Finding more space


In the past, designers packed more transistors onto a chip by shrinking its gate pitch and wiring pitch. The physical space where all the components fit is called the Contacted Gate Pitch (CGP). The ability to shrink gate and wiring pitches has allowed integrated-circuit designers to go from thousands to millions to billions of transistors in our devices. But with the most advanced finFET technologies, there’s only so much room for spacers, gates and contacts. Once you’ve reached the CGP limit, you’re out of space.

Figure 4:
FET configuration with layers arranged horizontally on the wafer. Dummy isolation gates, shown in blue, are required to isolate adjacent circuits, which wastes space.

Figure 5:
New VFET configuration with layers arranged vertically on the wafer, dramatically improving density scaling by shrinking the Gate Pitch and eliminating dummy isolation gates.

By orienting electrical current flow vertically, the gates, spacers and contacts are no longer constrained in traditional ways: We have room to scale CGP while maintaining healthy transistor, contact, and isolation (spacer and shallow trench isolation, STI) size. Released from the constraints of the lateral layout and current flow, we were able to use larger source/drain contacts to increase the current on the device. The gate length can be selected to optimize device drive current and leakage, while the spacer thickness can be independently optimized for lower capacitance. We are no longer forced to trade off the gate, spacer, and contact size, which can result in improved switching speed and reduced power use.

Figure 6:
Working in the Albany lab. Credit: Connie Zhou

Another key VTFET feature is the ability to use STI for adjacent circuit isolation to achieve a Zero-Diffusion Break (ZDB) isolation, with no loss of active-gate pitches. By comparison, the density of lateral-transport FET circuitry is affected by double or single-diffusion breaks required for circuit isolation, which affects the ability to further shrink the technology.

A new way to look at the future of chip design


Figure 7:
Working in the Albany lab. Credit: Connie Zhou

Even a decade ago, we could see that lateral architectures would hit scaling limits at aggressive gate pitches: practically, each of the device components was nearing its scaling limit. We wanted to find other paths that could break those barriers, and our motivation has never changed. Our goal has always been to produce a competitive device for the technologies of the future.

With CMOS logic transistors at a sub-45 nm gate pitch on bulk silicon wafers, a gate pitch more aggressive than anything known in production, we believe that the VTFET design represents a huge leap forward toward building next-generation transistors that will enable a trend of smaller, more powerful and energy-efficient devices in the years to come.

Source: ibm.com

Tuesday 14 December 2021

How AI will help code tomorrow’s machines

At NeurIPS 2021, IBM Research presents its work on CodeNet, a massive dataset of code samples and problems. We believe that it has the potential to revitalize techniques for modernizing legacy systems, helping developers write better code, and potentially even enabling AI systems to help code the computers of tomorrow.


Chances are, if you’ve done just about anything today, you’ve interacted with a programming language older than the desktop computer, the internet, and the VHS tape.

Whether you’ve checked your bank account, used a credit card, gone to the doctor, booked a flight, paid your taxes, or bought something in a store, you likely have interacted with a system that relies on COBOL (Common Business Oriented Language) code. It’s a programming language that many mission-critical business systems around the world still rely on, even though it was first implemented over six decades ago. It’s estimated that some 80% of financial transactions use COBOL, and the U.S. Social Security Administration utilizes around 60 million lines of COBOL code.

As programmers and developers versed in COBOL have started to retire, organizations have struggled to keep their systems up and running, let alone modernize them for the realities of the always-on internet. And this is just one of a myriad of languages still in use that don’t reflect what modern coders feel most comfortable writing in, or what’s best suited for modern business applications.

Code language translation is one of many problems that we strive to address with CodeNet, which we first unveiled back in May. Essentially, CodeNet is a massive dataset that aims to help AI systems learn how to understand and improve code, as well as help developers code more efficiently, and eventually, allow an AI system to code a computer. It’s made up of around 14 million code samples, comprising some 500 million lines of code from more than 55 different languages. It includes samples ranging from modern languages like C++, Java, Python, and Go to legacy ones like Pascal, FORTRAN, and COBOL. Within a short span of three months, our GitHub repository received 1,070 stars and was forked over 135 times.

Figure 1: CodeNet code composition.

This week at NeurIPS, we discuss our paper on CodeNet, and the work we’ve done to build out CodeNet, how we see it as different from anything else like it that’s available for anyone to download, and how we see it being used by the research community.

There has been a revolution in AI over the last decade. Language and image data, carefully curated and tagged in datasets like ImageNet, have given rise to AI systems that can complete sentences for writers, detect tumors for doctors, and automate myriad business and IT processes. But for code, the language of computers, crafting such a dataset that AI systems can learn from has been a challenging task.

The end goal of CodeNet is to enable developers to create systems that can modernize existing codebases, as well as fix errors and security vulnerabilities in code. It’s something I recently discussed in a lightboard video: Can computers program computers?

Experiments on CodeNet


We’ve carried out baseline experiments on CodeNet for code classification, code similarity, and code completion. These results serve as a reference for CodeNet users when they perform their own experiments. Some of our results also indicate that the models derived from CodeNet can generalize better across datasets than those derived from other datasets due to CodeNet’s high quality.

Code classification

CodeNet can help create systems that determine what type of code a snippet is. We used a wide range of machine-learning methods for our experiments, including bag of tokens, sequence of tokens, a BERT model, and graph neural networks (GNNs). We achieved upwards of 97% accuracy with some of our methods at matching code types to source code.
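For readers who want a feel for the simplest of those methods, here is a toy bag-of-tokens baseline in Python using scikit-learn. The snippets and labels are made-up examples, and this is not the CodeNet experiment itself.

```python
# Toy bag-of-tokens baseline for code classification: count tokens, then fit
# a linear classifier. The snippets and labels are invented examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "def add(a, b): return a + b",             # python
    "print(sum(map(int, input().split())))",   # python
    "int main() { return 0; }",                # c++
    "std::cout << x << std::endl;",            # c++
]
labels = ["python", "python", "cpp", "cpp"]

# Split on whitespace so punctuation-heavy code tokens are preserved.
model = make_pipeline(CountVectorizer(token_pattern=r"\S+"), LogisticRegression())
model.fit(snippets, labels)
print(model.predict(["for (int i = 0; i < n; ++i) total += v[i];"]))
```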

Code similarity

Code similarity determines if multiple pieces of code solve the same problem. It serves as the foundational technique for code recommendation, clone detection, and cross-language transformation. We tested a wide spectrum of methods for code similarity (including an MLP with bag of tokens, a Siamese network with token sequences, a Simplified Parse Tree [SPT] with handcrafted feature extraction, and a GNN with SPT) against our benchmark datasets. The best similarity score comes from leveraging a sophisticated GNN with intra-graph and inter-graph attention mechanisms.
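As a toy baseline, the sketch below scores similarity as the Jaccard overlap between token sets. It only illustrates what the task means operationally and is far weaker than the Siamese-network and GNN approaches used in the CodeNet experiments; the snippets are invented.

```python
# Toy code-similarity baseline: Jaccard overlap between whitespace-separated
# token sets. The snippets are made-up examples.
def token_jaccard(a: str, b: str) -> float:
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 1.0

s1 = "for i in range(n): total += a[i]"
s2 = "total = sum(a[i] for i in range(n))"
s3 = "print('hello, world')"
print(token_jaccard(s1, s2))   # higher overlap: both sum a list
print(token_jaccard(s1, s3))   # low overlap: unrelated snippets
```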

Generalization across datasets

We believe that models trained on the CodeNet benchmark datasets can benefit greatly from their high quality. For example, we took our C++1000 benchmark and compared it against one of the largest publicly available datasets of its kind, GCJ-297, derived from problems and solutions in Google’s Code Jam. We trained the same MISIM neural code similarity model on C++1000 and GCJ-297, and tested the two trained models on another independent dataset, POJ-104.

Our data suggests that the model trained on GCJ-297 has a 12% lower accuracy score than the model trained on C++1000. We believe C++1000 can better generalize because there’s less data bias than there is in GCJ-297 (where the top 20 problems with the greatest number of submissions account for 50% of all the submissions), and the quality of the cleaning and de-duplication of the data in CodeNet is superior.

Code completion

We believe this to be a valuable use case for developers, where an AI system can predict what code should come next at a given position in a code sequence. To test this, we built a masked language model (MLM) that randomly masks out (or hides) tokens in an input sequence and tries to correctly predict them on test data it hasn’t seen before. We trained a popular BERT-like attention model on our C++1000 benchmark, and achieved a top-1 prediction accuracy of 91.04% and a top-5 accuracy of 99.35%.
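To show the masked-token mechanism itself, here is a small Python sketch using the Hugging Face fill-mask pipeline with a generic English BERT model; the CodeNet experiments trained their own BERT-like model on C++1000, so the model choice and the example snippet here are illustrative assumptions.

```python
# Toy illustration of masked-token prediction, the mechanism behind the
# code-completion experiment. A generic English BERT model stands in for the
# code-trained model used in the CodeNet experiments.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model scores candidate tokens for the [MASK] position.
for prediction in fill_mask("for (int i = 0; i < n; [MASK]) sum += a[i];"):
    print(f"{prediction['token_str']!r}  score={prediction['score']:.3f}")
```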

Further use cases for CodeNet


The rich metadata and language diversity open CodeNet to a plethora of interesting and practical use cases. The code samples in CodeNet are labeled with their anonymized submitter and acceptance status so we can readily extract realistic pairs of buggy and fixed code from the same submitter for automated code repair. A large percentage of the code samples come with inputs so that we can execute the code to extract the CPU run time and memory footprint, which can be used for regression studies and prediction. CodeNet may also be used for program translation, given its wealth of programs written in a multitude of languages. The large number of code samples written in popular languages (such as C++, Python, Java, and C) provide good training datasets for the novel and effective monolingual approaches invented in the past several years.

What differentiates CodeNet


While CodeNet isn’t the only dataset aimed at tackling the world of AI for code, we believe it to have some key differences.

Large scale: To be useful, CodeNet needs to have a large number of data samples, with a broad variety of samples to match what users might encounter when trying to code. With its 500 million lines of code, we believe that CodeNet is the largest dataset in its class: It has approximately 10 times more code samples than GCJ, and its C++ benchmark is approximately 10 times larger than POJ-104.

Rich annotation: CodeNet also includes a variety of information on its code samples, such as whether a sample solves a specific problem (and the error categories it falls into if it doesn’t). It also includes a given task’s problem statement, as well as a sample input for execution and a sample output for validation, given that the code is supposed to be solving a coding problem. This additional information isn’t available in other similar datasets.

Clean samples: To help with CodeNet’s accuracy and performance ability, we analyzed the code samples for near-duplication (and duplication), and used clustering to find identical problems.

How CodeNet is constructed


CodeNet contains a total of 13,916,868 code submissions, divided into 4,053 problems. Some 53.6% (7,460,588) of the submissions are accepted, meaning they pass the tests prescribed for the problem, and 29.5% are marked with wrong answers. The remaining submissions were rejected due to their failure to meet runtime or memory requirements.

The problems in CodeNet are mainly pedagogical and range from simple exercises to sophisticated problems that require advanced algorithms, and the people submitting the code range from beginners to experienced developers. The dataset is primarily composed of code and metadata scraped from two online code judging sites, AIZU and AtCoder. These sites offer courses and contests where coding problems are posed and submissions are judged by an automated review process to see how correct they are. We only considered public submissions and manually merged the information from the two sources, creating a unified format from which we made a single dataset.

We ensured that we applied a consistent UTF-8 character encoding to all the raw data we collected, given that the data came from different sources. We also removed byte-order marks and used Unix-style line feeds as the line ending.
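A small Python sketch of that kind of normalization is shown below. It assumes the raw submissions arrive as bytes; the sample input is invented.

```python
# Sketch of the normalization described above: decode to UTF-8, strip a
# leading byte-order mark if present, and convert line endings to Unix style.
def normalize_source(raw: bytes) -> str:
    text = raw.decode("utf-8", errors="replace")
    text = text.lstrip("\ufeff")                       # drop a leading BOM
    return text.replace("\r\n", "\n").replace("\r", "\n")

sample = b"\xef\xbb\xbfint main() {\r\n  return 0;\r\n}\r\n"   # invented input
print(normalize_source(sample))
```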

We looked for duplicate problems, as many of these problems were compiled over many decades. We also identified near-duplicate code samples to facilitate extraction of benchmark datasets in which data independence is desirable.

We provided benchmark datasets for the dominant languages (C++, Python, and Java) for the convenience of the users. Each code sample and related problem are unique, and have passed through several pre-processing tools we’ve provided to ensure that code samples can effectively be converted into a machine learning model input. Users can also create benchmark datasets that are customized to their specific purposes using the data filtering and aggregation tools provided in our GitHub.

What’s next for CodeNet


This is just the start for our vision of what CodeNet can offer to the world of AI for code. We hope to achieve widespread adoption of the dataset to spur on innovation in using AI to modernize the systems we all rely on every day.

In the near future, we will be launching a series of challenges based on the CodeNet data. The first is a challenge for data scientists to develop AI models using CodeNet that can identify code with similar functionality to another piece of code. This challenge was launched in partnership with Stanford’s Global Women in Data Science organization. We’ve organized workshops to introduce the topic, code similarity, and provide educational material. Every team that participates in these challenges is made up of at least 50% women to encourage diversity in this exciting area of AI for Code.

We envision a future where a developer can build on legacy code in a language they’re accustomed to. They could write in Python and an AI system could convert it into fully executable COBOL, extending the life of the system they’re working on, as well as its reliability, indefinitely. We see the potential for AI systems that can evaluate a developer’s code, based on thousands of examples of past code, and suggest how to improve how they develop their system, or even write a more efficient piece of code itself. In short, we’ve begun to explore how computers can program the ones that succeed them. But for CodeNet to succeed, we need developers to start using what we’ve built.

Source: ibm.com

Friday 10 December 2021

IBM’s quantum computers: an optimal platform for condensed matter physics research

While condensed matter physicists often must rely on large collaborations or costly hardware to run their experiments, our cloud-based quantum processors allow users to make groundbreaking advances in condensed matter physics with little more than their laptop and a user account with IBM Quantum.


Matter’s constituent particles can interact in a variety of ways based on their intrinsic properties. These interactions manifest themselves as materials with properties that serve functions in every aspect of our lives — whether solid, liquid, or gas. Some interactions between particles, however, can give rise to exotic properties and phases of matter, like superconductivity or ferromagnetism. Condensed matter physicists study how inter-particle interactions give rise to these interesting behaviors. And the physics of these interactions is described by the laws of quantum mechanics, which was one of the first motivations for simulating such systems on a quantum computer.

Condensed matter physics has important implications for our understanding of nature and the development of new technologies. Advances made by condensed matter physicists have led to seminal inventions, like the transistor and the Josephson-junction superconducting qubits that are the building blocks of IBM Quantum processors.

Given the importance of furthering our understanding of matter, we’re excited that IBM Quantum systems make ideal laboratories to study condensed matter physics. And while condensed matter physicists often must rely on large collaborations or costly hardware to run their experiments, our cloud-based quantum processors enable users to make potentially groundbreaking advances in condensed matter physics with little more than their laptop and a user account with IBM Quantum.

Spinning up real world condensed matter physics research


In fact, a small team of researchers employing even today’s noisy quantum computers can make a valuable impact. An active area studied by condensed matter physicists is the dynamics of interacting spin systems. Spin is a crucial property of elementary particles that can be pictured as the clockwise or counter-clockwise spinning of a toy top, but with a quantum twist.

The two states of a quantum bit form a natural analogue to the “up” and “down” spins, and interactions between spins can be easily toggled by our systems’ control pulses. The connectivity of our qubits therefore allows users to naturally simulate the dynamics of spin lattices, and explore how the collective behavior of spins changes under the influence of external forces.

For example, a paper by researchers at the Autonomous University of Madrid (UAM) simulates the dynamics of a one-dimensional Ising model — essentially, a line of particles in one of the two spin states that could interact only with their neighbors — in the presence of external magnetic fields both parallel and perpendicular to the system. They recreated this system on the ibmq_paris system’s 27-qubit processor.
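To give a flavor of what such a simulation circuit can look like, here is a minimal Qiskit sketch (not the UAM paper’s code) of a single Trotter step for a one-dimensional Ising chain with both a parallel (Z) and a perpendicular (X) external field. The coupling strength, field strengths, chain length and step size are illustrative assumptions.

```python
# Minimal sketch: one Trotter step for a 1D Ising chain with nearest-neighbor
# ZZ coupling J, a longitudinal field hz and a transverse field hx.
# All parameter values are illustrative.
from qiskit import QuantumCircuit

def ising_trotter_step(n_qubits=4, J=1.0, hz=0.5, hx=0.5, dt=0.1):
    qc = QuantumCircuit(n_qubits)
    for i in range(n_qubits - 1):       # nearest-neighbor ZZ interaction
        qc.rzz(2 * J * dt, i, i + 1)
    for i in range(n_qubits):           # external fields on every spin
        qc.rz(2 * hz * dt, i)           # field parallel to the spin axis
        qc.rx(2 * hx * dt, i)           # field perpendicular to the spin axis
    return qc

print(ising_trotter_step().draw(output="text"))
```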

Another study by researchers at Lawrence Berkeley National Laboratory simulated another canonical spin system described by the Heisenberg model — also done on the ibmq_paris system. In each case, the teams found that they could accurately calculate relevant properties of the systems that they were studying, and could significantly enhance the quality of their simulations with error mitigation techniques, even on existing noisy processors.

Crucially, these teams were able to run their simulations without any specialized equipment; they simply had to run their quantum programs on a cloud-based IBM Quantum computing system. Dozens of papers published on the arXiv physics pre-print server, including our own experiments with our 27-qubit processors, have demonstrated the power of IBM Quantum processors in simulating spin dynamics. Even noisy quantum computers may soon have the potential to provide a quantum advantage over classical methods for some condensed matter problems.

Using a quantum computer to understand phases of matter that act like a crystal in time


Other researchers are using quantum computers to study phases of matter beyond what we can create in the lab. This past May, two researchers at the University of Melbourne reported evidence for the much-heralded time crystal on arXiv — using ibmq_manhattan’s and ibmq_brooklyn’s 65-qubit processors to simulate a chain of 57 driven spins with nearest-neighbor interactions. The traditional crystal is a spatial crystal, such as matter composed of a lattice of atoms in a stable, preferred structure in space. The idea of the time crystal posits that perhaps there exist phases of matter that act like a crystal in time, with states that have a periodicity in time. Physicists have been actively studying driven spin systems as a platform for the realization of such phases of matter.

Source: ibm.com

Thursday 9 December 2021

Increase business agility with IBM Z and Cloud Modernization Center


It’s no secret that digital transformation requires modernization. But the path to modernization can be less clear-cut. Many businesses struggle to identify who they can trust to bring the right approach for their business that is grounded in experience and proven business outcomes.

As digital transformation speeds up and modernization requirements become more urgent, it’s more important than ever to turn to a trusted partner like IBM. Sixty-seven of the Fortune 100 companies rely on IBM Z technology today, and in a recent study of 261 decision makers, 74% say they “believe the mainframe has long-term viability as a strategic platform for their organizations.” With industry-first investments in AI and data analytics, IBM Z innovates with the clients and industries it supports.

The rise of hyperscalers has led many organizations to consider migrating applications to public cloud alone, but in many cases this can be a one-way street that locks them into one public cloud, which may have implications for cost, governance and security.

IBM Z takes a more balanced approach to application modernization that brings the best of cloud and IBM Z together.

To help clients choose the best approach for their unique business, IBM today announced the IBM Z and Cloud Modernization Center. The center is a digital front door to tools, resources, training, ecosystem partners and real client examples. It is designed to help clients on their modernization journey and provide support for creating an effective modernization roadmap. The center is the result of collaboration with IBM Consulting and broad ecosystem participation, including global services and technology partners. The ecosystem brings deep expertise in modernization, whether clients are modernizing existing apps and data or integrating cloud native apps and data.

The IBM Z and Cloud Modernization Center helps clients leverage existing investments, rather than committing to a costly one-size-fits-all migration strategy. A study done by the IBM IT Economics team shows a 3.2x lower annual TCO with IBM Z application modernization vs. application migration to the cloud only. With the expertise and guidance of partners, clients can accelerate transformation in their Z environment.

IBM Z clients across industries have begun their modernization journeys, including Garanti BBVA, a Turkish bank that was one of the first to launch mobile banking in Turkey. By taking advantage of tools from IBM to modernize applications, Garanti BBVA estimates it has increased developer productivity by up to 25 percent. This transformation has enabled Garanti BBVA to continue hiring top talent and maintain a competitive edge.

To accelerate your modernization journey today, visit the IBM Z and Cloud Modernization Center for access to top industry resources.  Schedule a briefing, connect with a partner, join a workshop, earn digital badges, speak with a domain expert or dive deep with additional in-depth technical resources.

Source: ibm.com

Tuesday 7 December 2021


Aligning ESG with enterprise risk


Today’s business leaders have inherited a turbulent market landscape in which they must understand, monitor and manage the impact their firms have on external entities, in a much wider sense than before. When considering their effect on the physical environment, their social responsibilities in areas like supply chain ethics, or their overall responsibility to society through effective governance practices, firms need to ensure they have a handle on environmental, social and governance (ESG) risk and compliance.

The benefits of ESG

An effective ESG practice delivers several benefits, including:

◉ Reduced energy costs

◉ Improved employee/labor relationships

◉ Reduction in regulatory and legal intervention

◉ Greater access to capital alongside revenue growth

A key challenge here is aligning the goals of ESG with current governance, risk and compliance (GRC) management disciplines and responsibilities already well-established within the business. Organizations must develop an integrated approach to the goals spanning ESG and GRC, using a data-driven approach.

The immediate future of ESG

A recent IDC FutureScape Report identified several predictions on sustainability that directly align with current GRC disciplines:

◉ “By 2024, two-thirds of organizations worldwide will be tracking their diversity, equity, and inclusion performance using ESG metrics and KPIs.”

◉ “Companies will extend their data privacy initiatives to surpass compliance requirements with 60% of enterprises establishing KPIs regarding the ethical use of data by 2023.”

◉ “By 2023, 40% of organizations will mandate responsible sourcing policies and implement audit and accountability solutions requiring proof of compliance to build trust among consumers and stakeholders.”

These may sound like optimistic projections, but this sort of approach has been a foundation of GRC management for almost 15 years. ESG is simply an extension of those practices.

Reduce, reuse, recycle

ESG risk management fits under the classic sustainability rubric of reduce, reuse, recycle:

Reduce

“New” non-financial risk disciplines often bring a slew of additional point solutions to market, but firms can instead extend their existing frameworks with flexible GRC tools.

Reducing the number of non-financial risk systems lowers the overall running cost, deployment effort and environmental impact of standing up yet another platform. At a minimum, firms can minimize costly CAPEX and gain efficiencies by moving their risk management practices to the cloud.

Reuse

An effective ESG program will look to:

1. Identify ESG risks against business objectives and external reporting frameworks through a risk assessment

2. Document and evaluate corresponding controls

3. Monitor progress toward objectives and potential impediments using key indicators (KRIs/KPIs) and metrics

4. Manage the remediation of the gaps and identified issues through proper assignment, accountability and tracking mechanisms

5. Report on the position, progress, outcomes and regulatory alignment of ESG requirements

ESG is inherently linked to multiple existing GRC domains, including operational risk, third-party risk, and policy and compliance management. In an effective integrated risk management initiative, the five steps above align directly with processes already in place, so businesses can manage their ESG profile by reusing existing GRC processes.

Conceptually, this is as simple as extending categorization models; enhancing risk, control and indicator libraries; and adding new reports. In practice it can be a challenging exercise, so a flexible GRC solution that lets your organization test and deploy configuration changes easily and rapidly is essential.
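To make the idea of extending existing libraries concrete, here is a minimal, hypothetical sketch in Python. It is not the IBM OpenPages data model or API; the class names, categories, indicators and thresholds are invented for illustration. The point is simply that ESG risks can slot into the same risk, control and indicator structures, and the same monitoring loop, already used for other GRC domains.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified model of a GRC risk library; real platforms
# expose far richer objects, workflows and APIs.

@dataclass
class KeyIndicator:
    name: str
    value: float
    threshold: float
    higher_is_worse: bool = True

    def breached(self) -> bool:
        # A KRI/KPI breach flags the parent risk for remediation tracking.
        return self.value > self.threshold if self.higher_is_worse else self.value < self.threshold

@dataclass
class Risk:
    name: str
    category: str              # e.g. "Operational", "Third-party", "ESG / Environmental"
    controls: list = field(default_factory=list)
    indicators: list = field(default_factory=list)

# Extending the existing library: ESG risks are new categories, controls
# and indicators added alongside the ones already in place.
risk_library = [
    Risk("Data centre outage", "Operational",
         controls=["Failover plan"],
         indicators=[KeyIndicator("Unplanned downtime (hrs/quarter)", 3.0, 4.0)]),
    Risk("Supplier labour practices", "ESG / Social",
         controls=["Supplier code of conduct", "Annual audit"],
         indicators=[KeyIndicator("Suppliers without audit (%)", 18.0, 10.0)]),
    Risk("Scope 2 emissions", "ESG / Environmental",
         controls=["Renewable energy procurement"],
         indicators=[KeyIndicator("Emissions vs. target (%)", 104.0, 100.0)]),
]

# The same monitoring and reporting loop now covers ESG and non-ESG risks alike.
for risk in risk_library:
    for ki in risk.indicators:
        if ki.breached():
            print(f"[{risk.category}] {risk.name}: '{ki.name}' breached "
                  f"({ki.value} vs. threshold {ki.threshold}) - open remediation item")
```

In this sketch, adding an ESG domain changes only the data in the library, not the assessment, monitoring or reporting logic, which is the essence of reusing an existing GRC framework rather than deploying a separate ESG silo.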

Recycle

ESG, like GRC, is a data-hungry activity, so firms should look to recycle their existing data inventory. Whether it is third-party content feeds such as those provided by Supply Wisdom, regulatory change content from Thomson Reuters, or internally generated content related to HR hiring practices and profiles, this data is crucial for understanding, managing and monitoring the ESG stance of your business. Accordingly, many existing GRC platforms embed these data feeds natively.

Conclusion

Delivering an effective ESG program is not a simple exercise. Firms can align their ESG objectives with enterprise risk by adapting and extending the existing policies, processes and systems within their GRC frameworks and supplementing them with additional third-party content. Firms that treat ESG as a separate discipline may still achieve their goals, but they risk saddling themselves with yet another silo of risk management and missing part of the core ethos of ESG.

It’s not necessary to reinvent the wheel with ESG. Adding new data systems, technology or processes often results in costly, inefficient programs that lose sight of the overall business objectives and performance. By aligning ESG objectives with your enterprise risk management program, you’re more likely to meet your ESG goals. Solutions like IBM OpenPages with Watson support those goals by providing a flexible enterprise risk platform that breaks down silos and opens up GRC capabilities to leaders across the organization, giving total visibility of the company’s risk position from one integrated point of view.

Source: ibm.com

Saturday 4 December 2021

Solving business challenges through democratized data


“Data is key to any enterprise but must be readily available to employees, partners, and customers in real-time.”

The struggle is real. While most enterprises are teeming with volumes of finance, operations and consumer information, their users often lack the business-ready data needed to drive new insights. The challenges are many: some data architectures cannot meet real-time requirements; data assets are locked up in siloed repositories or spread across disparate hybrid and multicloud environments; and governance and security fears may keep data access restricted, under the control of technology organizations.

The 2021 Forrester report, Enterprise Data Fabric Enables DataOps, reads, “Delivering connected data across hybrid and multicloud sources isn’t trivial, especially with growing data volumes, complexity, and the need for rapid ingestion. Poorly integrated business data leads to poor business decisions, bad customer experience, reduced competitive advantage, and slower innovation and growth.” Organizations recognize that staying competitive requires new strategies and practices, and data fabric architecture solutions have catapulted to the forefront.

Why is data fabric important?

Leading organizations seek to democratize data to allow business users to access and leverage data themselves to support faster and more accurate business decisions—but often lack the technology or skills to do so. A data fabric architecture helps solve that challenge.

A data fabric provides the full breadth of integrated data management capabilities, including discovery, governance, curation and orchestration, regardless of whether data resides on premises or in hybrid or multicloud environments. The approach gives organizations the business-ready data needed to support applications, analytics and business process automation, and helps businesses serve authorized users with the right data, just in time, with end-to-end governance.
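As a rough illustration of “the right data, just in time, with end-to-end governance,” the following hypothetical Python sketch shows a thin access layer that catalogs assets from different environments and enforces a simple governance rule before returning data to a user. It is not the IBM Cloud Pak for Data API; the asset names, roles and policy are invented assumptions.

```python
from dataclasses import dataclass

# Hypothetical catalog entry: where an asset lives and how it is classified.
@dataclass
class DataAsset:
    name: str
    location: str          # "on-prem", "aws", "ibm-cloud", ...
    classification: str    # "public", "internal", "pii"
    owner: str

CATALOG = {
    "sales_orders": DataAsset("sales_orders", "on-prem", "internal", "finance"),
    "customer_profiles": DataAsset("customer_profiles", "ibm-cloud", "pii", "marketing"),
}

# Simple governance rule: PII is only served to explicitly authorized roles.
AUTHORIZED_FOR_PII = {"data-steward", "risk-analyst"}

def get_asset(asset_name: str, user_role: str) -> DataAsset:
    """Return a governed view of an asset regardless of where it is stored."""
    asset = CATALOG[asset_name]
    if asset.classification == "pii" and user_role not in AUTHORIZED_FOR_PII:
        raise PermissionError(f"{user_role} is not authorized to access {asset_name}")
    # In a real fabric, discovery, masking, lineage capture and orchestration
    # across clouds would happen here rather than a single dictionary lookup.
    return asset

print(get_asset("sales_orders", "business-analyst").location)   # served from on-prem
print(get_asset("customer_profiles", "risk-analyst").location)  # served from ibm-cloud
```

The design point this sketch hints at is that consumers ask for data by name and role, not by physical location, while governance is applied consistently at the access layer.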

Weaving technology and talent together for peak results

Through our partnership with IBM and other solution providers, we remain at the forefront of successful modernization efforts. Using the leading data fabric architecture delivered through the IBM Cloud Pak® for Data platform, our clients experience the power of a single architecture for data management, governance and security across their hybrid and multicloud environments. Two recent clients, Wichita State University and a regional bank in the Western US, gained new business insights, process agility and operational efficiencies by delivering comprehensive, consistently governed information to authorized data users across their organizations.

Designing a proactive safety net to thwart student dropout

Wichita State University (WSU) sought to evaluate student satisfaction, retention and dropout rates. Administrators wanted a proactive plan to ensure struggling Wichita State Shockers get the help they need to stay in college, thereby increasing graduation rates.

Prolifics teamed with WSU to modernize its ecosystem using IBM Cloud Pak for Data and a data fabric architecture. The data fabric eliminated data silos across departments, improving data governance, quality and lineage capabilities, and created a single view of university data regardless of its location. Reporting dashboards provide easily accessed insights and trends to university analysts and administration. WSU got to the root cause of why students leave, deployed mitigation plans and increased graduation rates. The agile, scalable AI and analytics platform enables further examination of student and operational trends.

Data fabric uncovers new opportunities across the bank

A medium-sized regional bank wanted consolidated data platforms for ease of use, enhanced development capabilities, greater performance, and a comprehensive roadmap for data integration and governance. Data-driven business analytics and reporting capabilities were essential to its executive team.

Using the robust capabilities of IBM’s data fabric, our solution provided authorized user access to appropriately governed, highly secure data and new modeling tools for decision making. Now the bank can access its data across the organization and better analyze its business processes and operations to identify cost-saving and revenue-generating operational efficiencies, customer care opportunities and new product development.

“Firms that invest in data fabric architecture will democratize data to support new business insights, customer analytics, personalization, and real-time and predictive analytics, and they will be able to respond more quickly to business needs and competitive threats.”

– Noel Yuhanna, VP and Principal Analyst, Forrester, Enterprise Data Fabric Enables DataOps

Data fabric is not a one-size-fits-all solution

Prolifics’ strength is bringing together the talent and technology tailored to your needs today and able to scale to meet future requirements. We collaborate with teams across IBM to design your solution. As an award-winning IBM Premier Business Partner and IBM Global Elite Program member, we can help you and your organization achieve the next level of success.

Source: ibm.com