Testing APIs is crucial. It helps identify errors in the code, improves code quality, and empowers developers to make changes quickly, confident that they haven't broken existing behavior. Automation and artificial intelligence can have a significant impact on API testing. Many products already apply automation to API testing, but most companies have yet to tap the potential of AI and machine learning to enhance testing. At IBM, we believe there are a few key capabilities to keep an eye on as the future of API testing incorporates more AI and automation.
Adding Intelligence to Automation
With basic automated testing, a developer might use code that generates random inputs for each field. Many of those tests end up wasted because they are repetitive or don't match the business use of the application. In those cases, manually written tests are more valuable because the developer better understands how the API is used.
Adding intelligence is a great opportunity to make automated testing work with business logic. For example, users place an item in their online shopping cart before they reach the page that requires an address, so testing an API with an address but no items is a waste of time. Intelligent automated testing could generate a dynamic set of input values that make sense, exercising more of the API's design and producing results the team can trust.
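To make this concrete, here is a minimal sketch of a business-logic-aware input generator. The catalogue, endpoint shape, and the "cart before address" rule are illustrative assumptions, not part of any real product:

```python
import random

# Hypothetical product catalogue; names are illustrative only.
CATALOGUE = ["book-1", "book-2", "mug-7"]

def generate_checkout_scenarios(n):
    """Generate test inputs that respect the business rule:
    a checkout payload must carry at least one cart item
    before an address is attached."""
    scenarios = []
    for _ in range(n):
        # Sample a non-empty cart, so the invalid "address but
        # no items" case is never generated in the first place.
        items = random.sample(CATALOGUE, k=random.randint(1, len(CATALOGUE)))
        scenarios.append({
            "cart": items,
            "address": "221B Baker Street",  # only attached once items exist
        })
    return scenarios

# Every scenario is valid by construction, so no test run is wasted.
for s in generate_checkout_scenarios(5):
    assert s["cart"], "cart must not be empty"
```

A naive random generator would spend much of its budget on impossible states; constraining generation to valid sequences spends the whole budget on inputs that reflect real usage.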
Semantic and Syntactic Awareness
Creating new API test cases can be time-consuming when done manually. Generating tests can accelerate this, but developers can only rely on this if the generated tests are high quality.
One way to improve the quality of generated tests is semantic and syntactic awareness – that is, training an intelligent algorithm to understand key business or domain entities such as a 'customer', 'email', or 'invoice' – and how to generate data for them. By pointing it at existing tests, APIs, and business rules, it should be able to 'learn' from them and become better at generating tests with less developer input later on.
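A simple illustration of the syntactic half of this idea is a generator keyed on field names, so that an 'email' field always receives an email-shaped value. The field names and generation rules below are hypothetical; a real system would learn them from existing tests and API specifications rather than hard-code them:

```python
import random

# Illustrative mapping from domain field names to value generators.
GENERATORS = {
    "email": lambda: f"user{random.randint(1, 999)}@example.com",
    "customer_id": lambda: f"CUST-{random.randint(10000, 99999)}",
    "invoice_total": lambda: round(random.uniform(1.0, 500.0), 2),
}

def generate_payload(field_names):
    """Build a request payload using the semantically matching
    generator for each field, falling back to a placeholder
    for fields the system has not yet learned."""
    return {name: GENERATORS.get(name, lambda: "TODO")() for name in field_names}

payload = generate_payload(["email", "customer_id", "invoice_total"])
assert "@" in payload["email"]
```

Because each value is shaped like the entity it represents, generated tests exercise the API's validation logic instead of being rejected at the door for malformed input.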
Automating Setup and Teardown
A tester's workload can be significantly decreased by identifying and automating routine tasks. Using an algorithm to examine an API specification and work out its dependencies allows the machine to conduct routine setup and teardown tasks. For example, if a bookshop has an API for orders, the AI can set up the scaffolding and create the prerequisites for the test. If a tester needs to create a book and a customer prior to creating an order, the AI performs those tasks, and the created records are cleaned up and deleted after the test. As an algorithm learns about the company's API structures, it can generate more of the setup and teardown tasks.
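The bookshop example can be sketched as a setup/teardown wrapper. The in-memory store and `create_book`/`create_customer` helpers below are stand-ins we invented for illustration; in practice these would be the HTTP calls the algorithm learned from the API specification:

```python
import contextlib

# Hypothetical in-memory stand-in for the bookshop's backing store.
DB = {"books": {}, "customers": {}}

def create_book(title):
    DB["books"][title] = {"title": title}
    return title

def create_customer(name):
    DB["customers"][name] = {"name": name}
    return name

@contextlib.contextmanager
def order_prerequisites():
    """Automated setup/teardown: create the book and customer an
    order test depends on, then delete them afterwards."""
    book = create_book("The Art of Testing")
    customer = create_customer("Ada")
    try:
        yield book, customer
    finally:
        # Teardown: remove everything the setup created.
        del DB["books"][book]
        del DB["customers"][customer]

with order_prerequisites() as (book, customer):
    # The order test runs here with its prerequisites in place.
    assert book in DB["books"] and customer in DB["customers"]

# After the block, the scaffolding is gone.
assert not DB["books"] and not DB["customers"]
```

The tester writes only the order test itself; everything before and after it is generated from the dependency graph.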
Mining real-world data
The effectiveness of API testing is greatly increased when tests use realistic data, representative of real-world production conditions. Generating tests from production data must be done with care due to the risk of exposing sensitive data. Without automation, creating realistic tests is difficult to achieve at scale because of the high labor cost of combing through mounds of data, determining what is relevant, and cleansing it of sensitive values.
Using AI to identify gaps in test coverage
A recent addition to the IBM Cloud Pak for Integration Test and Monitor uses AI to analyse the API workloads in both production and test environments, identifying the ways that APIs are being invoked in each. This analysis allows it to identify real-world production API scenarios that aren’t adequately recreated in the existing test suite, and automatically generate tests that fill that gap.
Allowing an algorithm to efficiently examine millions of production API calls means that production personnel only need to review and approve the smartly generated tests. This is a very effective way to increase test coverage where it will have the most impact, as it prioritizes closing testing gaps based on how users interact with APIs in the real world.
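The gap analysis described above can be sketched as a comparison of invocation patterns across the two environments. The recorded calls and endpoints below are invented sample data, and real workload analysis would consider far more than method-and-path pairs:

```python
from collections import Counter

# Hypothetical recorded API invocations from each environment.
production_calls = [
    ("POST", "/orders"), ("POST", "/orders"),
    ("GET", "/orders/{id}"), ("DELETE", "/orders/{id}"),
]
test_calls = [("POST", "/orders"), ("GET", "/orders/{id}")]

def coverage_gaps(production, tests):
    """Return production API scenarios absent from the test suite,
    ordered by production frequency so the most impactful gaps
    are surfaced first for review."""
    prod_freq = Counter(production)
    tested = set(tests)
    gaps = [call for call in prod_freq if call not in tested]
    return sorted(gaps, key=lambda call: -prod_freq[call])

# The DELETE scenario is flagged: exercised in production, never tested.
print(coverage_gaps(production_calls, test_calls))
```

A reviewer then only sees the short, ranked list of untested scenarios rather than the millions of raw calls behind it.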
Source: ibm.com