Many of us have seen the Terminator film series. It’s entertaining, true, but the concept behind this action-packed thriller is even more interesting. Skynet, the advanced artificial intelligence antagonist of the films, develops the ability to time travel. Both Skynet and the humans use that capability in their efforts to win their ongoing war, either by altering past events in their favor or by trying to prevent their apocalyptic present timeline.
As human beings, we try to learn from our mistakes, assessing risks and taking corrective actions to achieve the results we want. Unfortunately, in today’s complex and dynamic IT environment, this methodology falls short because it simply cannot move fast enough to accommodate the pace of change.
So, why not use machines — less nefarious machines than Skynet, of course — to more quickly and contextually understand what we have learned and try to alter the future?
Machine learning is the science of enabling computers to make decisions and act on them without being explicitly programmed. In the past decade, machine learning has powered the development of self-driving cars, practical speech recognition, intelligent web search and a vastly improved understanding of the human genome. Machine learning is now so pervasive that consumers likely use it dozens of times a day without realizing it.
Machine learning’s benefits also effectively apply to the quality assurance and testing process. It can help IT teams answer questions as simple as “How many defects might I encounter post release?” “Which modules are riskier?” and “What are the major problem areas?” But the opportunities and advantages go much further, especially when we apply learning techniques such as supervised, unsupervised, reinforcement and deep learning to the process.
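As an illustration of the supervised approach, the sketch below trains a classifier to flag defect-prone modules from simple code metrics. The metrics, module names and defect labels are invented for the example, and scikit-learn is an assumed stand-in toolkit; this is a minimal sketch of the technique, not any particular product’s implementation.

```python
# Hedged sketch: supervised defect prediction from module metrics.
# All data below is synthetic and the feature set is an assumption.
from sklearn.ensemble import RandomForestClassifier

# Each row: [lines_of_code, recent_commits, cyclomatic_complexity]
train_features = [
    [1200, 45, 30],  # large, frequently changed, complex
    [150, 2, 4],     # small, stable, simple
    [900, 30, 25],
    [200, 3, 5],
    [1100, 50, 28],
    [180, 1, 3],
]
train_labels = [1, 0, 1, 0, 1, 0]  # 1 = defects found post-release

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(train_features, train_labels)

# Rank new modules by predicted defect risk so testing effort
# can focus on the riskiest ones first.
new_modules = {"billing": [1000, 40, 27], "logging": [160, 2, 4]}
risk = {name: model.predict_proba([m])[0][1]
        for name, m in new_modules.items()}
for name in sorted(risk, key=risk.get, reverse=True):
    print(f"{name}: estimated defect risk {risk[name]:.2f}")
```

In practice the training rows would come from historical test artifacts (past releases with known defect counts), and the ranked output answers the “Which modules are riskier?” question directly.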
The big question is: how easy or difficult is it to implement machine learning? The best way to ease implementation is to start with an existing framework and customize it for your data, learning techniques and prediction areas. One such solution is CRESTA, an NTT DATA machine-learning framework designed to address software quality assurance challenges.
CRESTA is a web-based solution for analyzing historical test artifacts such as test cases, defects and defect metrics. It integrates with project and defect management tools like Redmine, ALM, JIRA and Bugzilla to collect project defects data, metrics and other artifacts. It then leverages intelligent robotic automation technologies such as machine learning, AI and text analytics for predictive defects analysis and test optimization. It presents user-friendly results in the form of dashboards, graphs, charts and tables.
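CRESTA’s internals are not public, but the kind of text analytics described above can be sketched in a few lines: cluster free-text defect summaries to surface recurring problem areas. The defect summaries are invented examples and scikit-learn is an assumed stand-in toolkit.

```python
# Hedged sketch: grouping defect reports by topic with TF-IDF + k-means.
# The summaries are fabricated for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

defect_summaries = [
    "login page crashes on invalid password",
    "login session expires too early",
    "report export times out on large datasets",
    "report PDF export renders blank pages",
]

# Convert free-text summaries into TF-IDF vectors, then cluster them
# so related defects (e.g. login issues vs. report issues) group together.
vectors = TfidfVectorizer().fit_transform(defect_summaries)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for summary, cluster in zip(defect_summaries, clusters):
    print(f"problem area {cluster}: {summary}")
```

With real data pulled from a defect tracker such as JIRA or Bugzilla, the same idea helps answer “What are the major problem areas?” before feeding results into dashboards and charts.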
Leveraging solutions like CRESTA enables you to gather and analyze QAT data and predict outcomes automatically, which helps minimize risk. Early results are impressive: our initial test drives with NTT DATA’s clients have achieved prediction accuracy as high as 90% — a figure we expect will rise as we add more data.
So can we change the future? Yes, and (good) machines can help.
Post Date: 20/11/2016