Monday, April 1, 2019

Software testing

1.0 Software Testing Activities

We start testing activities from the first phase of the software development life cycle. We may generate test cases from the SRS and SDD documents and use them during system and acceptance testing. Hence, development and testing activities are carried out simultaneously in order to produce good quality maintainable software in time and within budget. We may carry out testing at many levels and may also take the help of a software testing tool. Whenever we experience a failure, we debug the source code to find the reasons for it. Finding the reasons for a failure is a very significant testing activity; it consumes a huge amount of resources and may also delay the release of the software.

1.1 Levels of Testing

Software testing is generally carried out at different levels. There are four such levels, namely unit testing, integration testing, system testing, and acceptance testing, as shown in figure 8.1. The first three levels of testing activities are done by the testers, and the last level (acceptance) is done by the customer(s)/user(s). Each level has specific testing objectives. For example, at the unit testing level, individual units are tested using functional and/or structural testing techniques. At the integration testing level, two or more units are combined and testing is carried out to examine the integration-related issues of the various units. At the system testing level, the system is tested as a whole, and primarily functional testing techniques are used; non-functional requirements like performance, reliability, usability, testability etc. are also tested at this level.
Load/stress testing is also performed at this level. The last level, acceptance testing, is done by the customer(s)/user(s) for the purpose of accepting the final product.

1.1.1 Unit Testing

We develop software in parts/units, and every unit is expected to have a defined functionality. We may call it a component, module, procedure, function etc.; it has a purpose and may be developed independently and simultaneously. A. Bertolino and E. Marchetti [BERT07] have defined a unit as: "A unit is the smallest testable piece of software, which may consist of hundreds or even just a few lines of source code, and generally represents the result of the work of one or a few developers." The purpose of the unit test cases is to ensure that the unit satisfies its functional specification and/or that its implemented structure matches the intended design structure [BEIZ90, PFLE01].

There are also problems with unit testing. How can we run a unit independently? A unit may not be completely independent: it may call a few units and also be called by one or more units. We may have to write additional source code to execute a unit. A unit X may call a unit Y, and a unit Y may call a unit A and a unit B, as shown in figure 8.2(a). To execute unit Y independently, we may have to write additional source code that handles the activities of unit X and the activities of units A and B. The additional source code that handles the activities of the calling unit X is called a driver, and the additional source code that handles the activities of the called units A and B is called a stub. The complete additional source code written for the purpose of stubs and drivers is called scaffolding. The scaffolding should be removed after the completion of unit testing. Unit testing may help us to find an error easily due to the small size of a unit.
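The driver/stub arrangement can be sketched in code. The following is a minimal illustration, not taken from the text: the unit names Y, A and B follow figure 8.2(a), while the function bodies and return values are invented for demonstration.

```python
# Hypothetical unit under test: Y depends on units A and B,
# and is normally called by unit X (names follow figure 8.2(a)).
def unit_a(n):          # stub for unit A: returns a fixed, known value
    return 10

def unit_b(n):          # stub for unit B: returns a fixed, known value
    return 2

def unit_y(n):
    # The real unit Y: combines the results of its callees A and B.
    return unit_a(n) + unit_b(n) * n

def driver():
    # Driver standing in for the caller X: feeds inputs to Y and
    # checks its outputs against the expected values.
    assert unit_y(0) == 10
    assert unit_y(3) == 16
    return "unit Y passed"

print(driver())
```

Once integration testing begins, the stubs are replaced one by one with the real units A and B, and the driver is replaced by the real caller X.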
Many white box testing techniques may be effectively applicable at the unit level. We should keep stubs and drivers simple and small in size to reduce the cost of testing. If we design units in such a way that they can be tested without writing stubs and drivers, we are efficient and lucky; generally, in practice, this is difficult, and the requirement for stubs and drivers cannot be eliminated. We may only reduce the amount of scaffolding, depending upon the functionality and how it is divided among the units.

1.1.2 Integration Testing

A software product may have many units. We test units independently during unit testing after writing the required stubs and drivers. When we combine two units, we may like to test the interfaces between these units. We combine two or more units because they share some relationship. This relationship is represented by an interface and is known as coupling. Coupling is the measure of the degree of interdependence between units. Two units with high coupling are strongly connected and thus dependent on each other; two units with low coupling are weakly connected and thus have low dependency on each other. Hence, highly coupled units are heavily dependent on other units, and loosely coupled units are comparatively less dependent on other units, as shown in figure 8.3. Coupling increases as the number of calls between units increases or the amount of shared data increases. A design with high coupling may have more errors.
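The difference can be illustrated with a small sketch (the functions and the formatting task are hypothetical): a flag parameter that steers the callee's internal logic creates control coupling, while passing only the needed data keeps two units data coupled.

```python
# Control coupling (stronger, riskier): the caller passes a flag that
# steers the callee's internal logic.
def format_value(value, as_percent):
    if as_percent:
        return f"{value * 100:.1f}%"
    return f"{value:.3f}"

# Data coupling (weaker, preferred): each unit receives only the data
# it needs, and the caller decides which unit to call.
def format_ratio(value):
    return f"{value:.3f}"

def format_percent(value):
    return f"{value * 100:.1f}%"

print(format_percent(0.125))   # 12.5%
print(format_ratio(0.125))     # 0.125
```

The data-coupled pair is also easier to test at the interface: each unit needs test cases only for its data values, not for every combination of data and control flag.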
Loose coupling minimizes interdependence, and some of the steps to minimize coupling are given as:
(i) Pass only data, not control information.
(ii) Avoid passing undesired data.
(iii) Minimize parent/child relationships between calling and called units.
(iv) Minimize the number of parameters to be passed between two units.
(v) Avoid passing complete data structures.
(vi) Do not declare global variables.
(vii) Minimize the scope of variables.

The different types of coupling are data (best), stamp, control, external, common and content (worst). When we design test cases for interfaces, we should be very clear about the coupling between units; if it is high, a large number of test cases should be designed to test that particular interface. A good design should have low coupling, and thus interfaces become very important. When interfaces are important, their testing will also be important. In integration testing, we focus on the issues related to the interfaces between units.

There are several integration strategies that really have little basis in a rational methodology; they are given in figure 8.4. Top-down integration starts from the main unit and keeps adding all called units of the next level. This portion should be tested thoroughly, focusing on interface issues. After completion of integration testing at this level, the next level of units is added, and so on, till we reach the lowest level units (the leaf units). There is no requirement for drivers; only stubs are designed. In bottom-up integration, we start from the bottom (i.e. from the leaf units) and keep adding upper level units till we reach the top (i.e. the root node). There is no need for stubs.
A sandwich strategy runs from the top and the bottom concurrently, depending upon the availability of units, and may meet somewhere in the middle. (Figure 8.4: (b) bottom-up integration, where focus starts from edges i, j and so on; (c) sandwich integration, where focus starts from a, b, i, j and so on.) Each approach has its own advantages and disadvantages. In practice, the sandwich integration approach is more popular; it can be started as and when two related units are available. We may use any functional or structural testing techniques to design test cases. The functional testing techniques are easy to implement, with a particular focus on the interfaces, and some structural testing techniques may also be used. When a new unit is added as a part of integration testing, the software is considered changed software: new paths are exercised, new input and output conditions may emerge, and new control logic may be invoked. These changes may also cause problems with units that previously worked flawlessly.

1.1.3 System Testing

We perform system testing after the completion of unit and integration testing. We test the complete software along with its expected environment. We generally use functional testing techniques, although a few structural testing techniques may also be used. A system is defined as a combination of the software, hardware and other associated parts that together provide product features and solutions. System testing ensures that each system function works as expected, and it also tests for non-functional requirements like performance, security, reliability, stress, load etc. This is the only phase of testing which tests both functional and non-functional requirements of the system. A team of testers does the system testing under the supervision of a test team leader. We also review all associated documents and manuals of the software.
This verification activity is equally important and may improve the quality of the final product. Utmost care should be taken with the defects found during the system testing phase. A proper impact analysis should be done before fixing a defect. Sometimes, if the system permits, instead of being fixed the defects are just documented and mentioned as known limitations. This may happen when fixing is very time consuming or is technically not possible in the present design. Progress in system testing also builds confidence in the development team, as this is the first phase in which the complete product is tested with a specific focus on the customer's expectations. After the completion of this phase, customers are invited to test the software.

1.1.4 Acceptance Testing

This is the extension of system testing. When the testing team feels that the product is ready for the customer(s), they invite the customer(s) for a demonstration. After the demonstration of the product, the customer(s) may like to use the product for their own satisfaction and confidence. This may range from ad hoc usage to systematic, well-planned usage of the product. This type of usage is essential before accepting the final product. The testing done for the purpose of accepting a product is known as acceptance testing. It may be carried out by the customer(s) or by persons authorized by the customer. The venue may be the developer's site or the customer's site, depending on the mutual agreement; generally, acceptance testing is carried out at the customer's site. Acceptance testing is carried out only when the software is developed for a particular customer. If we develop software for anonymous customers (like operating systems, compilers, CASE tools etc.), then acceptance testing is not feasible. In such cases, potential customers are identified to test the software, and this type of testing is called alpha/beta testing.
Beta testing is done by many potential customers at their own sites without any involvement of developers/testers, whereas alpha testing is done by some potential customers at the developer's site under the direction and supervision of testers.

1.2 Debugging

Whenever a software product fails, we would like to understand the reason(s) for the failure. After knowing the reason(s), we may attempt to find a solution and may make the necessary changes in the source code accordingly. These changes will hopefully remove the reason(s) for that software failure. The process of identifying and correcting a software error is known as debugging. It starts after receiving a failure report and completes after ensuring that all corrections have been correctly placed and the software does not fail with the same set of input(s). Debugging is quite a difficult phase and may become one of the reasons for software delays. Every bug detection process is different, and it is hard to know how long it will take to detect and fix a bug. Sometimes it may not be possible to detect a bug, or, if a bug is detected, it may not be feasible to correct it at all. These situations should be handled very carefully. In order to remove bugs, the developer must first discover that a problem exists, then classify the bug, locate where the problem actually lies in the source code, and finally correct the problem.

1.2.1 Why Debugging is So Difficult?

Debugging is a difficult process. This is probably due to human involvement and psychology. Developers become uncomfortable after receiving any request for debugging; it is taken as an affront to their professional pride. Shneiderman [SHNE80] has rightly commented on the human aspect of debugging: "It is one of the most frustrating parts of programming. It has elements of problem solving or brain teasers, coupled with the annoying recognition that we have made a mistake.
Heightened anxiety and the unwillingness to accept the possibility of errors increase the task difficulty. Fortunately, there is a great sigh of relief and a lessening of tension when the bug is ultimately corrected." These comments explain the difficulty of debugging. Pressman [PRES97] has given some clues about the characteristics of bugs: "The debugging process attempts to match symptom with cause, thereby leading to error correction. The symptom and the cause may be geographically remote. That is, the symptom may appear in one part of a program, while the cause may actually be located in another part. Highly coupled program structures may further complicate this situation. The symptom may also disappear temporarily when another error is corrected. In real time applications, it may be difficult to accurately reproduce the input conditions. In some cases, the symptom may be due to causes that are distributed across a number of tasks running on different processors." There may be many reasons which make the debugging process difficult and time consuming; however, psychological reasons are more prevalent than technical ones. Over the years, debugging techniques have substantially improved, and they will continue to develop significantly in the near future. Some debugging tools are available, and they minimize the human involvement in the debugging process. However, it is still a difficult area and consumes a significant amount of time and resources.

1.2.2 Debugging Process

Debugging means detecting and removing bugs from programs. Whenever a program generates unexpected behaviour, it is known as a failure of the program. This failure may be mild, annoying, disturbing, serious, extreme, harmful or infectious. Depending on the type of failure, appropriate actions are required to be taken. The debugging process starts after receiving a failure report, either from the testing team or from users.
The steps of the debugging process are replication of the bug, understanding the bug, locating the bug, fixing the bug, and retesting the program.

(i) Replication of the bug: The first step in fixing a bug is to replicate it. This means recreating the undesired behaviour under controlled conditions. The same set of input(s) should be given under similar conditions to the program, and the program, after execution, should produce the same unexpected behaviour. If this happens, we are able to replicate the bug. In many cases, this is simple and straightforward: we execute the program on a particular input, or we press a particular button on a particular dialog, and the bug occurs. In other cases, replication may be very difficult. It may require many steps, or, in an interactive program such as a game, it may require precise timing. In the worst cases, replication may be nearly impossible. If we do not replicate the bug, how will we verify the fix? Hence, failure to replicate a bug is a real problem: any action which cannot be verified has no meaning, however important it may be. Some of the reasons for non-replication of a bug are:
(a) The user incorrectly reported the problem.
(b) The program failed due to hardware problems like memory overflow, poor network connectivity, network congestion, non-availability of system buses, deadlock conditions etc.
(c) The program failed due to system software problems, such as the use of a different operating system, compiler, device driver etc.
In all these cases the program fails even though there is no inherent bug in the program for this particular failure. Our effort should be to replicate the bug. If we cannot do so, it is advisable to keep the matter pending till we are able to replicate it. There is no point in playing with the source code for a situation which is not reproducible.

(ii) Understanding the bug: After replicating the bug, we may like to understand it.
This means we want to find the reason(s) for the failure. There may be one or more reasons, and this is generally the most time consuming activity. We should understand the program very clearly in order to understand a bug. If we are the designers and authors of the source code, there may not be any problem in understanding the bug; if not, we may face more serious problems. If the readability of the program is good and the associated documents are available, we may be able to manage. If the readability is not that good (which happens in many situations) and the associated documents are not proper, the situation becomes very difficult and complex. We may call the designers; if we are lucky, they may still be available with the company and we may get their help. Imagine otherwise: what will happen? This is a genuinely challenging situation, and in practice we often have to face it and struggle with source code and documents written by persons no longer available with the company. We may have to put in effort in order to understand the program. We may start from the first statement of the source code and work to the last statement, with a special focus on critical and complex areas. We should be able to know where to look in the source code for any particular activity; this should also tell us the general way in which the program acts.

The worst cases are large programs written by many persons over many years. These programs may lack consistency and may become poorly readable over time due to various maintenance activities. We should simply do our best and try to avoid making the mess worse. We may also take the help of source code analysis tools for examining large programs. A debugger may also be helpful for understanding the program. A debugger executes a program statement by statement and may be able to show the dynamic behaviour of the program using breakpoints. The breakpoints are used to pause the program at any point needed.
At every breakpoint, we may look at the values of variables, the contents of relevant memory locations, registers etc. The main point is that in order to understand a bug, program understanding is essential. We should put in the required effort before looking for the reasons for the software failure; otherwise we may waste effort unnecessarily.

(iii) Locating the bug: There are two portions of the source code which need to be considered for locating a bug. The first portion is the one which causes the visible incorrect behaviour, and the second portion is the one which is actually incorrect. In most situations the two portions overlap, but sometimes they lie in different parts of the program. We should first find the source code which causes the incorrect behaviour. After knowing the incorrect behaviour and its related portion of the source code, we may find the portion which is actually at fault. Sometimes it may be very easy to identify the problematic source code (the second portion) by manual inspection. Otherwise, we may have to take the help of a debugger. If we have core dumps, a debugger can immediately identify the line which fails. A core dump is a printout of all registers and relevant memory locations. We should document core dumps and retain them for possible future use. We may also set breakpoints while replicating the bug, and this process may help us to locate it.

Sometimes simple print statements may help us to locate the sources of the bad behaviour. This simple method gives us the status of various variables at different locations of the program for a specific set of inputs. A sequence of print statements may also portray the dynamics of variable changes. However, it is cumbersome to use in large programs.
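A sketch of this print-statement technique follows; the routine, its bug, and the trace format are invented for illustration.

```python
# Hypothetical buggy routine: it should average only the positive
# numbers, but the counter is updated for every element.
def positive_average(xs):
    total, count = 0, 0
    for x in xs:
        if x > 0:
            total += x
        count += 1                                     # bug: counts negatives too
        print(f"x={x} total={total} count={count}")    # temporary trace statement
    return total / count

print(positive_average([4, -2, 6]))   # about 3.33 instead of the expected 5.0
```

Reading the trace, count advances even when x = -2, which points directly at the faulty statement. The trace statements, like scaffolding, should be removed once the bug is located.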
Print statements may also generate superfluous data which is difficult to analyze and manage. Another useful approach is to add check routines in the source code to verify that data structures are in a valid state. Such routines may help us to narrow down where data corruption occurs. If the check routines are fast, we may want to enable them always; otherwise, leave them in the source code and provide some mechanism to turn them on when we need them.

The most useful and powerful way is to do a source code inspection. This may help us to understand the program, understand the bug and finally locate it. A clear understanding of the program is an absolute requirement of any debugging activity. Sometimes the bug may not be in the program at all: it may be in a library routine, in the operating system, or in the compiler. These cases are very rare, but they do occur, and if everything else fails, we may have to look at such options.

(iv) Fixing the bug and retesting the program: After locating the bug, we may like to fix it. The fixing of a bug is a programming exercise rather than a debugging activity. After making the necessary changes in the source code, we have to retest it in order to ensure that the corrections have been made correctly and at the right place. Every change may affect other portions of the source code as well; hence, an impact analysis is required to identify the affected portions, and those portions should also be retested thoroughly. This retesting activity is called regression testing, and it is a very important activity of any debugging process.

1.2.3 Debugging Approaches

There are many popular debugging approaches, but the success of any approach depends upon the understanding of the program. If the persons involved in debugging understand the program correctly, they may be able to detect and remove the bugs.

(i) Trial and Error Method: This approach depends on the ability and experience of the persons debugging.
After a failure report is received, it is analyzed and the program is inspected. Based on experience and intelligence, and using the hit-and-trial technique, the bug is located and a solution is found. This is a slow approach and becomes impractical in large programs.

(ii) Backtracking: This can be used successfully in small programs. We start at the point where the program gives an incorrect result, such as an unexpected output being printed. After analyzing the output, we trace backward through the source code manually until the cause of the failure is found. The source code from the statement where the symptom of failure is observed to the statement where the cause is found is analyzed properly. This technique brackets the location of the bug in the program, and subsequent careful study of the bracketed location may help us to rectify it. An obvious variation of backtracking is forward tracking, where we use print statements or other means to examine a succession of intermediate results to determine at what point the result first became wrong. These approaches (backtracking and forward tracking) may be useful only when the size of the program is small; as the program size increases, they become difficult to manage.

(iii) Brute Force: This is probably the most common, though not the most efficient, approach to identify the cause of a software failure. In this approach, memory dumps are taken, run time traces are invoked, and the program is loaded with print statements. From the information produced we may find a clue which leads to the identification of the cause of a bug. Memory traces are similar to memory dumps, except that the printout contains only certain memory and register contents, and printing is conditional on some event occurring.
Typically the conditional events are the entry, exit or use of one of the following:
(a) A particular subroutine, statement or database
(b) Communication with I/O devices
(c) The value of a variable
(d) Timed actuations (periodic or random) in certain real time systems

A special problem with trace programs is that the conditions are entered in the source code, so any change requires a recompilation. A huge amount of data is generated, which, although it may help to identify the cause, may be difficult to manage and analyze.

(iv) Cause Elimination: Cause elimination proceeds by induction or deduction and also introduces the concept of binary partitioning. Data related to the error occurrence are organized to isolate potential causes. Alternatively, a list of all possible causes is developed and tests are conducted to eliminate each. We may thus rule out causes one by one until a single one remains for validation. The cause is then identified, properly fixed, and retested accordingly.

1.2.4 Debugging Tools

Many debugging tools are available to support the debugging process. Some of the manual activities can also be automated using a tool. We may need a tool that executes a program one statement at a time and prints the values of any variable after executing every statement, freeing us from inserting print statements manually. Thus run time debuggers are designed. In principle, a run time debugger is nothing more than an automatic print statement generator. It allows us to trace the program path and the variables without having to put print statements in the source code. Almost every compiler available in the market comes with a run time debugger, which allows us to compile and run the program with a single compilation, rather than modifying the source code and recompiling as we try to narrow down the bug. Run time debuggers may detect bugs in the program, but may fail to find the causes of failures.
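The notion of a run time debugger as an automatic print statement generator can be sketched with Python's tracing hook. The traced function, its contents, and the log format are invented for illustration.

```python
import sys

trace_log = []   # collected (line number, local variables) pairs

def tracer(frame, event, arg):
    # Record the locals at every executed line of the target function,
    # as a run time debugger would, without editing its source code.
    if event == "line" and frame.f_code.co_name == "buggy_sum":
        trace_log.append((frame.f_lineno, dict(frame.f_locals)))
    return tracer

def buggy_sum(xs):
    total = 0
    for x in xs:
        total += x
    return total

sys.settrace(tracer)
result = buggy_sum([1, 2, 3])
sys.settrace(None)

print(result)   # 6
for lineno, local_vars in trace_log:
    print(lineno, local_vars)   # the path taken and each variable's value
```

Each entry in trace_log records the line about to execute and the local variables at that moment, which is essentially what inspecting a breakpoint shows. As the passage notes, such a trace detects where values go wrong but does not by itself explain why.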
We may need a special tool to find the causes of failures and correct the bug. Some errors, like memory corruption and memory leaks, may be detected automatically. This automation was a revolution in the debugging process, because it automated the process of finding the bug: a tool may detect an error, and our job is simply to fix it. These tools are known as automatic debuggers and come in several varieties. The simplest ones are just a library of functions that can be linked into a program; when the program executes and these functions are called, the debugger checks for memory corruption and reports it if found.

Compilers are also used for finding bugs, though of course they check only syntax errors and particular types of run time errors. Compilers should give proper and detailed error messages, which are of great help to the debugging process. A compiler may give all such information in the attribute table, which is printed along with the listing. The attribute table contains the various levels of warnings which have been picked up by the compiler's scan. Hence, compilers come with error detection features, and there is no excuse for designing a compiler without meaningful error messages.

We may apply a wide variety of tools (run time debuggers, automatic debuggers, automatic test case generators, memory dumps, cross reference maps, compilers etc.) during the debugging process. However, tools are not a substitute for careful examination of the source code after thorough understanding.

1.3 Software Testing Tools

The most effort consuming task in software testing is designing the test cases. The execution of these test cases may not require much time and resources; hence, the designing part is more significant than the execution part. Both parts are normally handled manually. Do we really need a tool? If yes, where and when can we use it: in the first part (designing of test cases), in the second part (execution of test cases), or in both?
Software testing tools may be used to reduce the time of testing and to make testing as easy and pleasant as possible. Automated testing may be carried out without human involvement. This may help us in areas where the same data set must be given as input to the program again and again. A tool may do the repeated testing unattended, during nights or weekends, without human intervention. Many non-functional requirements may also be tested with the help of a tool. Suppose we want to test the performance of a software product under load, which may require many computers, manpower and other resources; a tool may simulate multiple users on one computer, including the situation where many users access a database simultaneously.

There are three broad categories of software testing tools: static, dynamic and process management. Most tools fall clearly into one of the categories, but there are a few exceptions, like mutation analysis systems, which fall into more than one category. A wide variety of tools are available, with different scope and quality, and they assist us in many ways.

1.3.1 Static Software Testing Tools

Static software testing tools are those that perform analysis of the programs without executing them at all. They may also find the source code which will be hard to test and maintain. As we all know, static testing is about prevention and dynamic testing is about cure. We should use both kinds of tools, but prevention is always better than cure. Static tools may find more bugs as compared to dynamic testing tools (where we execute the program). There are many areas for which effective static testing tools are available, and they have demonstrated their value for the improvement of the quality of the software.

(i) Complexity analysis tools: The complexity of a program plays a very important role in determining its quality. A popular measure of complexity is the cyclomatic complexity, as discussed in chapter 4.
Cyclomatic complexity gives us an idea of the number of independent paths in the program and depends upon the number of decisions in the program. A higher value of cyclomatic complexity may indicate poor design and risky implementation. The measure may also be applied at the module level: modules with a higher cyclomatic complexity value may either be redesigned or tested very thoroughly. There are other complexity measures used in practice as well, like the Halstead software size measures, the knot complexity measure etc. Tools are available which are based on any of these complexity measures. Such tools take the program as input, process it, and produce a complexity value as output. This value may be an indicator of the quality of the design and implementation.

(ii) Syntax and Semantic Analysis Tools: These tools find syntax and semantic errors. Although the compiler may detect all syntax errors during compilation, early detection of such errors may help to minimize other associated errors. Semantic errors are very significant, and compilers are helpless to find them. There are tools in the market that may analyze the program and find such errors. Non-declaration of a variable, double declaration of a variable, division by zero, unspecified inputs and non-initialization of a variable are some of the issues which may be detected by semantic analysis tools. These tools are language dependent; they parse the source code, maintain a list of errors and provide implementation information. The parser may find semantic errors as well as make an inference as to what is syntactically correct.

(iii) Flow graph generator tools: These tools are language dependent; they take the program as input and convert it to its flow graph. The flow graph may be used for many purposes, like complexity calculation, path identification, generation of definition-use paths, program slicing etc. These
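As a sketch of what a complexity analysis tool computes, the fragment below approximates cyclomatic complexity as the number of decision points plus one, using Python's ast module. The counting rules are simplified and the sample program is invented for illustration.

```python
import ast

def cyclomatic_complexity(source):
    # V(G) = d + 1, where d is the number of decision points.
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.For, ast.While)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            decisions += len(node.values) - 1   # each and/or adds a branch
    return decisions + 1

program = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "no small divisor"
"""
print(cyclomatic_complexity(program))   # 4: three decisions plus one
```

A real tool would build the full flow graph and handle many more constructs, but the output serves the same purpose: a single value flagging modules that may need redesign or especially thorough testing.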
