What is the difference between unit, functional, acceptance, and integration testing (and any other types of tests that I failed to mention)?
This is very simple.
Unit testing: This testing is done by developers with coding knowledge, during the coding phase, and it is a part of white box testing. When software is developed, it is built up from pieces of code known as units. Testing these units individually is called unit testing; developers do it to catch human mistakes such as missed statement coverage.
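As a minimal sketch of the idea (the `add` function and test names here are made up for illustration), a unit test exercises one small piece of code in isolation:

```python
def add(a, b):
    """A tiny 'unit' of code under test (hypothetical example)."""
    return a + b

def test_add_two_positives():
    # One focused check per test.
    assert add(2, 3) == 5

def test_add_negatives():
    assert add(-1, -1) == -2

# Run the tests directly; a test runner such as pytest would normally discover them.
test_add_two_positives()
test_add_negatives()
```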
Functional testing: This testing is done at the testing (QA) phase and is a part of black box testing: the actual execution of previously written test cases. Testers exercise each piece of functionality in the site, observe the actual result, and compare it to the expected result. Any disparity between the two is a bug.
Acceptance testing: Also known as UAT (user acceptance testing). It is carried out by testers as well as developers, the management team, authors, writers, and everyone else involved in the project, to ensure the project is ready to be delivered bug free.
Integration testing: The units of code (explained in point 1) are integrated with each other to complete the project. These units may be written in different technologies or be of different versions, so this testing is done by developers to ensure that all units are compatible with one another and that there are no integration issues.
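To illustrate the point above (the two functions are hypothetical stand-ins for separately developed units), an integration test checks that units which pass their own unit tests also work together:

```python
def parse_amount(text):
    """Unit 1: turn a string like '12.50' into an integer number of cents."""
    return int(round(float(text) * 100))

def format_amount(cents):
    """Unit 2: turn a number of cents back into a display string."""
    return f"{cents / 100:.2f}"

def test_units_integrate():
    # Integration test: the output of one unit must be a valid input
    # for the other, round-tripping without loss.
    assert format_amount(parse_amount("12.50")) == "12.50"

test_units_integrate()
```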
@OlegTsyba the answer came 4 years after the question was asked.
Let me explain this with a practical example and no theory:
A developer writes the code. No GUI is implemented yet. The testing at this level verifies that the functions work correctly and the data types are correct. This phase of testing is called Unit testing.
When a GUI is developed and the application is assigned to a tester, the tester verifies the business requirements with the client and executes the different scenarios. This is called functional testing. Here we are mapping the client's requirements to application flows.
Integration testing: let's say our application has two modules: HR and Finance. The HR module was delivered and tested previously. Now Finance is developed and available to test, and the interdependent features are available as well. In this phase you test the communication points between the two modules and verify that they work as specified in the requirements.
Regression testing is another important phase, which is done after any new development or bug fixes. Its aim is to verify previously working functions.
"A developer writes the code. No GUI is implemented yet. The testing at this level verifies that the functions work correctly and the data types are correct. This phase of testing is called Unit testing" This is not true. GUI is actually just a "plugin". You can already write E2E tests to your API output. (or any response object you generate)
Unit Testing - As the name suggests, this method tests at the object level. Individual software components are tested for any errors. Knowledge of the program is needed for this test and the test codes are created to check if the software behaves as it is intended to.
Functional Testing - Is carried out without any knowledge of the internal workings of the system. The tester tries to use the system just by following the requirements, providing different inputs and testing the generated outputs. This test is also known as closed-box or black-box testing.
Acceptance Testing - This is the last test that is conducted before the software is handed over to the client. It is carried out to ensure that the developed software meets all the customer requirements. There are two types of acceptance testing: one carried out by members of the development team, known as internal acceptance testing (alpha testing), and one carried out by the customer or end user, known as beta testing.
Integration Testing - Individual modules that have already been subjected to unit testing are integrated with one another. Generally, one of two approaches is followed: top-down integration or bottom-up integration.
Martin Fowler's blog post speaks about strategies to test code (especially in a microservices architecture), but most of it applies to any application.
I'll quote from his summary slide:
Unit tests - exercise the smallest pieces of testable software in the application to determine whether they behave as expected.
Integration tests - verify the communication paths and interactions between components to detect interface defects.
Component tests - limit the scope of the exercised software to a portion of the system under test, manipulating the system through internal code interfaces and using test doubles to isolate the code under test from other components.
Contract tests - verify interactions at the boundary of an external service asserting that it meets the contract expected by a consuming service.
End-To-End tests - verify that a system meets external requirements and achieves its goals, testing the entire system, from end to end.
That's a great article, by the way. However, I don't completely understand what contract tests are for. Aren't they redundant in light of component and integration tests?
In some languages (that Mr Fowler uses) you can implement an interface that is not exposed when using the standard definition of a class e.g. void IMyInterface.MyMethod(). Which in turn would logically have its own tests. Although at that point you are heading back towards BDD.. Which ironically Mr Fowler has had a land grab at as well.
It's not Fowler's article, by the way; it was just posted there. Contract tests are tests made after clients start to use your service; you then write tests that check you haven't broken something for those particular clients, e.g. by changing the service API.
@wheleph unit, integration and component tests cover mostly the software internals, which are heavily controllable by the developer. An issue in those three means changing your source to fix it. -- Contract tests touch functionality that is promised to you but that you may not be able to change directly in the face of a defect. This requires adding support code to work around possible issues instead of just fixing the defect. -- So you would work around a web service giving you back malformed JSON even if the contract specification told you it would be of a certain structure.
Unit test: Testing an individual module or independent component in an application is known as unit testing. Unit testing is done by the developer.
Integration test: Combining all the modules and testing the application to verify that the communication and data flow between the modules work properly. This testing is also performed by developers.
Functional test: Checking the individual functionality of an application is known as functional testing.
Acceptance test: This testing is done by the end user or customer, to check whether the built application meets the customer's requirements and specifications.
Some (relatively) recent ideas against excessive mocking and pure unit-testing:
I am new to testing code. Unit tests seem mostly like a waste of time. I thought I was doing unit testing but I was doing integration testing and then I read about unit testing and it seems silly, maybe for people with very little experience? There's a chance I'm missing some sort of point.
If Unit is defined broadly, then you are properly unit-testing. I oppose testing implementation details. A private class should not be "unit-tested". However, if you have several public classes, you might be tempted to mock one while testing another. That's the real debate. Is the Unit (a) your entire library? (b) each public class within the library? Or (c), each public method within each class? I prefer to test a given library as an integrated component, but to mock or fake external dependencies (unless they are fast and reliable). So I think I'm with you.
@PixMach: actually it's the other way around. Not having (good) unit tests in place, wastes a lot of your time, if you (or somebody else) have to change that code in the future. If you have experience maintaining code with and without unit tests, you'll know the difference. The idea is, that if a unit test breaks, you should know exactly which part of the code has to be fixed. Failing big scale acceptance / integration tests often only tell you: it does not work. And then you have to start old school debugging...
@Goodsquirrel, it depends what you call a "unit". That's the problem. Bad tests will be deleted during refactoring. Good tests will still be helpful. Bad tests add no value and get in the way. Good tests are self-documenting and greatly appreciated. Let's get specific. I have a private method to return a value if another value is True, otherwise a default value. (Legacy code.) Should that method be tested? I say no. Another private method returns the nth Fibonacci number. Should that be tested? I say yes.
Unit is the smallest portion of code possible to test, usually a single function or class
The smallest exposed code. Big difference.
Depending on where you look, you'll get slightly different answers. I've read about the subject a lot, and here's my distillation; again, these are slightly wooly and others may disagree.
Tests the smallest unit of functionality, typically a method/function (e.g. given a class with a particular state, calling x method on the class should cause y to happen). Unit tests should be focussed on one particular feature (e.g., calling the pop method when the stack is empty should throw an InvalidOperationException). Everything it touches should be done in memory; this means that the test code and the code under test shouldn't call into a database, access the network, use the file system, or spin up threads.
Any kind of dependency that is slow / hard to understand / initialise / manipulate should be stubbed/mocked/whatevered using the appropriate techniques so you can focus on what the unit of code is doing, not what its dependencies do.
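For example (all names here are invented for illustration), a slow or unreliable dependency can be swapped for a test double with `unittest.mock` so the test stays entirely in memory:

```python
from unittest import mock

def get_greeting(fetch_name):
    """Code under test: depends on a potentially slow name lookup."""
    return f"Hello, {fetch_name()}!"

def test_get_greeting_uses_fetched_name():
    # Replace the dependency with a stub so the test is fast and deterministic.
    fake_fetch = mock.Mock(return_value="Ada")
    assert get_greeting(fake_fetch) == "Hello, Ada!"
    fake_fetch.assert_called_once()

test_get_greeting_uses_fetched_name()
```

The test now verifies only what `get_greeting` does, not what its dependency does.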
In short, unit tests are as simple as possible, easy to debug, reliable (due to reduced external factors), fast to execute and help to prove that the smallest building blocks of your program function as intended before they're put together. The caveat is that, although you can prove they work perfectly in isolation, the units of code may blow up when combined which brings us to ...
Integration tests build on unit tests by combining the units of code and testing that the resulting combination functions correctly. This can be either the innards of one system, or combining multiple systems together to do something useful. Also, another thing that differentiates integration tests from unit tests is the environment. Integration tests can and will use threads, access the database or do whatever is required to ensure that all of the code and the different environment changes will work correctly.
If you've built some serialization code and unit tested its innards without touching the disk, how do you know that it'll work when you are loading and saving to disk? Maybe you forgot to flush and dispose filestreams. Maybe your file permissions are incorrect and you've tested the innards using in memory streams. The only way to find out for sure is to test it 'for real' using an environment that is closest to production.
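A sketch of such a test (the serialization code is a hypothetical stand-in), using a real temporary file on disk instead of an in-memory stream:

```python
import json
import os
import tempfile

def save(obj, path):
    """Serialize obj to disk as JSON."""
    with open(path, "w") as f:
        json.dump(obj, f)  # the 'with' block flushes and closes the file

def load(path):
    """Deserialize an object previously written by save()."""
    with open(path) as f:
        return json.load(f)

def test_round_trip_on_real_disk():
    # Integration test: actually touch the filesystem, then clean up.
    with tempfile.TemporaryDirectory() as d:
        path = os.path.join(d, "data.json")
        save({"answer": 42}, path)
        assert load(path) == {"answer": 42}

test_round_trip_on_real_disk()
```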
The main advantage is that they will find bugs that unit tests can't such as wiring bugs (e.g. an instance of class A unexpectedly receives a null instance of B) and environment bugs (it runs fine on my single-CPU machine, but my colleague's 4 core machine can't pass the tests). The main disadvantage is that integration tests touch more code, are less reliable, failures are harder to diagnose and the tests are harder to maintain.
Also, integration tests don't necessarily prove that a complete feature works. The user may not care about the internal details of my programs, but I do!
Functional tests check a particular feature for correctness by comparing the results for a given input against the specification. Functional tests don't concern themselves with intermediate results or side-effects, just the result (they don't care that after doing x, object y has state z). They are written to test part of the specification such as, "calling function Square(x) with the argument of 2 returns 4".
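That specification line translates almost directly into a test (the `square` implementation is assumed; the functional test cares only about input and output, not how the result is computed):

```python
def square(x):
    """Implementation under test."""
    return x * x

def test_square_meets_spec():
    # "Calling function Square(x) with the argument of 2 returns 4."
    assert square(2) == 4

test_square_meets_spec()
```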
Acceptance testing seems to be split into two types:
Standard acceptance testing involves performing tests on the full system (e.g. using your web page via a web browser) to see whether the application's functionality satisfies the specification. E.g. "clicking a zoom icon should enlarge the document view by 25%." There is no real continuum of results, just a pass or fail outcome.
The advantage is that the tests are described in plain English and ensure that the software, as a whole, is feature complete. The disadvantage is that you've moved another level up the testing pyramid. Acceptance tests touch mountains of code, so tracking down a failure can be tricky.
Also, in agile software development, user acceptance testing involves creating tests to mirror the user stories created by/for the software's customer during development. If the tests pass, it means the software should meet the customer's requirements and the stories can be considered complete. An acceptance test suite is basically an executable specification written in a domain specific language that describes the tests in the language used by the users of the system.
They're all complementary. Sometimes it's advantageous to focus on one type or to eschew them entirely. The main difference for me is that some of the tests look at things from a programmer's perspective, whereas others use a customer/end user focus.
+1. @Mark Simpson Could functional and acceptance testing be summed up as "system testing"? Where do end-to-end tests fit in? (too much different vocabulary for my taste)
Could you add to your answer how Continuous Integration fits in? What types of tests does CI use?
You write that "you can prove they work perfectly in isolation". I learned that tests can never prove the absence of bugs, only their presence. So what did you mean with that sentence?
@Franz I was speaking about the ability and ease with which you can reduce risk via isolating units of code and testing them. You're right though, the language I used was a bit loose, as tests cannot prove that code is bug-free.
@benregn ideally, your Continuous Integration server runs all of your tests. That's the point of Continuous Integration. Unit tests will be fast and should run after every commit. If your functional tests are too slow, you may run those less often but it's still helpful to run them.
Despite the up-votes, this is completely wrong. Unit-tests do not test even the "trivial" collaborators; any injected dependency must be mocked. Functional tests do not test "behavior"; they test only "function", i.e. "f(A) returns B". If side-effects matter, it is "behavioral". If these include system-calls, they are also "system" tests, as in "behavioral system tests". (See [email protected] below.) "Acceptance" tests are a subset of "behavioral system tests" which cover the full-stack. "Integration" tests upward, simulating actual usage; it tests that all dependencies can be integrated in practice.
@cdunn2001: I didn't say unit tests test the "trivial" collaborators, I said that if a dependency is non-trivial, it should be replaced with a test double. If a dependency is trivial, straightforward and fast, I generally won't replace it with a test double unless I think there's a risk that it'll influence/break the test I'm writing, as I think that time can be better spent elsewhere for my use-case.
@cdunn2001: As for the rest, I just gave my understanding what these tests are, as I find most sources give pretty fuzzy definitions (plus the waters are muddied with similar terms used by agile development). I've corrected some of the errors and removed a few terms that were ambiguous, thanks. It sounds like you have something useful to add, so could you add an answer rather than a footnote, please? Also, links to articles/books would be much appreciated, cheers!
@mark-simpson: I don't mean to criticize you at all! You have 300+ upvotes for conveying lucidly the common usage. Instead, I am fighting a losing battle for discrimination in the world of testing. As for keeping the "trivial" collaborators, yes, I love that, but I call that functional testing. If you search the web today, you'll see that unit-testing is losing its luster. Too often, it couples the test to implementation details. It became popular because of dynamic languages, where we must guarantee execution of every line-of-code, but functional has always been more maintainable.
@cdunn2001: Don't worry, constructive criticism is always good :) Your comment taught me a few things I didn't know and cleaned up my terminology somewhat. I'm always keen to learn new things from developers who are keen on testing. I remember the first time I discovered Miško Hevery's blog -- it was like a treasure trove :)
Acceptance testing is testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria of a given use case. It is usually performed by an expert user, the customer, or another authorized entity to determine whether the system is acceptable. In aeronautics, a test pilot is an aviator who tests new aircraft by flying specific maneuvers. Top pilots, navigators and engineers conduct flight tests, and at the end of the test missions they provide evaluation and certification data.
@MarkSimpson Although your answer is very good, I would like a little more detail regarding functional tests. I mean, from your answer it is hard for me to distinguish between functional tests and unit tests. I hope you have time for this; keep up the great work!
maybe add a comment that the first group ("unit test") are sometimes seemingly called micro tests [?]
I don't see the differentiation between unit and functional tests in this answer.
The important thing is that you know what those terms mean to your colleagues. Different groups will have slightly varying definitions of what they mean when they say "full end-to-end" tests, for instance.
I came across Google's naming system for their tests recently, and I rather like it - they bypass the arguments by just using Small, Medium, and Large. For deciding which category a test fits into, they look at a few factors - how long does it take to run, does it access the network, database, filesystem, external systems and so on.
I'd imagine the difference between Small, Medium, and Large for your current workplace might vary from Google's.
However, it's not just about scope, but about purpose. Mark's point about differing perspectives for tests, e.g. programmer vs customer/end user, is really important.
+1 for the google test naming thing as it helps give a bit of perspective on why various organisations / people have different definitions for tests.