Entry tags:
testing DW
In the past few days I managed to fix some of the unit tests and set the rest to be skipped, so once my latest patch for bug 1721 is applied, all tests in t/ will either pass or be skipped.
To make development in DW easier, it is important to keep the test suite intact and even to add more tests.
Currently the way to run the tests is by typing
cd $LJHOME
prove t/
If you would like to run a single test file, you can type
prove t/userpics.t
A better way to run the tests would be
prove -w t/
but currently it produces lots of warnings, something we should fix soon.
Every time you make a change, you should at a minimum make sure that all the tests still pass afterwards. If some of the tests stop passing, check whether the test is incorrect or whether your change introduced a real bug, and fix it.
TODO
To get the test system back on its feet quickly, I set many of the test files to be skipped. That leaves us two main tasks to improve testing.
1) Go over the tests. For the ones that are passing, run them with prove -w and fix them (either the test or the application) so there won't be warnings.
For the tests that are skipped, remove the skip and find out why they are breaking. Sometimes the tests are no longer relevant (I saw a few trying to test modules that do not exist in the codebase); sometimes a little tweak to the configuration is needed (many tests used modules that relied on $LJ::HOME being set, and it was not set in the test script). Sometimes they might actually indicate a broken feature in the application itself.
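For reference, Test::More offers two skipping idioms: plan skip_all => '...' at the top of a file skips everything in it, and a SKIP block skips individual tests. A minimal sketch of the block form (Some::Missing::Module is a made-up placeholder for whatever prerequisite is broken):

```perl
use strict;
use warnings;
use Test::More tests => 2;

ok( 1, 'this test always runs' );

SKIP: {
    # Remove this guard once the prerequisite is fixed;
    # Some::Missing::Module is a hypothetical dependency.
    skip 'Some::Missing::Module not installed', 1
        unless eval { require Some::Missing::Module; 1 };
    ok( Some::Missing::Module->can('new'), 'module loads and has a constructor' );
}
```

Removing a skip then simply means deleting the skip guard (or the skip_all line) and dealing with whatever failures surface.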
2) Add more tests. Actually, the best practice would be that every time you touch a piece of code, you first make sure it has tests. If it does not, or the tests are insufficient, first add tests and only then change the application. If you are fixing a bug, first write a test that reproduces the bug. The test will obviously fail. Then fix the code and see that the test passes.
For the actual techniques of writing tests, the best approach is to read the testing tutorial
http://search.cpan.org/dist/Test-Simple/lib/Test/Tutorial.pod
and then go on reading the docs of Test::More http://search.cpan.org/dist/Test-Simple/lib/Test/More.pm
and then those of Test::Most http://search.cpan.org/dist/Test-Most/lib/Test/Most.pm
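As a taste of what the tutorial covers, here is a self-contained Test::More script; trim() is a made-up helper standing in for real application code:

```perl
use strict;
use warnings;
use Test::More tests => 3;

# A tiny function to test; stands in for application code.
sub trim {
    my ($s) = @_;
    $s =~ s/^\s+|\s+$//g;
    return $s;
}

is( trim('  hello  '), 'hello', 'leading and trailing spaces removed' );
is( trim('hello'),     'hello', 'untouched string stays the same' );
is( trim('   '),       '',      'whitespace-only string becomes empty' );
```

Running this with prove prints one line per assertion and a summary, which is exactly what you see when running the t/ directory.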
Test Coverage
To see which parts of the code have tests and which do not, we can
use Devel::Cover to create a report. To use it, we need to do the following:
cd $LJHOME
perl Build.PL
./Build
./Build test
# this will run the tests in regular mode and is not really needed here
DEVEL_COVER_OPTIONS=-ignore,/usr/local ./Build testcover
# this will run the tests using Devel::Cover and will generate a report.
# ignoring the files in /usr/local
# It will run the tests a lot slower than in normal mode
Files=85, Tests=1281, 2467 wallclock secs ( 0.72 usr 0.66 sys + 2238.73 cusr 50.97 csys = 2291.08 CPU)
opposed to
Files=85, Tests=1281, 69 wallclock secs ( 0.68 usr 0.54 sys + 36.69 cusr 7.41 csys = 45.32 CPU)
41 minutes as opposed to about 1 minute. (Such a big difference is unusual to me, so we have to find out why it happens.)
The result is here: http://hack.dreamwidth.net/coverage/20090902/coverage.html
We will run the test report once in a while so we can see progress.
Explanations about Devel::Cover can be found here: http://search.cpan.org/dist/Devel-Cover/lib/Devel/Cover/Tutorial.pod
Testing Strategy
I'd recommend the following testing strategy:
1) Every time you encounter a bug first write a test case that reproduces it. You can do this even if you are not going to fix it right away. In this case set the test as a TODO test. See the Test::More documentation on how to do this.
2) We should add page-level tests (using WWW::Mechanize and/or Selenium) that test certain functionality of the web site. These tests will remain with us as we refactor the code beneath.
3) Every new piece of code should be tested at the level of the function(s) and module(s), and also at the level of the application (see item 2). This will both let us make sure users get the correct functionality and make it easy to test small parts of the code.
4) In the longer term, old code should also get function- and module-level tests to allow more fine-grained testing of the code.
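For item 1, a TODO block is what keeps the suite green while still documenting the bug: the test runs and fails, but the harness does not count it as a failure until it unexpectedly starts passing. A minimal sketch with a deliberately buggy, made-up function:

```perl
use strict;
use warnings;
use Test::More tests => 1;

# Hypothetical buggy function: it should add, but subtracts.
sub buggy_add { my ( $x, $y ) = @_; return $x - $y; }

TODO: {
    local $TODO = 'buggy_add subtracts instead of adding';
    is( buggy_add( 2, 3 ), 5, 'buggy_add(2, 3) returns 5' );
}
```

Once the bug is fixed, prove reports the test as an unexpected pass, which is the signal to delete the TODO wrapper and keep the test as a regular one.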