I wasn't such a big LinkedIn fan until a short time ago, when a post by Guy Kawasaki caught my attention. Then synchronicity kicked in and Tennessee Leeuwenburg sent a message to the python-advocacy mailing list, asking Python developers to connect to each other on LinkedIn; here's what Tennessee had to say:
"One way to help spread Python would be to have a strong presence of Python developers in various online networks. One that springs to mind is LinkedIn, a job related social networking site.
If we could encourage Python developers to start adding eachother to their LinkedIn network, then we shoud be able to create a well-connected developer network with business and industry contacts. This should benefit everyone -- both people looking for Python developers, and also people looking for work."
So over the past week or so I started sending LinkedIn invitations to people I know, either from having worked with them or through the various forums, mailing lists and Open Source communities I have been part of. It's amazing how many people we all know, if we think about it.
LinkedIn has several nice features that can help when you're looking for people to hire, or when you're looking for a job. Perhaps the easiest way to find people is to click on 'Advanced search' (the small link next to the main search box) and type something in the Keywords field. Try it with 'python' for example -- you'll see that a lot of people whose blogs are aggregated on Planet Python have a LinkedIn profile. Your next step, if you are a Python developer yourself, is to send invitations to people you want to connect with. If enough of us Pythonistas do this, our networks will become more and more interconnected, to everybody's advantage. And you can replace 'Pythonistas' with 'agilistas', 'rubyistas' or whatever your interest is.
It's also interesting to see how LinkedIn displays the number of degrees of separation between you and the people you search for. Amazingly enough, that number is usually 2 or 3, if not 1. This makes me think of Malcolm Gladwell's theory about Connectors in 'The Tipping Point', namely that there is a small number of people who have a LOT of connections. If you are connected to one of these Connectors, then all of a sudden you have a huge number of people in your network, and you can potentially benefit by introducing yourself to people who are only 2 or 3 degrees of separation away from you. This is true in my own network, where I am only 2 degrees of separation away from Guy Kawasaki, for example. Why? Because a long, long time ago I accepted a LinkedIn invitation from one Paul Davis, who has 500+ LinkedIn connections.
If I've made you curious about LinkedIn, I'd advise you one more time to read Guy Kawasaki's blog post on how to improve your LinkedIn profile.
Speaking of jobs and hiring, if you are a hardcore Python programmer looking for work, especially in the D.C. area, the Zope Corporation is hiring.
Sunday, January 28, 2007
Saturday, January 20, 2007
Pybots updates
Since my last Pybots-related blog post, a lot has happened. We added two buildslaves: a Sparc Solaris 10 host running the Django unit test suite (courtesy of Matthew Flanagan) and a G5 OS X host running the SQLAlchemy test suite (courtesy of Skip Montanaro). So now we have 10 buildslaves altogether. I also added more test suites to my Ubuntu Breezy buildslave: in addition to the Cheesecake test suite, I'm now running the unit test suites for the py library, nose, twill and Testoob.
Email notifications finally started to work too, after I figured out that I was passing the wrong builder names to the MailNotification class (a configuration sketch is at the end of this post). We also have RSS feeds available. If you want to be notified of failures from all builders, subscribe to:
http://www.python.org/dev/buildbot/community/all/rss or
http://www.python.org/dev/buildbot/community/all/atom
To be notified of failures from the trunk builders, subscribe to:
http://www.python.org/dev/buildbot/community/trunk/rss or
http://www.python.org/dev/buildbot/community/trunk/atom
To be notified of failures from the 2.5 branch builders, subscribe to:
http://www.python.org/dev/buildbot/community/2.5/rss or
http://www.python.org/dev/buildbot/community/2.5/atom
Matthew Flanagan also added functionality that allows you to subscribe to a feed for a particular builder. For example, to subscribe to the feed for the "x86 Ubuntu Breezy trunk" builder, use this URL:
http://www.python.org/dev/buildbot/community/all/rss?show=x86%20Ubuntu%20Breezy%20trunk
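If you prefer to poll one of these feeds from a script instead of a feed reader, here is a minimal sketch using the third-party feedparser module (feedparser is my assumption here -- any RSS library would do):

# Sketch: poll the Pybots "all builders" feed and print the latest entries.
# Assumes the third-party 'feedparser' module is installed.
import feedparser

FEED_URL = "http://www.python.org/dev/buildbot/community/all/rss"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:5]:
    # each entry corresponds to a build result reported by the buildbot
    print entry.title
    print "    " + entry.link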
If you are interested in contributing a buildslave to the Pybots project, please send a message to the Pybots mailing list, or to me (grig at gheorghiu dot net), or leave a comment here.
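Going back to the email notifications mentioned above, here is roughly what the relevant fragment of the buildmaster's master.cfg looks like. This is a sketch only -- I'm using what I believe is the standard Buildbot status target (MailNotifier from buildbot.status.mail), and the addresses are made up:

# Sketch of the master.cfg fragment for email notifications.
# The 'builders' list must contain the exact builder names defined
# elsewhere in the config -- getting these names wrong was my problem.
from buildbot.status.mail import MailNotifier

c['status'].append(MailNotifier(
    fromaddr="buildbot@example.com",            # made-up address
    mode="failing",                             # only mail on failed builds
    builders=["x86 Ubuntu Breezy trunk"],
    extraRecipients=["notify@example.com"],     # made-up address
    sendToInterestedUsers=False))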
Tuesday, January 09, 2007
Steve Rowe on "Letting test drive the process"
Just came across a blog post by Microsoft's Steve Rowe called "Letting test drive the process". Steve quotes an article by Richard Collins -- "Test, test and test again" -- and adds his own observations on the practices of involving testers early in the development process and of building testable interfaces into the product instead of heavy UIs.
According to Steve Rowe, Microsoft's development and testing process follows these recommended practices. I quote Steve:
"Also, at Microsoft, testing begins from day one. Every product I've ever been involved with at Microsoft has had daily builds from very early on. Every product has also had what we call BVTs (build verfication tests) that are run every day right after the build completes. If any of their tests fail, the product is held until they can be fixed."
Hmmm... I would expect Microsoft to have fewer problems with their products in this case. But I think a couple of problems that plague Microsoft in particular are backwards compatibility and the sheer number of hardware/OS/service pack combinations that they need to test.
Speaking of Microsoft and testing, I found The Braidy Tester's blog very informative.
Monday, January 08, 2007
Testing tutorial at PyCon07
Titus and I will present a tutorial on "Testing Tools in Python" at PyCon07. It is scheduled for the afternoon session on Thursday, Feb. 22. It will be an improved version of our "Agile Development and Testing in Python" tutorial from last year.
Here is the tutorial outline (courtesy of Titus). If you have any suggestions, please leave a comment.
Introduction
* Why test?
* What to test?
* Using testing to boost maintainability of code.
Setting up a project
* Source control management with Subversion.
* A brief introduction to using Trac for project documentation and ticket management.
* Packaging with distutils
* Packaging with setuptools
* Registering your project with the Python Cheeseshop
* What else is out there? (distributed vs svn, roundup, ...)
Unit testing
* How to think about unit testing
* Using nose to run unit tests
* doctest-style unit tests
* What else is out there? (unittest, py.test, testosterone...)
Functional Web testing with twill
* Writing twill scripts
* Running twill scripts
* Using scotch to record actions
* Using wsgi_intercept to avoid network sockets
* What else is out there? (zope.testbrowser, mechanize, mechanoid)
Using code coverage in conjunction with unit/functional testing
* Basic code coverage with figleaf
* Monitoring code coverage in remote servers
* Combining figleaf code coverage analyses
* What else is out there? (coverage)
** BREAK **
Acceptance testing with FitNesse/PyFit
* How FitNesse works
* Writing fixtures
* Running Python fixtures
Web application testing with Selenium
* How Selenium works
* Writing and recording Selenium tests
* Scripting Selenium tests remotely with SeleniumRC
* What else is out there? (Sahi, Watir)
Continuous integration with buildbot
* Introduction to buildbot
* Discussion of concepts, demonstration.
* Integrating tests into buildbot.
* GUI testing in buildbot.
* Using pybots to test your open source project
Conclusion
* Why test, revisited
* Maintainability and testing
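To give a small taste of the unit testing portion of the outline above, here is the kind of doctest-style test that nose can pick up with its --with-doctest flag -- a toy example of mine, not taken from the actual tutorial materials:

# word_count.py -- a toy module whose doctest and test function are both
# collected by nose when run as: nosetests --with-doctest word_count.py
def word_count(text):
    """Return a dictionary mapping each word to its number of occurrences.

    >>> word_count("the quick brown fox jumps over the lazy dog")['the']
    2
    >>> word_count("spam spam spam")
    {'spam': 3}
    """
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def test_empty_string():
    # nose also collects plain functions whose names start with 'test'
    assert word_count("") == {}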
Testing Tools Panel at PyCon07: questions needed
I will be moderating the Testing Tools Panel at PyCon07, currently scheduled from 11:40 AM to 12:25 PM on Sat. Feb. 24th (and immediately followed by Guido's keynote). I put together a Wiki page for the panel, with questions and topics that I thought would be interesting for the audience (and with some input from Ian Bicking).
I'd be very grateful if people who plan on attending the panel could add more questions or topics of interest either by directly editing the Wiki page, or by leaving a comment here, or by sending me an email at grig at gheorghiu.net. Thanks in advance!
Sunday, January 07, 2007
New Year's resolution
Here's one New Year's resolution I'm trying to keep: each day, read the corresponding page for that date from "The Daily Drucker". I blogged about this book before, and I continue to be amazed at the insight and wisdom that Drucker manages to pack into almost every sentence he writes. Although Drucker writes about general practices of management and leadership, many of his ideas can easily be applied to software development in general, and testing in particular.
Here are some fragments from January 4th, on "Organizational inertia", which can be applied just as well to any software project ("bitrot" and "goldplating" come to mind):
"All organizations need to know that virtually no program or activity will perform effectively for a long time without modifications and redesign. Eventually every activity becomes obsolete."
"Businessmen are just as sentimental about yesterday as bureaucrats. They are just as likely to respond to the failure of a product or program by doubling the efforts invested in it. But they are, fortunately, unable to indulge freely in their predilections. They stand under an objective discipline, the discipline of the market. They have an objective outside measurement, profitability.And so they are forced to slough off the unsuccessful and unproductive sooner or later."
And how do you measure the efficiency of an organization? By testing, testing, testing:
"All organizations must be capable of change. We need concepts and measurements that give to other kinds of organizations what the market test and profitability yardstick give to business. Those tests and yardsticks will be quite different."
Friday, January 05, 2007
Cheesecake now including PEP8 checks
The inclusion actually happened a couple of weeks ago. I saw Johann Rocholl's message on comp.lang.python.announce where he talked about his pep8.py module -- a tool that checks Python modules against some of the style conventions in PEP8.
Here's a sample output of running pep8.py against one of the modules in the Cheesecake project. By default, pep8 reports only the first occurrence of each error or warning. The numbers after the file name represent the line and column where the error/warning occurred:
If you want to see all occurrences, use the --repeat flag.
$ python pep8.py logger.py
logger.py:1:11: E401 multiple imports on one line
logger.py:7:23: W291 trailing whitespace
logger.py:8:5: E301 expected 1 blank line, found 0
logger.py:40:33: W602 deprecated form of raising exception
logger.py:60:1: E302 expected 2 blank lines, found 1
logger.py:114:80: E501 line too long (85 characters)
If you just want to see how many lines in a given file have PEP8-related errors/warnings, use the --statistics flag, along with -qq, which quiets the default output:
You can also pass multiple file and directory names to pep8.py, and it will give you an overall line count when you use the --statistics flag.
$ python pep8.py logger.py --statistics -qq
3 E301 expected 1 blank line, found 0
4 E302 expected 2 blank lines, found 1
1 E401 multiple imports on one line
1 E501 line too long (85 characters)
40 W291 trailing whitespace
1 W602 deprecated form of raising exception
So now cheesecake_index.py includes a check for PEP8 compatibility as part of the 'code kwalitee' index. To compute the PEP8 score, it looks only at the types of errors and warnings, not at the line count for each type. It subtracts 1 from the code kwalitee score for each warning type reported by pep8, and 2 for each error type reported. Johann told me he'll try to come up with a scoring scheme within the pep8 module itself, so when that's ready I'll just use it instead of my ad-hoc one. Kudos to Johann for creating a very useful module.
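In other words, the ad-hoc scoring goes roughly like this -- a simplified sketch, not the actual cheesecake_index.py code, and the function name is mine:

# Simplified sketch of the PEP8 penalty described above: look only at the
# distinct warning/error codes reported by pep8.py, subtracting 1 point
# per warning type (W...) and 2 points per error type (E...).
import re

PEP8_LINE = re.compile(r":\d+:\d+: ([EW]\d+) ")

def pep8_penalty(pep8_output):
    codes = set()
    for line in pep8_output.splitlines():
        match = PEP8_LINE.search(line)
        if match:
            codes.add(match.group(1))
    penalty = 0
    for code in codes:
        if code.startswith("E"):
            penalty += 2
        else:
            penalty += 1
    return penalty

Applied to the logger.py sample output above (four error types, two warning types), this would subtract 4*2 + 2*1 = 10 points from the code kwalitee score.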
Tuesday, January 02, 2007
Martin Fowler's "Mocks Aren't Stubs" -- updated version
As the title says, Martin Fowler just announced a significant update -- indeed, a rewrite -- of his classic "Mocks Aren't Stubs". Most of the terminology he uses in the new version of the article is borrowed from Gerard Meszaros' xUnit Patterns jargon. Highly recommended. I'm not so vain as to believe that my post on mock testing influenced Martin's rewrite, so I'm just going to invoke synchronicity here :-)