16 April 2015
Software Quality Award, 2015
I'm a big fan of rules and discipline in software development; as an example, see Are You a Hacker or a Designer?. Also, I'm a big fan of object-oriented programming in its purest form; for example, see Seven Virtues of a Good Object. I'm also a co-founder and the CTO of Teamed.io, a software development company through which I put my admiration of discipline and clean design into practice.
I want to encourage you to share my passion—not just by reading this blog, but also by making real open source software in a disciplined way. This award is for those who are brave enough to swim against the current and value quality above everything else.
Send me your own project for review and participate in the contest.
One person can submit up to three projects.
Submissions were accepted until September 1, 2015 (now closed).
Submissions must be sent via email to email@example.com. All I need is your GitHub login and repository name; I will check the commit history to make sure you're the main contributor to the project.
I reserve the right to reject any submission without explanation.
All submissions will be published on this page (including rejected ones).
Results will be announced October 15 on this page and by email.
The best project will receive $4,096.
The best 8 projects will receive 1-year open source licenses to any JetBrains products (one license per project).
Final decisions will be made by me and are not negotiable (although I may invite other people to help me make the right decision).
Each project must be:
Open source (in GitHub).
At least 5,000 lines of code.
At least one year old.
Object-oriented (that's the only thing I understand).
The best projects will feature (more about it):
Strict and visible principles of design.
Traceability of changes.
Self-documented source code.
Strict rules of code formatting.
What doesn't matter:
Popularity. Even if nobody is using your product, it is still eligible for this award. I don't care about popularity; quality is the key.
Programming language. I believe that any language, used correctly, can be applied to design a high-quality product.
Buzz and trends. Even if your project is yet another parser of command line arguments, it's still eligible for the award. I don't care about your marketing position; quality is all.
By the way, if you want to sponsor this award and increase the bonus, email me.
158 projects submitted so far (in order of submission):
A few weeks ago I asked three guys who work with me to check every single
project in this list and provide their feedback. I received three plain
text files from them. Here they are, combined into one, with almost no corrections:
award-2015.txt (you can find your project there).
Based on their opinions,
I've decided to select the following 12 projects for closer review
(in alphabetic order):
raphw/byte-buddy (added on Oct 5)
I'll review them soon. The winner will be announced on the 15th of October.
I received an email from the author of one of the filtered-out projects,
asking me to reconsider my decision. I took a quick
look at why the project was filtered out and decided to include
it in the list of finalists. BTW, if any of you think that your
project was excluded by mistake, don't hesitate to email me.
October 11: I analyzed all 12 projects today. All of them are really good, which is why, in order to find the best one, I focused on their sins, not their virtues. Here are my preliminary findings.
coala-analyzer/coala (14K Python LoC, 160K HoC)
- None is used in many places (I found over 400), which is technically
NULL, and that's a serious anti-pattern.
- There are global functions, for example in
DictUtilities; that is definitely a bad idea in OOP.
- Constants is a terrible idea.
- Checking object types at runtime is a bad practice.
- What's wrong with
cindex.py? It has almost 3,200 lines of code; that's way too many.
- Static analysis is not a mandatory step in the build/release pipeline.
That's why, I believe, code formatting is not consistent and sometimes
rather ugly. For example,
pylint reports hundreds of issues. (Update: Scrutinizer is used, but I still believe that a local use of pylint would seriously improve the quality of the code.)
- Some methods have documentation, others don't; I didn't understand the logic behind it. It would be great to have all methods documented. Also, not all classes are documented.
- Score: 5
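The None complaint above is the classic NULL anti-pattern. A minimal sketch of the alternative, the Null Object pattern, shown in Java for illustration (coala itself is Python; all names here are invented, not from the project):

```java
// Instead of returning null/None and forcing every caller to check for it,
// we return a do-nothing implementation of the same interface.
interface Log {
    Log write(String line);
}

final class ConsoleLog implements Log {
    @Override
    public Log write(String line) {
        System.out.println(line); // real logging would go here
        return this;
    }
}

final class NoLog implements Log {
    @Override
    public Log write(String line) {
        return this; // deliberately does nothing; callers never see null
    }
}

final class Logs {
    // Never returns null, so callers don't need a null check.
    static Log forVerbosity(boolean verbose) {
        return verbose ? new ConsoleLog() : new NoLog();
    }
}
```

The caller simply writes `Logs.forVerbosity(false).write("...")` and can never hit a NullPointerException.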
checkstyle/checkstyle (83K Java LoC, 553K HoC)
- There are many -ER-ending classes, like
AbstractLoader, for example, which are an anti-pattern.
- There is a whole bunch of utility classes,
which are definitely
a bad thing in OOP. They are even grouped into a special
utils package; such a terrible idea.
- Setters and getters are everywhere, together with mutable classes; this
really is not an OOP thing.
- NULL is actively used in many places; it's a serious anti-pattern.
- I found five
.java files with over 1,000 lines in each of them, one even with 2,500+.
- There are direct commits to master made by different contributors,
and some of them are not linked back to any tickets. It's impossible
to understand why they were made. Look at
7c50922, for example. Was there a discussion involved? Who made the decision? Not clear at all.
- Releases are not documented at all.
- Release procedure is not automated. At least I didn't find any release script in the repository.
- Score: 3
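The utility-class criticism can be sketched as a before/after; the names TextUtils and Capitalized are invented for illustration, not taken from Checkstyle:

```java
// The criticized style: a "class" that is just a bag of static functions.
final class TextUtils {
    private TextUtils() {
    }
    static String capitalized(String text) {
        return text.isEmpty()
            ? text
            : Character.toUpperCase(text.charAt(0)) + text.substring(1);
    }
}

// The object style this award favors: the text is an object that
// knows how to present itself.
final class Capitalized {
    private final String origin;
    Capitalized(String origin) {
        this.origin = origin;
    }
    @Override
    public String toString() {
        return origin.isEmpty()
            ? origin
            : Character.toUpperCase(origin.charAt(0)) + origin.substring(1);
    }
}
```

Both do the same work; the difference is that `new Capitalized(text)` is an object you can pass around, decorate, or mock, while the static method is a global procedure.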
citiususc/hipster (5K Java LoC, 64K HoC)
- Getters and setters
are used in many places, for example in
Transition. Aside from that, almost all classes are mutable and are used as transfer bags for the needs of the algorithm. This is not how OOP should be used, I believe.
- There are public static methods,
for example this one, with a funny name.
- NULL is actively used, especially in iterators; it's a bad idea.
- JavaDoc documentation is not consistent, some methods are documented, others aren't.
- Not all commits are linked to tickets; look at this one, for example.
- Changes are committed directly to the
master branch; pull requests are not used at all.
- I didn't find an automated procedure for release. I found one for regular snapshot deployment to Bintray, but what about releases? Are they done manually?
- There is no static analysis, that's why the code looks messy sometimes.
- The amount of unit tests is rather small. Besides that, I didn't find a real code coverage report published anywhere.
- Score: 4
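The "transfer bag" criticism above can be illustrated like this (hypothetical names, not Hipster code): a mutable bean with getters and setters versus an immutable object whose "modifications" produce new objects:

```java
// The criticized style: a mutable bean that is only a bag of data.
final class CostBean {
    private double value;
    public double getValue() {
        return value;
    }
    public void setValue(double value) {
        this.value = value;
    }
}

// The immutable alternative: state is fixed in the constructor,
// and arithmetic returns a fresh object instead of mutating this one.
final class Cost {
    private final double value;
    Cost(double value) {
        this.value = value;
    }
    Cost plus(Cost other) {
        return new Cost(this.value + other.value);
    }
    double value() {
        return value;
    }
}
```

With `Cost`, an object handed to an algorithm can never be silently changed behind the caller's back.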
gulpjs/gulp (700 JS LoC)
- This project is too small for the competition, just 700 lines of code. Disqualified.
- Score: 0
kaitoy/pcap4j (42K LoC, 122K HoC)
- There is a
util package with utility classes, which is a bad practice.
- NULL is used in mutable objects, for example in
AbstractPcapAddress; it's a bad idea.
- There are too many
static methods and variables. They are literally everywhere. There is even a module called
pcap4j-packetfactory-static, full of "classes" with static methods.
- JavaDoc documentation is not consistent and sometimes just incomplete; check this, for example.
- There are just a few issues and only six pull requests. Commits are not linked to issues. There is almost zero traceability of changes.
- Release procedure is not automated, and releases are not documented.
- There is no static analysis; that's why the code looks messy sometimes.
- Score: 3
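The static-factory complaint can be sketched as follows (all names invented for illustration, not pcap4j code): instead of a "class" full of static creation methods, the factory itself becomes an ordinary object that can be passed around and replaced in tests:

```java
interface Packet {
    int length();
}

final class RawPacket implements Packet {
    private final byte[] bytes;
    RawPacket(byte[] bytes) {
        this.bytes = bytes.clone(); // defensive copy keeps the object immutable
    }
    @Override
    public int length() {
        return bytes.length;
    }
}

// Instead of a static PacketFactory.fromBytes(bytes), the factory is
// an instance with an ordinary method.
final class Packets {
    Packet fromBytes(byte[] bytes) {
        return new RawPacket(bytes);
    }
}
```

A consumer receives a `Packets` instance through its constructor, so a test can hand it a fake factory; a static method offers no such seam.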
raphw/byte-buddy (84K LoC, 503K HoC)
- I found over 20
.java files with more than 1,000 lines of code each;
TypePool.java even has 6,200 lines!
- There are many
public static methods and properties. I realize that maybe that's the only way to deal with the problem domain in Java, but still...
- instanceof is used very often, and it's a bad practice in OOP. Again, I understand the problem domain may require it sometimes, but still...
- Most commits are made directly to master, without pull requests or tickets, so their traceability is broken.
- Release procedure is not automated (I didn't find a script).
- Score: 5
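The instanceof criticism, sketched with invented names (not Byte Buddy code): runtime type checks versus polymorphic dispatch through a common interface:

```java
interface Shape {
    double area();
}

final class Square implements Shape {
    private final double side;
    Square(double side) {
        this.side = side;
    }
    @Override
    public double area() {
        return side * side;
    }
}

final class Circle implements Shape {
    private final double radius;
    Circle(double radius) {
        this.radius = radius;
    }
    @Override
    public double area() {
        return Math.PI * radius * radius;
    }
}

// With polymorphism there is no need for chains like:
//   if (s instanceof Square) { ... } else if (s instanceof Circle) { ... }
// Each class answers area() for itself.
```

As the review concedes, a library that manipulates bytecode may genuinely need type checks; the sketch only shows why they are a smell in ordinary domain code.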
subchen/snack-string (1K LoC, 2K HoC)
- The project is too small, disqualified.
- Score: 0
gvlasov/inflectible (5K LoC, 36K HoC)
- The project is rather small, right on the edge of the competition requirements, and is made by a single developer. Besides that, I don't see any problems here. The code looks object-oriented, all changes are traceable back to issues and pull requests, the release procedure is automated, static analysis is mandatory, and releases are documented. Thumbs up!
- Score: 10
testinfected/molecule (10K LoC, 43K HoC)
- There are a few utility classes.
- There are setters and getters in some classes,
even though they follow a different naming convention.
- Most
.java files don't have any JavaDoc blocks, which looks consistent, but then, all of a sudden, some files do have documentation, for example.
- There are not that many issues, and most commits are not
traceable back to any of them; for example,
b4143a0: why was it made? Not clear. Also, there are almost no pull requests. It looks like the author just commits to master.
- Release procedure is not documented or automated; at least I didn't find it. Also, releases are not documented at all.
- Static analysis is absent.
- Score: 6
trautonen/coveralls-maven-plugin (4.5K LoC)
- The project is too small for the competition, less than 5K lines of code. Besides that, it's younger than one year, the first commit was made in May 2015. Disqualified.
- Score: 0
wbotelhos/raty (8.7K LoC, 63K HoC)
- There are utility classes, for example
- There are global functions, for example in
- jasmine.js has 2,400 lines of code, which is way too many.
- I didn't understand why
.html files sit together with
.js files in the same directory, for example.
- Not all changes are traceable to issues; for example,
0a233e8. There are not that many issues in the project and just a few pull requests.
- Release procedure is not automated (at least I didn't find any documentation about it).
- There is no static analysis.
- There are no unit tests.
- Score: 2
xvik/guice-persist-orient (17K LoC, 54K HoC)
- There is an ORM,
and that's why there are getters and setters,
for example in
- Dependency injection is actively used, which, I believe, is a bad idea in OOP in general. But in this project, I understand, it is required by the problem domain. Anyway...
- There are just a few issues and almost no pull requests,
and commits are not traceable back to issues;
for example, this one.
- ~~There is no static analysis~~ (static analysis is there, with a subset of Checkstyle, PMD and FindBugs checks)
- Score: 5
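The alternative to a DI container that this review implies is plain constructor composition; a minimal sketch with invented names (not guice-persist-orient code):

```java
final class Database {
    String fetch(String key) {
        return "value-of-" + key; // stub standing in for a real query
    }
}

final class Report {
    private final Database db;
    Report(Database db) {
        this.db = db; // the dependency arrives explicitly, no container involved
    }
    String render(String key) {
        return "Report: " + db.fetch(key);
    }
}
```

Here the object graph is wired by hand, `new Report(new Database())`, so the dependencies of every object are visible at its construction site instead of being resolved by a framework.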
I paid the most attention to anti-patterns, which are the first
and most terrible sin we should try to avoid. The presence of
null, for example, affected the score much more seriously than
the absence of an automated release procedure.
Oct 15: Thus, we have these best projects, out of the 158 submitted to the competition:
@gvlasov, the winner!
Here is your badge:
Put this code into your GitHub README:
<a href="http://www.yegor256.com/2015/04/16/award.html"> <img src="http://img.teamed.io/award/2015/winner.png" style="width:203px;height:45px;"/></a>
All eight projects will receive a one-user free one-year license for one JetBrains product. I will email you all and we'll figure out how to transfer them.
Thanks to everybody for participating! See you next year.