Software Quality Award, 2017

This is the third year of the Software Quality Award. The prize is still the same: $4,096. The rules are also still the same; read on. Previous years are here: 2015, 2016.

The rules:

  • One person can submit only one project.

  • Submissions are accepted until September 1, 2017.

  • I will check the commit history to make sure you’re the main contributor to the project.

  • I reserve the right to reject any submission without explanation.

  • All submissions will be published on this page (including rejected ones).

  • Results will be announced October 15, 2017 on this page and by email.

  • The best project will receive $4,096.

  • Final decisions will be made by me and are not negotiable (although I may invite other people to help me make the right decision).

  • Winners who received cash prizes in previous years can’t submit again.

Each project must be:

  • Open source (on GitHub).

  • At least 10,000 lines of code.

  • At least one year old.

  • Object-oriented (that’s the only thing I understand).

The best project is selected using these criteria.

What doesn’t matter:

  • Popularity. Even if nobody is using your product, it is still eligible for this award. I don’t care about popularity; quality is the key.

  • Programming language. I believe that any language, used correctly, can be applied to design a high-quality product.

  • Buzz and trends. Even if your project is yet another parser of command line arguments, it’s still eligible for the award. I don’t care about your marketing position; quality is all.

By the way, if you want to sponsor this award and increase the bonus, email me.

These 28 projects have been submitted so far (in order of submission):

[15 Sep 2017] I invited six people to help me review the projects. Their names are:

[15 Oct 2017] This is the summary of everything they sent me: award-2017.txt. I will pick the winner in the next few days, stay tuned!

[21 Oct 2017] My short list includes these six projects (in random order): php-ai/php-ml, vavr-io/vavr, zetaops/ulakbus, mafagafogigante/dungeon, ribtoks/xpiks, javascript-obfuscator/javascript-obfuscator. Tomorrow (hopefully) I will decide how to split $4096.

[23 Oct 2017] These are my own observations on each project from the short list. I will mention only negative things, since all the projects are pretty good; there is no need to repeat how good they are. Problems are listed in order of importance (most critical on top).

php-ai/php-ml (9.8K LoC PHP, 29K HoC):

  • How do you release it?
  • Getters, setters and mutability in many places
  • NULL is used in many places (again, I know there is no method overloading in PHP)
  • -ER: Estimator, Classifier, Clusterer, Optimizer, etc.
  • Code in constructors (yes, I understand that it’s PHP)
  • Empty lines in method bodies
  • Score: 5
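
To illustrate what I mean by the getter/setter and mutability complaint, here is a sketch in Java for brevity (hypothetical code, not taken from php-ml):

```java
// Anti-pattern: a mutable "bean" that exposes raw state
// through getters and setters (illustrative only).
final class MutablePoint {
    private double x;
    private double y;
    public double getX() { return this.x; }
    public void setX(double x) { this.x = x; }
    public double getY() { return this.y; }
    public void setY(double y) { this.y = y; }
}

// Preferred: an immutable object that answers questions about
// itself instead of exposing its internals.
final class Point {
    private final double x;
    private final double y;
    Point(double x, double y) {
        this.x = x;
        this.y = y;
    }
    double distanceTo(Point other) {
        return Math.hypot(this.x - other.x, this.y - other.y);
    }
    Point movedBy(double dx, double dy) {
        // no mutation: a new object is returned instead
        return new Point(this.x + dx, this.y + dy);
    }
}
```

The second class can't be put into an inconsistent state, which is exactly why I penalize the first style.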

vavr-io/vavr (70K LoC Java, 834K HoC):

  • How do you release it?
  • There are some really big “classes”; they are especially huge in the io.vavr.collection package
  • Interface Seq has 120+ methods! What is going on?
  • Utility classes, static methods
  • Some .java files contain several Java classes. Why?
  • Could not build master: #2147
  • Score: 4

zetaops/ulakbus (25K LoC Python, 707K HoC):

  • How do you release it?
  • No CI, no test coverage, no static analysis automation?
  • See the comments from the reviewer
  • Score: 2

mafagafogigante/dungeon (14K LoC Java, 88K HoC):

  • Releases are automated, but only one person can make them
  • Static methods, getters, setters, mutability
  • Commits don’t link to issues and PRs
  • In-method-body comments appear in many places; that’s a bad practice
  • Score: 5

ribtoks/xpiks (180K+ LoC C/C++, 739K HoC):

  • No coverage, no static analysis
  • Types are rather big, with many methods
  • Util classes, helpers
  • -ERs: CommandManager, SpellCheckWorker, etc.
  • I didn’t really find much documentation inside the code
  • Commits are not linked to issues/PRs
  • Score: 4
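
The “-ER” complaint is about naming objects after what they do instead of what they are. A sketch in Java, with hypothetical names (not code from xpiks):

```java
// "-ER" style: a *Manager* that performs procedures on passive
// data (illustrative only).
final class EncoderManager {
    String encode(String text) {
        return java.util.Base64.getEncoder()
            .encodeToString(text.getBytes(java.nio.charset.StandardCharsets.UTF_8));
    }
}

// Alternative: name the object after the thing it represents.
final class Base64Text {
    private final String origin;
    Base64Text(String origin) {
        this.origin = origin;
    }
    String asString() {
        return java.util.Base64.getEncoder()
            .encodeToString(this.origin.getBytes(java.nio.charset.StandardCharsets.UTF_8));
    }
}
```

`new Base64Text(x)` is a thing you can pass around and print later; `EncoderManager` is a procedure wearing a class costume.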

javascript-obfuscator/javascript-obfuscator (72K LoC JS/TS, 400K HoC):

  • Utility classes and plain global functions
  • Annotation-driven injectable dependencies
  • -ERs: reader, sanitizer, emitter
  • Public attributes in classes
  • I believe many “objects” are just DTOs here
  • Interfaces are prefixed with I, it’s an anti-pattern
  • Score: 4
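
About the `I` prefix: the fix is to give the interface the clean name and let implementations carry the qualifiers. A Java sketch with hypothetical names (not code from javascript-obfuscator):

```java
// Anti-pattern: Hungarian-style "I" prefix (illustrative only).
interface IText {
    String read();
}

// Preferred: the interface owns the plain name.
interface Text {
    String read();
}

final class PlainText implements Text {
    private final String value;
    PlainText(String value) {
        this.value = value;
    }
    @Override
    public String read() {
        return this.value;
    }
}

// Implementations qualify the name; decoration stays natural.
final class UpperText implements Text {
    private final Text origin;
    UpperText(Text origin) {
        this.origin = origin;
    }
    @Override
    public String read() {
        return this.origin.read().toUpperCase();
    }
}
```

The prefix leaks an implementation concern (the fact that it is an interface) into every client's vocabulary.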

My overall impression this year is that I’m getting much less garbage. Fewer projects were submitted, but their quality is much higher than in the previous two years. I’m glad to see this tendency; it tells me I’m doing the right thing.

This time I paid more attention to the elegance of OOP and maintainability of the code base. Key factors for the maintainability were:

  • Automated releases
  • Automated static analysis
  • Automated builds (CI)
  • Automated tests
  • Disciplined commits, via issues and PRs

For the elegance of OOP, as usual, I paid attention to the absence of anti-patterns, including NULL, getters, setters, static, mutability, etc.
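
On NULL in particular, the remedy I usually expect is a “null object” that behaves safely, so callers never have to check for NULL. A minimal sketch in Java, with hypothetical names:

```java
import java.util.Map;

interface User {
    String name();
}

final class RealUser implements User {
    private final String login;
    RealUser(String login) {
        this.login = login;
    }
    @Override
    public String name() {
        return this.login;
    }
}

// Null object: a safe stand-in returned instead of NULL.
final class Anonymous implements User {
    @Override
    public String name() {
        return "anonymous";
    }
}

final class Users {
    private final Map<String, User> all;
    Users(Map<String, User> all) {
        this.all = all;
    }
    User named(String login) {
        // never returns NULL, so callers need no null checks
        return this.all.getOrDefault(login, new Anonymous());
    }
}
```

With this design the `if (user != null)` boilerplate disappears from every call site.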

There are two winners this year: php-ai/php-ml and mafagafogigante/dungeon. However, I don’t really like the code I found in these repositories. It’s obviously better than everybody else’s, but far from perfect.

So here is my decision: I will give each winner just $1,024 instead of $2,048.

Congratulations to @itcraftsmanpl for php-ml ($1,024) and to @mafagafogigante for dungeon ($1,024).

Here are your badges:

(winner badge images)

Put this code into GitHub README (replace ??? with your GitHub name in the URL):

<a href="https://www.yegor256.com/2016/10/23/award-2017.html">
  <img src="//www.yegor256.com/images/award/2017/winner-???.png"
  style="width:203px;height:45px;" alt='winner'/></a>

Thanks to everybody for your participation! See you next year.
