I need help. I want to find sponsors for my Software Quality Award, but I don't have the time or connections to do that. Would you be interested in volunteering to help me? We need more money for the award to better motivate programmers. Please email me.
If you have a method that fails occasionally and you want to retry it a few times before throwing an exception, @RetryOnFailure from jcabi-aspects can help. For example, if you're downloading the following web page:
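A minimal sketch of such a method, assuming jcabi-aspects is on the classpath with AspectJ weaving configured (the class name and URL are placeholders):

```java
import com.jcabi.aspects.RetryOnFailure;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.TimeUnit;

public final class Page {
  /**
   * Download the page, retrying up to three times with a
   * ten-second pause between attempts before giving up.
   */
  @RetryOnFailure(attempts = 3, delay = 10, unit = TimeUnit.SECONDS)
  public String load(final URL url) throws IOException {
    try (InputStream input = url.openStream()) {
      return new String(input.readAllBytes(), StandardCharsets.UTF_8);
    }
  }
}
```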
This method call will throw an exception only after three failed executions, with a ten-second interval between them.
There are many tools that control the quality of Java code, including Checkstyle, PMD, FindBugs, Cobertura, etc. All of them are usually used to analyze quality and build some fancy reports. Very often, those reports are published by continuous integration servers, like Jenkins.
Qulice takes things one step further. It aggregates a few quality checkers, configures them to their strictest settings, and breaks your build if any of them fails.
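For a Maven build, wiring Qulice in is a single plugin declaration; here is a sketch (the version number is illustrative, check the latest release):

```xml
<plugin>
  <groupId>com.qulice</groupId>
  <artifactId>qulice-maven-plugin</artifactId>
  <version>0.22.0</version>
  <executions>
    <execution>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With this in place, `mvn verify` fails as soon as any of the aggregated checkers reports a violation.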
What do you think? Would it be convenient for you to have your code rejected every time it breaks just one of 900 checks? Would it be productive for the team to force developers to focus so much on code quality?
Rultor is a coding team assistant. Travis is a hosted continuous integration system. In this article I'll show how our open source projects are using them in tandem to achieve seamless continuous delivery.
Docker is a command line tool that can run a shell command in a virtual Linux, inside an isolated file system. Every time we build our projects, we want them to run in their own Docker containers. Take this Maven project for example:
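A sketch of such a command (the stock ubuntu image does not ship Maven, so in practice you would use an image with a JDK and Maven pre-installed, or install them in the start-up script):

```shell
# Mount the project into a throwaway container and run the build there;
# the build cannot touch the host file system outside the mounted directory.
docker run --rm --volume "$(pwd):/repo" --workdir /repo ubuntu mvn clean test
```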
This command will start a new Ubuntu system and execute mvn clean test inside it. Rultor.com, our virtual assistant, does exactly that with our builds when we deploy, package, test, and merge them.
You get a GitHub pull request. You review it. It looks correct, so it's time to merge it into master. You post a comment in it, asking @rultor to test and merge. Rultor starts a new Docker container, merges the pull request into master, runs all tests, and, if everything is clean, pushes the result and closes the request.
Then, you ask @rultor to deploy the current version to the production environment. It checks out your repository, starts a new Docker container, executes your deployment scripts, and reports back to you right there in the GitHub issue.
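Rultor reads its instructions from a .rultor.yml file in the repository root; a minimal sketch (the script contents are placeholders for your own build and deployment commands):

```yaml
merge:
  script: mvn clean install
deploy:
  script: ./deploy.sh production
```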
Continuous integration is easy. Download Jenkins, install it, create a job, click the button, and get a nice email saying that your build is broken (I assume your build is automated). Then, fix the broken tests (I assume you have tests), and get a much better-looking email saying that your build is clean.
Then, in a few weeks, start filtering Jenkins alerts into their own folder so that they don't bother you anymore. Anyway, your team doesn't have the time or desire to fix all the unit tests every time someone breaks them.
After all, we all know that unit testing is not for a team working with deadlines, right?
Liquibase is a migration management tool for relational databases. It version-controls schema and data changes in a database, much like Git or SVN does for source code. Thanks to its Maven plugin, Liquibase can be used as part of a build automation scenario.
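A sketch of the plugin configuration (the changelog path, JDBC URL, and version are placeholders for your own values):

```xml
<plugin>
  <groupId>org.liquibase</groupId>
  <artifactId>liquibase-maven-plugin</artifactId>
  <version>4.27.0</version>
  <configuration>
    <changeLogFile>src/main/resources/liquibase/master.xml</changeLogFile>
    <url>jdbc:postgresql://localhost:5432/example</url>
  </configuration>
  <executions>
    <execution>
      <phase>process-resources</phase>
      <goals>
        <goal>update</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Bound to a build phase like this, every build applies any pending changesets to the target database before the tests run.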
Every Java package (JAR, WAR, EAR, etc.) has a MANIFEST.MF file in the META-INF directory. The file contains a list of attributes, which describe this particular package. For example:
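A typical file looks something like this (the attribute values are illustrative):

```
Manifest-Version: 1.0
Created-By: Apache Maven
Built-By: jenkins
Implementation-Version: 1.0-SNAPSHOT
Main-Class: com.example.App
```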
When your application has multiple JAR dependencies, you have multiple MANIFEST.MF files in your classpath, all at the same location: META-INF/MANIFEST.MF. Very often it is necessary to go through all of them at runtime and find an attribute by its name.
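A minimal sketch of such a scan using only the standard library (the class and method names here are mine, not from any particular library):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.util.Enumeration;
import java.util.jar.Manifest;

public final class Manifests {
  /**
   * Scan every META-INF/MANIFEST.MF visible to the context class loader
   * and return the first value found for the given main attribute,
   * or null if no manifest on the classpath defines it.
   */
  public static String read(final String name) {
    try {
      final Enumeration<URL> resources = Thread.currentThread()
        .getContextClassLoader()
        .getResources("META-INF/MANIFEST.MF");
      while (resources.hasMoreElements()) {
        try (InputStream stream = resources.nextElement().openStream()) {
          final String value = new Manifest(stream)
            .getMainAttributes()
            .getValue(name);
          if (value != null) {
            return value;
          }
        }
      }
    } catch (final IOException ex) {
      throw new IllegalStateException(ex);
    }
    return null;
  }
}
```

The first match wins here; a real implementation might instead collect all values or let later manifests override earlier ones, depending on your needs.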