Robert Basic's blog

Enable UDP for NFS on Fedora

by Robert Basic on July 03, 2017.

Recently, my Vagrant boxes started acting up when mounting the NFS shared folders. This is the error message I get:

==> default: Mounting NFS shared folders...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

mount -o vers=3,udp 192.168.33.1:/home/robert/projects/project/application /var/www

Stdout from the command:



Stderr from the command:

mount.nfs: requested NFS version or transport protocol is not supported

For some reason NFS doesn’t like UDP on my machine, but as far as I know, UDP is the default in Vagrant.

This can be changed by telling Vagrant not to use UDP for the synced folders, by adding nfs_udp: false:

  config.vm.synced_folder "./application", "/var/www", type: "nfs", nfs_udp: false
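
For context, a minimal Vagrantfile sketch with that option in place could look something like this (the box name and the IP address are placeholders, not a recommendation):

Vagrant.configure("2") do |config|
  # any box will do; this name is just a placeholder
  config.vm.box = "fedora/25-cloud-base"

  # NFS synced folders need a private network
  config.vm.network "private_network", ip: "192.168.33.10"

  # mount this folder over NFS, but without UDP
  config.vm.synced_folder "./application", "/var/www", type: "nfs", nfs_udp: false
end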

But as this is something only I have experienced in my team so far, “fixing” it on a project level seems like a bad choice. And when the next project comes, I’ll probably have the same problem all over again.

Digging a bit deeper, I came across this ServerFault answer, which says that since nfs-utils version 2.1.1, UDP support for NFS is disabled by default.

The solution is to edit /etc/sysconfig/nfs and add --udp to RPCNFSDARGS:

RPCNFSDARGS="--udp"
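
After that the NFS server needs a restart; on Fedora with systemd that should be something along these lines:

sudo systemctl restart nfs-server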

After restarting the NFS server, Vagrant mounts the shared folders without problems again!

Happy hackin’!

Tags: fedora, nfs, udp.
Categories: Development, Software.

Bug triage, the paperwork of open source

by Robert Basic on May 24, 2017.

Everyone loves contributing patches to open source projects, adding new features. Some even like to write documentation.

Probably the least talked about way of contributing to open source is triaging issues (I have no data to back this statement, so I might be wrong!).

I do believe, however, that it can be the biggest help to project maintainers, because with issue triage out of the way, they are left dealing with the “bigger” problems of the project, such as fixing difficult bugs and implementing new features.

Ideally, a good issue report will include the version numbers of the affected projects, a good description of what the user tried to do, what they experienced, the expected and actual results, any logs or stack traces, and even the smallest possible test case that reproduces the issue being reported.

I say ideally, but that’s not always the case. Sometimes the report has a lot less detail, does not include version numbers, or any other information that would help identify the underlying cause of the issue. In those cases someone needs to go through the reported issues and ask for more information.

It’s paperwork

Issue triage boils down to going through the list of open issues for a project and making sure that the reports include as much useful information as possible. If the reporter hasn’t provided everything needed, we should ask them for more details.

If the initial report includes just enough information to start investigating, we can do that as well. Start digging into the codebase and try to figure out what’s going on. If the project has automated tests, we can use them to get a better picture of the issue, and maybe even provide a failing test case to the maintainers. Fun fact: this is how I started with unit tests and test-driven development - by submitting failing test cases to projects.
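
As an illustration, a failing test case attached to a report can be as small as this sketch (the Acme\SomeParser class and the bug it supposedly has are made up for the example):

<?php
use PHPUnit\Framework\TestCase;

class ParserBugTest extends TestCase
{
    public function testParseKeepsTheLastElement()
    {
        // Hypothetical bug report: SomeParser::parse() drops the last
        // element of a comma separated string.
        $parser = new \Acme\SomeParser();

        $result = $parser->parse('a,b,c');

        // If the bug is real, this fails on the maintainers' machine too,
        // giving them a reproducible starting point.
        $this->assertSame(['a', 'b', 'c'], $result);
    }
}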

When we deem that we have enough information, we can try to reproduce the issue, and confirm or deny its validity.

Some issues are not really issues, but a case of a misconfigured library, or of documentation not being read fully. In those cases the solution is not to leave an “RTFM” comment and close the issue. Asking if they have read pages X or Y is a much better approach. It might be that the documentation is not detailed or clear enough, so we need to update our docs.

Sometimes it’s the lack of documentation that is the real issue.

Once we have enough information, we can leave a comment for the maintainers saying whether or not we managed to reproduce the bug. From there the maintainers can take over and deal with the issue as they see fit. Or we could attempt to write a patch and fix it ourselves.

Bug? Feature? Support?

Users will open all kinds of reports. There will be issue reports, there will be feature requests, and there will be questions asking for support.

Deciding which is which is also a part of issue triage. Label them accordingly, so maintainers and contributors will have an easier time filtering them.

If you are familiar with how the software works, you might provide an answer to a question and help the user, again taking some of the load off the maintainers.

No experience required

One of the best things for me about issue triage is that we don’t need to have experience with the project, let alone be experts in using it. Most of this is communicating with others, asking for more feedback, and making sure that the people who can decide on the reports can do so with the least effort required. Of course, having experience does help, but that’s the way with everything in life, I guess.

Besides, this is also a great opportunity to learn more about the project and the ecosystem around it.

While the work is not grandiose, it will help you get better at communicating, it will help the project move forward just a little bit faster, and it is a great way to contribute to open source projects.

Happy hackin’!

P.S.: Sometimes if you wait long enough, the reporter won’t even remember what the issue was, and they’ll just close the issue.

Tags: bugs, contributions, issues, open source, triage.
Categories: Blablabla, Software.

Everybody knows that

by Robert Basic on May 08, 2017.

Back in December last year, Matthew Turland published a blog post asking “Why aren’t you speaking?”

It made me think.

What I realised is that I have always had this feeling that everybody already knows what I know.

Is that part of impostor syndrome?

I don’t know. I really don’t feel like an impostor. I know what I know, I’m perfectly fine accepting that I don’t know everything… but then there’s this feeling that everybody else knows what I know. It’s a strange feeling, I’m not even sure if I can explain it properly.

This also led me to realise why I don’t blog more often. I like blogging. I like writing. I don’t consider myself a good writer, but with English being my third language, mostly self-taught, I think I do quite alright.

It’s the same thing as with me not speaking at a conference or a user group — everybody knows that.

After doing some more thinking on this subject, there’s only one logical result — it is not possible for everyone to know what I already know. It’s just not possible.

I have learned, and still am learning, from other people, by either reading their blogs, or hearing them talk, or looking at their answers on StackOverflow, or digging through their code on GitHub… Surely there are others out there who can learn a thing or two from me.

I also “agreed” with myself that not every blog post needs to be an essay, that it’s OK to publish a couple of short paragraphs, quickly writing down the things going around in my mind.

With those thoughts, with that kind of a mindset, I set out to start blogging again. Since December, since Matthew’s post, I blogged 20 times. I don’t think I have written so many posts in the past 4 years.

Oh, and I gave a talk on two different occasions as well.

Thanks Matthew.

Tags: about, blog, blogging.
Categories: Blablabla.

Complex argument matching in Mockery

by Robert Basic on May 08, 2017.

This past weekend I did some issue maintenance and bug triage on Mockery. One thing I noticed going through all these issues is that people were surprised when learning about the \Mockery::on() argument matcher. I know Mockery’s documentation isn’t the best documentation out there, but this still is a documented feature.

First of all, Mockery supports validating arguments we pass when calling methods on a mock object. This helps us expect a method call with one (set of) argument(s), but not with another. For example:

<?php
$mock = \Mockery::mock('AClass');

$mock->shouldReceive('doSomething')
    ->with('A string')
    ->once();

$mock->shouldReceive('doSomething')
    ->with(42)
    ->never();

This will tell Mockery that the doSomething method should receive a call with A string as an argument, once, but never with the number 42 as an argument.

Nice and simple.

But things are not always so simple. Sometimes they are more complicated and complex.

When we need to do a more complex argument matching for an expected method call, the \Mockery::on() matcher comes in really handy. It accepts a closure as an argument and that closure in turn receives the argument passed in to the method, when called. If the closure returns true, Mockery will consider that the argument has passed the expectation. If the closure returns false, or a “falsey” value, the expectation will not pass.

I have used the \Mockery::on() matcher in various scenarios — validating an array argument based on multiple keys and values, complex string matching… and every time it was invaluable. Though, now that I think back, the older the codebase, the higher the usage frequency was. Oh, well.
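
Just to illustrate the string matching scenario, a sketch could look something like this (the logger mock and the log message format are made up for this example):

<?php
$loggerMock = \Mockery::mock('Logger');

$loggerMock->shouldReceive('log')
    ->once()
    ->with(\Mockery::on(function ($message) {
        // Pass the expectation only if the message looks like
        // "Published post #123", whatever the actual ID is.
        return preg_match('/^Published post #\d+$/', $message) === 1;
    }));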

For a fuller example, say we have the following code. It doesn’t do much: it publishes a post by setting the published flag in the database to 1 and the published_at field to the current date and time:

<?php
namespace Service;

class Post
{
    /** @var object a model with a save() method */
    private $model;

    public function __construct($model)
    {
        $this->model = $model;
    }

    public function publishPost($id)
    {
        $saveData = [
            'post_id' => $id,
            'published' => 1,
            'published_at' => gmdate('Y-m-d H:i:s'),
        ];

        $this->model->save($saveData);
    }
}

In a test we would mock the model and set some expectations on the call to the save() method:

<?php
$postId = 42;

$modelMock = \Mockery::mock('Model');
$modelMock->shouldReceive('save')
    ->once()
    ->with(\Mockery::on(function ($argument) use ($postId) {
        $postIdIsSet = isset($argument['post_id']) && $argument['post_id'] === $postId;
        $publishedFlagIsSet = isset($argument['published']) && $argument['published'] === 1;
        $publishedAtIsSet = isset($argument['published_at']);

        return $postIdIsSet && $publishedFlagIsSet && $publishedAtIsSet;
    }));

$service = new \Service\Post($modelMock);
$service->publishPost($postId);

// verifies that the set expectations were met
\Mockery::close();

The important part of the example is inside the closure we pass to the \Mockery::on() matcher. The $argument is actually the $saveData argument the save() method gets when it is called. We check for a couple of things in this argument:

  • the post ID is set, and is the same as the post ID we passed in to the publishPost() method,
  • the published flag is set, and is 1, and
  • the published_at key is present.

If any of these requirements is not satisfied, the closure will return false, the method call expectation will not be met, and Mockery will throw a NoMatchingExpectationException.

Happy hackin’!

Tags: arguments, matching, mockery, mocking, php, testing.
Categories: Development, Programming, Software.

Open source taught me how to work with legacy code

by Robert Basic on April 28, 2017.

Contributing to open source projects has many benefits — you learn and you teach, you can make friends or find business partners, you might get a chance to travel. You might even get to keynote a conference, like Gary did.

Contributing to open source projects was the best decision I made in my professional career. Just because I contributed to, and blogged about Zend Framework, I ended up working and consulting for a company for four and a half years. I learned a lot during that time.

What I realized just recently is that open source also taught me how to work with legacy code. It taught me how to find my way around an unknown codebase faster, where to look and what to look for when investigating an issue. Most importantly, it taught me how to react to legacy code.

Usually when people hear “legacy code”, they think of code that was written by a bunch of code monkeys who know nothing about writing good software. The past was stupid, the present is smart and wise, and will make everything better for the future. A long time ago, I was the same.

Today, my thinking and my approach is completely different.

I have the utmost respect for the programmer and their code that is before me. Rarely do I have the privilege of knowing the circumstances under which a piece of legacy code was written.

In many cases the original author of the code is not on the team any more, or they just don’t remember why some decision was made and a piece of code written in a certain way. It might be a hacky workaround for code that was written by someone even before their time on the project. Maybe they didn’t know better at the time, or maybe they indeed made an error and now it’s my bug to fix.

Whatever the reason is, the code is written, used, and it delivers business value. It requires maintenance, fixes, and improvements, and I welcome the challenges it brings.

Happy hackin’!

Tags: code, legacy, maintenance, open source.
Categories: Development, Programming, Software.