Mockery partial mocks

published on January 28, 2018.

In dealing with legacy code I often come across some class that extends a big base abstract class, and the methods of that class call methods on that big base abstract class that do an awful lot of things. I myself have written such classes and methods in the past. Live and learn.

One of the biggest problems with this kind of code is that it is pretty hard to test. The methods from the base class can return other objects, have side effects, do HTTP calls…

A typical example of this would be a base model class that has a getDb() method:

AbstractModel.php

<?php

abstract class AbstractModel
{
    protected $db = null;

    protected function getDb()
    {
        if ($this->db == null) {
            $db = Config::get('dbname');
            $user = Config::get('dbuser');
            $pass = Config::get('dbpass');
            $this->db = new PDO('mysql:host=localhost;dbname='.$db, $user, $pass);
        }
        return $this->db;
    }
}

which can be called in child classes to get access to the database connection:

ArticleModel.php

<?php

class ArticleModel extends AbstractModel
{
    public function listArticles()
    {
        $db = $this->getDb();
        $stmt = $db->query('SELECT * FROM articles');

        return $stmt->fetchAll();
    }
}

If we want to write unit tests for this listArticles() method, the best option would probably be to refactor the models so that the database connection can be injected either through the constructor, or with a setter method.
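For illustration, here is a minimal sketch of what that refactoring could look like with constructor injection. This is just one possible shape of it, not code from the actual project:

<?php

// A sketch only: ArticleModel gets the PDO connection injected through the
// constructor, so tests can simply pass in a mocked PDO instance.
class ArticleModel
{
    private $db;

    public function __construct(PDO $db)
    {
        $this->db = $db;
    }

    public function listArticles()
    {
        $stmt = $this->db->query('SELECT * FROM articles');

        return $stmt->fetchAll();
    }
}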

In case refactoring is not an option for whatever reason, what we can do is to create a partial mock of the ArticleModel using Mockery and then mock (well, stub to be more precise) out only the getDb() method that will always return a mocked version of the PDO class:

tests/ArticleModelTest.php

<?php

use Mockery\Adapter\Phpunit\MockeryTestCase;

class ArticleModelTest extends MockeryTestCase
{
    public function testListArticlesReturnsAnEmptyArrayWhenTheTableIsEmpty()
    {
        $stmtMock = \Mockery::mock('\PDOStatement');
        $stmtMock->shouldReceive('fetchAll')
            ->andReturn([]);
        $pdoMock = \Mockery::mock('\PDO');
        $pdoMock->shouldReceive('query')
            ->andReturn($stmtMock);

        // Create a partial mock of ArticleModel; since getDb() is a protected
        // method, Mockery also needs to be told to allow mocking it
        $articleModel = \Mockery::mock('ArticleModel')
            ->makePartial()
            ->shouldAllowMockingProtectedMethods();

        // Stub the getDb method on the ArticleModel
        $articleModel->shouldReceive('getDb')
            ->andReturn($pdoMock);

        // List all the articles
        $result = $articleModel->listArticles();
        $expected = [];

        $this->assertSame($expected, $result);
    }
}

When we tell Mockery to make a partial mock of a class, any method on that partially mocked class that has expectations set up will be mocked, while calls to all other methods are passed through to the real class. In other words, even though the ArticleModel is a partial mock, any time we call the listArticles() method Mockery passes that call to the original method, and only the calls to the getDb() method are mocked.

Using partial mocks should probably be a last resort and we should always aim to refactor code to make it easier to test, but there are cases when they can really help us in testing legacy code.

Happy hackin’!

Build and run Golang projects in VS Code

published on January 24, 2018.

I’ve been using VS Code for my Golang development needs for a few months now. Minor kinks here and there, nothing serious, and the development experience gets better with every update. I have also tried out IntelliJ IDEA as an editor, and one feature from IDEA that I’m missing in Code is the build-run-reload process. I thought to myself, that’s such a basic feature, it should be possible to have it.

And it is! VS Code Tasks to the rescue!

These tasks allow us to run different kinds of tools and, well, tasks inside VS Code.

Go to Tasks -> Configure Default Build Task and then select the “Create tasks.json file from template” in the little pop-up window, and after that select the “Others” option. This tasks.json file will live inside the .vscode directory.

For my overcomplicated d20 roller, which is my first website built with Golang, I have the following content for the tasks:

.vscode/tasks.json

{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "Build and run",
            "type": "shell",
            "command": "go build && ./d20",
            "group": {
                "kind": "build",
                "isDefault": true
            }
        }
    ]
}

What this one task does is run go build to build the project and then run the generated executable, which for this project is d20.

I guess that by giving go build a standardized output name with the -o flag this could be made more portable, so that the command part reads something like go build -o proj && ./proj, but I’m OK with this for now.

And now just hit Ctrl+Shift+b and Code will execute this “Build and run” task for us! Hooray! The terminal window in Code will pop up saying something like:

> Executing task: go build && ./d20 <

By going to http://localhost:8080 I can see that my little website is up and running. Cool.

But if we want to restart this task, running Ctrl+Shift+b again won’t work and Code will complain that the “Task is already active blablabla…”.

Looking at the Tasks menu, we can see that there’s a “Restart running task…” menu entry. Clicking that, it pops up a window with a list of running tasks. In this case there’s only one, our “Build and run” task. Clicking through the menu every time would be boring, so let’s add a keyboard shortcut for it.

Go to File -> Preferences -> Keyboard shortcuts (or just hit Ctrl+k Ctrl+s), search for “Restart running task” keybinding, and set it to whatever you like. I’ve set it to Ctrl+Alt+r.
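For reference, the end result in keybindings.json should look roughly like this; the command ID shown here, workbench.action.tasks.restartTask, is what the “Restart running task” entry maps to on my setup, so double-check it in the Keyboard shortcuts editor:

// keybindings.json
[
    {
        "key": "ctrl+alt+r",
        "command": "workbench.action.tasks.restartTask"
    }
]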

Finally, the flow is Ctrl+Shift+b to start the task for the very first time, code-code-code, Ctrl+Alt+r to re-build. Sweet. Now the only annoying bit is that I have to pick out that one running task from the list of running tasks when restarting, but I can live with that. For now.

Happy hackin’!

Static web pages in Hugo

published on January 24, 2018.

Last week I created a page on this site that holds all the talks I have prepared for meetups and conferences. As this site is powered by Hugo, the process wasn’t that straightforward. I want to write down the steps I did to make it easier in the future.

Oh, and when I say “static” in the title of this post, I mean pages whose content is not completely powered by a markdown content file.

I have tried different approaches, but what ended up working is the following.

In the configuration file, I added a new type of permalink:

config.toml

[permalinks]
    talks = "/talks/"

I created a new archetype under the archetypes directory of my theme:

themes/robertbasic.com/archetypes/talks.md

+++
draft = false
date = {{ .Date }}
title = "{{ replace .TranslationBaseName "-" " " | title }}"
+++

I have also created a new template file for that talks type, which actually has all the content I want to display, but is also capable of using the partials I have created before:

themes/robertbasic.com/layouts/static/list.html

{{ partial "header.html" . }}
...
<div class="post">
    <h1>
        Talks
    </h1>
    ...
</div>
<div class="column">
    {{ partial "sidebar.html" . }}
</div>
...
{{ partial "footer.html" . }}

And finally I created a markdown file for it with hugo new talks/page.md, leaving it as is.
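Running that hugo new command fills in the archetype, so the generated content/talks/page.md ends up holding only the front matter, roughly like this (the date shown here is just a placeholder):

+++
draft = false
date = 2018-01-24T12:00:00+01:00
title = "Page"
+++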

Happy hackin’!

Tags: hugo, blog, talks.
Categories: Blablabla, Software.

Free the Geek interview

published on January 12, 2018.

Last year (feels weird to say “last year”, when it was less than a month ago) Matthew asked me if I’d be willing to appear as a guest on his Free the Geek podcast. To have a short chat about running my own software consultancy, freelancing, public speaking, and whatever else pops up.

Matthew is a great host, I have no idea why I was so nervous while chatting. Maybe because he also interviewed people like Matthew, and Matthew, and Chris, and Chris, all programmers that I learned and keep learning a lot from?

*wipes sweat off brow*

In the end I’m really happy that I did this, if nothing else, because now I’m even more determined to improve my public speaking skills, to talk more at local meetups, to submit to more conferences.

Anyway, have a listen to our chat, and make sure to subscribe to the podcast; I’m sure Matthew will have more great guests we can all learn from in the future.

Happy hackin’!

Tags: about, podcast, interview.
Categories: Blablabla.

Buffered vs. unbuffered channels in Golang

published on December 25, 2017.

As any newcomer to Golang and its ecosystem, I was eager to find out what the hubbub around these things called goroutines and channels is all about. I read the documentation and blog posts, watched videos, tried out some of the “hello world” examples, and even wasted a couple of days trying to solve the puzzles for day 18 from Advent of Code 2017 using goroutines, and failed spectacularly.

All this was just… doing stuff without actually understanding when, how, and why to use goroutines and channels. And without that basic understanding, most of my attempts at doing anything that resembled a useful example ended up with deadlocks. Lots and lots and lots of deadlocks.

Ugh. I decided to stop forcing myself to understand all of this concurrency thing and to figure it all out once I actually needed it.

A few days later, I start reading the description of the first puzzle of day 20. The gist of the problem is that we have many particles “flying” around on three axes: X, Y, and Z. Each particle has a starting position, a starting velocity, and a constant acceleration. When a particle moves, the acceleration increases the velocity of the particle, and that new velocity determines the next position of the particle. Our task is to find the particle that will be the closest to the center point after every particle has moved the same number of times.

Then there’s this one sentence in the description:

Each tick, all particles are updated simultaneously.

Could it… could it be? A puzzle where goroutines can be applied to do what they were meant to be doing? Only one way to find out, and that’s to try and use goroutines to solve the puzzle.

Prior knowledge

Up until this point I knew the following things about goroutines and channels:

  • goroutines are started with the go keyword,
  • we must wait on the goroutines to be done, otherwise they fall out of scope and things sort of leak,
  • the waiting can be done with either sync.WaitGroup or with a “quit” channel,
  • I prefer the sync.WaitGroup approach as it is easier for me to follow it,
  • goroutines communicate through channels, by sending and receiving a specific data type on them,
  • there are buffered and unbuffered channels, but other than that it’s pretty much ¯\_(ツ)_/¯
  • I think it’s not possible to send and receive to the same channel in the same goroutine?

Setting up the code

From the puzzle’s description we know we have a bunch of particles, every particle has an ID from zero to N, every particle has a position, velocity, and acceleration, and we want to know the particle’s distance from the center. I went ahead and made a struct like this:

type Particle struct {
	id int   // particle ID, zero to N
	p  coord // position
	v  coord // velocity
	a  coord // acceleration
	d  int   // distance from the center
}

The coord is again a struct that holds the XYZ coordinates and has two methods on it, nothing fancy. When the particle moves, we add the acceleration to the velocity and the velocity to the position. Again, nothing interesting there.
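The coord type isn’t shown here, but a rough sketch of it and of the particle’s move() method would be something along these lines; the method names and the Manhattan distance helper are my assumptions, not necessarily what the actual solution uses:

// A sketch of the coord type and the Particle.move() method described above.
type coord struct {
	x, y, z int
}

// add adds another coordinate to this one, component by component.
func (c coord) add(o coord) coord {
	return coord{c.x + o.x, c.y + o.y, c.z + o.z}
}

// distance is the Manhattan distance from the center point.
func (c coord) distance() int {
	return abs(c.x) + abs(c.y) + abs(c.z)
}

func abs(n int) int {
	if n < 0 {
		return -n
	}
	return n
}

// move adds the acceleration to the velocity, the velocity to the position,
// and updates the particle's distance from the center.
func (p *Particle) move() {
	p.v = p.v.add(p.a)
	p.p = p.p.add(p.v)
	p.d = p.p.distance()
}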

That’s pretty much all the setup I had.

Time to move

With all the prior knowledge I had, my idea was to move every particle 10000 times, each particle in a separate goroutine, send every particle over a channel to somewhere where I can compare them and find the one that’s the closest to the center, and use sync.WaitGroup to orchestrate the waiting on the goroutines to move the particles. The 10k is just a random number that seemed high enough to give all the particles enough time to move.

The first iteration looked something like the following:

func closest(particles []Particle) Particle {
	var cp Particle
	var wg sync.WaitGroup
	wg.Add(len(particles))
	pch := make(chan Particle)

	for _, particle := range particles {
		go move(particle, 10000, pch, &wg)
	}

	wg.Wait()
	close(pch)
    return cp
}

cp will hold the closest particle, the wg does all the waiting, and the particles get sent over the pch channel.

For every particle in the slice of particles I tell it to “split out” into a goroutine and move that Particle 10000 times. The move function moves the particle for the given number of iterations, sends it over the pch channel once we’re done moving it, and tells the wait group that it’s done:

func move(particle Particle, iterations int, pch chan Particle, wg *sync.WaitGroup) {
	for i := 0; i < iterations; i++ {
		particle.move()
	}
	pch <- particle
	wg.Done()
}

So far this seems like something that could work. Running this code as is results in “fatal error: all goroutines are asleep - deadlock!”. OK, it sort of makes sense that it fails: we only send to the particle channel, we never receive from it.

Send and you shall receive

So… let’s receive from it I guess:

func closest(particles []Particle) Particle {
    // snip...
	for _, particle := range particles {
		go move(particle, 10000, pch, &wg)
	}

    p := <-pch
    log.Println(p)

	wg.Wait()
	close(pch)
    return cp
}

Surprise! “fatal error: all goroutines are asleep - deadlock!”. Errr…

Oh, right, can’t send and receive on a channel in the same goroutine! Even though the receiving here is not in the same goroutine as the sending, let’s see what happens if we do the receiving in a separate goroutine:

func closest(particles []Particle) Particle {
    // snip...
	for _, particle := range particles {
		go move(particle, 10000, pch, &wg)
	}

    go func() {
        p := <-pch
        log.Println(p)
    }()

	wg.Wait()
	close(pch)
    return cp
}

Guess what? “fatal error: all goroutines are asleep - deadlock!” This should totally be possible, I’m doing something wrong!

Matej mentioned something on Twitter the other day about buffered channels being able to send and receive inside the same goroutine. Let’s try a buffered channel and see if that works.

Maybe buffers is what we need after all

When creating the pch channel, we pass in a second argument to make: the size of the buffer for the channel. No idea what a good size for it would be, so let’s make it the size of the particles slice:

func closest(particles []Particle) Particle {
    // snip...
	pch := make(chan Particle, len(particles))

	for _, particle := range particles {
		go move(particle, 10000, pch, &wg)
	}

    p := <-pch
    log.Println(p)

	wg.Wait()
	close(pch)
    return cp
}

Run it and… A-ha! One particle gets logged! There’s no comparing of particles in there yet, so it must be the first particle that was sent on that channel. OK, let’s range over it, that’ll do it:

func closest(particles []Particle) Particle {
    // snip...
	for _, particle := range particles {
		go move(particle, 10000, pch, &wg)
	}

    for p := range pch {
        log.Println(p)
    }

	wg.Wait()
	close(pch)
    return cp
}

“fatal error: all goroutines are asleep - deadlock!” motherf… gah! Fine, I’ll loop over all the particles and receive from the channel, see what happens then:

func closest(particles []Particle) Particle {
    // snip...
	for _, particle := range particles {
		go move(particle, 10000, pch, &wg)
	}

    for i := 0; i < len(particles); i++ {
		p := <-pch
		log.Println(p)
	}

	wg.Wait()
	close(pch)
    return cp
}

Bingo! All particles logged, no deadlocks. Throw in a closure to find the closest particle and we’re done!

// snip...
    var findcp = func(p Particle) {
        if cp.d == 0 || p.d < cp.d {
            cp = p
        }
    }
    for i := 0; i < len(particles); i++ {
		p := <-pch
		findcp(p)
	}
// snip...

For my input I get that the answer is Particle number 243, submit it to Advent of Code and it’s the correct answer! I did it! I used goroutines and channels to solve one puzzle!

But… how?

I made it work, but I still didn’t understand how and why this works. Time to play around with it some more.

I’ve seen code examples using range to range over a channel and do something with whatever is received from it. In countless blog posts and tutorials, I’ve never seen a “regular” for loop that receives from the channel inside it. There must be a nicer way to achieve the same thing. Re-reading a couple of the articles, I spot the error I made in the range approach: I was closing the channel too late:

func closest(particles []Particle) Particle {
    // snip...
	for _, particle := range particles {
		go move(particle, 10000, pch, &wg)
	}

	wg.Wait()
	close(pch)

    for p := range pch {
        findcp(p)
    }

    return cp
}

Particle number 243! After some thinking about it, it makes sense, or at least this is how I made it make sense:

Lesson number 1 — golang’s range doesn’t “like” open-ended things, it needs to know where the thing we range over begins and where it ends. By closing the channel we tell range that it’s OK to range over the pch channel, as nothing will send to it any more.

To put it another way, if we leave the channel open when we try to range over it, range can’t possibly know when the next value will come in on the channel. It might happen immediately, it might happen in 2 minutes.

Next step is to try and make it work using unbuffered channels. If I just make this buffered channel an unbuffered one, but otherwise leave the working code as-is, it blows up with a deadlock. Something something same goroutine.

Back to the example where I tried to receive in a separate goroutine:

    // snip..
    go func() {
        p := <-pch
        log.Println(p)
    }()
    // snip...

Ah, this goroutine runs only once. It even prints out a single particle at the beginning; I just didn’t look closely enough the first time, all I saw was the deadlock error message, and I moved on. I should probably loop over the channel somehow and receive from it in that loop.

I remembered reading something about a weird looking for/select loop, let’s try writing one of those:

    // snip..
    go func() {
        for {
            select {
            case p := <-pch:
                findcp(p)
            }
        }
    }()
    // snip...

Particle number 243! Yey! And again, after giving it some thought, this is how I explained this unbuffered channel version to myself:

Lesson number 2 — an unbuffered channel can’t hold on to values (yah, it’s right there in the name “unbuffered”), so whatever is sent to that channel must be received by some other code right away. That receiving code must be in a different goroutine because one goroutine can’t do two things at the same time: it can’t send and receive; it must be one or the other.

So far, whatever I threw at these two lessons learned, they’re standing their ground.

Buffered or unbuffered?

Armed with these two bits of new knowledge, when to use buffered and when to use unbuffered channels?

I guess buffered channels can be used when we want to aggregate data from goroutines and then do some further processing with that data, either in the current goroutine or in new ones. Another possible use case would be when we can’t or don’t want to receive on the channel at that exact moment, so we let the senders fill up the buffer on the channel until we can process it.

And I guess in any other case, use an unbuffered channel.
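To make that difference concrete, here is a small self-contained sketch, separate from the puzzle code, showing both: the buffered channel lets the sends complete before anything is received, while the unbuffered channel needs receivers running on the other side:

package main

import (
	"fmt"
	"sync"
)

func main() {
	// Buffered: all three sends complete right away because the buffer
	// has room for them; we drain the channel afterwards.
	buffered := make(chan int, 3)
	for i := 1; i <= 3; i++ {
		buffered <- i
	}
	close(buffered)
	for v := range buffered {
		fmt.Println("buffered:", v)
	}

	// Unbuffered: every send blocks until someone receives it, so the
	// sends happen in their own goroutines while main receives below.
	unbuffered := make(chan int)
	var wg sync.WaitGroup
	wg.Add(3)
	for i := 1; i <= 3; i++ {
		go func(n int) {
			defer wg.Done()
			unbuffered <- n
		}(i)
	}
	go func() {
		wg.Wait()
		close(unbuffered)
	}()
	for v := range unbuffered {
		fmt.Println("unbuffered:", v)
	}
}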

And that’s pretty much all I learned from this one Advent of Code puzzle. Here’s my final solution using buffered channels and here’s the one using unbuffered channels. I even figured out other ways to make it work while writing this blog post, but those solutions all came from understanding how channels work.

If you spot any errors in either my working code examples or in my reasoning, please let me know. I want to know better. Thanks!

Happy hackin’!
