Archive for the 'Development' category

Testing Symfony commands with Behat

published on January 18, 2019.

The other day I was creating a Symfony command that will be periodically executed by a cronjob. I decided to write a Behat test for it, to see what a test like that would look like. Plus, just because it is executed by the system from a command line, doesn’t mean we can skimp on the business requirements.
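We need Symfony, Behat, and the Behat Symfony2 extension. Assuming a Composer-based project, pulling them in as dev dependencies, together with the webmozart/assert library used later for assertions, might look something like this:

composer require --dev behat/behat behat/symfony2-extension webmozart/assert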

In the behat.yml file we configure the Behat extension to boot up the Kernel for us and pass it in as a constructor argument to our Behat Context:

./behat.yml

default:
  extensions:
    Behat\Symfony2Extension:
      kernel:
        bootstrap: features/bootstrap/bootstrap.php
        class: App\Kernel

  suites:
    system:
      paths:
        - '%paths.base%/features/system.feature'
      contexts:
        - SystemContext:
            kernel: '@kernel'

We enable and configure the Behat Symfony2 extension, and tell Behat that the system suite will use the system.feature feature file and the SystemContext context, which takes one constructor argument: the kernel. I don’t like to put everything into the default FeatureContext for Behat, but rather split different contexts into, well, different contexts. That’s why I created the separate SystemContext.

The bootstrap.php file is created when installing the extension (at least, it was created for me as I installed it using Symfony Flex):

./features/bootstrap/bootstrap.php

<?php
putenv('APP_ENV='.$_SERVER['APP_ENV'] = $_ENV['APP_ENV'] = 'test');
require dirname(__DIR__, 2).'/config/bootstrap.php';

The system.feature file doesn’t have much, just an example scenario:

./features/system.feature

Feature: System executed commands

  Scenario: Behat testing a Symfony command
    Given I am the system
    When I greet the world
    Then I should say "Hello World"
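The app:hello-world command itself isn’t shown in this post; a minimal sketch of what it could look like, with an assumed class name and location, would be something like:

./src/Command/HelloWorldCommand.php

<?php

namespace App\Command;

use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class HelloWorldCommand extends Command
{
    protected function configure()
    {
        $this->setName('app:hello-world');
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        // write() instead of writeln(): the scenario asserts the exact
        // string "Hello World", without a trailing newline.
        $output->write('Hello World');
    }
}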

The SystemContext

The SystemContext file is where it gets interesting:

./features/bootstrap/SystemContext.php

<?php

use App\Kernel;
use Behat\Behat\Context\Context;
use Symfony\Bundle\FrameworkBundle\Console\Application;
use Symfony\Component\Console\Input\ArgvInput;
use Symfony\Component\Console\Output\BufferedOutput;
use Webmozart\Assert\Assert;

class SystemContext implements Context
{
    /**
     * @var Application
     */
    private $application;

    /**
     * @var BufferedOutput
     */
    private $output;

    public function __construct(Kernel $kernel)
    {
        $this->application = new Application($kernel);
        $this->output = new BufferedOutput();
    }

It implements the Behat Context interface so that Behat recognizes it as a context.

In the constructor we create a new console Application with the Kernel that the Behat Symfony2 extension created for us. We will use this application instance to run the command that we are testing. For all intents and purposes, this application instance acts the same as the application instance that gets created in the bin/console script that we usually use to run Symfony commands.
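For comparison, the heart of a typical Symfony 4 bin/console script boils down to something like this (a simplified sketch, not the full script):

#!/usr/bin/env php
<?php

use App\Kernel;
use Symfony\Bundle\FrameworkBundle\Console\Application;

require dirname(__DIR__).'/config/bootstrap.php';

$kernel = new Kernel($_SERVER['APP_ENV'], (bool) $_SERVER['APP_DEBUG']);
$application = new Application($kernel);
$application->run();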

We also create a BufferedOutput in the constructor that will hold the output the command produces, which we can later use to assert whether the command produced the desired output.

Behat step definitions

The steps are defined like so (they live in the same SystemContext.php file as the previous example):

./features/bootstrap/SystemContext.php

<?php

class SystemContext implements Context
{
    /**
     * @Given I am the system
     */
    public function iAmTheSystem()
    {
        Assert::same('cli', php_sapi_name());
    }

    /**
     * @When I greet the world
     */
    public function iGreetTheWorld()
    {
        $command = 'app:hello-world';
        $input = new ArgvInput(['behat-test', $command, '--env=test']);
        $this->application->doRun($input, $this->output);
    }

    /**
     * @Then I should say :sentence
     */
    public function iShouldSay($sentence)
    {
        $output = $this->output->fetch();

        Assert::same($sentence, $output);
    }

I am the system

In the first Behat step we assert that the PHP SAPI is cli. Not sure how it could be anything else in this case, but let’s have that in there.

I greet the world

The second Behat step is where the fun happens: this is where we run the command. The ArgvInput takes an array of parameters from the CLI in the argv format. In the case of bin/console it ends up being populated from $_SERVER['argv']. In this case, though, we need to populate it on our own.

The first element is always the name that was used to run the script, and it ends up being just a “placeholder” here, hence the behat-test value. We can put anything in there, really.

The second element is the command that we want to run: app:hello-world. It is the same string we would use when executing the command through bin/console. Because we created an instance of the Application in the constructor, Symfony will know exactly which command that is.

The third element is the --env=test option, which tells Symfony to run the command in the test environment.

Once we have the input ready, we tell the application to run using the doRun method, passing in the input and the output (which is a BufferedOutput). We use doRun instead of run because run, by default, calls PHP’s exit once the command finishes, which would take the whole Behat run down with it.

I should say :sentence

In the third Behat step we fetch the output and assert that it is the same as the output we expected it to be.

Make it reusable

To make it a bit more reusable, the running of the command in the iGreetTheWorld step can be extracted to a private method so that it all reads a little bit nicer. The final result looks something like this:

./features/bootstrap/SystemContext.php

<?php

use App\Kernel;
use Behat\Behat\Context\Context;
use Symfony\Bundle\FrameworkBundle\Console\Application;
use Symfony\Component\Console\Input\ArgvInput;
use Symfony\Component\Console\Output\BufferedOutput;
use Webmozart\Assert\Assert;

class SystemContext implements Context
{
    /**
     * @var Application
     */
    private $application;

    /**
     * @var BufferedOutput
     */
    private $output;

    public function __construct(Kernel $kernel)
    {
        $this->application = new Application($kernel);
        $this->output = new BufferedOutput();
    }

    /**
     * @Given I am the system
     */
    public function iAmTheSystem()
    {
        Assert::same('cli', php_sapi_name());
    }

    /**
     * @When I greet the world
     */
    public function iGreetTheWorld()
    {
        $this->runCommand('app:hello-world');
    }

    /**
     * @Then I should say :sentence
     */
    public function iShouldSay($sentence)
    {
        $output = $this->output->fetch();

        Assert::same($sentence, $output);
    }

    private function runCommand(string $command)
    {
        $input = new ArgvInput(['behat-test', $command, '--env=test']);
        $this->application->doRun($input, $this->output);
    }
}

Happy hackin’!

Versions used for examples: Symfony 4.1, Behat 3.4, Behat Symfony2 Extension 2.1.
Tags: symfony, behat, commands, bdd, testing, php.
Categories: Programming, Development.

Ignore Errors in Makefile

published on December 24, 2018.

I’ve been using Make and Makefiles quite extensively over the past few months in my personal projects. One of the make targets I have runs the Behat tests, and it does three things:

  • run Doctrine migrations to create the database,
  • run the Behat tests,
  • run Doctrine migrations to tear down the database.

In the Makefile, that looks like this:

.PHONY: e2e-tests
e2e-tests:
	docker-compose exec php-fpm /application/bin/console doctrine:migrations:migrate -n -q --env=test
	docker-compose exec php-fpm /application/vendor/bin/behat --verbose
	docker-compose exec php-fpm /application/bin/console doctrine:migrations:migrate first -n -q --env=test

This works great, but today I ended up chasing a weird data-related bug which at first I didn’t tie to the Makefile. Whenever there was a failure in one of the Behat tests, the Doctrine migration to tear down the database never ran, leaving bad data lying around in the test database.

The solution is to tell Make to ignore errors from the Behat recipe line. By prefixing that line with a dash (-), Make doesn’t stop when the Behat tests fail and goes on to tear down the database:

.PHONY: e2e-tests
e2e-tests:
	docker-compose exec php-fpm /application/bin/console doctrine:migrations:migrate -n -q --env=test
	- docker-compose exec php-fpm /application/vendor/bin/behat --verbose
	docker-compose exec php-fpm /application/bin/console doctrine:migrations:migrate first -n -q --env=test

Make will still print out that the recipe for the target failed and that the error was ignored:

Makefile:24: recipe for target 'e2e-tests' failed
make: [e2e-tests] Error 1 (ignored)
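For the record, GNU Make can also ignore errors more broadly: the -i (or --ignore-errors) flag ignores errors in every recipe of every target, and the .IGNORE special target does the same for the targets listed as its prerequisites. The per-line dash is the most surgical option, but the .IGNORE alternative would look like this:

.IGNORE: e2e-tests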

Happy hackin’!

Tags: make, makefile, errors.
Categories: Development.

Legacy code is 3rd party code

published on July 19, 2018.

Within the TDD community there’s a piece of advice saying that we shouldn’t mock types we don’t own. I believe it is good advice and I do my best to follow it. Of course, there are people who say that we shouldn’t mock in the first place. Whichever TDD camp you’re in, I think this “don’t mock what you don’t own” advice has an even better piece of advice hidden in it, one that people often overlook because they see the word “mock” in it and go full berserk.

This hidden advice is that we should create interfaces, clients, bridges, adapters between our application and the 3rd party code we use. Whether we create mocks from those interfaces in our tests doesn’t even matter that much. What matters is that we create and use these interfaces so that our application is better decoupled from the 3rd party code. A classic example of this in the PHP world would be to create and use an HTTP client within the application that uses the Guzzle HTTP client under the hood, instead of using the Guzzle client directly in the application code.

Why? Well, for one, Guzzle has a much bigger public API than what your application (in most cases) needs. Creating a custom HTTP client that exposes only the required subset of Guzzle’s API will limit what the application developers can do with it. If Guzzle’s API changes in the future, we’ll have to change how we call it in only one place, instead of trying to make the required changes in the entire application and hope that we broke nothing. Two very good reasons and I haven’t even mentioned mocks! gasp
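To make that concrete, such a client could be as small as this (a sketch; the interface name and the single method are made up for illustration):

src/Http/HttpClient.php

<?php
namespace App\Http;
interface HttpClient
{
    public function get(string $uri): string;
}

src/Http/GuzzleHttpClient.php

<?php
namespace App\Http;
class GuzzleHttpClient implements HttpClient
{
    private $guzzle;
    public function __construct(\GuzzleHttp\Client $guzzle)
    {
        $this->guzzle = $guzzle;
    }

    public function get(string $uri): string
    {
        // Expose only the subset of Guzzle's API the application actually needs.
        return (string) $this->guzzle->request('GET', $uri)->getBody();
    }
}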

I don’t think this is that hard to achieve. 3rd party code lives in a separate folder from our application code, usually in vendor/ or library/. It also has a different namespace and naming convention than our application code. It is fairly easy to spot 3rd party code, and with a bit of discipline we can make our application code less dependent on 3rd parties.

What if we apply the same rule to legacy code?

What if we start looking at our legacy code the same way we look at the 3rd party code? This might be difficult to do, or even counterproductive, if the legacy code is in a maintenance-only mode, where we only fix bugs and tweak bits and pieces of it. But if we are writing new code that is (re)using legacy code, I believe we should look at legacy code the same way we look at 3rd party code, at least from the perspective of the new code.

If at all possible, legacy and new code should live in different folders and use different namespaces. It’s been a long time since I last saw a system without autoloading, so this is doable. But instead of just blindly using legacy code within the new code, what if we create interfaces for the legacy code and use those in the new code?

Legacy code is all too often full of “god” objects that do way too many things. They reach out to global state, have public properties or magic methods that expose privates as if they were public, have static methods that are just so convenient to call from anywhere and everywhere. Well, guess what? That convenience got us in this situation in the first place.

Another, maybe even bigger, issue with legacy code is that we are so ready to change it, fix it, hack it, because we don’t see it as 3rd party code. What do we do when we see a bug or want to add a new feature to 3rd party code? We open up an issue and/or create a pull request. What we don’t do is go inside the vendor/ folder and make our changes there. Why, then, would we do that to legacy code, crossing our fingers and hoping we didn’t break anything?

Instead of blindly using legacy code within new code, let’s try writing interfaces that will expose only the required subset of the legacy “god” object’s API. Say we have a User object in the legacy code that knows everything about everyone. It knows how to change emails and passwords, how to promote forum members to moderators, how to update a user’s public profile, set notification settings, how to save itself, and so much more.

src/Legacy/User.php

<?php
namespace Legacy;
class User
{
    public $email;
    public $password;
    public $role;
    public $name;

    public function promote($newRole)
    {
        $this->role = $newRole;
    }

    public function save()
    {
        db_layer::save($this);
    }
}

It’s a crude example, but it shows the problems: every property is public and can easily be changed to any value, we have to remember to explicitly call the save method after any change for the changes to persist, and so on.

Let’s limit and prohibit ourselves from reaching out to those public properties and from having to guess how the legacy system works every time we want to promote a user:

src/LegacyBridge/Promoter.php

<?php
namespace LegacyBridge;
interface Promoter
{
    public function promoteTo(Role $role);
}

src/LegacyBridge/LegacyUserPromoter.php

<?php
namespace LegacyBridge;
class LegacyUserPromoter implements Promoter
{
    private $legacyUser;
    public function __construct(Legacy\User $user)
    {
        $this->legacyUser = $user;
    }

    public function promoteTo(Role $newRole)
    {
        $newRole = (string) $newRole;
        // I guess you thought $role in legacy is a string? Guess again!
        $legacyRoles = [
            Role::MODERATOR => 1,
            Role::MEMBER => 2,
        ];
        $newLegacyRole = $legacyRoles[$newRole];
        $this->legacyUser->promote($newLegacyRole);
        $this->legacyUser->save();
    }
}
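The Role value object is not shown here; judging by how it is used (the string cast, the MODERATOR and MEMBER constants, and the isMember() and isModerator() checks later on), a minimal sketch could be:

src/LegacyBridge/Role.php

<?php
namespace LegacyBridge;
class Role
{
    const MODERATOR = 'moderator';
    const MEMBER = 'member';

    private $role;

    private function __construct(string $role)
    {
        $this->role = $role;
    }

    public static function moderator(): self
    {
        return new self(self::MODERATOR);
    }

    public static function member(): self
    {
        return new self(self::MEMBER);
    }

    public function isModerator(): bool
    {
        return $this->role === self::MODERATOR;
    }

    public function isMember(): bool
    {
        return $this->role === self::MEMBER;
    }

    public function __toString(): string
    {
        return $this->role;
    }
}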

Now when we want to promote a User from the new code, we use this LegacyBridge\Promoter interface, which deals with all the details of promoting a user within the legacy system.
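From the new code, that might look something like this (the wiring is illustrative):

<?php
use LegacyBridge\LegacyUserPromoter;
use LegacyBridge\Role;

$promoter = new LegacyUserPromoter($legacyUser);
$promoter->promoteTo(Role::moderator());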

Change the language of the legacy

An interface for the legacy code gives us an opportunity to improve the design of the system. An interface can free us from any potential naming mistakes we made in the legacy code. The process of changing a user’s role from a moderator to a member is not a “promotion”, but rather a “demotion”. Nothing stops us from creating two interfaces for these two different things, even though the legacy code treats them as the same:

src/LegacyBridge/Promoter.php

<?php
namespace LegacyBridge;
interface Promoter
{
    public function promoteTo(Role $role);
}

src/LegacyBridge/LegacyUserPromoter.php

<?php
namespace LegacyBridge;
class LegacyUserPromoter implements Promoter
{
    private $legacyUser;
    public function __construct(Legacy\User $user)
    {
        $this->legacyUser = $user;
    }

    public function promoteTo(Role $newRole)
    {
        if ($newRole->isMember()) {
            throw new \Exception("Can't promote to a member.");
        }
        // In the legacy system, 1 is the moderator role.
        $legacyModeratorRole = 1;
        $this->legacyUser->promote($legacyModeratorRole);
        $this->legacyUser->save();
    }
}

src/LegacyBridge/Demoter.php

<?php
namespace LegacyBridge;
interface Demoter
{
    public function demoteTo(Role $role);
}

src/LegacyBridge/LegacyUserDemoter.php

<?php
namespace LegacyBridge;
class LegacyUserDemoter implements Demoter
{
    private $legacyUser;
    public function __construct(Legacy\User $user)
    {
        $this->legacyUser = $user;
    }

    public function demoteTo(Role $newRole)
    {
        if ($newRole->isModerator()) {
            throw new \Exception("Can't demote to a moderator.");
        }
        // In the legacy system, 2 is the member role.
        $legacyMemberRole = 2;
        $this->legacyUser->promote($legacyMemberRole);
        $this->legacyUser->save();
    }
}

Not that big of a change, yet the intent of the code is much clearer.

Now the next time you want to reach out to that legacy code and call some methods on it, try and make an interface for it. It might not be feasible, it might be too expensive to do. I know that static method on that god object is really easy to use and will get the job done much quicker, but at least consider this option. You might just improve the design of the new system you’re building a tiny little bit.

Happy hackin’!

Docker containers for PHP with PHPDocker.io

published on March 27, 2018.

This past weekend I was playing around on some pet projects and wanted to get up and running quickly. My initial reaction was to reach for a Vagrant box provisioned with Ansible. After all, that’s what I’ve been using for a really long time now.

Recently I’ve also been learning a bit more about Docker, so I figured this pet project would be a good opportunity to replace the standard Vagrant setup and go with Docker instead. When it comes to Docker, by now I believe I have a fairly good understanding of how it works and how it’s meant to be used in a development environment. I’ve learned a lot about it from Vranac, as well as poked around it on my own.

While trying to write a set of Docker files and Docker compose files for this project, I thought there must be an easier way to do this… And then I remembered that some time ago I came across PHPDocker.io, a generator of Docker environments for PHP projects. As they state on their website:

PHPDocker.io is a tool that will help you build a typical PHP development environment based on Docker with just a few clicks. It supports provisioning of the usual services (MySQL/MariaDB, Redis, Elasticsearch...), with more to come. PHP 7.1 is supported, as well as 7.0 and 5.6.

Click-click-click… done.

What I like about PHPDocker.io is that it takes a couple of clicks and filling out a couple of text fields to get a nice zip with all the things needed to get a project up and running. It has support for a “generic” PHP application, like Symfony 4, Zend Framework and Expressive, or Laravel, as well as for applications based on Symfony 2 or 3, or Silex. PHP versions supported range from 5.6 to 7.2, and a variety of extensions can be additionally enabled. Support for either MySQL, MariaDB, or Postgres is provided, as well as a couple of “zero-config” services like Redis or Mailhog.

The zip file comes with a phpdocker directory that holds the configurations for the specific containers such as the nginx.conf file for the nginx container. In the “root” of the zip there’s a single docker-compose.yml file which configures all the services we told PHPDocker we need:

docker-compose.yml

###############################################################################
#                          Generated on phpdocker.io                          #
###############################################################################
version: "3.1"
services:

    mysql:
      image: mysql:5.7
      container_name: test-mysql
      working_dir: /application
      volumes:
        - .:/application
      environment:
        - MYSQL_ROOT_PASSWORD=root
        - MYSQL_DATABASE=test
        - MYSQL_USER=test
        - MYSQL_PASSWORD=test
      ports:
        - "8082:3306"

    webserver:
      image: nginx:alpine
      container_name: test-webserver
      working_dir: /application
      volumes:
        - .:/application
        - ./phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
      ports:
        - "8080:80"

    php-fpm:
      build: phpdocker/php-fpm
      container_name: test-php-fpm
      working_dir: /application
      volumes:
        - .:/application
        - ./phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/7.2/fpm/conf.d/99-overrides.ini

Run docker-compose up in the directory where the docker-compose.yml file is located and it’ll pull and build the required images and containers, and start the services. The application will be accessible from the “host” machine at localhost:8080, as that’s the port I wanted exposed in this case. You can see that in the ports section of the webserver service.

One thing I noticed is that the mysql service doesn’t keep the data around once the container is removed, but that can be fixed by adding a new line to the volumes section of that service: ./data/mysql:/var/lib/mysql. This mounts a directory from the host into the container, so the data outlives the container. The mysql service definition should now read:

    mysql:
      image: mysql:5.7
      container_name: test-mysql
      working_dir: /application
      volumes:
        - .:/application
        - ./data/mysql:/var/lib/mysql
      environment:
        - MYSQL_ROOT_PASSWORD=root
        - MYSQL_DATABASE=test
        - MYSQL_USER=test
        - MYSQL_PASSWORD=test
      ports:
        - "8082:3306"

Other than this, I didn’t notice any issues (so far) with the environment.

Inside the phpdocker folder there’s also a README file that provides additional information on how to use the generated Docker environment: what services are available on which ports, a small “cheatsheet” for Docker compose, as well as some recommendations on how to interact with the containers.

Happy hackin’!

Connecting to MySQL 8

published on March 24, 2018.

I recently used PHPDocker.io to generate a set of Docker files for a pet project, and it had the option to use MySQL 8, so of course I went with that. The problem arose when I wanted to connect to the database on this MySQL 8 server.

I had the MySQL 5.7 client installed locally, and when trying to connect to the MySQL 8 server it complained about a missing authentication plugin:

ERROR 2059 (HY000): Authentication plugin 'caching_sha2_password' cannot be loaded:
/usr/lib64/mysql/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory

Turns out that in MySQL 8, caching_sha2_password is the default authentication plugin instead of mysql_native_password. This new authentication plugin is described in the documentation. I didn’t want to change anything in the MySQL docker image, so instead I decided to upgrade to the MySQL 8 client on my Fedora. First I removed all traces of MySQL I had:

sudo dnf remove mysql

Then I installed the RPM from MySQL:

sudo dnf install https://dev.mysql.com/get/mysql57-community-release-fc27-10.noarch.rpm

And finally installed the MySQL 8 client:

sudo dnf --enablerepo=mysql80-community install mysql-community-client

And now I can connect to the MySQL 8 server inside the Docker container:

mysql -P 8082 -h 127.0.0.1 --protocol=tcp -utest -p test
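For completeness, the change I avoided making on the server side would have been to switch the container back to the old authentication plugin; with the official mysql image that can be done by overriding the command in docker-compose.yml, along these lines:

    mysql:
      image: mysql:8.0
      command: --default-authentication-plugin=mysql_native_password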

Happy hackin’!

Tags: mysql, authentication.
Categories: Software, Development.