Planet Drupal

Drupal.org - aggregated feeds in category Planet Drupal

TimOnWeb.com: JQuery.cookie in Drupal 7

5 August 2016 - 5:06am

A quick tip for all Drupalistas out there: if you want to use jQuery.cookie in your project, you don't actually have to download and install the library. jQuery.cookie ships with Drupal 7 and can be included as easily as typing:

  drupal_add_library('system', 'jquery.cookie');
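Once the library is attached, the plugin is available to your JavaScript as $.cookie(). A minimal sketch inside a standard Drupal 7 behavior (the behavior name and cookie name are purely illustrative):

```javascript
// Assumes jquery.cookie was attached via drupal_add_library('system', 'jquery.cookie').
(function ($) {
  Drupal.behaviors.exampleCookie = {
    attach: function (context, settings) {
      // Write a cookie that expires in 7 days and is visible site-wide.
      $.cookie('example_seen', '1', { expires: 7, path: '/' });
      // Read it back; returns the stored string, or an empty value if unset.
      var seen = $.cookie('example_seen');
    }
  };
})(jQuery);
```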

Wondering to ...


Categories: Drupal

Kristof De Jaeger: Stop telling users that another user has modified the content in Drupal 8

5 August 2016 - 12:55am
Written on August 5, 2016 - 09:55

Every Drupal developer knows the following error message (maybe some by heart): The content has been modified by another user, changes cannot be saved. In Drupal 8 the message is even a bit longer: The content has either been modified by another user, or you have already submitted modifications. As a result, your changes cannot be saved. While this built-in mechanism is very useful for preserving data integrity, the only way to get rid of the message is to reload the form and then redo the changes you wanted to make. This can be (or should I say 'is') very frustrating for users, especially when they have no idea why it is happening. In an environment where multiple users modify the same content, there are solutions like the Content locking module to overcome this nagging problem. But what if your content changes a lot through backend calls?

On a big project I'm currently working on, Musescore.com (D6 to D8), members can upload their scores to the website. On save, the file is sent to Amazon, where it is processed so that you can play and listen to the music in your browser. Depending on the length of a score, the processing might take a couple of minutes before it's available. In the meantime, you can edit the score, because you might want to update the title, body content, or add some new tags. While the edit form is open, the backend might ping back to our application to notify it that the score is ready for playing, and update field values, thus saving the node. At that moment the changed timestamp moves ahead of the one stored in the open form, so when the user saves their new values, Drupal will complain. And this is just a simple example: in reality, the backend workers might ping back several times on various occasions, performing different operations and updating field values. Ironically, the user doesn't even have permission to update one or more of these properties on the form itself. If you have ever uploaded a video to YouTube, you know that while your video is processing you can happily update your content and tags without any problem at all. That's what we want here too.

In Drupal 8, validating an entity is decoupled from form validation. More information can be found in the Entity Validation API handbook, including how it integrates with Symfony. The validation plugin responsible for that message lives in EntityChangedConstraint and EntityChangedConstraintValidator. Since these are plugins, we can easily swap out the class and, depending on our needs, only add the violation when we really want to. We also want to preserve the values of fields that might have been updated by a previous operation, in our case a backend call pinging back to tell us that the score is ready for playing. Are you ready? Here goes!

Step 1. Swap the class

All plugin managers in core allow you to alter plugin definitions (any plugin manager should!), so let's change the class to our own custom class.

<?php
/**
 * Implements hook_validation_constraint_alter().
 */
function project_validation_constraint_alter(array &$definitions) {
  if (isset($definitions['EntityChanged'])) {
    $definitions['EntityChanged']['class'] = 'Drupal\project\Plugin\Validation\Constraint\CustomEntityChangedConstraint';
  }
}
?>

For the class itself, you can copy the original one, but without the annotation: the constraint plugin manager doesn't need to know about an additional plugin (unless you want it to, of course).

<?php
namespace Drupal\project\Plugin\Validation\Constraint;

use Symfony\Component\Validator\Constraint;

/**
 * Custom implementation of the validation constraint for the entity changed timestamp.
 */
class CustomEntityChangedConstraint extends Constraint {
  public $message = 'The content has either been modified by another user, or you have already submitted modifications. As a result, your changes cannot be saved. In case you still see this, then you are really unlucky this time!';
}
?>

Step 2: alter the node form

We want to know that the validation of an entity is happening because an actual form was submitted. For this, we add a hidden field which stores a token based on the node id, which we can then check later.

<?php
/**
 * Implements hook_form_BASE_FORM_ID_alter() for \Drupal\node\NodeForm.
 */
function project_form_node_form_alter(&$form, &$form_state) {
  /** @var \Drupal\node\NodeInterface $node */
  $node = $form_state->getFormObject()->getEntity();
  if (!$node->isNew() && $node->bundle() == 'your_bundle' && $node->getOwnerId() == \Drupal::currentUser()->id()) {
    $form['web_submission'] = [
      '#type' => 'hidden',
      '#value' => \Drupal::csrfToken()->get($node->id()),
    ];
  }
}
?>

Step 3: Validating the entity and storing an id for later

Now we're getting to the tricky part. Not adding a violation is easy, but the entity that comes into the constraint can't be changed. The reason is that ContentEntityForm rebuilds the entity when it enters the submission phase, which means that any changes you made to the entity during validation would be lost. That's a good thing anyway, as other constraints might still add violations that are necessary. To work around this, in case the changed time is in the past, our constraint will verify that there is a valid token and call a function that stores the id of the node in a static variable which can be picked up later.

<?php
namespace Drupal\project\Plugin\Validation\Constraint;

use Symfony\Component\Validator\Constraint;
use Symfony\Component\Validator\ConstraintValidator;

/**
 * Validates the EntityChanged constraint.
 */
class CustomEntityChangedConstraintValidator extends ConstraintValidator {

  /**
   * {@inheritdoc}
   */
  public function validate($entity, Constraint $constraint) {
    if (isset($entity)) {
      /** @var \Drupal\Core\Entity\EntityInterface $entity */
      if (!$entity->isNew()) {
        $saved_entity = \Drupal::entityManager()->getStorage($entity->getEntityTypeId())->loadUnchanged($entity->id());
        // A change to any other translation must add a violation to the current
        // translation because there might be untranslatable shared fields.
        if ($saved_entity && $saved_entity->getChangedTimeAcrossTranslations() > $entity->getChangedTimeAcrossTranslations()) {
          $add_violation = TRUE;
          if ($entity->getEntityTypeId() == 'node' && $entity->bundle() == 'your_bundle' &&
            $this->isValidWebsubmission($entity->id())) {
            $add_violation = FALSE;

            // Store this id.
            project_preserve_values_from_original_entity($entity->id(), TRUE);
          }

          // Add the violation if necessary.
          if ($add_violation) {
            $this->context->addViolation($constraint->message);
          }
        }
      }
    }
  }

  /**
   * Validate the web submission.
   *
   * @param $value
   *   The value.
   *
   * @see project_form_node_form_alter().
   *
   * @return bool
   */
  public function isValidWebsubmission($value) {
    if (!empty(\Drupal::request()->get('web_submission'))) {
      return \Drupal::csrfToken()->validate(\Drupal::request()->get('web_submission'), $value);
    }

    return FALSE;
  }

}

/**
 * Function which holds a static array with the ids of entities which need to
 * preserve values from the original entity.
 *
 * @param $id
 *   The entity id.
 * @param bool $set
 *   Whether to store the id or not.
 *
 * @return bool
 *   TRUE if the id is set in the $ids array, FALSE otherwise.
 */
function project_preserve_values_from_original_entity($id, $set = FALSE) {
  static $ids = [];

  if ($set && !isset($ids[$id])) {
    $ids[$id] = TRUE;
  }

  return isset($ids[$id]);
}
?>

Step 4: copy over values from the original entity

So we have now passed validation, even if the submitted changed timestamp is older than that of the last saved version of this node. Next we need to copy over the values that might have been changed by another process and that we want to preserve. In hook_node_presave() we can call project_preserve_values_from_original_entity() to ask whether this entity is eligible for this operation. If so, we can do our thing and happily copy those values over, while keeping the fields that the user has changed intact.

<?php
use Drupal\node\NodeInterface;

/**
 * Implements hook_ENTITY_TYPE_presave().
 */
function project_node_presave(NodeInterface $node) {
  if (!$node->isNew() && isset($node->original) && $node->bundle() == 'your_bundle' && project_preserve_values_from_original_entity($node->id())) {
    $node->set('your_field', $node->original->get('your_field')->value);
    // Do many more copies here.
  }
}
?>

A happy user!

And the user isn't the only one who's happy: backends can update content whenever they want, and customer support no longer has to explain where this annoying user-facing message comes from.

Categories: Drupal

Freelock : Fixing broken URL Aliases on a Drupal 6 -> Drupal 8 migration

4 August 2016 - 2:10pm

So there are definite "gotchas" to migrating content from Drupal 6 to Drupal 8, when you take away the assumption that the ids throughout the system will remain the same. We hit another one on a recent launch: the URL aliases imported from Drupal 6 did not get rewritten with new node ids, after the migration had started using a map.

Drupal Migration, Drupal 8, Drupal Planet, Drupal upgrade, Migrate
Categories: Drupal

Chromatic: In Search of a Better Local Development Server

4 August 2016 - 11:06am
The problem with development environments

If you're a developer and you're like me, you have probably tried out a lot of different solutions for running development web servers. A list of the tools I've used includes:

That's not even a complete list — I know I've also tried other solutions from the wider LAMP community, still others from the Drupal community, and I've rolled my own virtual-machine based servers too.

All of these tools have their advantages, but I was never wholly satisfied with any of them. Typically, I would encounter problems with stability when multiple sites on one server needed different configurations, or problems customizing the environment enough to make it useful for certain projects. Even the virtual-machine based solutions often suffered from the same kinds of problems — even when I experimented with version-controlling critical config files such as vhost configurations, php.ini and my.cnf files, and building servers with configuration management tools like Chef and Puppet.

Drupal VM

Eventually, I found Drupal VM, a very well-thought-out virtual machine-based development tool. It’s based on Vagrant and another configuration management tool, Ansible. This was immediately interesting to me, partly because Ansible is the tool we use internally to configure project servers, but also because the whole point of configuration management is to reliably produce identical configuration whenever the software runs. (Ansible also relies on YAML for configuration, so it fits right in with Drupal 8).

My VM wishlist

Since I've worked with various VM-based solutions before, I had some fairly specific requirements, some to do with how I work, some to do with how the Chromatic team works, and some to do with the kinds of clients I'm currently working with. So I wanted to see if I could configure Drupal VM to work within these parameters:

1. The VM must be independently version-controllable

Chromatic is a distributed team, and I don't think any two of us use identical toolchains. Because of that, we don't currently want to include any development environment code in our actual project repositories. But we do need to be able to control the VM configuration in git. By this I mean that we need to keep every setting on the virtual server outside of the server in version-controllable text files.

Version-controlling a development server in this way also implies that there will be little or no need to perform administrative tasks such as creating or editing virtual host files or php.ini files (in fact, configuration of the VM in git means that we must not edit config files in the VM since they would be overridden if we recreate or reprovision it).

Furthermore, it means that there's relatively little need to actually log into the VM, and that most of our work can be performed using our day-to-day tools (i.e. what we've configured on our own workstations, and not whatever tools exist on the VM).

2. The VM must be manageable as a git submodule

On a related note, I wanted to be able to add the VM to the development server repository and never touch its files—I'm interested in maintaining the configuration of the VM, but not so much the VM itself.

It may help to explain this in Drupal-ish terms; when I include a contrib module in a Drupal project, I expect to be able to interact with that module without needing to modify it. This allows the module to be updated independently of the main project. I wanted to be able to work with the VM in the same way.

3. The VM must be able to be recreated from scratch at any time

This is a big one for me. If I somehow mess up a dev server, I want to be able to check out the latest version of the server in git, boot it and go back to work immediately. Specifically, I want to be able to restore (all) the database(s) on the box more or less automatically when the box is recreated.

Similarly, I usually work at home on a desktop workstation. But when I need to travel or work outside the house, I need to be able to quickly set up the project(s) I'll be working on on my laptop.

Finally, I want the VM configuration to be easy to share with my colleagues (and sometimes with clients directly).

4. The VM must allow multiple sites per server

Some of the clients we work with have multiple relatively small, relatively similar sites. These sites sometimes require similar or identical changes. For these clients, I much prefer to have a single VM that I can spin up to work on one or several of their sites at once. This makes it easier to switch between projects, and saves a great deal of disk space (the great disadvantage to using virtual machines is the amount of disk space they use, so putting several sites on a single VM can save a lot of space).

And of course if we can have multiple sites per server, then we can also have a single site per server when that's appropriate.

5. The VM must allow interaction via the command line

I've written before about how I do most of my work in a terminal. When I need to interact with the VM, I want to stay in the terminal, and not have to find or launch a specific app to do it.

6. The VM must create drush aliases

The single most common type of terminal command for me to issue to a VM is drush @alias {something}. And when running the command on a separate server (the VM!), the command must be prefixed with an alias, so a VM that can create drush aliases (or help create them) is very, very useful (especially in the case where there are multiple sites on a single VM).

7. The VM must not be too opinionated about the stack

Given the variations in clients' production environments, I need to be able to use any current version of PHP, use Apache or Nginx, and vary the server OS itself.

My VM setup

Happily, it turns out that Drupal VM can satisfy all these requirements: it is either capable of them out of the box, or makes it very straightforward to incorporate the required functionality. Items 4, 5, 6, and 7, for example, are stock.

But before I get into the setup of items 1, 2, and 3, I should note that this is not the only way to do it.

Drupal VM is a) extensively documented, and b) flexible enough to accommodate several workflows and project structures very different from what I'm going to describe here. If my configuration doesn't work with your workflow, or my workflow won't work with your configuration, you can probably still use Drupal VM if you need or want a VM-based development solution.

For Drupal 8 especially, I would simply use Composer to install Drupal, and install Drupal VM as a dependency.

Note also that if you just need a quick Drupal box for more generic testing, you don't need to do any of this, you can just get started immediately.

Structure

We know that Drupal VM is based on Ansible and Vagrant, and that both of those tools rely on config files (YAML and Ruby respectively). Furthermore, we know that Vagrant can keep folders on the host and guest systems in sync, so we also know that we'll be able to handle item 1 from my wishlist--that is, we can maintain separate repositories for the server and for projects.

This means we can have our development server as a standalone directory, and our project repositories in another. For example, we might set up the following directory structure where example.com contains the project repository, and devserver contains the Drupal VM configuration.

Servers
└── devserver/
Sites
└── example.com/

Configuration files

Thanks to some recent changes, Drupal VM can be configured with an external config.yml file, a local config.yml file, and an external Vagrantfile.local file, using a delegating Vagrantfile.

The config.yml file is required in this setup, and can be used to override any or all of the default configuration in Drupal VM's own default.config.yml file.

The Vagrantfile.local file is optional, but useful in case you need to alter Drupal VM's default Vagrant configuration.

The delegating Vagrantfile is the key to tying together our main development server configuration and the Drupal VM submodule. It defines the directory where configuration files can be found, and loads the Drupal VM Vagrantfile.

This makes it possible to create the structure we need to satisfy item 2 from my wishlist--that is, we can add Drupal VM as a git submodule to the dev server configuration:

Server/
├── Configuration/
|   ├── config.yml
|   └── Vagrantfile.local
├── Drupal VM/
└── Vagrantfile

Recreating the VM

One motivation for all of this is to be able to recreate the entire development environment quickly. As mentioned above, this might be because the VM has become corrupt in some way, because I want to work on the site on a different computer, or because I want to share the site—server and all—with a colleague.

Mostly, this is simple. To the extent that the entire VM (along with the project running inside it!) is version-controlled, I can just ask my colleague to check out the relevant repositories and (at most!) override the vagrant_synced_folders option in a local.config.yml with their own path to the project directory.
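For instance, a colleague's local.config.yml might only need to repoint the synced folder. A sketch with illustrative paths, following the option format of Drupal VM's default.config.yml:

```yaml
# local.config.yml -- per-machine overrides, kept out of the shared server repo.
vagrant_synced_folders:
  - local_path: ~/Sites/example.com
    destination: /var/www/example.com
    type: nfs
    create: true
```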

In checking out the server repository (i.e. we are not sharing an actual virtual disk image), my colleague will get the entire VM configuration including:

  • Machine settings,
  • Server OS,
  • Databases,
  • Database users,
  • Apache or Nginx vhosts,
  • PHP version,
  • php.ini settings,
  • Whatever else we've configured, such as Solr, Xdebug, Varnish, etc.

So, with no custom work at all—even the delegating Vagrant file comes from the Drupal VM documentation—we have set up everything we need, with two exceptions:

  1. Entries for /etc/hosts file, and
  2. Databases!

For these two issues, we turn to the Vagrant plugin ecosystem.

/etc/hosts entries

The simplest way of resolving development addresses (e.g. example.dev) to the IP of the VM is to create entries in the host system's /etc/hosts file:

192.168.99.99 example.dev

Managing these entries, if you run many development servers, is tedious.

Fortunately, there's a plugin that manages these entries automatically, Vagrant Hostsupdater. Hostsupdater simply adds the relevant entries when the VM is created, and removes them again (configurably) when the VM is halted or destroyed.
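Like any Vagrant plugin, Hostsupdater is a one-time install on the host machine:

```shell
$ vagrant plugin install vagrant-hostsupdater
```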

Databases

Importing the database into the VM is usually a one-time operation, but since I'm trying to set up an easy process for working with multiple sites on one server, I sometimes need to do this multiple times — especially if I've destroyed the actual VM in order to save disk space etc.

Similarly, exporting the database isn't an everyday action, but again I sometimes need to do this multiple times and it can be useful to have a selection of recent database dumps.

For these reasons, I partially automated the process with the help of a Vagrant plugin. "Vagrant Triggers" is a Vagrant plugin that allows code to be executed "…on the host or guest before and/or after Vagrant commands." I use this plugin to dump all non-system databases on the VM on vagrant halt, delete any dump files over a certain age, and to import any databases that can be found in the dump location on the first vagrant up.
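Vagrant Triggers is likewise installed once on the host before the dump and import scripts can hook into vagrant halt and vagrant up:

```shell
$ vagrant plugin install vagrant-triggers
```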

Note that while I use these scripts for convenience, I don't rely on them to safeguard critical data.

With these files and a directory for database dumps to reside in, my basic server wrapper now looks like this:

Server/
├── Vagrantfile
├── config/
├── db_dump.sh
├── db_dumps/
├── db_import.sh
└── drupal-vm/

My workflow

New projects

All of the items on my wishlist were intended to help me achieve a specific workflow when I needed to add a new development server, or move it to a different machine:

  1. Clone the project repo.
  2. Clone the server repo.
  3. Change config.yml:
    • Create/modify one or more vhosts.
    • Create/modify one or more databases.
    • Change VM hostname.
    • Change VM machine name.
    • Change VM IP.
    • Create/modify one or more cron jobs.
  4. Add a database dump (if there is one) to the db_dumps directory.
  5. Run vagrant up.
Sharing projects

If I share a development server with a colleague, they have a similar workflow to get it running:

  1. Clone the project repo.
  2. Clone the server repo.
  3. Customize local.config.yml to override my settings:
    • Change VM hostname (in case of VM conflict).
    • Change VM machine name (in case of VM conflict).
    • Change VM IP (in case of VM conflict).
    • Change vagrant synced folders local path (if different from mine).
  4. Add a database dump to the dumps directory.
  5. Run vagrant up.
Winding down projects

When a project completes that either has no maintenance phase, or where I won't be involved in the ongoing maintenance, I like to remove the actual virtual disk that the VM is based on. This saves ≥10GB of hard drive space (!):

$ vagrant destroy

But since a) every aspect of the server configuration is contained in config.yml and Vagrantfile.local, and b) since we have a way of automatically importing a database dump, resurrecting the development server is as simple as pulling down a new database and re-provisioning the VM:

$ scp remotehost:/path/to/dump.sql.gz /path/to/Server/db_dumps/dump.sql.gz
$ vagrant up

Try it yourself

Since I wanted to reuse this structure for each new VM I need to spin up, I created a git repository containing the code. Download and test it--the README contains detailed setup instructions for getting the environment ready if you don't already use Vagrant.

Categories: Drupal

Chromatic: YouTube Field 7.x-1.7 and 8.x-1.0-beta3 Released!

4 August 2016 - 11:06am

We're happy to announce two new releases for the YouTube Field module:

Improvements include:

Once again, it was a community effort. The module has now given credit attribution to 28 different people, and for a number of them it was their first attributed commit! Not to mention the countless others who have contributed in the issue queue. Thanks to their help, the module has now reached over 30,000 installs. That's enough to land in the top 200!

Why the "beta" label on the 8.x release?

The 7.x-1.x module includes Colorbox support, but that support has not yet been ported to the 8.x-1.x branch. We'd love help with that! We're planning on removing the "beta" label once that support is committed. The rest of the module is a direct port of 7.x-1.x and it already reports a healthy number of installs.

How else can I help?

Hop in the issue queue and have a look at the outstanding issues for either branch. As previously mentioned, any and all contributions are greatly appreciated!

Categories: Drupal

Evolving Web: Creating Landing Pages with Drupal 8 and Paragraphs

4 August 2016 - 8:53am

As Drupal themers and site builders, we often have to look for creative solutions to build landing pages. Landing pages are special pages often used for marketing campaigns, to attract particular audiences, or to aggregate content about a certain topic.

We want landing pages to be attractive and entice users to click, but we often also need them to be flexible so we can communicate different things. We want landing pages to look great the day we launch a website, but also to be flexible enough that a site admin can change the content or add a new page and it still looks great.

Categories: Drupal

Jeff Geerling's Blog: How to attach a CSS or JS library to a View in Drupal 8

4 August 2016 - 8:21am

File this one under the 'it's obvious, but only after you've done it' category—I needed to attach a CSS library to a view in Drupal 8 via a custom module so that, wherever the view displayed on the site, the custom CSS file from my module was attached. The process for CSS and JS libraries is pretty much identical, but here's how I added a CSS file as a library, and made sure it was attached to my view:

Add the CSS file as a library

In Drupal 8, drupal_add_css(), drupal_add_js(), and drupal_add_library() were removed (for various reasons), and now, to attach CSS or JS assets to views, nodes, etc., you need to use Drupal's #attached functionality to 'attach' assets (like CSS and JS) to rendered elements on the page.

In my custom module (custom.module), I added the CSS file css/custom_view.css:
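The snippets themselves are in the full post; what follows is a hedged sketch of the approach, in which the library name, view ID, and module name are all illustrative. First, a library definition in custom.libraries.yml:

```yaml
# custom.libraries.yml
custom_view:
  version: 1.x
  css:
    theme:
      css/custom_view.css: {}
```

Then a hook in custom.module attaches the library whenever the view is rendered:

```php
<?php

use Drupal\views\ViewExecutable;

/**
 * Implements hook_views_pre_render().
 */
function custom_views_pre_render(ViewExecutable $view) {
  // Attach our CSS library to a specific view (the view ID is illustrative).
  if ($view->storage->id() === 'my_custom_view') {
    $view->element['#attached']['library'][] = 'custom/custom_view';
  }
}
```

Remember to clear caches after adding the .libraries.yml file so Drupal picks it up.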

Categories: Drupal

OSTraining: There Will Never Be a Drupal 9

4 August 2016 - 6:09am

Yes, that's a big statement in the title, so let me explain.

Lots of OSTraining customers are looking into Drupal 8, and they have questions about Drupal 8's future. If they invest in the platform today, how long will that investment last?

This is just my personal opinion, but I think an investment in Drupal 8 will last a long, long time.

Drupal 8 took five years. It was a mammoth undertaking, and no-one in the Drupal community has the energy for a similar re-write. 

Categories: Drupal

Frederic Marand: How to display time and memory use for Drush commands

4 August 2016 - 5:08am

When you use Drush, especially in crontabs, you may sometimes be bitten by RAM or duration limits. Of course, running Drush with the "-d" option will provide this information, but only at the end of an annoyingly noisy output debugging the whole command run.

On the other hand, just running the Drush command within a time command won't provide fine-grained memory reporting. Luckily, Drush implements hooks that make acquiring this information easy, so here is a small gist you can use as a standalone Drush plugin or add to a module of your own:


Categories: Drupal

lakshminp.com: Drupal composer workflow - part 2

4 August 2016 - 12:41am

In the previous post, we saw how to add and manage modules and module dependencies in Drupal 8 using Composer.

In this post we shall see how to use an exclusive composer based Drupal 8 workflow. Let's start with a vanilla Drupal install. The recommended way to go about it is to use Drupal Composer project.

$ composer create-project drupal-composer/drupal-project:8.x-dev drupal-8.dev

If you are a careful observer (unlike me), you will notice that a downloaded Drupal 8 package ships with the vendor/ directory. In other words, we need not install the Composer dependencies when we download it from d.o. On the other hand, if you "git cloned" Drupal 8, it won't contain the vendor/ directory, hence the extra step of running `composer install` in the root directory. The top-level directory contains a composer.json, and the name of the package is drupal/drupal, which is more of a wrapper for the drupal/core package inside the core/ directory. The drupal/core package installs Drupal core and its dependencies. The drupal/drupal package helps you build a site around Drupal core and maintains dependencies related to your site, modules, etc.
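To illustrate the difference: restoring the missing vendor/ directory after a git clone is a single extra step (the clone URL shown is the drupal.org core repository as it was at the time of writing):

```shell
$ git clone https://git.drupal.org/project/drupal.git drupal-8.dev
$ cd drupal-8.dev
$ composer install
```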

Drupal project takes a slightly different project structure. It installs core and its dependencies similarly to drupal/drupal, and also installs the latest stable versions of Drush and Drupal Console.

$ composer create-project drupal-composer/drupal-project:8.x-dev d8dev --stability dev --no-interaction

New directory structure

Everything Drupal related goes in the web/ directory, including core, modules, profiles and themes. Contrast this with the usual structure where there is a set of top level directories named core, modules, profiles and themes.

Drush and Drupal Console (both at their latest stable versions) get installed inside the vendor/bin directory. The reason Drush and Drupal Console are packaged on a per-project basis is to avoid the dependency issues we might normally face if they were installed globally.

How to install Drupal

Drupal can be installed using the typical site-install command provided by drush.

$ cd d8dev/web
$ ../vendor/bin/drush site-install --db-url=mysql://<db-user-name>:<db-password>@localhost/<db-name> -y

Downloading modules

Modules can be downloaded using composer. They get downloaded in the web/modules/contrib directory.

$ cd d8dev
$ composer require drupal/devel:8.1.x-dev

The following things happen when we download a module via composer.

  1. Composer updates the top level composer.json and adds drupal/devel:8.1.x-dev as a dependency.
"require": { "composer/installers": "^1.0.20", "drupal-composer/drupal-scaffold": "^2.0.1", "cweagans/composer-patches": "~1.0", "drupal/core": "~8.0", "drush/drush": "~8.0", "drupal/console": "~1.0", "drupal/devel": "8.1.x-dev" },
  2. Composer dependencies (if any) for that module get downloaded into the top-level vendor directory. These are specified in the composer.json file of that module. At the time of writing, the Devel module does not have any Composer dependencies.
"license": "GPL-2.0+", "minimum-stability": "dev", "require": { } }

Most modules in Drupal 8 were (and are) written without taking Composer into consideration. We normally use the drush dl command, which parses our request and downloads the appropriate version of the module from the drupal.org servers. Downloading a module via Composer requires the module to have a composer.json as a minimal requirement. So how does Composer download all the Drupal contrib modules that don't have one? The answer lies in a not-so-secret sauce ingredient we added in our top-level composer.json:

"repositories": [ { "type": "composer", "url": "https://packagist.drupal-composer.org" } ],

Composer downloads all packages from a central repository called Packagist, PHP's equivalent of npmjs. Drupal provides its own flavour of Packagist to serve the modules and themes hosted exclusively at drupal.org. Drupal Packagist ensures that contrib maintainers need not add a composer.json to their projects.

Let's take another module which does not have a composer.json, like Flag (at the time of writing). Let's try and download Flag using Composer.

$ composer require drupal/flag:8.4.x-dev
./composer.json has been updated
> DrupalProject\composer\ScriptHandler::checkComposerVersion
Loading composer repositories with package information
Updating dependencies (including require-dev)
  - Installing drupal/flag (dev-8.x-4.x 16657d8)
    Cloning 16657d8f84b9c87144615e4fbe551ad9a893ad75
Writing lock file
Generating autoload files
> DrupalProject\composer\ScriptHandler::createRequiredFiles

Neat. Drupal Packagist parses contrib modules and serves the one which matches the name and version we gave when we ran that "composer require" command.

Specifying package sources

There is one other step you need to do to complete your composer workflow: switching to the official Drupal.org composer repository. Our composer.json currently lists Drupal Packagist as the default repository.

"repositories": [ { "type": "composer", "url": "https://packagist.drupal-composer.org" } ],

Add the Drupal.org composer repo using the following command:

$ composer config repositories.drupal composer https://packages.drupal.org/8

Now, your repositories entry in composer.json should look like this:

"repositories": { "0": { "type": "composer", "url": "https://packagist.drupal-composer.org" }, "drupal": { "type": "composer", "url": "https://packages.drupal.org/8" } }

To ensure that composer indeed downloads from the new repo we specified above, let's remove the drupal packagist entry from composer.json.

$ composer config --unset repositories.0

The repositories config looks like this now:

"repositories": { "drupal": { "type": "composer", "url": "https://packages.drupal.org/8" } }

Now, let's download a module from the new repo.

$ composer require drupal/token -vvv

As a part of the verbose output, it prints the following:

...
Loading composer repositories with package information
Downloading https://packages.drupal.org/8/packages.json
Writing /home/lakshmi/.composer/cache/repo/https---packages.drupal.org-8/packages.json into cache
...

which confirms that we downloaded from the official package repo.

Custom package sources

Sometimes, you might want to specify your own package source for a custom module you own, say, in Github. This follows the usual conventions for adding VCS package sources in Composer, but I'll show how to do it in Drupal context.

First, add your github URL as a VCS repository using the composer config command.

$ composer config repositories.restful vcs "https://github.com/RESTful-Drupal/restful"

Your composer.json will look like this after the above command is run successfully:

"repositories": { "drupal": { "type": "composer", "url": "https://packages.drupal.org/8" }, "restful": { "type": "vcs", "url": "https://github.com/RESTful-Drupal/restful" } }

If you want to download a package from your custom source, you might want it to take precedence over the official package repository, as order really matters to composer. I haven't found a way to do this via the CLI, but you can edit the composer.json file and swap both package sources to look like this:

"repositories": { "restful": { "type": "vcs", "url": "https://github.com/RESTful-Drupal/restful" }, "drupal": { "type": "composer", "url": "https://packages.drupal.org/8" } }

Now, let's pick up restful 8.x-3.x. We can specify a GitHub branch by prefixing it with "dev-".

$ composer require "drupal/restful:dev-8.x-3.x-not-ready"

Once restful is downloaded, composer.json is updated accordingly.

"require": { "composer/installers": "^1.0.20", "drupal-composer/drupal-scaffold": "^2.0.1", "cweagans/composer-patches": "~1.0", "drupal/core": "~8.0", "drush/drush": "~8.0", "drupal/console": "~1.0", "drupal/devel": "8.1.x-dev", "drupal/flag": "8.4.x-dev", "drupal/mailchimp": "8.1.2", "drupal/token": "1.x-dev", "drupal/restful": "dev-8.x-3.x-not-ready" }, Updating drupal core

Drupal core can be updated by running:

$ composer update drupal/core
> DrupalProject\composer\ScriptHandler::checkComposerVersion
Loading composer repositories with package information
Updating dependencies (including require-dev)
  - Removing drupal/core (8.1.7)
  - Installing drupal/core (8.1.8)
    Downloading: 100%
Writing lock file
Generating autoload files
Downloading: 100%
Downloading: 100%
Downloading: 100%
Downloading: 100%
Downloading: 100%
Downloading: 100%
Downloading: 100%
Downloading: 100%
Downloading: 100%
Downloading: 100%
Downloading: 100%
Downloading: 100%
Downloading: 100%
Downloading: 100%
Downloading: 100%
> DrupalProject\composer\ScriptHandler::createRequiredFiles

As the output reads, we updated core from 8.1.7 to 8.1.8. We will revisit the "Writing lock file" part in a moment. After this step is successful, we have to run drush updatedb to apply any database updates. The same applies when updating modules.

$ cd d8dev/web
$ ../vendor/bin/drush updatedb

Updating modules

One of the things I like about the composer workflow is that I can update selected modules, or even a single module. This is not possible using drush. The command for updating a module, say, devel, is:

$ composer update drupal/devel
> DrupalProject\composer\ScriptHandler::checkComposerVersion
Loading composer repositories with package information
Updating dependencies (including require-dev)
Nothing to install or update
Generating autoload files
> DrupalProject\composer\ScriptHandler::createRequiredFiles

Hmmm. Looks like devel is already at the latest bleeding-edge version. To quickly recap which composer-related artifacts we need to check in to version control:

Should you check in the vendor/ directory?

Composer recommends that you shouldn't, but there are some environments that don't support composer (e.g. Acquia Cloud), in which case you have to check in your vendor folder too.

Should you check in the composer.json file?

By now, you should know the answer to this question :)

Should you check in the composer.lock file?

Damn yes. composer.lock contains the exact versions of the dependencies that are installed. For example, if your project depends on Acme 1.*, you install 1.1.2, and your co-worker runs composer install after a month or so, they might get Acme 1.1.10, which might introduce version discrepancies in your project. To prevent this, composer install checks if a lock file exists and installs only the specific version recorded, or "locked" down, in the lock file. The only time the lock file changes is when you run composer update to update your project dependencies to their latest versions. When that happens, composer updates the lock file with the newer versions that got installed.
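To make this concrete, here's a hypothetical composer.lock excerpt, continuing the Acme example above (the package name and version are invented for illustration). While composer.json records only the loose constraint "acme/acme": "1.*", the lock file pins the exact resolved version:

```json
{
  "packages": [
    {
      "name": "acme/acme",
      "version": "1.1.2"
    }
  ]
}
```

A real lock file entry carries more fields (source, dist, checksums), but the "version" key is what makes composer install reproducible across machines.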

Drupal, Drupal 8, Drupal Planet
Categories: Drupal

Metal Toad: Avoiding Drupal 7 #AJAX Pitfalls

3 August 2016 - 7:10pm
By Marcus Bernal

Rather than provide a basic how-to tutorial on Drupal's form API #AJAX functionality, I decided to address a few pitfalls that often frustrate developers, both junior and senior alike. To me, it seems that most of the problems arise from the approach rather than the direct implementation of the individual elements.

TL;DR
  • Try to find a reasonable argument for not using #ajax.
  • Do not do any processing in the callback function, it's too late, I'm sorry.
  • Force button names that are semantic and scalable.
  • Template buttons and remove unnecessary validation from #ajax actions.
  • Use '#theme_wrappers' => array('container') rather than '#prefix' and '#suffix'.
Is AJAX Even Needed?

Since #ajax hinders accessibility and adds that much more complexity, reconsider other approaches before committing to it. Drupal will automatically handle the "no js" case, falling back to full page refreshes with unsubmitted forms, but issues will still exist for those using screen readers. Because the time to request and receive the new content is indeterminate, screen readers will fail at providing the users with audible descriptions of the new content. Simply by choosing to use #ajax, you will automatically exclude those needing visual assistance. So, if the task is simply hiding/showing another field or sets of fields, then #states would be a better fit. If the task is to select something out of a large selection, a multiple-page approach or even an entity reference with an autocomplete field could suffice.
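As a quick sketch of the #states alternative (the field names here are invented for illustration), hiding and showing a field needs no server round trip at all:

```php
// Hypothetical form elements: show the "other" textfield only when the
// "reason" select is set to 'other'. Everything happens client-side.
$form['reason'] = array(
  '#type' => 'select',
  '#title' => t('Reason'),
  '#options' => array('price' => t('Price'), 'other' => t('Other')),
);
$form['reason_other'] = array(
  '#type' => 'textfield',
  '#title' => t('Please specify'),
  '#states' => array(
    'visible' => array(
      ':input[name="reason"]' => array('value' => 'other'),
    ),
  ),
);
```

Because #states is pure JavaScript driven by form values already on the page, it sidesteps both the accessibility and the complexity costs of #ajax.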

This example is a simplified version of a new field type used to select data from a Solr index of another site's products. The number of products was in the 200k's and the details needed to decide on a selection was more than just the product names, so building checkboxes/radios/select box would be too unwieldy and an autocomplete could not provide enough information. Also, the desired UX was to use a modal rather than multiple pages.

Callback is a Lie

A misconception that many developers, myself included at one point, have is that the AJAX callback function is the place to perform the bulk of the logic. I have come to approach this function as just one that returns the portion of the form that I want. Any logic that changes the structure or data of a form should be handled in the form-building function, because there it will be persistent: Drupal will store those changes but ignore any made within the AJAX callback. So, the role of the callback function is simply a getter for a portion of the $form array. At first, it may seem easier to just hardcode the logic to return the sub-array, but I recommend a dynamic solution that relies on the trigger's nested position relative to the AJAX container.

function product_details_selector_field_widget_form(&$form, &$form_state, $field, $instance, $langcode, $items, $delta, $element) {
  ...
  // Add a property to nested buttons to declare the relative depth
  // of the trigger to the AJAX targeted container.
  $form['container']['modal']['next_page']['#nested_depth'] = 1;
  ...
}

Then, for the callback, some "blind" logic can easily return the portion of form to render and return.

/**
 * AJAX callback to replace the container of the product_details_selector.
 */
function product_details_selector_ajax_return($form, $form_state) {
  // Trim the array of array parents for the trigger down to the container.
  $array_parents = $form_state['triggering_element']['#array_parents'];
  $pop_count = 1; // The trigger is always included, so always have to pop.
  if (isset($form_state['triggering_element']['#nested_depth'])) {
    $pop_count += $form_state['triggering_element']['#nested_depth'];
  }
  for ($i = 0; $i < $pop_count; $i++) {
    if (empty($array_parents)) {
      break; // Halt the loop whenever there are no more items to pop.
    }
    array_pop($array_parents);
  }
  // Return the nested array.
  return drupal_array_get_nested_value($form, $array_parents); // This function is so awesome.
}

With this approach, any future modifications to the $form array outside of the container are inconsequential to this widget. And if this widget's array is modified outside of the module, the modifier will just have to double check the #nested_depth values rather than completely overriding the callback function.

Name the Names

For clarity, from here on name will refer to what will be used for the HTML attributes id and name for containers (divs) and buttons, respectively.

Like with everything in programming, naming is the initial task that can make development, current and future, a simple walk through the business logic or a spaghetti mess of "oh yeahs". This is especially true for #ajax, which requires the use of HTML ID attributes to place the new content as well as handling user actions (triggers). For most developers, this step is brushed over because the idea of their work being used in an unconventional or altered way is completely out of their purview. But a solid approach now will reduce the frustration of future developers, yourself included, working with this #ajax widget.

In this example and most cases these triggers will be buttons, but Drupal 7 also allowed for other triggering elements, such as the select box or radio buttons. Now, this leaves a weird situation where these other triggers have semantic names, but buttons will simply be named 'op'. For a simple form, this is no big deal, but for something complex, determining which action to take relies on the comparison of the button values. This gets much harder to do when you have multiple fields of the same type, bring in translation, and/or the client decides to change the wording later on in the project. So, my suggestion is to override the button names and base the logic on them.

// drupal_html_class() converts _ to - as well as removing dangerous characters.
$trigger_prefix = drupal_html_class($field['field_name'] . '-' . $langcode . '-' . $delta);
// Short trigger names.
$show_trigger = $trigger_prefix . '-modal-open';
$next_trigger = $trigger_prefix . '-modal-next';
$prev_trigger = $trigger_prefix . '-modal-prev';
$search_trigger = $trigger_prefix . '-modal-search';
$add_trigger = $trigger_prefix . '-add';
$remove_trigger = $trigger_prefix . '-remove';
$cancel_trigger = $trigger_prefix . '-cancel';
// Div wrapper.
$ajax_container = $trigger_prefix . '-ajax-container';

The prefix in the example is built as a field form widget example. It is unique to the field's name, language, and delta so that multiple instances can exist in the same form. But if your widget is not a field, it is still best to start with something that is dynamically unique. Then, semantics are used to fill out the rest of the trigger names needed as well as the container's ID.

Button Structure

Ideally, every button within the #ajax widget should simply cause a rebuild of the same container, regardless of the changes triggered within the nested array. Since the callback is reduced to a simple getter for the container's render array, the majority of trigger properties can be templated. Now, all buttons that are built off of this template, barring intentional overrides, will prevent validation of elements outside of the widget, prevent submission, and have the same #ajax command to run.

$ajax_button_template = array(
  '#type' => 'button', // Not 'submit'.
  '#value' => t('Button Template'), // To be replaced.
  '#name' => 'button-name', // To be replaced.
  '#ajax' => array(
    'callback' => 'product_details_selector_ajax_return',
    'wrapper' => $ajax_container,
    'method' => 'replace',
    'effect' => 'fade',
  ),
  '#validate' => array(),
  '#submit' => array(),
  '#limit_validation_errors' => array(array()), // Prevent standard Drupal validation.
  '#access' => TRUE, // Display will be conditional based on the button and the state.
);

// Limit the validation errors down to the specific item's AJAX container.
// Once again, the field could be nested in multiple entity forms
// and the errors array must be exact. If the widget is not a field,
// then use the '#parents' key if available.
if (!empty($element['#field_parents'])) {
  foreach ($element['#field_parents'] as $field_parent) {
    $ajax_button_template['#limit_validation_errors'][0][] = $field_parent;
  }
}
$ajax_button_template['#limit_validation_errors'][0][] = $field['field_name'];
$ajax_button_template['#limit_validation_errors'][0][] = $langcode;
$ajax_button_template['#limit_validation_errors'][0][] = $delta;
$ajax_button_template['#limit_validation_errors'][0][] = 'container';

Limiting the validation errors will prevent other, unrelated fields from affecting the modal's functionality. Though, if certain fields are a requirement they can be specified here. This example will validate any defaults, such as required fields, that exist within the container.

$form['container']['modal']['page_list_next'] = array(
  '#value' => t('Next'),
  '#name' => $next_trigger,
  '#access' => FALSE,
  '#page' => 1, // For page navigation of Solr results.
) + $ajax_button_template; // Keys not defined in the first array are set from the second.

// Fade effect within the modal is disorienting.
$element['container']['modal']['search_button']['#ajax']['effect'] = 'none';
$element['container']['modal']['page_list_prev']['#ajax']['effect'] = 'none';
$element['container']['modal']['page_list_next']['#ajax']['effect'] = 'none';

The #page key is arbitrary and simply used to keep track of the page state without having to clutter up the $form_state, especially since the entire array of the triggering element is already stored in that variable. Other buttons within the widget do not need to track the page other than previous and next. Clicking the search button should result in the first page of a new search while cancel and selection buttons will close the modal anyway.

Smoking Gun

Determining the widget's state can now start easily with checks on the name and data of the trigger.

$trigger = FALSE;
if (!empty($form_state['triggering_element']['#name'])) {
  $trigger = $form_state['triggering_element'];
}
$trigger_name = $trigger ? $trigger['#name'] : FALSE;

$open_modal = FALSE;
if (strpos($trigger_name, $trigger_prefix . '-modal') === 0) {
  $open_modal = TRUE;
}

...
// Hide or show modal.
$form['container']['modal']['#access'] = $open_modal;

...
// Obtain page number regardless of next or previous.
$search_page = 1;
if (isset($trigger['#page'])) {
  $search_page = $trigger['#page'];
}

...
// Calculate if a next page button should be shown.
$next_offset = $search_page * $per_page; // Offset of the first result on the next page.
if ($next_offset < $search_results['total']) {
  $form['container']['modal']['next_page']['#access'] = TRUE; // Or '#disabled' if the action is too jerky.
}

Theming

Now, to where most developers start their problem solving: how to build the AJAX-able portion. Drupal requires an element with an ID attribute to target where the new HTML is inserted. Ideally, it is best to make the target element and the AJAX content one and the same. There are a couple of ways of doing this; the most common one I see is far too static and therefore difficult to modify or extend.

// Div wrapper for AJAX replacing.
$element['container'] = array(
  '#prefix' => '<div id="' . $ajax_container . '">',
  '#suffix' => '</div>',
);

This does solve the problem for the time being. It renders any child elements properly while wrapping them with the appropriate HTML. But if another module, function, or developer wants to add other information, classes for instance, they would have to recreate the entire #prefix string. What I propose is to use the #theme_wrappers key instead.

// Div wrapper for AJAX replacing.
$element['container'] = array(
  '#theme_wrappers' => array('container'),
  '#attributes' => array(
    'id' => $ajax_container,
  ),
);

if (in_array($trigger_name, $list_triggers)) {
  $element['container']['#attributes']['class'][] = 'product-details-selector-active-modal';
}

// Div inner-wrapper for modal styling.
$element['container']['modal'] = array(
  '#theme_wrappers' => array('container'),
  '#attributes' => array(
    'class' => array('dialog-box'),
  ),
);

$element['container']['product_details'] = array(
  '#theme_wrappers' => array('container'),
  '#attributes' => array(
    'class' => array('product-details'),
  ),
  '#access' => TRUE,
);

I have experienced in the past that using #theme causes the form elements to be rendered "wrong," losing their names and their relationships with the data. The themes declared within #theme_wrappers render later in the pipeline, so form elements will not lose their identity and the div container can be built dynamically. That is, to add a class, one just needs to add another array element to $element['container']['#attributes']['class'].

Conclusion

I do not propose the above as hard-set rules to follow, but they should be helpful ideas that allow more focus to be put into the important logic rather than basic functional logistics. View the form as transforming over time as the user navigates, with the AJAX functionality simply a way to refresh a portion of that form, and the complexity of building your form widget will reduce down to the business logic needed.

Categories: Drupal

Mediacurrent: Pixel 'Perfection' front-end development. Or, Avoiding awkward conversations with the Quality Assurance team

3 August 2016 - 12:32pm
What is 'pixel perfect'?

Pixel perfection is when, with the finished coded page and the design file next to each other, you cannot tell them apart. To quote Brandon Jones:

...so precise, so pristine, so detailed that the casual viewer can’t tell the difference.

Categories: Drupal

Chromatic: In Search of a Better Local Development Server

3 August 2016 - 10:26am
The problem with development environments

If you're a developer and you're like me, you have probably tried out a lot of different solutions for running development web servers. A list of the tools I've used includes:

That's not even a complete list — I know I've also tried other solutions from the wider LAMP community, still others from the Drupal community, and I've rolled my own virtual-machine based servers too.

All of these tools have their advantages, but I was never wholly satisfied with any of them. Typically, I would encounter problems with stability when multiple sites on one server needed different configurations, or problems customizing the environment enough to make it useful for certain projects. Even the virtual-machine based solutions often suffered from the same kinds of problems — even when I experimented with version-controlling critical config files such as vhost configurations, php.ini and my.cnf files, and building servers with configuration management tools like Chef and Puppet.

Drupal VM

Eventually, I found Drupal VM, a very well-thought-out virtual machine-based development tool. It’s based on Vagrant and another configuration management tool, Ansible. This was immediately interesting to me, partly because Ansible is the tool we use internally to configure project servers, but also because the whole point of configuration management is to reliably produce identical configuration whenever the software runs. (Ansible also relies on YAML for configuration, so it fits right in with Drupal 8).

My VM wishlist

Since I've worked with various VM-based solutions before, I had some fairly specific requirements, some to do with how I work, some to do with how the Chromatic team works, and some to do with the kinds of clients I'm currently working with. So I wanted to see if I could configure Drupal VM to work within these parameters:

1. The VM must be independently version-controllable

Chromatic is a distributed team, and I don't think any two of us use identical toolchains. Because of that, we don't currently want to include any development environment code in our actual project repositories. But we do need to be able to control the VM configuration in git. By this I mean that we need to keep every setting on the virtual server outside of the server in version-controllable text files.

Version-controlling a development server in this way also implies that there will be little or no need to perform administrative tasks such as creating or editing virtual host files or php.ini files (in fact, configuration of the VM in git means that we must not edit config files in the VM since they would be overridden if we recreate or reprovision it).

Furthermore, it means that there's relatively little need to actually log into the VM, and that most of our work can be performed using our day-to-day tools (i.e. what we've configured on our own workstations, and not whatever tools exist on the VM).

2. The VM must be manageable as a git submodule

On a related note, I wanted to be able to add the VM to the development server repository and never touch its files—I'm interested in maintaining the configuration of the VM, but not so much the VM itself.

It may help to explain this in Drupal-ish terms; when I include a contrib module in a Drupal project, I expect to be able to interact with that module without needing to modify it. This allows the module to be updated independently of the main project. I wanted to be able to work with the VM in the same way.

3. The VM must be able to be recreated from scratch at any time

This is a big one for me. If I somehow mess up a dev server, I want to be able to check out the latest version of the server in git, boot it and go back to work immediately. Specifically, I want to be able to restore (all) the database(s) on the box more or less automatically when the box is recreated.

Similarly, I usually work at home on a desktop workstation. But when I need to travel or work outside the house, I need to be able to quickly set up the project(s) I'll be working on on my laptop.

Finally, I want the VM configuration to be easy to share with my colleagues (and sometimes with clients directly).

4. The VM must allow multiple sites per server

Some of the clients we work with have multiple relatively small, relatively similar sites. These sites sometimes require similar or identical changes. For these clients, I much prefer to have a single VM that I can spin up to work on one or several of their sites at once. This makes it easier to switch between projects, and saves a great deal of disk space (the great disadvantage to using virtual machines is the amount of disk space they use, so putting several sites on a single VM can save a lot of space).

And of course, if we can have multiple sites per server, then we can also have a single site per server when that's appropriate.
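The multiple-sites-per-VM setup above can be sketched in Drupal VM's config.yml. This is a hypothetical excerpt (hostnames, paths, and database names are invented), using the key names from Drupal VM's default.config.yml:

```yaml
# Two sites served by one VM: one vhost and one database per site.
apache_vhosts:
  - servername: "site-one.dev"
    documentroot: "/var/www/site-one/web"
  - servername: "site-two.dev"
    documentroot: "/var/www/site-two/web"

mysql_databases:
  - name: "site_one"
    encoding: "utf8"
    collation: "utf8_general_ci"
  - name: "site_two"
    encoding: "utf8"
    collation: "utf8_general_ci"
```

Adding a third site to the server is then just two more list entries and a re-provision, rather than a whole new virtual machine.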

5. The VM must allow interaction via the command line

I've written before about how I do most of my work in a terminal. When I need to interact with the VM, I want to stay in the terminal, and not have to find or launch a specific app to do it.

6. The VM must create drush aliases

The single most common type of terminal command for me to issue to a VM is drush @alias {something}. And when running the command on a separate server (the VM!), the command must be prefixed with an alias, so a VM that can create drush aliases (or help create them) is very, very useful (especially in the case where there are multiple sites on a single VM).
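For reference, a drush 8 alias for a VM-hosted site lives in an aliases file on the host. This is a hedged sketch (hostname, paths, and file name are invented; the vagrant user and insecure key are Vagrant defaults), e.g. in ~/.drush/example.aliases.drushrc.php:

```php
// Hypothetical alias so that `drush @example.dev status` runs inside the VM.
$aliases['example.dev'] = array(
  'uri' => 'example.dev',
  'root' => '/var/www/example/web',
  'remote-host' => 'example.dev',
  'remote-user' => 'vagrant',
  'ssh-options' => '-o PasswordAuthentication=no -i ~/.vagrant.d/insecure_private_key',
);
```

With an alias like this in place, every drush command can be run from the host terminal without SSHing into the VM first.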

7. The VM must not be too opinionated about the stack

Given the variations in clients' production environments, I need to be able to use any current version of PHP, use Apache or Nginx, and vary the server OS itself.

My VM setup

Happily, it turns out that Drupal VM satisfies all these requirements: it either handles them out of the box, or makes it very straightforward to incorporate the required functionality. Items 4, 5, 6, and 7, for example, are stock.

But before I get into the setup of items 1, 2, and 3, I should note that this is not the only way to do it.

Drupal VM is a) extensively documented, and b) flexible enough to accommodate several very different workflows and project structures than the one I'm going to describe here. If my configuration doesn't work with your workflow, or my workflow won't work with your configuration, you can probably still use Drupal VM if you need or want a VM-based development solution.

For Drupal 8 especially, I would simply use Composer to install Drupal, and install Drupal VM as a dependency.

Note also that if you just need a quick Drupal box for more generic testing, you don't need to do any of this, you can just get started immediately.

Structure

We know that Drupal VM is based on Ansible and Vagrant, and that both of those tools rely on config files (YAML and Ruby respectively). Furthermore, we know that Vagrant can keep folders on the host and guest systems in sync, so we also know that we'll be able to handle item 1 from my wishlist--that is, we can maintain separate repositories for the server and for projects.

This means we can have our development server as a standalone directory, and our project repositories in another. For example, we might set up the following directory structure where example.com contains the project repository, and devserver contains the Drupal VM configuration.

Servers
└── devserver/
Sites
└── example.com/

Configuration files

Thanks to some recent changes, Drupal VM can be configured with an external config.yml file, a local.config.yml file, and an external Vagrantfile.local file using a delegating Vagrantfile.

The config.yml file is required in this setup, and can be used to override any or all of the default configuration in Drupal VM's own default.config.yml file.

The Vagrantfile.local file is optional, but useful in case you need to alter Drupal VM's default Vagrant configuration.

The delegating Vagrantfile is the key to tying together our main development server configuration and the Drupal VM submodule. It defines the directory where configuration files can be found, and loads the Drupal VM Vagrantfile.
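A delegating Vagrantfile can be as small as a few lines of Ruby. This is a sketch along the lines of the Drupal VM documentation; treat the exact environment variable names as an assumption and check the current docs before relying on them:

```ruby
# Point Drupal VM at this project's config directory, then hand control
# over to the Vagrantfile that ships inside the Drupal VM submodule.
ENV['DRUPALVM_CONFIG_DIR'] = "Configuration"
ENV['DRUPALVM_DIR'] = "Drupal VM"

# Load the real Vagrantfile from the submodule.
load "#{__dir__}/Drupal VM/Vagrantfile"
```

The point is that nothing inside the submodule is ever edited; all project-specific configuration lives next to this file, under version control.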

This makes it possible to create the structure we need to satisfy item 2 from my wishlist--that is, we can add Drupal VM as a git submodule to the dev server configuration:

Server/
├── Configuration/
|   ├── config.yml
|   └── Vagrantfile.local
├── Drupal VM/
└── Vagrantfile

Recreating the VM

One motivation for all of this is to be able to recreate the entire development environment quickly. As mentioned above, this might be because the VM has become corrupt in some way, because I want to work on the site on a different computer, or because I want to share the site—server and all—with a colleague.

Mostly, this is simple. To the extent that the entire VM (along with the project running inside it!) is version-controlled, I can just ask my colleague to check out the relevant repositories and (at most!) override the vagrant_synced_folders option in a local.config.yml with their own path to the project directory.

In checking out the server repository (i.e. we are not sharing an actual virtual disk image), my colleague will get the entire VM configuration including:

  • Machine settings,
  • Server OS,
  • Databases,
  • Database users,
  • Apache or Nginx vhosts,
  • PHP version,
  • php.ini settings,
  • Whatever else we've configured, such as Solr, Xdebug, Varnish, etc.

So, with no custom work at all—even the delegating Vagrant file comes from the Drupal VM documentation—we have set up everything we need, with two exceptions:

  1. Entries for /etc/hosts file, and
  2. Databases!

For these two issues, we turn to the Vagrant plugin ecosystem.

/etc/hosts entries

The simplest way of resolving development addresses (such as e.g. example.dev) to the IP of the VM is to create entries in the host system's /etc/hosts file:

192.168.99.99 example.dev

Managing these entries, if you run many development servers, is tedious.

Fortunately, there's a plugin that manages these entries automatically, Vagrant Hostsupdater. Hostsupdater simply adds the relevant entries when the VM is created, and removes them again (configurably) when the VM is halted or destroyed.

Databases

Importing the database into the VM is usually a one-time operation, but since I'm trying to set up an easy process for working with multiple sites on one server, I sometimes need to do this multiple times — especially if I've destroyed the actual VM in order to save disk space etc.

Similarly, exporting the database isn't an everyday action, but again I sometimes need to do this multiple times and it can be useful to have a selection of recent database dumps.

For these reasons, I partially automated the process with the help of a Vagrant plugin. "Vagrant Triggers" is a Vagrant plugin that allows code to be executed "…on the host or guest before and/or after Vagrant commands." I use this plugin to dump all non-system databases on the VM on vagrant halt, delete any dump files over a certain age, and to import any databases that can be found in the dump location on the first vagrant up.

Note that while I use these scripts for convenience, I don't rely on them to safeguard critical data.
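The trigger wiring can be sketched in Vagrantfile.local. This is a hypothetical snippet for the vagrant-triggers plugin (the script paths assume the db_dump.sh / db_import.sh convention described here and a default /vagrant synced folder):

```ruby
Vagrant.configure("2") do |config|
  # Dump all non-system databases on the guest before the VM halts or is destroyed.
  config.trigger.before [:halt, :destroy] do
    run_remote "bash /vagrant/db_dump.sh"
  end

  # Import any dumps found in db_dumps/ once the VM is up.
  config.trigger.after :up do
    run_remote "bash /vagrant/db_import.sh"
  end
end
```

run_remote executes the command inside the guest, so the dump and import scripts can talk to MySQL directly while the dump files land in the synced folder on the host.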

With these files and a directory for database dumps to reside in, my basic server wrapper now looks like this:

Server/
├── Vagrantfile
├── config/
├── db_dump.sh
├── db_dumps/
├── db_import.sh
└── drupal-vm/

My workflow

New projects

All of the items on my wishlist were intended to help me achieve a specific workflow when I needed to add a new development server, or move it to a different machine:

  1. Clone the project repo.
  2. Clone the server repo.
  3. Change config.yml:
    • Create/modify one or more vhosts.
    • Create/modify one or more databases.
    • Change VM hostname.
    • Change VM machine name.
    • Change VM IP.
    • Create/modify one or more cron jobs.
  4. Add a database dump (if there is one) to the db_dumps directory.
  5. Run vagrant up.
Sharing projects

If I share a development server with colleagues, they follow a similar workflow to get it running:

  1. Clone the project repo.
  2. Clone the server repo.
  3. Customize local.config.yml to override my settings:
    • Change VM hostname (in case of VM conflict).
    • Change VM machine name (in case of VM conflict).
    • Change VM IP (in case of VM conflict).
    • Change vagrant synced folders local path (if different from mine).
  4. Add a database dump to the dumps directory.
  5. Run vagrant up.
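For illustration, the overrides in step 3 might look something like this (the values are hypothetical, and the exact key names depend on the Drupal VM version in use):

```yaml
# local.config.yml — per-developer overrides, kept out of the shared config
vagrant_hostname: myproject.local
vagrant_machine_name: myproject_colleague
vagrant_ip: 192.168.88.23
vagrant_synced_folders:
  - local_path: ~/Sites/myproject   # this developer's checkout location
    destination: /var/www/myproject
    type: nfs
```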
Winding down projects

When a project wraps up and either has no maintenance phase, or I won't be involved in the ongoing maintenance, I like to remove the actual virtual disk that the VM is based on. This saves 10GB or more of hard drive space (!):

$ vagrant destroy

But since a) every aspect of the server configuration is contained in config.yml and Vagrantfile.local, and b) since we have a way of automatically importing a database dump, resurrecting the development server is as simple as pulling down a new database and re-provisioning the VM:

$ scp remotehost:/path/to/dump.sql.gz /path/to/Server/db_dumps/dump.sql.gz
$ vagrant up

Try it yourself

Since I wanted to reuse this structure for each new VM I spin up, I created a git repository containing the code. Download and test it: the README contains detailed setup instructions for getting your environment ready if you don't already use Vagrant.

Categories: Drupal

Miloš Bovan: Final code polishing of Mailhandler

3 August 2016 - 8:54am
Final code polishing of Mailhandler

This blog post summarizes week #11 of the Google Summer of Code 2016 project - Mailhandler. 

Time flies, and we are already in the last phase of this year's Google Summer of Code. The project is not over yet, and I would like to update you on the progress I made last week. In the previous blog post, I wrote about the problems I faced in week 10 and how we decided to do code refactoring instead of UI/UX work. The plan for last week was to update Mailhandler with the newly introduced changes in Inmail and to work on new user-interface-related issues. Since this was the last week of issue work before the project documentation, I used the time to polish the code as much as possible.

As you may know, Inmail got new features in its default analyzer result. Since this change was suggested by Mailhandler, the idea was to remove the Mailhandler-specific analyzer result and use the default one instead. This allows the core module (and any other Inmail-based module) to use a standardized result across all enabled analyzers; the main benefit is better collaboration between analyzer plugins.
Even though the Mailhandler updates were not expected to take much time, it turned out to be the opposite. Fortunately, the long patch passed all the tests, and the issue Use DefaultAnalyzerResult instead of Mailhandler specific one was fixed.
I needed not only to replace the Mailhandler-specific analyzer result, but also to adopt the user context and the context concept in general. Each of the 5 Mailhandler analyzers was updated to “share” its result, and the non-standard features of each analyzer are exposed as contexts. Later, in the handler processing phase, handler plugins can access those contexts and extract the needed information.

The second part of the available work time was spent on user-interface issues, mostly improving Inmail. Mailhandler as a module is a set of Inmail plugins and configuration files, and in discussion with my mentors we agreed that improving the user interface of Inmail is effectively an improvement to Mailhandler too.
IMAP (Internet Message Access Protocol), a standard message protocol, is supported by Inmail. It is the main Inmail deliverer, and UI/UX improvements were really needed there. In order to use it, valid credentials are required, and one useful DX pattern is to validate those credentials via a separate “Test connection” button.
 

IMAP test connection button

In previous blog posts, I mentioned the power of the Monitoring module. It provides overall monitoring of a Drupal website via a nice UI. Since it is highly extensible, making Inmail support it would be a nice feature. Among the most important things to monitor was the quota of the IMAP plugin, which allows an administrator to see the plugin's "health state" and react in a timely manner. The relevant issue needs a few corrections, but it is close to being finished too.

Seeing that some of the issues mentioned above are still in the “Needs review” or “Needs work” state, I will spend additional time this week to finish them. The plan for the following week is to complete the remaining issues we started and to focus on the module documentation. The documentation consists of improving the plugin documentation (similarly to api.drupal.org), the Drupal.org project page, adding a GitHub README, installation manuals, code comments, demo article updates, and most likely everything related to describing the features of the module.

Milos Wed, 08/03/2016 - 17:54 Tags Google Summer of Code Drupal Open source Drupal Planet
Categories: Drupal

OSTraining: How to Validate Field Submissions in Drupal

3 August 2016 - 7:25am

An OSTraining member asked us how to validate fields in Drupal 8.

In this particular example, they wanted to make sure that every entry in a text field was unique. 

For this tutorial, you will need to download, install and enable the following modules.

Categories: Drupal

Drop Guard: 1..2..3 - continuous security! A business guide for companies & individuals

3 August 2016 - 6:00am

A lot of Drupal community members who are interested in or already use Drop Guard have been waiting for this ultimate guide on continuous security in Drupal. Using Drop Guard in a daily routine has improved update workflows and increased the efficiency of website support for all of our users. But there were still a lot of blind spots and unexplored capabilities, such as using Drop Guard as an "SLA catalyser". So we've put our heads together and figured out how to share this information with you in a professional and condensed way.

Drupal Drupal Planet Drupal Community Security Drupal shops Business
Categories: Drupal

GVSO Blog: [GSoC 2016: Social API] Week 10: A Social Post implementer

2 August 2016 - 10:42pm
[GSoC 2016: Social API] Week 10: A Social Post implementer

Week 10 is over, and we are only two weeks away from the Google Summer of Code final evaluation. During these ten weeks, we have been rebuilding the social networking ecosystem in Drupal. Thus, we created the Social API project and divided it into three components: Social Auth, Social Post and Social Widgets.

gvso Wed, 08/03/2016 - 01:42 Tags Drupal Drupal Planet GSoC 2016
Categories: Drupal

Galaxy: GSoC’ 16: Port Search Configuration module; coding week #10

2 August 2016 - 3:47pm

Google Summer of Code 2016 is in its final lap. I have been porting the Search Configuration module to Drupal 8 as part of this program, and I am in the last stage: fixing some of the reported issues and improving the port I have been working on for the past two months.
I have also finally set up my Drupal blog; I should have done it much earlier. A quick tip for those interested in creating a blog powered by Drupal and running it online: I used OpenShift to host mine, which gives you the freedom to run up to three applications for free. Select your favorite Drupal theme and start blogging.
Now, let's get back to my project status. If you would like a glance at my past activities on this port, please refer to my earlier posts.
Last week I concentrated mainly on fixing some of the issues reported in the module port. It was a wonderful learning experience: creating new issues, getting the patches reviewed, updating them where required, and finally the happiness of getting the patches committed is a different feeling. Moreover, I could also get suggestions from developers who are not directly part of my project, which I consider the real blessing of being part of this wonderful community.

The module is now shaping up well and moving ahead at the right pace. Last week I faced some issues with Twig, which I have been resolving. The module is currently available for testing. I also worked on some key aspects of the module, including namespace issues: some functions were not working as intended due to incorrect usage of PSR namespaces, and I was able to fix several of these. Basically, PSR namespaces help you reuse standard classes from the Drupal API framework. A class is imported with the 'use' keyword, and its location is declared via the namespace property.
For instance, if I want to use the Html::escape() function to convert special characters to HTML entities:
use Drupal\Component\Utility\Html;
Now, $role_options = array_map([Html::class, 'escape'], user_role_names());
Hope you got the idea. I could have written out the entire path of the escape function here, but by importing the namespace, I only need to define it once at the top, after which it can be used any number of times.
The user_role_names() function retrieves the names of the available roles. This is just an illustration of namespaces; it is an area well worth exploring further, so please do read more on it for better implementation of Drupal concepts.

In the coming days, I would like to test the various units of the ported module, fix any issues, and deliver a secure, user-friendly Search Configuration module for Drupal.
Hope all the students are enjoying the process and exploring Drupal concepts. Stay tuned for future updates on this porting process.

Tags: drupal-planet
Categories: Drupal

Cocomore: „Memories“ and more: These new features make Snapchat even more attractive for businesses

2 August 2016 - 3:00pm

Until recently, one of the biggest contradictions in social media was Snapchat and permanence. In early July, Snapchat put an end to this: the new "Memories" feature now allows users to save images. Beyond "Memories", Snapchat has also developed the platform in other areas. We show what opportunities the new changes offer for businesses.

Categories: Drupal

Janez Urevc: Release of various Drupal 8 media modules

2 August 2016 - 1:36pm
Release of various Drupal 8 media modules

Today we released new versions of many Drupal 8 media modules. This release is especially important for the Entity browser and Entity embed modules, since we released the last planned alpha versions of those modules. If no critical bugs are reported in the next two weeks, we'll release the first beta versions.

List of all released modules:

slashrsm Tue, 02.08.2016 - 22:36 Tags Drupal Media

Categories: Drupal

Pages