Planet Drupal


Anchal: GSoC'16 - Porting Comment Alter Module

20 August 2016 - 5:00pm

For the last 3 months I’ve been working on porting the Comment Alter module to Drupal 8 as my GSoC’16 project, under the mentorship of boobaa and czigor. This blog post is an excerpt of the work I did during this period. Weekly blog posts for the past 12 weeks can be accessed here.


Creating schema for the module: Implemented hook_schema() to store the old and new entity revision IDs, along with the parent entity type, as altered by a comment. The revision IDs are used to show the differences on comments. The parent entity type is used to delete entries from the comment_alter table when any revision of the parent entity is deleted, because in Drupal 8 the same revision IDs can occur across different entity types, so the entity type is needed to remove the entries for a particular parent entity.
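As a rough sketch of what such a schema declaration can look like (a hypothetical simplification, not the module's actual code; the column names are assumptions):

```php
<?php

/**
 * Implements hook_schema().
 *
 * Hypothetical sketch: stores, per comment, the parent entity type and the
 * old/new parent entity revision IDs the comment produced.
 */
function comment_alter_schema() {
  $schema['comment_alter'] = array(
    'description' => 'Tracks parent entity revisions altered by comments.',
    'fields' => array(
      'cid' => array(
        'type' => 'int',
        'not null' => TRUE,
        'description' => 'The comment ID.',
      ),
      'parent_entity_type' => array(
        'type' => 'varchar',
        'length' => 255,
        'not null' => TRUE,
        'description' => 'Needed because revision IDs are only unique per entity type.',
      ),
      'old_vid' => array(
        'type' => 'int',
        'not null' => TRUE,
        'description' => 'Parent entity revision ID before the comment.',
      ),
      'new_vid' => array(
        'type' => 'int',
        'not null' => TRUE,
        'description' => 'Parent entity revision ID created by the comment.',
      ),
    ),
    'primary key' => array('cid'),
  );
  return $schema;
}
```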

Using ThirdPartySettings to alter field config edit form - Implemented hook_form_FORM_ID_alter() for field_config_edit_form. This provides an interface to:

  1. Make any field comment alterable - Makes it possible to select any field we want to be alterable from a comment. Currently all of the Drupal core provided fields work well with the module.

  2. Hide alteration from diff - If the comment alterable option is enabled, then this option hides the differences shown over the comments. Instead of the differences, a link to the revision comparison is displayed for nodes. For the rest of the entities a “Changes are hidden” message is shown.

  3. Use latest revision - When a module like Workbench makes the current revision of an entity not the latest one, this option forces the Comment Alter module to use the latest revision instead of the current one. This option is present on the comment field settings.

  4. Adds Diff link on comments - Adds a Diff link on comments which takes us to the comparison page of two revisions, generated by the Diff module.

  5. Comment altering while replying to a comment - By default comment alterable fields can not be altered while replying to a comment. This option allows altering of fields even while replying to comments.
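A rough sketch of how per-field options like these can be stored and read through the ThirdPartySettings API on the field config entity (the setting key shown here is an assumption, not necessarily the module's actual key):

```php
<?php

use Drupal\field\Entity\FieldConfig;

// Hypothetical fragment, runs inside a Drupal installation.
// In the submit handler added by hook_form_FORM_ID_alter() for
// field_config_edit_form, persist the checkbox value on the field config:
$field = FieldConfig::loadByName('node', 'article', 'field_tags');
$field->setThirdPartySetting('comment_alter', 'comment_alterable', TRUE);
$field->save();

// Wherever the module needs to know whether a field is alterable:
$alterable = $field->getThirdPartySetting('comment_alter', 'comment_alterable', FALSE);
```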

Adding pseudo fields: Implemented hook_entity_extra_field_info() to add one pseudo field for each comment alterable field on respective comment form display, and one pseudo field on comment display to show the changes made at the comment. Using these pseudo fields the position of the comment alterable fields can be re-ordered in the comment form. This gives site-builders flexibility to alter the position of any alterable fields.
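A minimal sketch of that hook (machine names are illustrative and labels are plain strings for brevity; the real module derives one form pseudo field per alterable field rather than hard-coding them):

```php
<?php

/**
 * Implements hook_entity_extra_field_info().
 *
 * Hypothetical sketch with a single hard-coded alterable field.
 */
function comment_alter_entity_extra_field_info() {
  $extra = array();
  // One pseudo field per comment-alterable field on the comment form
  // display, so site builders can reorder the attached widget.
  $extra['comment']['comment']['form']['comment_alter_field_tags'] = array(
    'label' => 'Comment alter: Tags',
    'description' => 'Widget for the alterable parent entity field.',
    'weight' => 10,
  );
  // One pseudo field on the comment display to show the changes made.
  $extra['comment']['comment']['display']['comment_alter_diff'] = array(
    'label' => 'Comment changes',
    'description' => 'Differences introduced by this comment.',
    'weight' => 20,
  );
  return $extra;
}
```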

Attaching comment alterable fields’ widgets on comment form: Comment alterable field widgets are retrieved from the form display of the parent entity and attached to the comment form only after ensuring that there are no side effects. To support same-name fields on both the comment and the parent entity, the #parents property is provided so that the submitted field values for our alterable field widgets appear at a different location, not at the top level of $form_state->getValues(). All these added fields are re-orderable. Column information and old values are also stored on the form at this stage, to later check whether any changes were made to the comment alterable fields.
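The #parents trick can be sketched like this (a hypothetical fragment; the field and key names are illustrative):

```php
// Attach the parent entity's widget under its own #parents namespace so its
// submitted values land at $form_state->getValues()['comment_alter'][$field_name]
// instead of colliding with a same-named field on the comment itself.
$form['comment_alter_' . $field_name] = $widget;
$form['comment_alter_' . $field_name]['#parents'] = array('comment_alter', $field_name);
```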

Adding submit and validation callbacks for the altered comment form: First the submitted values are checked against the old values to see whether the values of the alterable fields changed at all. If they changed, the parent entity values are updated; this is done by building the parent entity form display, copying the altered field values into it, and saving the form. If the parent entity doesn’t support revisions, the parent entity is simply saved with the altered values. Otherwise a new revision is created, and the comment ID, the old and new revision IDs, and the parent entity type are stored in the comment_alter database table, which is used to show the differences on comments using the Diff module.
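The "did anything change?" check can be sketched as a plain comparison of the stored old values against the submitted ones (a hypothetical helper, not the module's actual code):

```php
<?php

/**
 * Hypothetical helper: TRUE when the submitted alterable field values differ
 * from the old values stored on the form.
 */
function comment_alter_values_changed(array $old_values, array $new_values) {
  // Field values are structured arrays of columns, so a loose inequality
  // comparison on the nested arrays is sufficient here.
  return $old_values != $new_values;
}
```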

Showing differences on comments: Using the Diff module and the comment_alter database table, the differences are shown on a particular comment. This is only possible if the parent entity supports revisions. The Diff module is used to get the differences between the two revisions, which are then rendered on comments in a table format along with some custom styling.

Adding PHPUnit tests: Added automated unit tests to check the functionality of the module for different field types and widgets. The tests are written for EntityTestRev entity types to keep them as generic as possible. This was the toughest part for me, as I was stuck in some places for quite a while: these tests took a lot of time to run, and debugging them is really hard. But in the end I’m happy that I was able to complete all the tests.

Screencast/Demo video: Created a demo video showing how the Comment Alter module works, along with a simple use case.

What’s left?

My mentors asked me to skip the Rules integration part because the Rules module doesn’t have a stable or beta release yet, only a first alpha release. So the Rules integration is postponed until a stable or beta release is available.

Some important links

Thank you!

Categories: Drupal

Roy Scholten: Getting something in the box

20 August 2016 - 2:53pm

First impressions matter. The first glance has a lot of impact on further expectations. Drupal core doesn’t do well there. As webchick points out, after installation the opening line is “you have no content”.

Yeah, thanks.

This empty canvas makes Drupal appear very limited in functionality. Which is the exact opposite of how Drupal is advertised (flexible, extensible, there’s a module for that!)

This is not news. The issue for adding sample content is over 10 years old. The image I posted earlier is from a core conversation 6 years ago. Eaton and moi presented on Snowman in Prague 3 years ago.

But now we do have Drupal 8, with essential features available right out of the box. We have a new release schedule that encourages shipping new features and improvements every 6 months. And we’re settling on a better process for first figuring out the path from initial idea to fleshed-out plan, before implementing that plan.

So let’s work together and come up with a plan for sample content in core. That means answering product-focused questions like:

  • Audience: who do we make this for?
  • Goals: what do these people want to achieve?
  • Features: which tools (features) to provide to make that possible?
  • Priorities: which of those tools are essential, and which are nice-to-haves?

But purpose first: audience and goals.

We’re always balancing product specifics with framework generics in what core provides. Pretty sure we can do something more opinionated than our current default “Article” and “Page” content types without painting ourselves into a corner.

We’ll discuss this topic during upcoming UX meetings and in the UX channel (get your automatic invite here).

Tags: drupal, onboarding, drupalplanet
Subtitle: Tabula rasa is not an effective onboarding strategy
Categories: Drupal

ImageX Media: Complete Content Marketing with Drupal

19 August 2016 - 5:11pm

At its most basic, content marketing is about maintaining or changing consumer behaviour. Or more elaborately, it’s “a marketing technique of creating and distributing valuable, relevant and consistent content to attract and acquire a clearly defined audience -- with the objective of driving profitable customer action.”

Categories: Drupal

ImageX Media: Want to be a Content Marketing Paladin? Then Automate Your Content Production Workflows with These (Free) Tools

19 August 2016 - 5:07pm

Flat-lining content experiences and withering conversion rates can be the kiss of death to almost any website. When content experiences deteriorate one issue seems to make an appearance time and time again: the amount of time and resources required to produce and manage content marketing initiatives. Among the many best practices and strategies that will accelerate growth includes the all-powerful move towards productivity automation. 

Categories: Drupal

Getting started with a Core Initiative

19 August 2016 - 2:00pm
Driesnote where GraphQL was featured. Picture from Josef Jerabek

After some time contributing to the Drupal project in different ways, I finally decided to step up and get involved in one of the Core Initiatives. I was on IRC when I saw an announcement about the JSON API / GraphQL initiative weekly meeting, and it seemed like a great chance to join. So, this blog post is about how you can get involved in a Core Initiative and, more specifically, how you can get involved in the JSON API / GraphQL initiative.

Continue reading…

Categories: Drupal

ImageX Media: Debugging Your Migrations in Drupal 8

19 August 2016 - 12:04pm

One of the most useful features of Drupal 8 is the migration framework in core, and there are already plenty of plugins to work with different sources that are available in contributed modules. 

Any code you write must eventually be debugged. As migrations can only be started with Drush, debugging can be a bit challenging. And it gets even more interesting when you develop your website in a Vagrant box.

In this tutorial, we will go through setting up Xdebug and PhpStorm to debug your migrations.

Categories: Drupal

OSTraining: How to Use the Drupal Group Module

19 August 2016 - 6:22am

In this tutorial, I'm going to explain how you can use the new Group module to organize your site's users. Group is an extremely powerful Drupal 8 module.

At the basic level, Group allows you to add extra permissions to content. 

At the more advanced level, this module is potentially a Drupal 8 replacement for Organic Groups.

Categories: Drupal

Mediacurrent: Friday 5: 5 Ways to Use Your Browser Developer Tools

19 August 2016 - 5:12am

TGIF! We hope the work week has treated you well.

Categories: Drupal

Nuvole: Optimal deployment workflow for Composer-based Drupal 8 projects

19 August 2016 - 4:20am
Considerations following our Drupal Dev Days Milan and Drupalaton presentations, and a preview of our DrupalCon training.

This post is an excerpt from the topics covered by our DrupalCon Dublin training: Drupal 8 Development - Workflows and Tools.

During the recent Nuvole presentations at Drupal Dev Days Milan 2016 and Drupalaton Hungary 2016 we received a number of questions on how to properly setup a Drupal 8 project with Composer. An interesting case where we discovered that existing practices are completely different from each other is: "What is the best way to deploy a Composer-based Drupal 8 project?".

We'll quickly discuss some options and describe what works best for us.

What to commit

You should commit:

  • The composer.json file: this is obvious when using Composer.
  • The composer.lock file: this is important since it will allow you to rebuild the entire codebase at the same status it was at a given point in the past.

The fully built site is commonly left out of the repository. But this also means that you need to find a way for rebuilding and deploying the codebase safely.
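One practical consequence is ignoring the build output in version control. A sketch of a .gitignore, assuming the common drupal-composer project layout (the paths are assumptions about your layout, not a universal recipe):

```gitignore
/vendor/
/web/core/
/web/modules/contrib/
/web/themes/contrib/
/web/profiles/contrib/
```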

Don't run Composer on the production server

You would clearly never run composer update on the production server, as you want to be sure that you will be deploying the same code you have been developing upon. For a while, we considered it to be enough to have Composer installed on the server and run composer install to get predictable results from the (committed) composer.lock file.

Then we discovered that this approach has a few shortcomings:

  • The process is not robust. A transient network error or timeout might result in a failed build, thus introducing uncertainty factors in the deploy scripts. Easy to handle, but still not desirable as part of a delicate step such as deployment.

  • The process will inevitably take long. If you run composer install in the webroot directly, your codebase will be unstable for a few minutes. This is orders of magnitude longer than a standard update process (i.e., running drush updb and drush cim) and it may affect your site availability. This can be circumvented by building in a separate directory and then symlinking or moving directories.

  • Even composer install can be unpredictable, especially on servers with restrictions or running different versions of Composer or PHP; in rare circumstances, a build may succeed but yield a different codebase. This can be mitigated by enforcing (e.g., through Docker or virtualization) a dev/staging environment that matches the production environment, but you are still losing control on a relatively lengthy process.

  • You have no way of properly testing the newly built codebase after building it and before making it live.

  • Composer simply does not belong in a production server. It is a tool with a different scope, unrelated to the main tasks of a production server.
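Several of the shortcomings above point to the same mitigation: build somewhere else, then switch. A minimal sketch of a symlinked-docroot swap (paths are illustrative; `ln -sfn` replaces the link in one quick step, though it is not strictly atomic on every system):

```shell
set -e
demo=$(mktemp -d)
cd "$demo"

# Each build lands in its own release directory; composer install, drush,
# etc. would run here, never inside the live docroot.
mkdir -p releases/20160819
echo "new build" > releases/20160819/index.php

# The web server's docroot is the "current" symlink; repointing it switches
# the whole codebase at once.
ln -sfn releases/20160819 current
cat current/index.php
```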

Where to build the codebase? CI to the rescue

After ruling out the production server, where should the codebase be built then?

Building it locally (i.e., using a developer's environment) can't work: besides the differences between the development and the production (--no-dev) setup, there is the risk of missing possible small patches applied to the local codebase. And a totally clean build is always necessary anyway.

We ended up using Continuous Integration for this task. The standard CI job operates after any push to a branch under active development: it performs a clean installation and runs automated tests. A second CI job builds the full codebase from the master branch and the composer.lock file. This allows sharing the build between developers, fast deployment to production through a tarball or rsync, and opportunities to actually test the upgrade for maximum safety (with a process like: automatically import the production database, run database updates, import the new configuration, and run a subset of automated tests to ensure that basic site functionality has no regressions).

Slides from our recent presentations, mostly focused on Configuration Management but covering part of this discussion too, are below.

Tags: Drupal Planet, Drupal 8, DrupalCon, Training
Attachments: Slides: Configuration Management in Drupal 8
Categories: Drupal

Jim Birch: Styling Views Exposed Filters Selects in Drupal 8

19 August 2016 - 2:20am

Styling the HTML <select> tag to appear similar in all the different browsers is a task unto itself. It seems on each new site, I find myself back visiting this post by Ivor Reić for a CSS-only solution. My task for today is to use this idea to theme an exposed filter on a view.

The first thing we need to do is add a div around the select. We can do this by overriding the select's Twig template from Drupal 8 core's Stable theme. Copy the file from

/core/themes/stable/templates/form/select.html.twig to


Then add the extra <div class="select-style"> and closing </div>, like so.
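For reference, the override might end up looking roughly like this (a simplified sketch based on the Stable theme's template, which renders the select element on a single line):

```twig
{# select.html.twig override: wrap the select so it can be styled. #}
<div class="select-style">
  <select{{ attributes }}>{{ options }}</select>
</div>
```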

Here is the LESS file that I compile, which includes Ivor's CSS plus some adjustments I added to even out the exposed filter. Each rule is commented, explaining what it does.

I will compile this into my final CSS and we are good to go. The display of the form and the select list should be pretty close to what I want across all modern browsers. Adjust as needed for your styles and design.

Read more

Categories: Drupal

Cocomore: The Central Data Hub of VDMA - Tango REST Interface (TRI)

18 August 2016 - 3:00pm

On the VDMA website (Association of German Machinery and Plant Engineering) various professional associations are specifically listed with their individual information. To provide each page with information from the Tango Backend, a specific interface has been developed: The so-called Tango REST interface. In the seventh part of our series “The Central Data Hub of VDMA” we will introduce this interface, its technical realization and its functions. 

Categories: Drupal

Zivtech: Staff Augmentation and Outsourced Training: Do You Need It?

18 August 2016 - 12:53pm
The goal of any company is to reduce costs and increase profit, especially when it comes to online and IT projects. When an IT undertaking is a transitional effort, it makes sense to consider staff augmentation and outsourcing.

Consider the marketing efforts of one worldwide corporation. Until recently, each brand and global region built and hosted its own websites independently, often without a unified coding and branding standard. The result was a disparate collection of high-maintenance, costly brand websites.

A Thousand Sites: One Goal

The organization has created nearly a thousand sites in total, but those sites were not developed at the same time or with the same goals. That’s a pain point. To solve this problem, the company decided to standardize all of its websites onto a single reference architecture, built on Drupal.

The objectives of the new proprietary platform include universal standards, a single platform that can accommodate regional feature sets, automated testing, and features sufficient for 95% of the use cases of the company’s websites globally.

While building a custom platform is a great step forward, it must then be implemented, and staff needs to be brought up to speed. To train staff on technical skills and platforms, often the best solution is to outsource the training to experts who step in, take over training and propel the effort forward quickly.

As part of an embedded team, an outsourced trainer is an adjunct team member, attending all of the scrum meetings, with a hand in the future development of the training materials.

Train Diverse Audiences

A company may invest a lot of money into developing custom features, and trainers become a voice for the company, showing people how easy it is to implement, how much it is going to help, and how to achieve complex tasks such as activation processes. The goal is to get people to adopt the features and platform. Classroom-style training allows for exercises on live sites and familiarity with specific features.

The Training Workflow

Trainers work closely with the business or feature owner to build a curriculum. It’s important to determine the business needs that inspired the change or addition.

Starting with an initial outline, trainers and owners work together. Following feedback, more information gets added to flesh it out. This first phase can take four to five sessions to get the training exactly right for the business owner. For features that follow, the process becomes streamlined. It's more intuitive because the trainer has gotten through all the steps and heard the pain points, but it’s important to always consult the product owner. Once there is a plan, the trainers rehearse the curriculum to see what works, what doesn’t work, what’s too long, and where they need to cut things.

Training Now & Future
Training sessions may be onsite or remote. It is up to the business to decide if attendance is mandatory. Some staffers may wish to attend just to keep up with where the business is going.

Sessions are usually two hours with a lot of time for Q&A. With trainings that are hands-on, it’s important to factor in time for technical difficulties and different levels of digital competence.

Remote trainings resemble webinars. Trainers also create videos to enable on demand trainings. They may be as simple as screencasts with a voiceover, but others have a little more work involved. Some include animations to demo tasks in a friendlier way before introducing a more static backend form. It is the job of the trainer to tease out what’s relevant to a wide net of audiences.

The training becomes its own product that can live on. The recorded sessions are valuable to onboard and train up future employees. Trainers add more value to existing products and satisfy management goals.
Categories: Drupal

Chromatic: Migrating (away) from the Body Field

18 August 2016 - 10:46am

As we move towards an ever more structured digital world of APIs, metatags, structured data, etc., and as the need for content to take on many forms across many platforms continues to grow, the humble “body” field is struggling to keep up. No longer can authors simply paste a word processing document into the body field, fix some formatting issues and call content complete. Unfortunately, that was the case for many years and consequently there is a lot of valuable data locked up in body fields all over the web. Finding tools to convert that content into useful structured data without requiring editors to manually rework countless pieces of content is essential if we are to move forward efficiently and accurately.

Here at Chromatic, we recently tackled this very problem. We leveraged the Drupal Migrate module to transform the content from unstructured body fields into re-usable entities. The following is a walkthrough.


On this particular site, thousands of articles from multiple sources were all being migrated into Drupal. Each article had a title and body field with all of the images in each piece of content embedded into the body as img tags. However, our new data model stored images as separate entities along with the associated metadata. Manually downloading all of the images, creating new image entities, and altering the image tags to point to the new image paths, clearly was not a viable or practical option. Additionally, we wanted to convert all of our images to lazy loaded images, so having programmatic control over the image markup during page rendering was going to be essential. We needed to automate these tasks during our migration.

Our Solution

Since we were already migrating content into Drupal, adapting Migrate to both migrate the content in and fully transform it all in one repeatable step was going to be the best solution. The Migrate module offers many great source classes, but none can use img elements within a string of HTML as a source. We quickly realized we would need to create a custom source class.

A quick overview of the steps we’d be taking:

  1. Building a new source class to find img tags and provide them as a migration source.
  2. Creating a migration to import all of the images found by our new source class.
  3. Constructing a callback for our content migration to translate the img tags into tokens that reference the newly created image entities.
Building the source class

Migrate source classes work by finding all potential source elements and offering them to the migration class in an iterative fashion. So we need to find all of the potential image sources and put them into an array that can be used as a source for a migration. Source classes also need to have a unique key for each potential source element. During a migration, the getNextRow() method is repeatedly called from the parent MigrateSource class until it returns FALSE. So let's start there and work our way back to the logic that will identify the potential image sources.

/**
 * Fetch the next row of data, returning it as an object.
 *
 * @return object|bool
 *   An object representing the image or FALSE when there is no more data
 *   available.
 */
public function getNextRow() {
  // Since our data source isn't iterative by nature, we need to trigger our
  // importContent() method, which builds a source data array and counts the
  // number of source records found, during the first call to this method.
  $this->importContent();
  if ($this->matchesCurrent < $this->computeCount()) {
    $row = new stdClass();
    // Add all of the values found in @see findMatches().
    $match = array_shift(array_slice($this->matches, $this->matchesCurrent, 1));
    foreach ($match as $key => $value) {
      $row->{$key} = $value;
    }
    // Increment the current match counter.
    $this->matchesCurrent++;
    return $row;
  }
  else {
    return FALSE;
  }
}

Next let's explore our importContent() method called above. First, it verifies that it hasn't already been executed, and if it has not, it calls an additional method called buildContent().

/**
 * Find and parse the source data if it hasn't already been done.
 */
private function importContent() {
  if (!$this->contentImported) {
    // Build the content string to parse for images.
    $this->buildContent();
    // Find the images in the string and populate the matches array.
    $this->findImages();
    // Note that the import has been completed and does not need to be
    // performed again.
    $this->contentImported = TRUE;
  }
}

The buildContent() method calls our contentQuery() method which allows us to define a custom database query object. This will supply us with the data to parse through. Then back in the buildContent() method we loop through the results and build the content property that will be parsed for image tags.

/**
 * Get all of the HTML that needs to be filtered for image tags and tokens.
 */
private function buildContent() {
  $query = $this->contentQuery();
  $content = $query->execute()->fetchAll();
  if (!empty($content)) {
    foreach ($content as $item) {
      // This builds one long string for parsing operations that can be done
      // on long strings without using too much memory. Here, we add fields
      // 'foo' and 'bar' from the query.
      $this->content .= $item->foo;
      $this->content .= $item->bar;

      // This builds an array of content for parsing operations that need to
      // be performed on smaller chunks of the source data to avoid memory
      // issues. It is only required if you run into parsing issues;
      // otherwise it can be removed.
      $this->contentArray[] = array(
        'title' => $item->post_title,
        'content' => $item->post_content,
        'id' => $item->id,
      );
    }
  }
}

Now we have the logic set up to iteratively return row data from our source. Great, but we still need to build an array of source data from a string of markup. To do that, we call our custom findImages() method from importContent(), which does some basic checks and then calls all of the image-locating methods.

We found it is best to create methods for each potential source variation, as image tags often store data in multiple formats. Some examples are pre-existing tokens, full paths to CDN assets, relative paths to images, etc. Each often requires unique logic to parse properly, so separate methods make the most sense.

/**
 * Finds the desired elements in the markup.
 */
private function findImages() {
  // Verify that content was found.
  if (empty($this->content)) {
    $message = 'No html content with image tags to download could be found.';
    watchdog('example_migrate', $message, array(), WATCHDOG_NOTICE, 'link');
    return FALSE;
  }
  // Find images where the entire source content string can be parsed at once.
  $this->findImageMethodOne();
  // Find images where the source content must be parsed in chunks.
  foreach ($this->contentArray as $id => $post) {
    $this->findImageMethodTwo($post);
  }
}

This example uses a regular expression to find the desired data, but you could also use PHP Simple HTML DOM Parser or the library of your choice. It should be noted that I opted for a regex example here to keep library-specific code out of my code sample. However, we would highly recommend using a DOM parsing library instead.

/**
 * This is an example of an image finding method.
 */
private function findImageMethodOne() {
  // Create a regex to look through the content.
  $matches = array();
  $regex = '/regex/to/find/images/';
  preg_match_all($regex, $this->content, $matches, PREG_SET_ORDER);

  // Set a unique row identifier from some captured pattern of the regex;
  // this would likely be the full path to the image. You might need to
  // perform cleanup on this value to standardize it, as the path
  // /foo/bar/image.jpg should not create three unique source records when
  // it also appears as an absolute or protocol-relative URL. Standardizing
  // the URL is key not just for avoiding duplicate source records; the URL
  // is also the ID value you will use in your destination class mapping
  // callback that looks up the resulting image entity ID from the data it
  // finds in the body field.
  $id = '';

  // Add to the list of matches after performing more custom logic to find
  // all of the correct chunks of data we need. Be sure to set every value
  // here that you will need when constructing your entity later.
  $this->matches[$id] = array(
    'url' => $src,
    'alt' => $alttext,
    'title' => $description,
    'credit' => $credit,
    'id' => $id,
    'filename' => $filename,
    'custom_thing' => $custom_thing,
  );
}

Importing the images

Now that we have our source class complete, let's import all of the image files into image entities.

/**
 * Import images.
 */
class ExampleImageMigration extends ExampleMigration {

  /**
   * {@inheritdoc}
   */
  public function __construct($arguments) {
    parent::__construct($arguments);
    $this->description = t('Creates image entities.');

    // Set the source.
    $this->source = new ExampleMigrateSourceImage();
    ...

The rest of the ExampleImageMigration is available in a Gist, but it has been omitted here for brevity. It is just a standard migration class that maps the array keys we put into the matches property of the source class to the fields of our image entity.

Transforming the image tags in the body

With our image entities created and the associated migration added as a dependency, we can begin sorting out how we will convert all of the image tags to tokens. This obviously assumes you are using tokens, but hopefully this will shed light on the general approach, which can then be adapted to your specific needs.

Inside our article migration (or whatever you happen to be migrating that has the image tags in the body field) we implement the callbacks() method on the body field mapping.

// Body.
$this->addFieldMapping('body', 'post_content')
  ->callbacks(array($this, 'replaceImageMarkup'));

Now let's explore the logic that replaces the image tags with our new entity tokens. Each of the patterns references below will likely correspond to unique methods in the ExampleMigrateSourceImage class that find images based upon unique patterns.

/**
 * Converts images into image tokens.
 *
 * @param string $body
 *   The body HTML.
 *
 * @return string
 *   The body HTML with converted image tokens.
 */
protected function replaceImageMarkup($body) {
  // Convert image tags that follow a given pattern.
  $body = preg_replace_callback(self::IMAGE_REGEX_FOO, 'fooCallbackFunction', $body);
  // Convert image tags that follow a different pattern.
  $body = preg_replace_callback(self::IMAGE_REGEX_BAR, 'barCallbackFunction', $body);
  return $body;
}

In the various callback functions we need to do several things:

  1. Alter the source string following the same logic we used when we constructed our potential sources in our source class. This ensures that the value passed in the $source_id variable below matches a value in the mapping table created by the image migration.
  2. Next we call the handleSourceMigration() method with the altered source value, which will find the destination id associated with the source id.
  3. We then use the returned image entity id to construct the token and replace the image markup in the body data.
$image_entity_id = self::handleSourceMigration('ExampleImageMigration', $source_id);

Implementation Details

Astute observers will notice that we called self::handleSourceMigration(), not $this->handleSourceMigration. This is due to the fact that the handleSourceMigration() method defined in the Migrate class is not static and uses $this within the body of the method. Callback functions are called statically, so the reference to $this is lost. Additionally, we can't instantiate a new Migration class object to get around this, as the Migrate class is an abstract class. You also cannot pass the current Migrate object into the callback function, due to the Migrate class not supporting additional arguments for the callbacks() method.

Thus, we are stuck trying to implement a singleton or global variable that stores the current Migrate object, or duplicating the handleSourceMigration() method and making it work statically. We weren’t a fan of either option, but we went with the latter. Other ideas or reasons to choose the alternate route are welcome!

If you go the route we chose, these are the lines you should remove from the handleSourceMigration() method when you duplicate it from the Migrate class into one of your custom classes:

- // If no destination ID was found, give each source migration a chance to
- // create a stub.
- if (!$destids) {
-   foreach ($source_migrations as $source_migration) {
-     // Is this a self reference?
-     if ($source_migration->machineName == $this->machineName) {
-       if (!array_diff($source_key, $this->currentSourceKey())) {
-         $destids = array();
-         $this->needsUpdate = MigrateMap::STATUS_NEEDS_UPDATE;
-         break;
-       }
-     }
-     // Break out of the loop if a stub was successfully created.
-     if ($destids = $source_migration->createStubWrapper($source_key, $migration)) {
-       break;
-     }
-   }
- }

Before we continue, let's do a quick recap of the steps along the way.

  1. We made an iterative source of all images from a source data string by creating the ExampleMigrateSourceImage class that extends the MigrateSource class.
  2. We then used ExampleMigrateSourceImage as the migration source class in the ExampleImageMigration class to import all of the images as new structured entities.
  3. Finally, we built our "actual" content migration and used the callbacks() method on the body field mapping in conjunction with the handleSourceMigration() method to convert the existing image markup to entity based tokens.
The end result

With all of this in place, you can simply sit back and watch your migrations run! Of course, before that, you get the joy of running them countless times and facing edge cases with malformed image paths, broken markup, new image sources you were never told about, and so on. At the end of the day, though, you are left with shiny new image entities full of metadata that can be searched, sorted, filtered, and re-used! Thanks to token rendering (if you go that route), you also gain full control over how your img tags are rendered, which greatly simplifies implementing lazy-loading or responsive images. Most importantly, you have applied structure to your data, and you are now ready to transform and adapt your content for any challenge that is thrown your way!

Categories: Drupal

Jeff Geerling's Blog: Increase the Guzzle HTTP Client request timeout in Drupal 8

18 August 2016 - 9:56am

During some migration operations on a Drupal 8 site, I needed to make an HTTP request that took > 30 seconds to return all the data... and when I ran the migration, I'd end up with exceptions like:

Migration failed with source plugin exception: Error message: cURL error 28: Operation timed out after 29992 milliseconds with 2031262 out of 2262702 bytes received (see

The solution, it turns out, is pretty simple! Drupal's \Drupal\Core\Http\ClientFactory is the default way that plugins like Migrate's HTTP fetching plugin get a Guzzle client to make HTTP requests (though you could swap things out if you want via services.yml), and in the code for that factory, there's a line after the defaults (where the 'timeout' => 30 is defined) like:
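Since ClientFactory merges the 'http_client_config' array from Settings over those defaults, one way to raise the timeout is a single line in settings.php. This is a sketch: the 60-second value is an arbitrary example, not a recommendation.

```php
// In sites/default/settings.php: override the default Guzzle options that
// \Drupal\Core\Http\ClientFactory merges over its built-in defaults.
$settings['http_client_config']['timeout'] = 60;
```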

Categories: Drupal

Valuebound: Get your Drupal8 Development platform ready with Drush8!

18 August 2016 - 6:17am

As we all know, we need Drush 8 for a Drupal 8 development platform. I have tried installing Drush 8 using Composer, but it sometimes turns out to be a disaster, especially when you try to install it on a DigitalOcean droplet running Ubuntu 16.04.

I have faced this issue repeatedly over the last few months while getting Drupal 8 development platforms ready with Drush 8, so I decided to find a fix once and for all. What finally worked is the following sequence of commands.

cd ~
php -r "readfile('');" > drush
chmod +x drush
sudo mv drush /usr/bin
drush init

If you are…

Categories: Drupal

Drupal core announcements: We can add big new things to Drupal 8, but how do we decide what to add?

18 August 2016 - 5:48am

Drupal 8 introduced Semantic Versioning, which in practice means three-part version numbers. The current release is Drupal 8.1.8. The last number is incremented for bug fixes, while the middle number is incremented when we add new features in a backwards-compatible way. That allows us to add big new things to Drupal 8 while keeping it compatible with all your themes and modules. We have already successfully added new modules such as BigPipe and Place Block.

But how do we decide what will get into core? Should people come up with ideas, build them, and only then find out whether they will be added to core? No. Looking for feedback only at the end is a huge waste of time: the idea may not be a good fit for core, or it may clash with another improvement already in the works. So how do we get feedback earlier?

We held two well-attended core conversations at the last DrupalCon in New Orleans, titled "The potential in Drupal 8.x and how to realize it" and "Approaches for UX changes big and small", both of which discussed a more agile approach to avoid wasting time.

The proposal is to separate the ideation and prototyping process from implementation. Within implementation, the use of experimental modules makes the process of perfecting a module more granular, and we are already actively using that approach. The ideation process, on the other hand, still needs to be better defined. That is where we need your feedback now.

See the issue for discussing this. Looking forward to your feedback there.

Categories: Drupal

Mediacurrent: How Drupal won an SEO game without really trying

18 August 2016 - 5:23am

At Mediacurrent we architected and built a Drupal site for a department of a prominent U.S. university several years ago. As part of maintaining and supporting the site over the years, we have observed how well it has performed in search engine rankings, often out-performing other sites across campus built on other platforms.

Categories: Drupal

KnackForge: Drupal Commerce - PayPal payment was successful but order not completed

18 August 2016 - 3:00am

Most of us use PayPal as a payment gateway for our eCommerce sites. Zero upfront cost, no maintenance fee, and good API availability and documentation make it easy for anyone to get started. At times, though, online references offer documentation that is outdated or doesn't apply to us because of account type (Business/Individual), the account holder's country, etc. We had a tough time when we wanted to set up Auto Return to a Drupal website.

Categories: Drupal

Unimity Solutions Drupal Blog: Video Annotations: A Powerful and Innovative Tool for Education

17 August 2016 - 11:51pm

According to John J. Medina, a famous molecular biologist, “Vision trumps all other senses.” The human mind remembers dynamic pictures more readily than spoken words or long texts. Advances in multimedia have enabled teachers to bring visual representations of content into the classroom.

Categories: Drupal

Drupalize.Me: Learn by Mentoring at DrupalCon

17 August 2016 - 11:37pm

DrupalCon is a great opportunity to learn all kinds of new skills and grow professionally. During the three days of the main conference in Dublin (September 27–29) there will be sessions on just about everything related to Drupal that you could want. One amazing opportunity you may not be aware of, though, is the Mentored Sprint on Friday, September 30th. This is a great place for newcomers to learn the ropes of our community and how to contribute back. What is less talked about is the chance to be a mentor.

Categories: Drupal