Agaric Collective: Migrating data into Drupal subfields

Planet Drupal - 4 August 2019 - 5:30pm

In the previous entry, we learned how to use process plugins to transform data between source and destination. Some Drupal fields have multiple components. For example, formatted text fields store the text to display and the text format to apply. Image fields store a reference to the file, the alternative and title texts, and the width and height. The Migrate API refers to a field’s components as subfields. Today we will learn how to migrate into them and how to find out which subfields are available.

Getting the example code

Today’s example will consist of migrating data into the `Body` and `Image` fields of the `Article` content type that are available out of the box. This assumes that Drupal was installed using the `standard` installation profile. As in previous examples, we will create a new module and write a migration definition file to perform the migration. The code snippets will be compact to focus on particular elements of the migration. The full code snippet is available at. The module name is `UD Migration Subfields` and its machine name is `ud_migrations_subfields`. The `id` of the example migration is `udm_subfields`. Refer to this article for instructions on how to enable the module and run the migration.

source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      name: 'Michele Metts'
      profile: 'freescholar on'
      photo_url: ''
      photo_description: 'Photo of Michele Metts'
      photo_width: '587'
      photo_height: '657'

Only one record is presented to keep the snippet short, but more exist. In addition to having a unique identifier, each record includes a name, a short profile, and details about the image.

Migrating formatted text

The `Body` field is of type `Text (formatted, long, with summary)`. This type of field has three components: the full text (value) to present, a summary text, and a text format. The Migrate API allows you to write to each component separately by defining subfield targets. The next code snippet shows how to do it:

process:
  field_text_with_summary/value: source_value
  field_text_with_summary/summary: source_summary
  field_text_with_summary/format: source_format

The syntax to migrate into subfields is the machine name of the field and the subfield name separated by a slash (/). Then, a colon (:), a space, and the value. You can set the value to a source column name for a verbatim copy or use any combination of process plugins. It is not required to migrate into all subfields. Each field determines what components are required, so it is possible that not all subfields are set. In this example, only the value and text format will be set.

process:
  body/value: profile
  body/format:
    plugin: default_value
    default_value: restricted_html

The `value` subfield is set to the `profile` source column. As you can see in the first snippet, it contains HTML markup: an `a` tag, to be precise. Because we want the tag to be rendered as a link, a text format that allows such a tag needs to be specified. There is no information about text formats in the source, but Drupal comes with a couple we can choose from. In this case, we use the `Restricted HTML` text format. Note that the `default_value` plugin is used and set to `restricted_html`. When setting text formats, it is necessary to use their machine names. You can find them on the configuration page for each text format. For `Restricted HTML` that is /admin/config/content/formats/manage/restricted_html.

Note: Text formats are a whole different subject that even has security implications. To keep the discussion on topic, we will only give some recommendations. When you need to migrate HTML markup, you need to know which tags appear in your source, decide which ones you want to allow in Drupal, and select a text format that accepts what you have whitelisted and filters out dangerous tags like `script`. As a general rule, you should avoid setting the `format` subfield to use the `Full HTML` text format.

Migrating images

There are different approaches to migrating images. Today, we are going to use the Migrate Files module. It is important to note that Drupal treats images as files with extra properties and behavior. Any approach used to migrate files can be adapted to migrate images.

process:
  field_image/target_id:
    plugin: file_import
    source: photo_url
    reuse: TRUE
    id_only: TRUE
  field_image/alt: photo_description
  field_image/title: photo_description
  field_image/width: photo_width
  field_image/height: photo_height

When migrating any field, you have to use its machine name in the mapping section. For the `Image` field, the machine name is `field_image`. Knowing that, you can set each of its subfields:

  • `target_id` stores an integer number which Drupal uses as a reference to the file.
  • `alt` stores a string that represents the alternative text. Always set one for better accessibility.
  • `title` stores a string that represents the title attribute.
  • `width` stores an integer number which represents the width in pixels.
  • `height` stores an integer number which represents the height in pixels.

For the `target_id`, the plugin `file_import` is used. This plugin requires a `source` configuration value with a URL to the file. In this case, the `photo_url` column from the source section is used. The `reuse` flag indicates that if a file with the same location and name exists, it should be used instead of downloading a new copy. When working on migrations, it is common to run them over and over until you get the expected results. Using the `reuse` flag will avoid creating multiple references or copies of the image file, depending on the plugin configuration. The `id_only` flag is set so that the plugin only returns the file identifier used by Drupal instead of an entity reference array. This is done because each subfield is being set manually. For the rest of the subfields (`alt`, `title`, `width`, and `height`) the value is a verbatim copy from the source.

Note: The Migrate Files module offers another plugin named `image_import`. That one allows you to set all the subfields as part of the plugin configuration. An example of its use will be shown in the next article. This example uses the `file_import` plugin to emphasize the configuration of the image subfields.
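For reference, a hypothetical sketch of what the `image_import` approach might look like follows. The subfield configuration keys here are assumptions for illustration; verify them against the Migrate Files module documentation before use:

```yaml
process:
  field_image:
    plugin: image_import        # provided by the Migrate Files module
    source: photo_url
    reuse: TRUE
    # The following keys are assumed for illustration; check the module docs.
    alt: photo_description
    title: photo_description
```

The idea is that all subfields are set in a single mapping as part of the plugin configuration instead of one mapping per subfield.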

Which subfields are available?

Some fields have many subfields. Address fields, for example, have 13 subfields. How can you know which ones are available? The answer is found in the class that provides the field type. Once you find the class, look for the `schema` method. The subfields are contained in the `columns` array of the value returned by the `schema` method. Let’s see some examples:

  • The `Text (plain)` field is provided by the StringItem class.
  • The `Number (integer)` field is provided by the IntegerItem class.
  • The `Text (formatted, long, with summary)` is provided by the TextWithSummaryItem class.
  • The `Image` field is provided by the ImageItem class.

The `schema` method defines the database columns used by the field to store its data. When migrating into subfields, you are actually migrating into those particular database columns. Any restriction set by the database schema needs to be respected. That is why you do not use units when migrating width and height for images. The database only expects an integer number representing the corresponding values in pixels. Because of object-oriented practices, sometimes you need to look at the parent class to know all the subfields that are available.

Another option is to connect to the database and check the table structures. For example, the `Image` field stores its data in the `node__field_image` table. Among others, this table has five columns named after the field’s machine name and the subfield:

  • field_image_target_id
  • field_image_alt
  • field_image_title
  • field_image_width
  • field_image_height

Looking at the source code or the database schema is arguably not straightforward. This information is included for reference for those who want to explore the Migrate API in more detail. You can look at existing migration examples to see which subfields are set. I might even provide a list in a future blog post. ;-)

Tip: You can use Drupal Console for code introspection and analysis of database table structures. Also, many plugins are defined by classes whose names end with the string `Item`. You can use your IDE’s search feature to find the class, using the name of the field as a hint.

Default subfields

Every Drupal field has at least one subfield. For example, `Text (plain)` and `Number (integer)` define only the `value` subfield. The following code snippets are equivalent:

process:
  field_string/value: source_value_string
  field_integer/value: source_value_integer

process:
  field_string: source_value_string
  field_integer: source_value_integer

In examples from previous days, no subfield has been manually set, but Drupal knows what to do. As we have mentioned, the Migrate API offers syntactic sugar to write shorter migration definition files. This is another example. You can safely skip the default subfield and manually set the others as needed. For `File` and `Image` fields, the default subfield is `target_id`. How does the Migrate API know what subfield is the default? You need to check the code again.
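For instance, since `target_id` is the default subfield of `File` and `Image` fields, the following two mappings would be equivalent (the `photo_file_id` source column is made up for illustration):

```yaml
process:
  field_image/target_id: photo_file_id

process:
  field_image: photo_file_id
```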

The default subfield is determined by the return value of the `mainPropertyName` method of the class providing the field type. Again, object-oriented practices might require looking at the parent classes to find this method. In the case of the `Image` field, it is provided by ImageItem, which extends FileItem, which extends EntityReferenceItem. It is the latter that defines `mainPropertyName`, returning the string `target_id`.

What did you learn in today’s blog post? Were you aware of the concept of subfields? Did you ever wonder what are the possible destination targets (subfields) for each field type? Did you know that the Migrate API finds the default subfield for you? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your colleagues.

This blog post series, cross-posted at as well as here on, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Read more and discuss at

Categories: Drupal

Feeds Personify

New Drupal Modules - 4 August 2019 - 11:33am

This module provides a feeds fetcher and a parser for importing content from Personify.

Categories: Drupal

Video Game Deep Cuts: Fortnite World Cup's Fire-y Youngblood

Social/Online Games - Gamasutra - 3 August 2019 - 11:47am

This week's roundup includes Epic's Fortnite World Cup, analyses of Nintendo's new Fire Emblem and Wolfenstein: Youngblood, as well as time loops in games, Candyland, Elsinore and lots more besides. ...

Categories: Game Theory & Design

Agaric Collective: Using process plugins for data transformation in Drupal migrations

Planet Drupal - 3 August 2019 - 10:09am

In the previous entry, we wrote our first Drupal migration. In that example, we copied verbatim values from the source to the destination. More often than not, the data needs to be transformed in some way or another to match the format expected by the destination or to meet business requirements. Today we will learn more about process plugins and how they work as part of the Drupal migration pipeline.

Syntactic sugar

The Migrate API offers a lot of syntactic sugar to make it easier to write migration definition files. Field mappings in the process section are an example of this. Each of them requires a process plugin to be defined. If none is manually set, then the get plugin is assumed. The following two code snippets are equivalent in functionality.

process:
  title: creative_title

process:
  title:
    plugin: get
    source: creative_title

The get process plugin simply copies a value from the source to the destination without making any changes. Because this is a common operation, get is considered the default. There are many process plugins provided by Drupal core and contributed modules. Their configuration can be generalized as follows:

process:
  destination_field:
    plugin: plugin_name
    config_1: value_1
    config_2: value_2
    config_3: value_3

The process plugin is configured within an extra level of indentation under the destination field. The plugin key is required and determines which plugin to use. Then, a list of configuration options follows. Refer to the documentation of each plugin to know what options are available. Some configuration options will be required while others will be optional. For example, the concat plugin requires a source, but the delimiter is optional. An example of its use appears later in this entry.

Providing default values

Sometimes, the destination requires a property or field to be set, but that information is not present in the source. Imagine you are migrating nodes. As we have mentioned, it is recommended to write one migration file per content type. If you know in advance that for a particular migration you will always create nodes of type Basic page, then it would be redundant to have a column in the source with the same value for every row. The data might not be needed. Or it might not exist. In any case, the default_value plugin can be used to provide a value when the data is not available in the source.

source: ...
process:
  type:
    plugin: default_value
    default_value: page
destination:
  plugin: 'entity:node'

The above example sets the type property for all nodes in this migration to page, which is the machine name of the Basic page content type. Do not confuse the name of the plugin with the name of its configuration property as they happen to be the same: default_value. Also note that because a (content) type is manually set in the process section, the default_bundle key in the destination section is no longer required. You can see the latter being used in the example of writing your Drupal migration blog post.
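For comparison, the same result could be achieved by skipping the `type` mapping and relying on `default_bundle` in the destination section instead:

```yaml
source: ...
process: ...
destination:
  plugin: 'entity:node'
  default_bundle: page
```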

Concatenating values

Consider the following migration request: you have a source listing people with first and last name in separate columns. Both are capitalized. The two values need to be put together (concatenated) and used as the title of nodes of type Basic page. The character casing needs to be changed so that only the first letter of each word is capitalized. If there is a need to display them in all caps, CSS can be used for presentation. For example: FELIX DELATTRE would be transformed to Felix Delattre.

Tip: Question business requirements when they might produce undesired results. For instance, if you were to implement this feature as requested DAMIEN MCKENNA would be transformed to Damien Mckenna. That is not the correct capitalization for the last name McKenna. If automatic transformation is not possible or feasible for all variations of the source data, take notes and perform manual updates after the initial migration. Evaluate as many use cases as possible and bring them to the client’s attention.

To implement this feature, let’s create a new module ud_migrations_process_intro, create a migrations folder, and write a migration definition file called udm_process_intro.yml inside it. Follow the instructions in this entry to find the proper location and folder structure or download the sample module from. It is the one named UD Process Plugins Introduction, with machine name udm_process_intro. For this example, we assume a Drupal installation using the standard installation profile, which comes with the Basic page content type. Let’s see how to handle the concatenation of first and last name.

id: udm_process_intro
label: 'UD Process Plugins Introduction'
source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      first_name: 'FELIX'
      last_name: 'DELATTRE'
    - unique_id: 2
      first_name: 'BENJAMIN'
      last_name: 'MELANÇON'
    - unique_id: 3
      first_name: 'STEFAN'
      last_name: 'FREUDENBERG'
  ids:
    unique_id:
      type: integer
process:
  type:
    plugin: default_value
    default_value: page
  title:
    plugin: concat
    source:
      - first_name
      - last_name
    delimiter: ' '
destination:
  plugin: 'entity:node'

The concat plugin can be used to glue together an arbitrary number of strings. Its source property contains an array of all the values that you want to put together. The delimiter is an optional parameter that defines a string to add between the elements as they are concatenated. If not set, there will be no separation between the elements in the concatenated result. This plugin has an important limitation. You cannot use string literals as part of what you want to concatenate. For example, joining the string Hello with the value of the first_name column. All the values to concatenate need to be columns in the source or fields already available in the process pipeline. We will talk about the latter in a future blog post.
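A common workaround for this limitation is to define constants in the source section, which the Migrate API exposes as regular source properties that concat can consume. A sketch, assuming a source that also provides a first_name column:

```yaml
source:
  constants:
    greeting: 'Hello'      # literal value exposed as constants/greeting
  # ... rest of the source configuration ...
process:
  title:
    plugin: concat
    source:
      - constants/greeting
      - first_name
    delimiter: ' '
```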

To execute the above migration, you need to enable the ud_migrations_process_intro module. Assuming you have Migrate Run installed, open a terminal, switch directories to your Drupal docroot, and execute the following command: drush migrate:import udm_process_intro. Refer to this entry if the migration fails. If it works, you will see three basic pages whose titles contain the names of some of my Drupal mentors. #DrupalThanks

Chaining process plugins

Good progress so far, but the feature has not been fully implemented. You still need to change the capitalization so that only the first letter of each word in the resulting title is uppercase. Thankfully, the Migrate API allows chaining of process plugins. This works similarly to unix pipelines in that the output of one process plugin becomes the input of the next one in the chain. When the last plugin in the chain completes its transformation, the return value is assigned to the destination field. Let’s see this in action:

id: udm_process_intro
label: 'UD Process Plugins Introduction'
source: ...
process:
  type: ...
  title:
    - plugin: concat
      source:
        - first_name
        - last_name
      delimiter: ' '
    - plugin: callback
      callable: mb_strtolower
    - plugin: callback
      callable: ucwords
destination: ...

The callback process plugin passes a value to a PHP function and returns its result. The function to call is specified in the callable configuration option. Note that this plugin normally expects a source option containing a column from the source or a value from the process pipeline. That value is sent as the first argument to the function. Because we are using the callback plugin as part of a chain, the source is assumed to be the output of the previous plugin. Hence, there is no need to define a source. So, we concatenate the columns, make them all lowercase, and then capitalize each word.
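When the callback plugin is the first (or only) plugin for a field, the source does need to be set explicitly. A minimal sketch, assuming a hypothetical creative_title source column:

```yaml
process:
  title:
    plugin: callback
    callable: ucwords
    source: creative_title
```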

Relying on direct PHP function calls should be a last resort. Better alternatives include writing your own process plugins, which encapsulate your business logic separately from the migration definition. The callback plugin comes with its own limitations. For example, you cannot pass extra parameters to the callable function. It will receive the specified value as its first argument and nothing else. In the above example, we could combine the calls to mb_strtolower() and ucwords() into a single call to mb_convert_case($source, MB_CASE_TITLE) if passing extra parameters were allowed.

Tip: You should have a good understanding of your source and destination formats. In this example, one of the values we want to transform is MELANÇON. Because of the cedilla (ç), using strtolower() is not adequate in this case since it would leave that character uppercase (melanÇon). Multibyte string functions (mb_*) are required for proper transformation. ucwords() is not one of them and would present similar issues if the first letters of the words are special characters. Attention should also be given to the character encoding of the tables in your destination database.

Technical note: mb_strtolower is a function provided by the mbstring PHP extension. It does not come enabled by default, or you might not have it installed at all. In those cases, the function would not be available when Drupal tries to call it. The following error is produced when trying to call a function that is not available: The "callable" must be a valid function or method. For Drupal and this particular function, that error would never be triggered, even if the extension is missing. That is because Drupal core depends on some Symfony packages which in turn depend on the symfony/polyfill-mbstring package. The latter provides a polyfill for mb_* functions that has been leveraged since version 8.6.x of Drupal.

What did you learn in today’s blog post? Did you know that syntactic sugar allows you to write shorter plugin definitions? Were you aware of process plugin chaining to perform multiple transformations over the same data? Had you considered character encoding on the source and destination when planning your migrations? Are you making your best effort to avoid the callback process plugin? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your colleagues.

This blog post series, cross-posted at as well as here on, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Read more and discuss at

Categories: Drupal

Backup and Migrate with S3

New Drupal Modules - 3 August 2019 - 6:11am
Categories: Drupal

Drupal 8 Examples

New Drupal Modules - 3 August 2019 - 1:53am

Code examples for Drupal 8.

Covers major sub-systems of Drupal 8 feature with examples code.

How to use correctly in Drupal 8 projects.

Categories: Drupal

Video Game Deep Cuts: Fortnite World Cup's Fire-y Youngblood - by Simon Carless Blogs - 2 August 2019 - 11:29pm
This week's roundup includes a couple of looks at Epic's Fortnite World Cup, analyses of Nintendo's new Fire Emblem game and Wolfenstein: Youngblood, as well as time loops in games, Candyland, Elsinore and lots more besides.
Categories: Game Theory & Design

Delete files

New Drupal Modules - 2 August 2019 - 7:13pm

Delete files easily.

Categories: Drupal

Centarro: Decoupled Days 2019

Planet Drupal - 2 August 2019 - 5:01pm

Decoupled Days 2019 last month, the third edition of the conference, was fantastic. I had the privilege to speak and attend last year, as well. The conference has quickly risen to be one of my favorite conferences of the year.

Not familiar with Decoupled Days? Spawned in the “Hallway Track” of DrupalCon between its founders, the conference originated as Decoupled Drupal Days in 2017. Last year saw the phasing out of the word “Drupal” as the conference became focused on decoupling in general, not just Drupal. That is one reason it has quickly become a favorite event. It is an engineering and design conference. The act of decoupling in a system requires specific system design and presents engineering challenges. The organizers identify it as:

The only conference on the future of CMS, headless CMS, and decoupled CMS.

Categories: Drupal

Zoom course object

New Drupal Modules - 2 August 2019 - 1:52pm

This module enables Zoom meetings to be used as Course objects.

The object allows for creation of new Zoom meetings or using existing ones.

A lightweight API is included.

Users attending these meetings will be marked complete in the object if they attend for the configured minimum amount of time.

Getting started

  1. Todo
Categories: Drupal

An in-game casino gave GTA Online its biggest player surge since launch

Social/Online Games - Gamasutra - 2 August 2019 - 12:23pm

Developer Rockstar Games says that ribbon-cutting has caused players to flock to the game in a way they hadn’t seen since Online’s launch in 2013. ...

Categories: Game Theory & Design

Simple XML News Sitemap

New Drupal Modules - 2 August 2019 - 12:04pm

This module adds a new sitemap type which is compliant with the guidelines for Google News

It is an extension to the Simple XML Sitemap module which is required.

Categories: Drupal


New Drupal Modules - 2 August 2019 - 11:58am

Applies the Select2 library to all select fields on the site similar to the Chosen module.

Categories: Drupal

Cyberpunk Red Jumpstart Kit

New RPG Product Reviews - 2 August 2019 - 11:54am
Publisher: R. Talsorian Games Inc.
Rating: 5
There's a whole heap of goodies in the download or in the box, depending on which version you have. Apparently the box also contains dice, otherwise there's no difference in the contents. Two books, reference cards, maps, pregenerated characters and some standees... everything you need to leap back into the near future.

Where to start? The 45-page Rulebook seems promising. This begins with The View From the Edge, an essay that sets out the stall of what it means to be cyberpunk. This paints the picture from the earliest days, when cyberpunk was the province of science-fiction authors, through the fictional alternate history that permeates the game from its first incarnation as *Cyberpunk 2013* and then *Cyberpunk 2020* - don't worry if you are not familiar with these games, you'll get the idea. However, the Fourth Corporate War has cut a swathe through everything, and much of what the cyberpunk of 2020 thought was normal is no more. Even Night City was a casualty, a nuke apparently. We're now in 2045, but there's still a place for the hip, freerolling, wired-in cyberpunk, operating more on the wrong side of the law than the right.

A brief precis of what a role-playing game is, for those who don't know, and a glossary of streetslang - you gotta sound right, choombatta - and then on to section 2: Soul and the New Machine. This takes a closer look at the philosophy, the look and feel, of cyberpunk... and reminds us that, a major corporate war and the use of nuclear weapons later, there are few if any vestiges of civilization that would be familiar to people in society today. Players need to remember that it's personal, style over substance, attitude is everything, and you need to live on the edge. Oh, and rules are there to be broken. Then there's a look at Roles (read: character classes). There are nine: Rockerboys, Solos, Netrunners, Execs, Techs, Lawmen, Fixers, Medias, and Nomads (not all are covered in the Jumpstart). Next an overview of the character sheet, followed by details of what everything means in terms of playing the character game mechanics-wise. The skills used for the pre-generated characters are explained.

Next up, 3: Lifepath. This is the system for generating a background for a character, and even with pre-generated ones there is scope for putting your own spin on the character that you are going to play. At each stage you may choose an option or roll for it. There's an example of how to do this, along with explanations of what this means for the player... and how it provides a bit of fun for the GM as well. All that backstory ready to exploit!

Then comes 4: Putting the Cyber into the Punk. This looks at the uses and abuses of cyberware, how to be stylish about your enhancements, and how the end-point of the exercise is survival - yours. With a few scary notes on cyberpsychosis, there are details of the various types of cybernetic enhancement you can have. Just remember: it's as much about fashion as it is about utility. We then move on to 5: Getting it Down. This covers how you actually play the game, when its time to use game mechanics rather than role-play to advance the plot. A lot covers combat because, let's face it, that's when you need to get the dice out... and of course it's a part of the game that most people enjoy. There's also a bit about task resolution, especially opposed tasks, when you want to use one of your skills to accomplish something.

Next, my favourite bit: 6: Netrunning in the Time of the Red. This explains the gear you need to go netrunning and how to use it, both in-game and in terms of game mechanics. This includes getting into brawls in the Net, which can be as deadly as doing so in the meat world. There are also times the Netrunner will have to go along with the rest of the infiltration team and brave the dangers of that sort of combat as well. This ends with an example Netrun, then back to real-world combat with 7: Thursday Night Throwdown, a variant on the original Friday Night Firefight rules. It's all an aid to streamline combat, to give you all the thrills without getting bogged down in the minutae of the rules. An alternate to brawling, the use of Reputation as a competitive sport, is also covered here. Finally there are summary cards of each of the pregenerated characters.

Speaking of pregenerated characters, there are 6 of them, with rather silly names - Torch the Tech, or Grease the Fixer... well, you may change those to something a bit more sensible if you prefer. Each comes with a page of backstory, character portrait and a full character sheet, as separate cards to give to each player.

The second book (or PDF) provided is the World Book. This provides 50-odd pages of background, setting, and adventure, starting with 1: Welcome to the Time of the Red. More detailed recent history explains what the Fourth Corporate War was and how much damage it did to the world you now inhabit. The United States is fragmented, no longer a superpower. Night City, even 20 years later, is still a mess. The rest of the world is also in a state of flux. A good chance to make your mark, you might think, if you survive long enough, that is. Megacorps also suffered, but there are still corporations flexing their muscles pretty much unchecked. Then 2: Dark Future Countdown gives a detailed timeline of events from the 1990s onwards to the present day of 2045.

It may be battered, but Night City is still there, according to section 3. This gives a potted history from its foundation in 1994 to the present, bombs included. It's in the middle of a veritable fury of rebuilding, plenty of opportunity there. Just avoid the Hot Zone Wasteland, where the central business district used to be. Plenty here on politics, public services and law and order... yes, there is some! The next section 4: Everyday Things gives the lowdown on living there, aimed particularly at newcomers (which players will be, even if their characters are not... it's often best to play the characters as new arrivals too, so both can learn together about their new home). Vehicles, weapons, getting the news, shopping, it's all here. The food sounds terrible, though.

We then move into GM territory with 5: Running Cyberpunk Red. Plenty of good ideas about how to make the environment come to life for your group, opposition they might face, activities they can engage in. There are some sample encounters you can throw in whatever is happening, whatever the characters are trying to do. Finally, there is a fully-fledged adventure, The Apartment. The basic idea is that all the characters in the soon-to-become party live in the same apartment block, one of the few privately-owned (by one of them) blocks in the entire city. Someone wants to change that, gobble it up... and so the party needs to unite and fight for their home. There are notes on the other residents, and suggestions as to what might happen: pick and mix as you choose. There are some plans too. But that's not all. A collection of Screamsheets present more ideas for further adventures which you'll have to flesh out, three of them.

This contains all you need to get going, to see if the new version of *Cyberpunk* appeals. It doesn't matter if you don't know the original game, but if you do it moves the timeline along in a logical and believable manner. If you don't, just jump in and enjoy the delights that await!
Categories: Game Theory & Design

Hook 42: Meetings Recap - July 29th -31st, 2019

Planet Drupal - 2 August 2019 - 11:27am
Meetings Recap - July 29th -31st, 2019 Will Thurston-… Fri, 08/02/2019 - 18:27
Categories: Drupal

Agaric Collective: Writing your first Drupal migration

Planet Drupal - 2 August 2019 - 10:16am

In the previous entry, we learned that the Migrate API is an implementation of an ETL framework. We also talked about the steps involved in writing and running migrations. Now, let’s write our first Drupal migration. We are going to start with a very basic example: creating nodes out of hardcoded data. For this, we assume a Drupal installation using the standard installation profile which comes with the Basic Page content type.

As we progress through the series, the migrations will become more complete and more complex. Ideally, only one concept will be introduced at a time. When that is not possible, we will explain how different parts work together. The focus of today's lesson is learning the structure of a migration definition file and how to run it.

Writing the migration definition file

The migration definition file needs to live in a module. So, let’s create a custom one named ud_migrations_first and set Drupal core’s migrate module as a dependency in the *.info.yml file.

Now, let’s create a folder called migrations and inside it a file called udm_first.yml. Note that the extension is yml, not yaml. The content of the file will be:

type: module
name: UD First Migration
description: 'Example of basic Drupal migration. Learn more at'
package: Understand Drupal
core: 8.x
dependencies:
  - drupal:migrate

The final folder structure will look like this:

ud_migrations_first/
├── ud_migrations_first.info.yml
└── migrations/
    └── udm_first.yml

id: udm_first
label: 'UD First migration'
source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      creative_title: 'The versatility of Drupal fields'
      engaging_content: 'Fields are Drupal''s atomic data storage mechanism...'
    - unique_id: 2
      creative_title: 'What is a view in Drupal? How do they work?'
      engaging_content: 'In Drupal, a view is a listing of information. It can be a list of nodes, users, comments, taxonomy terms, files, etc...'
  ids:
    unique_id:
      type: integer
process:
  title: creative_title
  body: engaging_content
destination:
  plugin: 'entity:node'
  default_bundle: page

YAML is a key-value format with optional nesting of elements. It is very sensitive to white space and indentation. For example, it requires at least one space character after the colon symbol (:) that separates the key from the value. Also, note that each level in the hierarchy is indented by exactly two spaces. A common source of errors when writing migrations is improper spacing or indentation in the YAML files.
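To illustrate these indentation rules, here is a minimal, hypothetical fragment (the keys are taken from the migration example in this post):

```yaml
source:
  plugin: embedded_data   # nested keys are indented exactly two spaces
  data_rows:
    - unique_id: 1        # list items are nested two spaces deeper
```

Note the single space after each colon; omitting it, or mixing tabs with spaces, will produce a YAML parse error.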

A quick glimpse at the file reveals the three major parts: source, process, and destination. Other keys provide extra information about the migration. There are more keys than the ones shown above. For example, it is possible to define dependencies among migrations. Another option is to tag migrations so they can be executed together. We are going to learn more about these options in future entries.

Let’s review each key-value pair in the file. For id, it is customary to set its value to match the filename containing the migration definition, but without the .yml extension. This key serves as an internal identifier that Drupal and the Migrate API use to execute and keep track of the migration. The id value should contain only alphanumeric characters, optionally using underscores to separate words. As for the label key, it is a human-readable string used to name the migration in various interfaces.

In this example we are using the embedded_data source plugin. It allows you to define the data to migrate right inside the definition file. To configure it, you define a data_rows key whose value is an array of all the elements you want to migrate. Each element might contain an arbitrary number of key-value pairs representing “columns” of data to be imported.

A common use case for the embedded_data plugin is testing of the Migrate API itself. Another valid one is to create default content when the data is known in advance. I often present introduction to Drupal workshops. To save time, I use this plugin to create nodes which are later used in the views creation explanation. Check this repository for an example of this. Note that it uses a different directory structure to define the migrations. That will be explained in future blog posts.

For the destination we are using the entity:node plugin which allows you to create nodes of any content type. The default_bundle key indicates that all nodes to be created will be of type “Basic page”, by default. It is important to note that the value of the default_bundle key is the machine name of the content type. You can find it at /admin/structure/types/manage/page. In general, the Migrate API uses machine names for the values. As we explore the system, we will point out when they are used and where to find the right ones.

In the process section you map columns from the source to node properties and fields. The keys are entity property names or the field machine names. In this case, we are setting values for the title of the node and its body field. You can find the field machine names in the content type configuration page: /admin/structure/types/manage/page/fields. Values can be copied directly from the source or transformed via process plugins. This example makes a verbatim copy of the values from the source to the destination. The column names in the source are not required to match the destination property or field name. In this example they are purposely different to make them easier to identify.
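The example above copies values verbatim, but each mapping in the process section can also run values through process plugins. The following hypothetical variation (not part of the example module) sketches what that could look like, using two plugins that ship with Drupal core’s migrate module, callback and default_value:

```yaml
process:
  title:
    plugin: callback
    callable: mb_strtoupper   # transform the value: uppercase the title
    source: creative_title
  body: engaging_content      # verbatim copy, as in the original example
  langcode:
    plugin: default_value
    default_value: en         # set a constant when the source has no column
```

Process plugins are the subject of the next entry in the series; this is only meant to show where they fit in the definition file.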

You can download the example code from The example above is actually in a submodule in that repository. The same repository will be used for many examples throughout the series. Download the whole repository into the ./modules/custom directory of the Drupal installation and enable the “UD First Migration” module.

Running the migration

Let’s use Drush to run the migrations with the commands provided by Migrate Run. Open a terminal, switch directories to Drupal’s webroot, and execute the following commands.

$ drush pm:enable -y migrate migrate_run ud_migrations_first
$ drush migrate:status
$ drush migrate:import udm_first

The first command enables the core migrate module, the runner, and the custom module holding the migration definition file. The second command shows a list of all migrations available in the system. Only one should be listed with the migration ID udm_first. The third command executes the migration. If all goes well, you can visit the content overview page at /admin/content and see two basic pages created. Congratulations, you have successfully run your first Drupal migration!!!

Or maybe not? Drupal migrations can fail in many ways and sometimes the error messages are not very descriptive. In upcoming blog posts we will talk about recommended workflows and strategies for debugging migrations. For now, let’s mention a couple of things that could go wrong with this example. If after running the drush migrate:status command you do not see the udm_first migration, make sure that the ud_migrations_first module is enabled. If it is enabled, and you do not see it, rebuild the cache by running drush cache:rebuild.

If you see the migration but get a YAML parse error when running the migrate:import command, check your indentation. Copying and pasting from GitHub to your IDE/editor might change the spacing. An extraneous space can break the whole migration, so pay close attention. If the command reports that it created the nodes, but you get a fatal error when trying to view one, it is because the content type was not set properly. Remember that the machine name of the “Basic page” content type is page, not basic_page. This error cannot be fixed from the administration interface. What you have to do is roll back the migration by issuing the following command: drush migrate:rollback udm_first. Then fix the default_bundle value, rebuild the cache, and import again.

Note: Migrate Tools could be used for running the migration. This module depends on Migrate Plus. For now, let’s keep module dependencies to a minimum to focus on core Migrate functionality. Also, skipping them demonstrates that these modules, although quite useful, are not hard requirements for running migration projects. If you decide to use Migrate Tools, make sure to uninstall Migrate Run. Both provide the same Drush commands and conflict with each other if both are enabled.

What did you learn in today’s blog post? Did you know that Migrate Plus and Migrate Tools are not hard requirements for Drupal migration projects? Did you know you can place your YAML files in a migrations directory? What advice would you give to someone writing their first migration? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your friends and colleagues.

This blog post series, cross-posted at as well as here on, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Read more and discuss at

Categories: Drupal

Electric Citizen: Mastering Drupal 8 Multilingual: Part 1 of 3

Planet Drupal - 2 August 2019 - 9:08am

The web is constantly growing, evolving and—thankfully—growing more accessible and inclusive.

It is becoming expected that a user can interact with a website solely via keyboard or have the option to browse in their native language. There are many ways to serve the needs of non-native-language users, but one of the more robust is Drupal Multilingual.

Unlike third-party translation plugins like Google Translate or browser translation tools, Drupal's suite of core multilingual tools allows you to write accurate and accessible translated content in the same manner as you write your default-language content. With no limit on the number of languages, settings for right-to-left content, and the ability to translate any and all of your content, Drupal 8 can create a true multi-language experience like never before.

There is, however, a bit of planning and work involved.

Hopefully, this blog series will help smooth the path to truly inclusive content by highlighting some project management, design, site building, and development gotchas, as well as providing some tips and tricks to make the multilingual experience better for everyone. Part one will help you decide if you need multilingual as well as provide some tips on how to plan and budget for it.

Categories: Drupal

Domain based fields

New Drupal Modules - 2 August 2019 - 9:07am

This module provides additional functionality for admin users or managers to control access to fields based on domain.

The main functionality of this module is to provide an admin interface from which particular fields can be enabled, based on the active domain, when editing and/or adding node content.

You must have the Domain Access module enabled to use this module. It has only been tested against the latest Domain Access module, version 7.x-3.16.

Categories: Drupal

Lionbridge Content API Test

New Drupal Modules - 2 August 2019 - 8:41am

This project is for testing only

Categories: Drupal

Avatar Field Formatter

New Drupal Modules - 2 August 2019 - 8:18am

This is an image field formatter which inherits its settings from the built-in default image field formatter in Drupal core and acts the same as the built-in one, but if the image does not exist, it displays a generated letter avatar instead. The letter depends on the account's display name.

It has dependencies; install it via Composer.

Categories: Drupal
