Drupal

Drupal Association blog: Being an At-Large Board Member for the Drupal Association - What Does That Mean?

Planet Drupal - 7 August 2019 - 7:30am

We now know the list of candidates standing in the election for the next At-Large Drupal Association Board member. Voting is now open until 16 August 2019, and everyone who has used their Drupal.org account in the last year is eligible to vote. You do not need to have a Drupal Association Membership.

Cast your vote


Ryan speaking to Drupal Camp London about prioritizing giving back in Drupal. - photo by Paul Johnson

To help everyone understand how an At-Large Director serves the Drupal community, we asked Ryan Szrama, currently serving as one of two community-elected board members, to share his experience in the role.

I was elected to the board in 2017. While it would be my first time serving on a non-profit board, my personal and professional goals were closely aligned with the mission of the Drupal Association. I was happy to serve in whatever capacity would best advance that mission: uniting a global open source community to build, secure, and promote Drupal.

You'd been a contributor to the project for over 10 years by that point. How do you think your experience in the community shaped your time on the board?

Indeed! I'd been developing Ubercart since 2006 and Drupal Commerce since 2010. I had more Drupal development experience than most folks, helping me understand just how important the Association's leadership of DrupalCon and Drupal.org is to our development efforts. The year before joining the board, I also transitioned into a business leadership role, acquiring Commerce Guys in its split from Platform.sh. This exposed me to a whole new set of concerns related to financial management, marketing, sponsorship, etc. that Drupal company owners encounter in their business with the Association.

While the Association is led day-to-day by an Executive Director (Megan Sanicki when I joined, now Heather Rocker), the board is responsible for setting the strategic vision of the organization and managing its affairs through various committees. Most of our board meetings involve reports on things like finances, business performance, staff and recruitment, etc., but we often have strategic topics to discuss - especially at our two annual retreats. My experience as a long-time contributor turned business owner gave me confidence to speak to such topics as Drupal.org improvements, DrupalCon programming, community governance, and more.

What's the Association up to lately? Are there any key initiatives you expect an incoming board member may be able to join?

There's so much going on that it's hard to know where to focus my answer!

The Drupal Association has seen a lot of recent changes that impact the business side of the organization itself. We welcomed a new Executive Director and are beginning to see the impact of her leadership. We've revised our financial reporting, allowing us to more clearly monitor our financial health, and we've streamlined sales, immediately helping us sell sponsorships for DrupalCon Minneapolis more effectively. Additionally, we've launched and continue to develop the Drupal Steward product in partnership with the Security team. This is part of a very important initiative to diversify our revenue for the sake of the Drupal Association's long-term health.

From a community growth standpoint, we continue to follow up on various governance-related tasks and initiatives. We're working with Drupal leaders around the world to help launch and support more local Drupal associations. We continue to refine the DrupalCon program and planning process to attract a more diverse audience and create opportunities for a more diverse group of contributors and speakers. The retooling of Drupal.org to integrate GitLab for projects is moving forward, lowering the barrier to entry for new project contributors.

Any one of these activities would benefit from additional consideration, insight, and participation by our new board member.

What’s the best part about being on the board?

One of the best things for me is getting to see firsthand just how much the staff at the Drupal Association do to help the community, maintaining the infrastructure that enables our daily, international collaboration. I've long known folks on the development team like Neil Drumm, but I didn't realize just how much time he and other team members have devoted to empowering the entire community until I got a closer look.

I'm also continually inspired by the other board members. Our interactions have helped expand my understanding of Drupal's impact and potential. I've obviously been a fan of the project for many years now, but my time on the board has given me even greater motivation to keep contributing and ensure more people can for years to come.

Thank you, Ryan! Hopefully folks reading this are similarly inspired to keep contributing to Drupal. Our current election is one way everyone can play a part, so I'd encourage you to head over to the nominations page, read up on the motivations and interests of our nominees, ask questions while you can, and…

Cast your vote!

Categories: Drupal

Embed view block

New Drupal Modules - 7 August 2019 - 6:51am

Drupal 8 does not support passing arguments to a Views block out of the box. This module makes that possible.

It allows you to embed a view through a block and pass arguments to the view.
It is very useful when using the Layout Builder module.

Categories: Drupal

Drudesk: Drupal 9 is coming: your website needs a plan

Planet Drupal - 7 August 2019 - 5:21am

Lately, you can often hear that Drupal 9 is coming. Drupal 8.7 was released in May and Drupal 8.8 is planned for December 2019. At the same time, D9 is becoming a more and more hotly discussed topic in the Drupal world.

Categories: Drupal

inline image token

New Drupal Modules - 7 August 2019 - 2:37am

Add token support for inline images and files upload directory.

After installing this module, you can set the inline image upload directory from

inline-images

to:

inline-images/[date:custom:Y]-[date:custom:m]

There is a patch for Drupal core:
https://www.drupal.org/project/drupal/issues/2830210

Categories: Drupal

HTTP Status Dogs

New Drupal Modules - 7 August 2019 - 2:07am

This Drupal 8 module will replace all your error responses with HTTP Status Dogs!

This is a Drupal 8 module that replaces all HTML error responses (like 403 and 404 pages, etc.) with images from HTTP Status Dogs. I expect it to become more popular than Views.

Categories: Drupal

Responsive Image Alternatives

New Drupal Modules - 6 August 2019 - 10:42pm

Have you ever had a use case, when working with images or responsive media, where you wanted to upload an image for Desktop and maybe an alternative image for Mobile (a portrait size, for example)?
Sure, you want to use the Responsive Image module and scale/crop the images for each breakpoint, but what if you need to crop your Mobile image differently than your Desktop one (choosing a different focal point)?

Categories: Drupal

Tint connector

New Drupal Modules - 6 August 2019 - 9:33pm

TINT connector
------------------------
The TINT module integrates the TINT social media feed service with Drupal.

TINT is a service that integrates all of your brand's social media posts in
one beautiful stream, perfect for embedding on your website.

Install & set up the TINT Connector Drupal module, as described below.

Installation
------------

1) Download the module and place it with other contributed modules
(e.g. modules/contrib).

2) Enable the Tint connector module on the Modules list page.

Categories: Drupal

Agaric Collective: Tips for writing Drupal migrations and understanding their workflow

Planet Drupal - 6 August 2019 - 7:00pm

We have presented several examples as part of this migration blog post series. They started very simple and have been increasing in complexity. Until now, we have been rather optimistic. Get the sample code, install any module dependency, enable the module that defines the migration, and execute it assuming everything works on the first try. But Drupal migrations often involve a bit of trial and error. At the very least, it is an iterative process. Today we are going to talk about what happens after import and rollback operations, how to recover from a failed migration, and some tips for writing definition files.

Importing and rolling back migrations

When working on a migration project, it is common to write many migration definition files. Even if you were to have only one, it is very likely that your destination will require many field mappings. Running an import operation to get the data into Drupal is the first step. With so many moving parts, it is easy not to get the expected results on the first try. When that happens, you can run a rollback operation. This instructs the system to revert anything that was introduced when the migration was initially imported. After rolling back, you can make changes to the migration definition file and rebuild Drupal’s cache for the system to pick up your changes. Finally, you can do another import operation. Repeat this process until you get the results you expect. The following code snippet shows a basic Drupal migration workflow:

# 1) Run the migration.
$ drush migrate:import udm_subfields
# 2) Rollback migration because the expected results were not obtained.
$ drush migrate:rollback udm_subfields
# 3) Change the migration definition file.
# 4) Rebuild caches for changes to be picked up.
$ drush cache:rebuild
# 5) Run the migration again.
$ drush migrate:import udm_subfields

The example above assumes you are using Drush to run the migration commands. Specifically, the commands provided by Migrate Run or Migrate Tools. You pick one or the other, but not both, as the commands provided by the two modules are the same. If you have both enabled, they will conflict with each other and fail.
Another thing to note is that the example uses Drush 9. There were major refactorings between versions 8 and 9 which included changes to the names of the commands. Finally, udm_subfields is the id of the migration to run. You can find the full code in this article.

Tip: You can use Drush command aliases to write shorter commands. Type drush [command-name] --help for a list of the available aliases.

Technical note: To pick up changes to the definition file, you need to rebuild Drupal’s caches. This is the procedure to follow when creating the YAML files using Migrate API core features and placing them under the migrations directory. It is also possible to define migrations as configuration entities using the Migrate Plus module. In those cases, the YAML files follow a different naming convention and are placed under the config/install directory. For picking up changes, in this case, you need to sync the YAML definition using configuration management workflows. This will be covered in a future entry.
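
To make the technical note concrete, here is a minimal sketch (the module name, migration id, and values are hypothetical) contrasting where each flavor of migration lives and how its changes are picked up:

# Core Migrate API plugin, picked up after rebuilding caches:
#   my_module/migrations/udm_example.yml
id: udm_example
label: 'Hypothetical example migration'
source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      name: 'Example name'
  ids:
    unique_id:
      type: integer
process:
  title: name
destination:
  plugin: 'entity:node'
  default_bundle: page

# Migrate Plus configuration entity, picked up through configuration
# synchronization. The keys are the same, but the file name and location
# follow the config entity convention:
#   my_module/config/install/migrate_plus.migration.udm_example.yml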

Stopping and resetting migrations

Sometimes, you do not get the expected results due to an oversight in setting a value. On other occasions, fatal PHP errors can occur when running the migration. The Migrate API might not be able to recover from such errors. For example, using a non-existent PHP function with the callback plugin. Give it a try by modifying the example in this article. When these errors happen, the migration is left in a state where no import or rollback operations can be performed.
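
For instance, a process mapping like the following sketch (the callable name is deliberately bogus) reproduces the kind of error described above, because the callback plugin tries to invoke a PHP function that does not exist:

process:
  title:
    plugin: callback
    callable: function_that_does_not_exist
    source: name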

You can check the state of any migration by running the drush migrate:status command. Ideally, you want them in Idle state. When something fails during import or rollback, you would get the Importing or Rolling back states. To get the migration back to Idle, you stop the migration and reset its status. The following snippet shows how to do it:

# 1) Run the migration.
$ drush migrate:import udm_process_intro
# 2) Some non-recoverable error occurs. Check the status of the migration.
$ drush migrate:status udm_process_intro
# 3) Stop the migration.
$ drush migrate:stop udm_process_intro
# 4) Reset the status to idle.
$ drush migrate:reset-status udm_process_intro
# 5) Rebuild caches for changes to be picked up.
$ drush cache:rebuild
# 6) Rollback migration because the expected results were not obtained.
$ drush migrate:rollback udm_process_intro
# 7) Change the migration definition file.
# 8) Rebuild caches for changes to be picked up.
$ drush cache:rebuild
# 9) Run the migration again.
$ drush migrate:import udm_process_intro

Tip: The errors thrown by the Migrate API might not provide enough information to determine what went wrong. An excellent way to familiarize yourself with the possible errors is by intentionally breaking working migrations. In the example repository of this series, there are many migrations you can modify. Try anything that comes to mind: not leaving a space after a colon (:) in a key-value assignment; not using proper indentation; using wrong subfield names; using invalid values in property assignments; etc. You might be surprised by how the Migrate API deals with such errors. Also, note that many other Drupal APIs are involved. For example, you might get a YAML file parse error, or an Entity API save error. When you have seen an error before, it is usually faster to identify the cause and fix it in the future.

What happens when you rollback a Drupal migration?

In an ideal scenario, when a migration is rolled back, it cleans up after itself. That means it removes any entity that was created during the import operation: nodes, taxonomy terms, files, etc. Unfortunately, that is not always the case. It is very important to understand this when planning and executing migrations. For example, you might not want to leave behind taxonomy terms or files that are no longer in use. Whether any dependent entity is removed or not has to do with how the relevant plugins and entities work.

For example, when using the file_import or image_import plugins provided by Migrate File, the created files and images are not removed from the system upon rollback. When using the entity_generate plugin from Migrate Plus, the created entity also remains in the system after a rollback operation.

In the next blog post, we are going to start talking about migration dependencies. What happens with dependent migrations (e.g., files and paragraphs) when the migration for the host entity (e.g., node) is rolled back? In this case, the Migrate API will perform an entity delete operation on the node. When this happens, referenced files are kept in the system, but paragraphs are automatically deleted. For the curious, this behavior for paragraphs is actually determined by its module dependency: Entity Reference Revisions. We will talk more about paragraphs migrations in future blog posts.

The moral of the story is that the behavior of the migration system might be affected by other Drupal APIs. And in the case of rollback operations, make sure to read the documentation or test manually to find out when migrations clean up after themselves and when they do not.

Note: The focus of this section was content entity migrations. The general idea can be applied to configuration entities or any custom target of the ETL process.

Re-import or update migrations

We just mentioned that the Migrate API issues an entity delete action when rolling back a migration. This has another important side effect. Entity IDs (nid, uid, tid, fid, etc.) are going to change every time you roll back and import again. Depending on auto-generated IDs is generally not a good idea. But keep it in mind in case your workflow might be affected. For example, if you are running migrations in a content staging environment, references to the migrated entities can break if their IDs change. Also, if you were to manually update the migrated entities to clean up edge cases, those changes would be lost if you roll back and import again. Finally, keep in mind that test data might remain in the system, as described in the previous section, which could find its way to production environments.

An alternative to rolling back a migration is to not execute this operation at all. Instead, you run an import operation again using the update flag. This tells the system that in addition to migrating unprocessed items from the source, you also want to update items that were previously imported using their current values. To do this, the Migrate API relies on source identifiers and map tables. You might want to consider this option when your source changes over time, when you have a large number of records to import, or when you want to execute the same migration many times on a schedule.

Note: On import operations, the Migrate API issues an entity save action.

Tips for writing Drupal migrations

When working on migration projects, you might end up with many migration definition files. They can set dependencies on each other. Each file might contain a significant number of field mappings. There are many things you can do to make Drupal migrations more straightforward. For example, practicing with different migration scenarios and studying working examples. As a reference to help you in the process of migrating into Drupal, consider these tips:

  • Start from an existing migration. Look for an example online that does something close to what you need and modify it to your requirements.
  • Pay close attention to the syntax of the YAML file. An extraneous space or wrong indentation level can break the whole migration.
  • Read the documentation to know which source, process, and destination plugins are available. One might exist already that does exactly what you need.
  • Make sure to read the documentation for the specific plugins you are using. Many times a plugin offers optional configurations. Understand the tools at your disposal and find creative ways to combine them.
  • Look for contributed modules that might offer more plugins or upgrade paths from previous versions of Drupal. The Migrate ecosystem is vibrant, and lots of people are contributing to it.
  • When writing the migration pipeline, map one field at a time. Problems are easier to isolate if there is only one thing that could break at a time.
  • When mapping a field, work on one subfield at a time if possible. Some field types like images and addresses offer many subfields. Again, try to isolate errors by introducing individual changes each time (see the sketch after this list).
  • Commit to your code repository any and every change that produces correct results. That way, you can go back in time and recover a partially working migration.
  • Learn about debugging migrations. We will talk about this topic in a future blog post.
  • Seek help from the community. Migrate maintainers and enthusiasts are very active and responsive in the #migrate channel of Drupal Slack.
  • If you feel stuck, take a break from the computer and come back to it later. Resting can do wonders in finding solutions to hard problems.
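
As a minimal sketch of the "one field, one subfield at a time" advice (the source column names are placeholders), you could start with a single mapping, confirm the import works, and then add one subfield per iteration:

process:
  # First iteration: map a single field, rebuild caches, and import.
  title: source_column_title
  # Later iterations: add one subfield at a time, rolling back and
  # importing again after each change.
  field_link/uri: source_column_url
  # field_link/title and the remaining mappings come in later iterations.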

What did you learn in today’s blog post? Did you know what happens upon importing and rolling back a migration? Did you know that in some cases, data might remain in the system even after rollback operations? Do you have a use case for running migrations with the update flag? Do you have any other advice on writing migrations? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your colleagues.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Read more and discuss at agaric.coop.

Categories: Drupal

GotoWebinar Sync

New Drupal Modules - 6 August 2019 - 5:09pm

This module integrates with the GotoWebinar API to sync webinars to your Drupal website.

Content Type setup

You will need a content type that will be used to sync the webinar data from GotoWebinar to your Drupal site. This content type will need some required fields.

Categories: Drupal

Hook 42: Down the Headless Rabbit Hole At Decoupled Days

Planet Drupal - 6 August 2019 - 12:49pm
By Lindsey Gemmill, Tue, 08/06/2019 - 19:49
Categories: Drupal

Promet Source: Why Drupal?

Planet Drupal - 6 August 2019 - 12:18pm
Drupal is certainly not the only open-source CMS game in town, but when consulting with clients about the best solution for the full range of their needs, it tends to be my go-to. Here’s why: Architecture, Scalability, Database Views, Flexibility, Security, Modules, Search, and Migration. Architecture: Drupal 8 is built on modern programming practices, and of course, the same will be true for the June 2020 release of Drupal 9.
Categories: Drupal

Entity Clone Template

New Drupal Modules - 6 August 2019 - 9:43am
Categories: Drupal

OSTraining: Add Collapsible Blocks to Text-Heavy Nodes in Drupal 8

Planet Drupal - 6 August 2019 - 7:29am

Long articles with many sections often discourage users from reading. They start reading and usually leave before getting halfway through such articles.

To avoid this type of user experience, I recommend grouping each section of your article into a collapsible tab. The reader will then be able to digest the text in smaller pieces.

The Collapse Text Drupal 8 module adds a button to your editor. With this button you will be able to create collapsible text tabs with a tag system similar to HTML.

Read on to learn how to use this module!

Categories: Drupal

Locksmith

New Drupal Modules - 6 August 2019 - 6:42am

The goal of this module is to facilitate the access, storage and management of credentials from third party services. For a lot of projects, you will be given keys and credentials from multiple services that might not have an existing Drupal module.
Hopefully this module will save you time by preventing you from having to create a custom form page with permissions only to save two access keys that were given to you by a third party you are working with.

Categories: Drupal

Animista

New Drupal Modules - 6 August 2019 - 12:37am
Categories: Drupal

Agiledrop.com Blog: Top Drupal blog posts from July 2019

Planet Drupal - 5 August 2019 - 11:21pm

At the start of every month, we gather all the Drupal blog posts from the previous month that we’ve enjoyed the most. Here’s an overview of our favorite posts from July related to Drupal - enjoy the read!

READ MORE
Categories: Drupal

Agaric Collective: Using constants and pseudofields as data placeholders in the Drupal migration process pipeline

Planet Drupal - 5 August 2019 - 8:00pm

So far we have learned how to write basic Drupal migrations and use process plugins to transform data to meet the format expected by the destination. In the previous entry we learned one of many approaches to migrating images. In today’s example, we will change it a bit to introduce two new migration concepts: constants and pseudofields. Both can be used as data placeholders in the migration timeline. Along with other process plugins, they allow you to build dynamic values that can be used as part of the migrate process pipeline.

Setting and using constants

In the Migrate API, a constant is an arbitrary value that can be used later in the process pipeline. They are set as direct children of the source section. You write a constants key whose value is a list of name-value pairs. Even though they are defined in the source section, they are independent of the particular source plugin in use. The following code snippet shows a generalization for setting and using constants:

source:
  constants:
    MY_STRING: 'http://understanddrupal.com'
    MY_INTEGER: 31
    MY_DECIMAL: 3.1415927
    MY_ARRAY:
      - 'dinarcon'
      - 'dinartecc'
  plugin: source_plugin_name
  source_plugin_config_1: source_config_value_1
  source_plugin_config_2: source_config_value_2
process:
  process_destination_1: constants/MY_INTEGER
  process_destination_2:
    plugin: concat
    source: constants/MY_ARRAY
    delimiter: ' '

You can set as many constants as you need. Although not required by the API, it is a common convention to write the constant names in all uppercase and using underscores (_) to separate words. The value can be set to anything you need to use later. In the example above, there are strings, integers, decimals, and arrays. To use a constant in the process section you type its name, just like any other column provided by the source plugin. Note that to use a constant you need to name the full hierarchy under the source section. That is, the word constants and the name itself separated by a slash (/) symbol. Constants can be used to copy their value directly to the destination or as part of any process plugin configuration.

Technical note: The word constants for storing the values in the source section is not special. You can use any word you want as long as it does not collide with another configuration key of your particular source plugin. A reason to use a different name is that your source actually contains a column named constants. In that case you could use defaults or something else. The one restriction is that whatever word you use, you have to use the same word in the process section when referring to any constant. For example:

source:
  defaults:
    MY_VALUE: 'http://understanddrupal.com'
  plugin: source_plugin_name
  source_plugin_config: source_config_value
process:
  process_destination: defaults/MY_VALUE

Setting and using pseudofields

Similar to constants, pseudofields store arbitrary values for use later in the process pipeline. There are some key differences. Pseudofields are set in the process section. The name is arbitrary as long as it does not conflict with a property name or field name of the destination. The value can be set to a verbatim copy from the source (a column or a constant) or it can use process plugins for data transformations. The following code snippet shows a generalization for setting and using pseudofields:

source:
  constants:
    MY_BASE_URL: 'http://understanddrupal.com'
  plugin: source_plugin_name
  source_plugin_config_1: source_config_value_1
  source_plugin_config_2: source_config_value_2
process:
  title: source_column_title
  my_pseudofield_1:
    plugin: concat
    source:
      - constants/MY_BASE_URL
      - source_column_relative_url
    delimiter: '/'
  my_pseudofield_2:
    plugin: urlencode
    source: '@my_pseudofield_1'
  field_link/uri: '@my_pseudofield_2'
  field_link/title: '@title'

In the above example, my_pseudofield_1 is set to the result of a concat process transformation that joins a constant and a column from the source section. The resulting value is later used as part of a urlencode process transformation. Note that to use the value from my_pseudofield_1 you have to enclose it in quotes (') and prepend an at sign (@) to the name. The new value obtained from the URL encode operation is stored in my_pseudofield_2. This last pseudofield is used to set the value of the URI subfield for field_link. The example could be simplified, for example, by using a single pseudofield and chaining process plugins. It is presented that way to demonstrate that a pseudofield can be used in direct assignments or as part of process plugin configuration values.
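
For reference, a simplified version along those lines could look like the following sketch, chaining the two plugins under the destination subfield instead of using intermediate pseudofields (the second plugin in the chain receives the output of the first as its input):

process:
  title: source_column_title
  field_link/uri:
    - plugin: concat
      source:
        - constants/MY_BASE_URL
        - source_column_relative_url
      delimiter: '/'
    - plugin: urlencode
  field_link/title: '@title'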

Technical note: If the name of the pseudofield can be arbitrary, how can you prevent name clashes with destination property names and field names? You might have to look at the source for the entity and the configuration of the bundle. In the case of a node migration, look at the baseFieldDefinitions() method of the Node class for a list of property names. Be mindful of class inheritance and method overriding. For a list of fields and their machine names, look at the “Manage fields” section of the content type you are migrating into. The Field API prefixes any field created via the administration interface with the string field_. This reduces the likelihood of name clashes. Other than these two name restrictions, anything else can be used. In this case, the Migrate API will eventually perform an entity save operation which will discard the pseudofields.

Understanding Drupal Migrate API process pipeline

The migrate process pipeline is a mechanism by which the value of any destination property, field, or pseudofield that has been set can be used by anything defined later in the process section. The fact that using a pseudofield requires enclosing its name in quotes and prepending an at sign is actually a requirement of the process pipeline. Let’s see some examples using a node migration:

  • To use the title property of the node entity, you would write @title
  • To use the field_body field of the Basic page content type, you would write @field_body
  • To use the my_temp_value pseudofield, you would write @my_temp_value

In the process pipeline, these values can be used just like constants and columns from the source. The only restriction is that they need to be set before being used. For those familiar with the rewrite results feature of Views, it follows the same idea. You have access to everything defined previously. Anytime you enclose a name in quotes and prepend it with an at sign, you are telling the migrate API to look for that element in the process section instead of the source section.
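
Putting these cases together, a small sketch for a node migration could look like this (the field_summary field and the SUMMARY_PREFIX constant are hypothetical):

source:
  constants:
    SUMMARY_PREFIX: 'Summary for'
  plugin: source_plugin_name
process:
  title: source_column_title
  my_temp_value:
    plugin: concat
    source:
      - constants/SUMMARY_PREFIX
      - '@title'
    delimiter: ' '
  field_summary: '@my_temp_value'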

Migrating images using the image_import plugin

Let’s practice the concepts of constants, pseudofields, and the migrate process pipeline by modifying the example of the previous entry. The Migrate Files module provides another process plugin named image_import that allows you to directly set all the subfield values in the plugin configuration itself.

As in previous examples, we will create a new module and write a migration definition file to perform the migration. It is assumed that Drupal was installed using the standard installation profile. The code snippets will be compact to focus on particular elements of the migration. The full code is available at https://github.com/dinarcon/ud_migrations. The module name is UD Migration constants and pseudofields and its machine name is ud_migrations_constants_pseudofields. The id of the example migration is udm_constants_pseudofields. Refer to this article for instructions on how to enable the module and run the migration. Make sure to download and enable the Migrate Files module. Otherwise, you will get an error like: “In DiscoveryTrait.php line 53: The "file_import" plugin does not exist. Valid plugin IDs for Drupal\migrate\Plugin\MigratePluginManager are:...”. Let’s see part of the source definition:

source:
  constants:
    BASE_URL: 'https://agaric.coop'
    PHOTO_DESCRIPTION_PREFIX: 'Photo of'
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      name: 'Michele Metts'
      photo_url: 'sites/default/files/2018-12/micky-cropped.jpg'
      photo_width: '587'
      photo_height: '657'

Only one record is presented to keep the snippet short, but more exist. In addition to having a unique identifier, each record includes a name, a short profile, and details about the image. Note that this time, the photo_url does not provide an absolute URL. Instead, it is a relative path from the domain hosting the images. In this example, the domain is https://agaric.coop so that value is stored in the BASE_URL constant which is later used to assemble a valid absolute URL to the image. Also, there is no photo description, but one can be created by concatenating some strings. The PHOTO_DESCRIPTION_PREFIX constant stores the prefix to add to the name to create a photo description. Now, let’s see the process definition:

process:
  title: name
  psf_image_url:
    plugin: concat
    source:
      - constants/BASE_URL
      - photo_url
    delimiter: '/'
  psf_image_description:
    plugin: concat
    source:
      - constants/PHOTO_DESCRIPTION_PREFIX
      - name
    delimiter: ' '
  field_image:
    plugin: image_import
    source: '@psf_image_url'
    reuse: TRUE
    alt: '@psf_image_description'
    title: '@title'
    width: photo_width
    height: photo_height

The title node property is set directly to the value of the name column from the source. Then, two pseudofields are defined: psf_image_url stores a valid absolute URL to the image, built from the BASE_URL constant and the photo_url column from the source, and psf_image_description uses the PHOTO_DESCRIPTION_PREFIX constant and the name column from the source to store a description for the image.

For the field_image field, the image_import plugin is used. This time, the subfields are not set manually. Instead, they are assigned using plugin configuration keys. The absence of the id_only configuration allows for this. The URL to the image is set in the source key and uses the psf_image_url pseudofield. The alt key allows you to set the alternative text attribute for the image; in this case, the psf_image_description pseudofield is used. The title key sets the text of the subfield with the same name; in this case, it is assigned the value of the title node property, which was set at the beginning of the process pipeline. Remember that not only pseudofields are available. Finally, the width and height configurations use the columns from the source to set the values of the corresponding subfields.

What did you learn in today’s blog post? Did you know you can define constants in your source as data placeholders for use in the process section? Were you aware that pseudofields can be created in the process section to store intermediary data for process definitions that come next? Have you ever wondered what is the migration process pipeline and how it works? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your colleagues.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Read more and discuss at agaric.coop.

Categories: Drupal

Revision Purger

New Drupal Modules - 5 August 2019 - 5:43pm

Allows deleting revisions of entities. Inspired by #2925532: Provide bulk action to delete all old revisions of an entity

Categories: Drupal

Eelke Blok: Creating fields and other configuration during updates in Drupal

Planet Drupal - 5 August 2019 - 1:31pm

When deploying changes to a Drupal environment, you should be running database updates (e.g. drush updb, or through update.php) first, and only then import new configuration. The reason for this is that update hooks may want to update configuration, which will fail if that configuration is already structured in the new format (you can't do without updates either; update hooks don't just deal with configuration, but may also need to change the database schema and do associated data migrations). So, there really is no discussion; updates first, config import second. But sometimes, you need...

Categories: Drupal
