Planet Drupal

Drupal.org - aggregated feeds in category Planet Drupal

OSTraining: How to Embed Buy Now Stripe Button in Drupal

11 March 2018 - 11:25pm

Would you like to avoid the hassle of processing and keeping your online customers' card details? Stripe is a global online payment gateway you can quickly start using for just that.

In this tutorial, you will learn how to easily embed the "Buy Now" button from Stripe into your Drupal content. You will be able to integrate the Stripe Checkout even if you don't know how to write code.

Categories: Drupal

Agiledrop.com Blog: AGILEDROP: Top Drupal blog posts from February

11 March 2018 - 7:26pm
Each month, we revisit our top Drupal blog posts of the month, giving you the chance to check out some of our favourites. Here’s a look at the top blog posts from February. First on the list is Drupal 8 controller callback argument resolving explained, where Matt Glaman from Commerce Guys shows us how Drupal knows to pass the proper arguments to your controller method. He discusses how the controller's callback arguments are resolved and put into the proper order in our method. We continue our list with Drupal 8 Development on Windows - Best Practices? by Michael Anello, co-owner of… READ MORE
Categories: Drupal

The Accidental Coder: Updating to Drupal 8.5 with Composer

11 March 2018 - 6:43pm
Updating to Drupal 8.5 with Composer, by J. Ayen Green, Sun, 03/11/2018 - 21:43
Categories: Drupal

PreviousNext: Configuration Override Inspector: Removing the Config Confusion

11 March 2018 - 6:03pm

Since the release of Drupal 8, it has become tricky to determine what override configuration is set, and where it is set.

Here are some of the options for a better user experience.

by Daniel Phin / 12 March 2018

Drupal allows you to override configuration by setting variables in settings.php. This lets you vary configuration based on the environment your site is served from. In Drupal 7, when overrides are set, the overridden value is immediately visible in the administration UI. Though the true value is transparent, when a user attempts to change the configuration, the changes appear to be ignored. The changes are in fact saved and stored, but Drupal exposes the overridden value again when a configuration form is (re)loaded.

With Drupal 8, the behaviour of overridden configuration has reversed. You are always presented with the active configuration, usually set by site builders. When configuration is accessed by code, overrides are applied on top of the active configuration seamlessly. This setup is great if you want to deploy the active configuration to other environments, but it can be confusing on sites with overrides, since it's not immediately obvious which values Drupal is actually using.
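The difference is visible from code: reading config normally applies overrides, while the editable copy exposes the raw active values. A minimal sketch using core APIs (system.logging is just an illustrative config object; this assumes a bootstrapped Drupal 8 site):

```php
<?php
// Immutable config: overrides from settings.php ARE applied.
$overridden = \Drupal::config('system.logging')->get('error_level');

// Editable config: the raw active value, as stored and shown in forms.
$active = \Drupal::configFactory()
  ->getEditable('system.logging')
  ->get('error_level');

// On a site with a settings.php override, these two values differ,
// which is exactly the confusion described above.
```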

An example of this confusion: your configuration form shows that PHP error messages are switched on, but no messages are visible. Or perhaps you are overriding Swiftmailer with environment-specific email servers, but emails aren't going to the servers displayed on the form.

A Drupal core issue exists to address these concerns. However, this post aims to introduce a stopgap, in the form of a contrib module, of course.

Introducing Configuration Override Inspector (COI). This module makes configuration overrides completely transparent to site builders, and provides a few ways those overridden values can be exposed.

The following examples show error settings set to OFF in the active configuration, but ON in the overridden configuration (such as a local.settings.php override on your dev machine):

// settings.php
$config['system.logging']['error_level'] = 'verbose';

Hands-off: Allow users to modify active configuration, while optionally displaying a message with the true value. This is most like out-of-the-box Drupal 8 behaviour:

Expose and Disable: Choose whether to disable form fields that have overrides, displaying the true value as the field value:

Invisible: Completely hide form fields with overrides:

Unfortunately, Configuration Override Inspector doesn't yet know how to map form fields to the appropriate configuration objects. The contrib module Config Override Core Fields exists to provide that mapping for Drupal core forms. Further documentation is available for contrib modules to map fields to configuration objects, which looks a bit like this:

$config = $this->config('system.logging');
$form['error_level'] = [
  '#type' => 'radios',
  '#title' => t('Error messages to display'),
  '#default_value' => $config->get('error_level'),
  // ...
  '#config' => [
    'key' => 'system.logging:error_level',
  ],
];

Get started with Configuration Override Inspector (COI) and Config Override Core Fields:

composer require drupal/coi:^1.0@beta
composer require drupal/config_override_core_fields:^1.0@beta

COI requires Drupal 8.5 and above, thanks to improvements in the Drupal core API.

Have another strategy for handling config overrides? Let me know in the comments!

Tagged CMI, Contrib Modules
Categories: Drupal

Jeff Geerling's Blog: Two MidCamp Sessions: Local Dev for Dummies, Jenkins and Drupal

11 March 2018 - 5:38pm

MidCamp 2018 wrapped up with a bang today, as there was another year full of great training, sessions, and my favorite aspect, the 'hallway track' (where you go around and network between and during some sessions with tons of excellent Drupalists from the Midwest and around the country).

This year, I presented two sessions; one a co-presentation with Chris Urban titled Local Dev Environments for Dummies, the other a solo presentation titled Jenkins or: How I learned to stop worrying and love automation.

Embedded below are the video recordings of the sessions (recorded as always by the excellent Kevin Thull of Blue Drop Shop!):

Categories: Drupal

Dries Buytaert: That "passion + learning + contribution + relationships" feeling

11 March 2018 - 4:01pm

Talking about the many contributors to Drupal 8.5, a few of them shouted out on social media that they got their first patch in Drupal 8.5. They were excited but admitted it was more challenging than anticipated. It's true that contributing to Drupal can be challenging, but it is also true that it will accelerate your learning, and that you will likely feel an incredible sense of reward and excitement. And maybe best of all, through your collaboration with others, you'll forge relationships and friendships. I've been contributing to Open Source for 20 years and can tell you that that combined "passion + learning + contribution + relationships"-feeling is one of the most rewarding feelings there is.

Categories: Drupal

Dries Buytaert: Many small contributions add up to big results

11 March 2018 - 3:49pm

I just updated my site to Drupal 8.5 and spent some time reading the Drupal 8.5 release notes. Seeing all the different issues and contributors in the release notes is a good reminder that many small contributions add up to big results. When we all contribute in small ways, we can make a lot of progress together.

Categories: Drupal

Matt Glaman: Flush and run, using Kernel::TERMINATE to improve page speed performance

11 March 2018 - 9:00am
Flush and run, using Kernel::TERMINATE to improve page speed performance mglaman Sun, 03/11/2018 - 11:00

At DrupalCon Dublin I caught Fabianx’s presentation on streaming and other awesome performance techniques. His presentation finally explained to me how BigPipe works. It also made me aware of the fact that, in Drupal, we have mechanisms to do expensive procedures after output has been flushed to the browser. That means the end user sees all their markup, but PHP can chug along doing some work without slowing the page down.
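In Drupal 8 terms, that mechanism is the kernel's terminate event. A hypothetical subscriber (module and class names invented here, not taken from the post) might look like:

```php
<?php

namespace Drupal\mymodule\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\PostResponseEvent;
use Symfony\Component\HttpKernel\KernelEvents;

/**
 * Defers expensive work until after the response has been flushed.
 *
 * Hypothetical sketch; it would be registered in mymodule.services.yml
 * with the 'event_subscriber' tag.
 */
class ExpensiveWorkSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents() {
    // KernelEvents::TERMINATE fires after the response has been sent.
    return [KernelEvents::TERMINATE => ['onTerminate']];
  }

  public function onTerminate(PostResponseEvent $event) {
    // The browser already has the full page at this point, so slow work
    // here (logging, cache warming, API calls) no longer delays rendering.
  }

}
```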

Categories: Drupal

Oliver Davies: How to split a new Drupal contrib project from within another repository

9 March 2018 - 4:00pm
Does it need to be part of the site repository?

An interesting thing to consider is, does it need to be a part of the site repository in the first place?

If from the beginning you intend to contribute the module, theme or distribution, and it’s written to be generic and re-usable from the start, then it could be created as a separate project on Drupal.org, or as a private repository on your Git server, and added as a dependency of the main project rather than as part of it. It could already have the correct branch name, adhere to the Drupal.org release conventions, and be managed as a separate project; then there is no need to "clean it up" or split it from the main repo later at all.
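If you go that route, the separately-hosted project can be wired in with Composer. A sketch, where the repository URL and package name are hypothetical (the package name must match the one in the module's own composer.json):

```shell
# Register the module's own repository, then require it like any
# other dependency.
composer config repositories.my_new_module vcs git@git.example.com:team/my_new_module.git
composer require team/my_new_module:^1.0
```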

This is how I worked at the Drupal Association - with all of the modules needed for Drupal.org hosted on Drupal.org itself, and managed as a dependency of the site repository with Drush Make.

Whether this is a viable option or not will depend on your processes. For example, if your code needs to go through a peer review process before releasing it, then pushing it straight to Drupal.org would either complicate that process or bypass it completely. Pushing it to a separate private repository may depend on your team's level of familiarity with Composer, for example.

It does though avoid the “we’ll clean it up and contribute it later” scenario which probably happens less than people intend.

Create a new, empty repository

If the project is already in the site repo, this is probably the most common method - to create a new, empty repository for the new project, add everything to it and push it.

For example:

cd web/modules/custom/my_new_module

# Create a new Git repository.
git init

# Add everything and make a new commit.
git add -A .
git commit -m 'Initial commit'

# Rename the branch.
git branch -m 8.x-1.x

# Add the new remote and push everything.
git remote add origin username@git.drupal.org:project/my_new_module.git
git push origin 8.x-1.x

There is a huge issue with this approach though - you now have only one single commit, and you’ve lost the commit history!

This means that you lose the story and context of how the project was developed, and what decisions and changes were made during the lifetime of the project so far. Also, if multiple people developed it, now there is only one person being attributed - the one who made the single new commit.

Also, if I’m considering adding your module to my project, personally I’m less likely to do so if I only see one "initial commit". I’d like to see the activity from the days, weeks or months prior to it being released.

What this does allow though is to easily remove references to client names etc before pushing the code.

Use a subtree split

An alternative method is to use git-subtree, a Git command that "merges subtrees together and splits a repository into subtrees". In this scenario, we can use split to take a directory from within the site repo and split it into its own separate repository, keeping the commit history intact.

Here is the description for the split command from the Git project itself:

Extract a new, synthetic project history from the history of the <prefix> subtree. The new history includes only the commits (including merges) that affected <prefix>, and each of those commits now has the contents of <prefix> at the root of the project instead of in a subdirectory. Thus, the newly created history is suitable for export as a separate git repository.

Note: This command needs to be run at the top level of the repository. Otherwise you will see an error like "You need to run this command from the toplevel of the working tree.".

To find the path to the top level, run git rev-parse --show-toplevel.

In order to do this, you need to specify the prefix for the subtree (i.e. the directory that contains the project you’re splitting), as well as the name of a new branch that you want to split onto.

git subtree split --prefix web/modules/custom/my_new_module -b split_my_new_module

When complete, you should see a confirmation message showing the branch name and the commit SHA of the branch.

Created branch 'split_my_new_module' 7edcb4b1f4dc34fc3b636b498f4284c7d98c8e4a

If you run git branch, you should now be able to see the new branch, and if you run git log --oneline split_my_new_module, you should only see commits for that module.
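You can rehearse the whole split safely in a throwaway repository before touching the real one. A sketch (paths and module name are illustrative; it assumes git-subtree is installed, as it is with most git packages):

```shell
set -e
# Build a scratch repo with a module directory and two commits.
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email you@example.com
git config user.name "You"
mkdir -p web/modules/custom/my_new_module
echo "name: My new module" > web/modules/custom/my_new_module/my_new_module.info.yml
git add -A
git commit -qm 'Add my_new_module'
echo "core: 8.x" >> web/modules/custom/my_new_module/my_new_module.info.yml
git add -A
git commit -qm 'Declare core compatibility'

# Split the module's history onto its own branch (run from the repo root).
git subtree split --prefix web/modules/custom/my_new_module -b split_my_new_module

# The split branch holds only the module's commits, with files at the root.
git log --oneline split_my_new_module
git ls-tree --name-only split_my_new_module
```

Both commits survive on the split branch, and the module's files sit at its root rather than under web/modules/custom.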

If you do need to tidy up a particular commit to remove client references etc, change a commit message or squash some commits together, then you can do that by checking out the new branch, running an interactive rebase and making the required amends.

git checkout split_my_new_module
git rebase -i --root

Once everything is in the desired state, you can use git push to push to the remote repo - specifying the repo URL, the local branch name and the remote branch name:

git push username@git.drupal.org:project/my_new_module.git split_my_new_module:8.x-1.x

In this case, the new branch will be 8.x-1.x.

Here is a screenshot of an example module that I’ve split and pushed to GitLab. Notice that there are multiple commits in the history, each still attributed to its original author.

Also, as this is standard Git functionality, you can follow the same process to extract PHP libraries, Symfony bundles, WordPress plugins or anything else.

Categories: Drupal

Acquia Developer Center Blog: Securing Non-Production Environments

9 March 2018 - 7:23am

One of the common issues I've noticed when working with customers is the tendency to treat non-production environments, such as dev or stage, as less important with respect to security.

This is understandable, since these environments are effectively disposable and could be rebuilt from production at any time. However, an important consideration that should be taken into account is what data lives in these environments.

Tags: acquia drupal planet
Categories: Drupal

Valuebound: Componentizing Drupal Front End using Pattern Lab

9 March 2018 - 5:33am

Componentization has become a growing consideration in most web application development firms. The reasons are obvious: instead of reinventing the wheel again and again, why don’t we re-use it? This article will help you understand the importance of componentizing your Drupal front end and how you can achieve that using Pattern Lab.

So what is Componentization?

From a front-end perspective, components are collections of HTML, CSS, and JS that combine to form a display element, and Component-Driven Development (CDD) is a development methodology by which web pages are built from the bottom up. 'Componentization' is the process of breaking things down into small and easily…

Categories: Drupal

OPTASY: What Are Some of The Best Free Drupal 7 E-commerce Themes?

9 March 2018 - 2:12am
What Are Some of The Best Free Drupal 7 E-commerce Themes? silviu.serdaru Fri, 03/09/2018 - 10:12 The “best” meaning “full-featured”, packed with plenty of built-in functionalities for eCommerce, granting your site both a visually-appealing and USABLE design. So, which are these top themes? To help you save valuable time, we've narrowed down all the options of free Drupal 7 eCommerce themes to a list of 5.
Categories: Drupal

Ixis.co.uk - Thoughts: Last Month in Drupal - February 2018

9 March 2018 - 2:00am
February has been and gone so here we take a look back at all the best bits of news that have hit the Drupal community over the last month.
Categories: Drupal

Lucius Digital: Always secure the files on your website properly | why (and how to do it in Drupal)

9 March 2018 - 1:24am
On May 25th 2018, the General Data Protection Regulation comes into effect, making it advisable to have an extra check on the security of your data. Here are some tips on securing files in Drupal:
Categories: Drupal

Kalamuna Blog: Help! Why does Composer keep installing Drupal 8.5 "BETA" instead of the stable version?

8 March 2018 - 9:09pm
Help! Why does Composer keep installing Drupal 8.5 "BETA" instead of the stable version? Hawkeye Tenderwolf Thu, 03/08/2018 - 21:09

Drupal core 8.5.0-stable was released just a few days ago, and I imagine other folks may run into the same installation problem as I did when attempting to upgrade. If you think this might be you, then read on...

Problem: When trying to upgrade from any previous version of Drupal core to ~8.5, Composer delivers 8.5.0-beta1 instead of the latest stable version.

Categories Articles Drupal
Categories: Drupal

Roman Agabekov: MySQL Master-Slave Replication

8 March 2018 - 8:31pm
Mysql Master-Slave Replication Submitted by admin on Fri, 03/09/2018 - 04:31

Hey all! Today, we shall show you some examples of master-slave replication setups.

A bit of theory first

Why do you need replication in the first place? There are at least two reasons to set it up. First, it is insurance that helps you avoid downtime when/if your master MySQL server goes down: with replication, the slave server picks up and fills in for the master. Second, replication allows you to decrease the load on the master server: you use it for writing only and pass read queries to the slave.

Tags
Categories: Drupal

Hook 42: Drupal 8 Interviews: Spotlight on Adam Bergstein

8 March 2018 - 5:05pm

Adam Bergstein is the VP of Engineering at Hook 42. Previously he was Associate Director of Engineering at Civic Actions and worked at Acquia as a Technical Architect. Adam is an active member of the Drupal Community. He recently took over the simplytest.me project, ported many modules to Drupal 8, is involved in Google Summer of Code, serves on the Drupal Governance Committee, and provides mentorship.
He has given multiple talks. Most of his talks focus on Drupal security, working with teams, or technical enablement.

Categories: Drupal


agoradesign: Asset Packagist as State of the Art in 3rd part library integration

8 March 2018 - 2:47pm
The introduction of Composer in Drupal 8 was a great improvement over managing packages via Drush Make, but it did leave some questions about how to properly load 3rd-party JavaScript libraries - here's some advice on how you should do it.
Categories: Drupal

CU Boulder - Webcentral: Drupal Deep Dives: Ignoring Your Slaves

8 March 2018 - 2:40pm

If you’re like me, you don’t know much about Drupal 7’s database layer besides the few functions you need to use daily. After scanning the comments on those API pages, I can usually get done what I need to, and I’ve never really had any weird database errors or issues that made me look more closely into the database APIs.

db_insert();
db_query();
db_merge();
// etc...

I work at an organization now that runs a service for 800+ sites with thousands of content editors. On any given day, the service performs more reads and writes than any application I’ve ever worked on before. Even with that caveat, our service doesn’t make all that many writes to the databases each day. However, our database infrastructure was set up (poorly) by another IT group (whose name shall not be mentioned), and because of that, we recently had to program defensively while performing database transactions.

Drupal Database API

Drupal has a nice overview page of documentation about how a developer ought to use the database APIs. Included in that section are topics I’ve never really explored.

For example, I’ve felt the pain of using Views as a query builder only to find out how slow and inefficient the default queries tend to be. Granted it is meant as a visual tool for site builders who can’t or don’t know how to use the database API functions, but it makes me sad sometimes.

Could I potentially use SQL Views to create some virtual tables and simplify my queries partially avoiding Drupal’s “join all the field tables together” issue? Probably, now that I know about SQL Views.
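Purely as a hypothetical sketch of that idea (the view name and the update hook are invented; node and field_data_body are the standard Drupal 7 tables):

```php
<?php

/**
 * Hypothetical D7 update hook: create a SQL view that pre-joins the node
 * table with its body field table, so read queries can skip one join.
 */
function mymodule_update_7100() {
  db_query("CREATE OR REPLACE VIEW {node_with_body} AS
    SELECT n.nid, n.title, b.body_value
    FROM {node} n
    INNER JOIN {field_data_body} b
      ON b.entity_type = 'node' AND b.entity_id = n.nid");
}
```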

I won’t go over a lot of the functionality covered in the docs, but it’s not a bad idea to read through all of that API documentation if you never have before. That’s what Friday afternoons are for, right? Your Drupal application performs a lot of queries every request/response cycle, and by finding optimizations in these docs, you may drastically increase your app’s performance with only a few lines of code.

Master/Slave? Sounds Kinky

In the title of this post, I mentioned “slaves” mainly for the clickbait factor, but what I meant was in the context of a “master/slave” database relationship. Now people, put down the stones you are about to throw at me for my use of the word “slave” in 2018. In Drupal 7, that is the terminology used in the codebase, although in Drupal 8, it has been updated to “primary/replica” which is more semantic and descriptive. You can read a detailed discussion on the topic within the Drupal community, but I will still use “master/slave” at points in this post since Drupal 7 makes me use it.

Your site might only have one database, and for local development, my sites generally only have one database. The default.settings.php file shipped in Drupal 7 has a lengthy section on database configurations and what options are available to you.

For each database, you may optionally specify multiple "target" databases. A target database allows Drupal to try to send certain queries to a different database if it can, but fall back to the default connection if not. That is useful for master/slave replication, as Drupal may try to connect to a slave server when appropriate, and if one is not available will simply fall back to the single master server. The general format for the $databases array is as follows:

$databases['default']['default'] = $info_array;
$databases['default']['slave'][] = $info_array;
$databases['default']['slave'][] = $info_array;
$databases['extra']['default'] = $info_array;

In the above example, $info_array is an array of settings described above. The first line sets a "default" database that has one master database (the second level default). The second and third lines create an array of potential slave databases. Drupal will select one at random for a given request as needed. The fourth line creates a new database with a name of "extra".

That segment of comments might be the only place you’ve seen “slave” mentioned in Drupal before. Normally, you’ve probably only used the “default” database info $databases['default']['default'] = $info_array; to set up a site. That’s all I was accustomed to using.

The “slave” database acts as a “replica” of the “master” (or “default”, or better yet, “primary”) database… you might be noticing why using “master/slave” was a bad idea, regardless of the generally negative connotation of the word “slave”. It’s just not all that semantic when describing the responsibilities of each type of connection.

The “replica” database’s job is to sync with the default “primary” database so that there is only one canonical source of information. Replicas allow for failovers during times of high database load. Generally, reads are more important for the functionality of your application. Writes, say on saving a form, can always be rolled back or provide feedback to a user on why the data can’t be saved. But if an anonymous user goes to a page on your site and Drupal can’t read anything, then everyone gets a fatal error.

If we go back to the comments above, you can see a “default” connection with one master and two slave databases. Drupal has some documentation on how that type of a database configuration works.

"This definition provides a single "default" server and two "slave" servers. Note that the "slave" key is an array. If any target is defined as an array of connection information, one of the defined servers will be selected at random for that target for each page request. That is, on one page request, all slave queries will be sent to dbserver2, while on the next they may all be sent to dbserver3." This means that during any request, one of the three default connections in that example might be used. On a site with high traffic, you can probably see how this database setup would come in handy for times of high load.

You can even tell Drupal to target one of the connections during a query.

$query = db_select('node', 'n', array('target' => 'slave'));

Original DB Error

My initial foray into looking at master/slave replication in Drupal 7 came with a bug report.

PDOException: SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry '60-36' for key 'PRIMARY': INSERT INTO {linkchecker_bean} (bid, lid) VALUES (:db_insert_placeholder_0, :db_insert_placeholder_1); Array ( [:db_insert_placeholder_0] => 60 [:db_insert_placeholder_1] => 36 ) in _linkchecker_bean_add_bean_links() (line 196 of /data/code/profiles/express/express-2.8.3/modules/contrib/linkchecker/modules/linkchecker_bean/linkchecker_bean.module)

After some investigation, we thought that the slave database was being read before the sync from the master happened. When queried there was no entry in the slave database; however, the master database already had an entry. The master database always makes the writes and so a duplication error occurred during the next attempted insertion.

// Remove all links from the links array already in the database and only
// add missing links to database.
$missing_links = _linkchecker_bean_links_missing($bean->bid, $links);

// Ignore slave database briefly.
variable_set('maximum_replication_lag', 300);
db_ignore_slave();

// Only add unique links to database that do not exist.
$i = 0;
foreach ($missing_links as $url) {
  $urlhash = drupal_hash_base64($url);
  $link = db_query('SELECT lid FROM {linkchecker_link} WHERE urlhash = :urlhash', array(':urlhash' => $urlhash))->fetchObject();
  if (!$link) {
    $link = new stdClass();
    $link->urlhash = $urlhash;
    $link->url = $url;
    $link->status = _linkchecker_link_check_status_filter($url);
    drupal_write_record('linkchecker_link', $link);
  }
  db_insert('linkchecker_bean')
    ->fields(array(
      'bid' => $bean->bid,
      'lid' => $link->lid,
    ))
    ->execute();
  // ...

The original code makes a db query for $missing_links that must have gone to a replica database that hadn’t yet synced with the primary database. That is why, later in the code, when the db_insert() happens, the insert fails.

db_merge()?

My first thought when I looked at the code was to use db_merge() instead of db_insert(). By using a merge query, you either make an update or an insert query to the database table. By providing the same primary keys as the ones you are inserting, you ensure that the database query never fatally errors due to duplicate content existing in the table.

db_merge('linkchecker_bean')
  ->key(array(
    'bid' => $bean->bid,
    'lid' => $link->lid,
  ))
  ->fields(array(
    'bid' => $bean->bid,
    'lid' => $link->lid,
  ))
  ->execute();

However, this “solution” doesn’t really address the issue. The code isn’t supposed to update a value that could already exist in the table. In this case, the correct thing is happening by giving me a fatal error. The problem is that the error isn’t caught.

Proper Exception Handling

You should always wrap any function call that might fail terribly in a try/catch statement. The try block of code acts just as it would without try {} wrapped around it. The catch block allows any potential error in the try block to be caught and dealt with without breaking execution of the PHP script.

$txn = db_transaction();
try {
  db_insert('linkchecker_bean')
    ->fields(array(
      'bid' => $bean->bid,
      'lid' => $link->lid,
    ))
    ->execute();
}
catch (Exception $e) {
  $txn->rollback();
  watchdog_exception('linkchecker_bean', $e);
}

Now we have preserved the original db_insert() while catching the original fatal error. You’ll also notice that a db_transaction() object is used to roll back the transaction if the insert fails.

I never knew about that functionality in Drupal 7, but I have grown accustomed to being able to rollback database transactions in other PHP frameworks. Too bad most module developers don’t integrate a rollback on erroneous database transactions. Drupal core could be taking care of this under-the-hood, but I’d rather see it explicitly defined in contributed code. From now on, I’ll probably be using those functions in my hook_update() code. You can read more about database error handling in the Drupal database documentation.

I was pretty satisfied with submitting a patch to the Linkchecker project based on the code above, except that it didn’t fix our issue. Since our theory revolved around database replication being slow, we had to go one step further and explicitly define the relationship between the primary and replica database connections at the time of the missing links query.

Finally, Ignore The Slaves

We finally get to do it, folks. Ignore those stupid slaves…and Twitter has gone wild again with hateful tweets directed at me…okay, okay, back to calling them replicas. You can tell Drupal to ignore the replica databases and only interact with the primary connection if you need to.

// Ignore slave database briefly.
variable_set('maximum_replication_lag', 300);
db_ignore_slave();

// Remove all links from the links array already in the database and only
// add missing links to database.
$missing_links = _linkchecker_bean_links_missing($bean->bid, $links);

// Only add unique links to database that do not exist.
$i = 0;
foreach ($missing_links as $url) {
  $urlhash = drupal_hash_base64($url);
  $link = db_query('SELECT lid FROM {linkchecker_link} WHERE urlhash = :urlhash', array(':urlhash' => $urlhash))->fetchObject();
  if (!$link) {
    $link = new stdClass();
    $link->urlhash = $urlhash;
    $link->url = $url;
    $link->status = _linkchecker_link_check_status_filter($url);
    drupal_write_record('linkchecker_link', $link);
  }
  $txn = db_transaction();
  try {
    db_insert('linkchecker_bean')
      ->fields(array(
        'bid' => $bean->bid,
        'lid' => $link->lid,
      ))
      ->execute();
  }
  catch (Exception $e) {
    $txn->rollback();
    watchdog_exception('linkchecker_bean', $e);
  }
}

// Go back to using the slave database. db_ignore_slave() sets this session
// variable that another function uses to see if the slave should be ignored.
unset($_SESSION['ignore_slave_server']);
// ...

Our final code ignores the replicas for a brief time using db_ignore_slave() and then returns querying back to normal by unsetting $_SESSION['ignore_slave_server'] after all of the database queries have run.

Internally, Drupal uses the session variable, which is a timestamp, to check whether the slave server should be ignored. This is done via hook_init() in the System module using Database::ignoreTarget('default', 'slave'). There is also a nice note in the comments about how the ignoring works.

function system_init() {
  $path = drupal_get_path('module', 'system');
  // Add the CSS for this module. These aren't in system.info, because they
  // need to be in the CSS_SYSTEM group rather than the CSS_DEFAULT group.
  drupal_add_css($path . '/system.base.css', array('group' => CSS_SYSTEM, 'every_page' => TRUE));
  if (path_is_admin(current_path())) {
    drupal_add_css($path . '/system.admin.css', array('group' => CSS_SYSTEM));
  }
  drupal_add_css($path . '/system.menus.css', array('group' => CSS_SYSTEM, 'every_page' => TRUE));
  drupal_add_css($path . '/system.messages.css', array('group' => CSS_SYSTEM, 'every_page' => TRUE));
  drupal_add_css($path . '/system.theme.css', array('group' => CSS_SYSTEM, 'every_page' => TRUE));

  // Ignore slave database servers for this request.
  //
  // In Drupal's distributed database structure, new data is written to the
  // master and then propagated to the slave servers. This means there is a
  // lag between when data is written to the master and when it is available on
  // the slave. At these times, we will want to avoid using a slave server
  // temporarily. For example, if a user posts a new node then we want to
  // disable the slave server for that user temporarily to allow the slave
  // server to catch up. That way, that user will see their changes immediately
  // while for other users we still get the benefits of having a slave server,
  // just with slightly stale data. Code that wants to disable the slave
  // server should use the db_ignore_slave() function to set
  // $_SESSION['ignore_slave_server'] to the timestamp after which the slave
  // can be re-enabled.
  if (isset($_SESSION['ignore_slave_server'])) {
    if ($_SESSION['ignore_slave_server'] >= REQUEST_TIME) {
      Database::ignoreTarget('default', 'slave');
    }
    else {
      unset($_SESSION['ignore_slave_server']);
    }
  }

  // Add CSS/JS files from module .info files.
  system_add_module_assets();
}

Wait, We’ve Already Executed hook_init()?

Since it happens in hook_init(), then pray tell, how is the database ignored later in my Linkchecker code? I’m not sure either. Subsequent requests will ignore the replica for as long as the timeout is active, but the queries in my code could possibly still hit the slave database. Wait, so I haven’t fixed my issue. And you certainly don’t want to place db_ignore_slave() before hook_init() is called, essentially always setting a timeout to ignore the replica.

In the comment above the session variable check, it explains that some users will see stale data. This is okay for the scenario where I save content via a node edit screen and expect it to show up on the next node view request. But what happens when there is no “user” saving content, and the queries happen within a single request cycle rather than a write on one request and then a read on another?

I am one of the “users” who can’t get stale data because we are relying on it to make a subsequent database insert in the same request. What we really need to do is “target” the default connection when we make a query.

db_query('SELECT lid FROM {linkchecker_link} WHERE urlhash = :urlhash', array(':urlhash' => $urlhash), array('target' => 'default'))->fetchObject();

Up until this point, I had only used the $args array to pass dynamic variables into database queries and avoid SQL injection, but there is another $options parameter you can use to identify the database target, among other things. While the allowed values for $options can be hard to discover from the db_query() API documentation, you can at least find the default values created when that parameter isn’t passed into db_query(). Based on the docs for the “target” key, you can have two values for the target: “slave” or “default”.

"The database "target" against which to execute a query. Valid values are "default" or "slave". The system will first try to open a connection to a database specified with the user-supplied key. If one is not available, it will silently fall back to the "default" target. If multiple databases connections are specified with the same target, one will be selected at random for the duration of the request. So when you don’t explicitly specify a target and have more than one connection, e.g. adding a replica, the query will pick a target at random which might be a slave with stale data."

After adding a target in another Linkchecker query, my job was done… and I didn’t even have to ignore the slaves after all. Hopefully, you now know something about database replication in Drupal 7, how to use db_ignore_slave() properly, and how to explicitly target databases per query as well.

Developer Blog
Categories: Drupal
