
Planet Drupal

Drupal.org - aggregated feeds in category Planet Drupal

Freelock : Embed or integrate? The problem with widgets

1 July 2015 - 1:34pm

We hear it all the time:

Why do you recommend a 6 hour budget for a simple integration? Here's an embed widget right here -- if this were WordPress I could do it myself!

Well, in Drupal you can do it yourself, exactly the same way you might in WordPress. Add a block, use an input format that doesn't strip out Javascript, paste in your code, put the block where you want it on your page, and away you go.

Tags: Drupal Planet, Performance, Integration, Embed, Security
Categories: Drupal

Acquia: Improve Your Site’s Security by Adding Two-Factor Authentication

1 July 2015 - 12:42pm

Identity theft and site compromises are all-too-common occurrences -- it seems a day rarely goes by without a news story detailing the latest batch of user passwords which have been compromised and publicly posted.

Passwords were first used in the 1960s when computers were shared among multiple people via time-sharing methods. Allowing multiple users on a single system required a way for users to prove they were who they claimed to be. As computer systems have grown more complex over the last 50 years, the same password concept has been baked into them as the core authentication method. That includes today’s Web sites, including those built using Drupal.

Why Better Protection Is Needed

Today, great lengths are taken both to protect a user’s password and to protect users from choosing poor passwords. Drupal 7, for example, provided a major security upgrade in how passwords are stored. It also provides a visual indicator of how complex a password is, in an attempt to suggest how secure it might be.

The password_policy module is also available to enforce a matrix of requirements when a user chooses a password.

Both of these methods have helped to increase password security, but there’s still a fundamental problem. A password is simply asking a user to provide a value they know. To make this easier, the vast majority of people reuse the same password across multiple sites.

That means that no matter how well a site protects a user’s password, any other site that does a poor job could provide an attacker with all they need to compromise your users’ accounts.

How Multi-Factor Authentication Works

Multi-Factor authentication is one way of solving this problem. There are three ways for a person to prove he is who he claims to be. These are known as factors and are the following:

  • Something you know - typically a password or passphrase
  • Something you have - a physical device such as a phone, ID, or keyfob
  • Something you are - biometric characteristics of the individual

Multi-factor authentication requests two or more of these factors. It makes it much more difficult for someone to impersonate a valid user. With it, if a user’s username and password were to be compromised, the attacker still wouldn’t be able to provide the user’s second form of authentication.

Here, we’ll be focusing on the most common multi-factor solution, something you know (existing password) and something you have (a cell phone).

This is the easiest method of multi-factor authentication, and is also known as two-factor authentication (TFA) because it uses two of the three factors. It’s already used by a large number of financial institutions, as well as large websites such as Google, Facebook, and Twitter. Acquia implemented TFA for its users in 2014, as discussed in this blog post: Secure Acquia accounts with two-step verification and strong passwords.

How To Do It

A suite of modules is available for Drupal for multi-factor authentication. The Two-Factor Authentication module provides integration into Drupal’s existing user- and password-based authentication system. The module is built to be the base framework of any TFA solution, and so does not provide a second-factor method itself. Instead, the TFA Basic plugins module provides three plugins: Google Authenticator, Trusted Device, and SMS using Twilio. TFA Basic can also be used as a guide to follow when creating custom plugins. You can find the documentation here: TFA plugin development.
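
For a Drupal 7 site, getting started can be as simple as enabling the two modules and then choosing which plugins are active (a sketch; the machine names tfa and tfa_basic are assumed from the project pages, and the admin path may differ between versions):

$> drush en tfa tfa_basic -y
# Then pick the enabled and default plugins under
# Configuration > People > Two-factor Authentication (admin/config/people/tfa).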

You’ll find that it’s easy to protect your site and its users with TFA, and given its benefits, it’s a good idea to start now.

But before you do, check out the TFA documentation. Acquia Cloud Site Factory users can find documentation for configuring two-factor authentication for their accounts here.

For more advice, check out TFA tips from Drupal.org.

Tags:  acquia drupal planet
Categories: Drupal

Red Crackle: Should you create a Page View or a Block View?

1 July 2015 - 12:08pm
You need to show a list of content on a page. You know that you need to use Views for it. But you are wondering whether to create a Page View or a Block View. Ask yourself this simple question to decide it for yourself.
Categories: Drupal

Another Drop in the Drupal Sea: How Much Will it Cost to Build this Website?

1 July 2015 - 11:59am

As professionals in the web tech sector, we know that there is no one set answer to that question. And yet, we really don't want to be spending our time having to explain over and over to prospects why we can't just answer the question.

We could provide an analogy and respond by saying, "That's like asking a random custom home builder how much it's going to cost to build a house." We could continue by explaining that the builder will need to know the location of the house, the square footage of the house, the desired finishes on the house, etc.


Categories: Drupal

Drupal @ Penn State: A Significant Event

1 July 2015 - 8:36am

Yesterday something significant occurred: http://brandywine.psu.edu launched on the new Polaris 2 Drupal platform. And soon the Abington Campus web site will move to the same platform. And perhaps many more.

Categories: Drupal

Advomatic: Protecting ACLU.org’s Privacy and Security

1 July 2015 - 8:25am
Earlier this year, we launched a new site for the ACLU. The project required a migration from Drupal 6, building a library of interchangeable page components, complex responsive theming, and serious attention to accessibility, security and privacy. In this post, I’ll highlight some of the security and privacy-related features we implemented.
Categories: Drupal

Agaric Collective: Performant bulk-redirection with Apache RewriteMap

1 July 2015 - 6:47am

When continuing development of a web site, big changes occur every so often. One such change, often a result of another, is a bulk update of URLs. When this is necessary, you can greatly improve the response time experienced by your users—as they are redirected from the old path to the new path—by using a handy directive offered in Apache's mod_rewrite called RewriteMap.

At Agaric we regularly turn to Drupal for its power and flexibility, so one might question why we didn't leverage Drupal's support for handling redirects. When we see an opportunity for our software/system to respond "as early as it can", it is worth investigating how that is handled. Apache handles redirects itself, making it entirely unnecessary to hand off to PHP, never mind bootstrapping Drupal and retrieving a redirect record from a database, just to tell a browser (or a search engine) to look somewhere else.

Two existing conditions made RewriteMap a great candidate. For one, there will be no changes to the list of redirects once they are set: these are for historical purposes only (the old URLs are no longer exposed anywhere else on the site). Also, because we could handle the full set of hundreds of redirects via a single RewriteRule—thanks to the substitution capability afforded by RewriteMap—this approach offered a fitting and concise solution.

So, what did we do, and how did we do it?

We started with an existing set of URLs that followed the pattern: http://example.com/user-info/ID[/tab-name]. Subsequently we implemented a module on the site that produced aliases for our user page URLs. The new pattern for the page was then (given exceptions for multiple J Smiths, etc. via the suffix): http://example.com/user-info/firstname-lastname[-suffix#][/tab-name]. The mapping of ID to firstname-lastname[-suffix#] was readily available within Drupal, so we used an update hook to write out the existing mappings to a file (in the Drupal public files folder, since we know that's writable by Drupal). This file (which I called 'staffmapping.txt') is what we used for a simple text-based rewrite map. Sample output of the update hook looked like this:

# User ID to Name mapping:
1 admin-admin
2 john-smith
3 john-smith-2
4 jane-smith

The format of this file is pretty straightforward: comments can be started on any line with a #, and the mapping lines themselves are composed of {lookupValue}{whitespace}{replacementValue}.
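
The update hook that generated the file could look roughly like the following sketch (the module name, table, and field names here are hypothetical; the real site pulled the mapping from its own alias data):

/**
 * Write the ID-to-alias mapping out to a rewrite map file.
 */
function mymodule_update_7100() {
  $lines = array('# User ID to Name mapping:');
  // Placeholder query: the real site read each user ID and its
  // generated firstname-lastname alias from its own storage.
  $result = db_query('SELECT uid, alias FROM {mymodule_user_alias} ORDER BY uid');
  foreach ($result as $row) {
    $lines[] = $row->uid . ' ' . $row->alias;
  }
  // public:// is Drupal's public files directory, known to be writable.
  file_unmanaged_save_data(implode("\n", $lines) . "\n", 'public://staffmapping.txt', FILE_EXISTS_REPLACE);
}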

To actually consume this mapping somewhere in our rules, we must let Apache know about the mapping file itself. This is done with a RewriteMap directive, which can be placed in the server config or inside a VirtualHost directive. The format of the directive is: RewriteMap MapName MapType:MapSource. In our case, the file is a simple text file mapping, so the MapType is 'txt'. The resulting line added to our VirtualHost section is then:

RewriteMap staffremap txt:/path/to/staffmapping.txt

This directive makes the rewrite mapping file available under the name "staffremap" in our RewriteRules. There are other MapTypes, including ones that use random selection of replacement values from a text file, use a hash map rather than a text file, use an internal function, or even use an external program or script to generate replacement values.

Now it's time to actually change incoming URLs using this mapping file, providing the 301 redirect we need. The rewrite rule we used looks like this:

RewriteRule ^user-detail/([0-9]+)(.*) /user-detail/${staffremap:$1}$2 [R=301,L]

The initial argument to the rewrite rule identifies which incoming URLs this rule applies to. This is the string "^user-detail/([0-9]+)(.*)". This particular rule looks for URLs starting with (signified by the special character ^) the string "user-detail/", then followed by one or more digits: ([0-9]+), and finally anything else that might appear at the end of the string: "(.*)". There's a particular feature of regex being used here as well: each of the search terms in parentheses is captured (or tagged) by the regex processor, which then provides references that can be used in the replacement string portion. These are available as $<captured position>—so the first value captured by parentheses is available in "$1"—this would be the user ID, and the second in "$2"—which for this expression would be anything else appearing after the user ID.

Following the whitespace is our new target URL expression: "/user-detail/${staffremap:$1}$2". We're keeping the beginning of the URL the same, and then, using the expression syntax "${rewritemap:lookupvalue}"—in our case "${staffremap:$1}"—we find the new user-name URL. This section can be read as: take the value from the rewrite map called "staffremap", where the lookup value is $1 (the first tagged expression in the search: the numeric value), and return the substitution value from that map in place of this expression. So, if we were attempting to visit the old URL /user-detail/1/about, the staffremap provides the value "admin-admin" from our table. The final portion of the replacement URL (which is just $2) copies everything else that was passed on the URL through to the redirected URL. So, for example, /user-detail/1/about includes the /about portion of the URL in the ultimate redirect URL: /user-detail/admin-admin/about.

The final section of the sample RewriteRule is for applying additional flags. In this case, we are specifying the response status of 301, and the L indicates to mod_rewrite that this is the last rule it should process.
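
Putting the pieces together, the relevant part of the configuration looks roughly like this (a sketch; the domain and file path are illustrative, and note that in VirtualHost context the matched URL path includes the leading slash):

<VirtualHost *:80>
    ServerName example.com
    RewriteEngine On
    RewriteMap staffremap txt:/path/to/staffmapping.txt
    RewriteRule ^/user-detail/([0-9]+)(.*) /user-detail/${staffremap:$1}$2 [R=301,L]
</VirtualHost>
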
That's basically it! We've gone from an old URL pattern, to a new one with a redirect mapping file, and only two directives. For an added performance perk, especially if your list of lookup and replacement values is rather lengthy, you can easily change your text table file (type txt) with a HashMap (type dbm) that Apache's mod_rewrite also understands using a quick command and directive adjustment. Following our example, we'll first run:

$> httxt2dbm -i staffmapping.txt -o staffremap.map

Now that we have a hashmap file, we can adjust our RewriteMap directive accordingly, changing the type to dbm, and of course updating the file name, which becomes:

RewriteMap staffremap dbm:/path/to/staffremap.map

RewriteMap substitutions provide a straightforward, high-performance method for pretty extensive enhancement of RewriteRules. If you are not familiar with RewriteRules generally, at some point you should consider reviewing the Apache documentation on mod_rewrite—it's worthwhile knowledge to have.

Categories: Drupal

Dries Buytaert: One year later: the Acquia Certification Program

1 July 2015 - 6:31am

A little over a year ago we launched the Acquia Certification Program for Drupal. We ended up the first year with close to 1,000 exams taken, which exceeded our goal of 300-600. Today, I'm pleased to announce that the Acquia Certification Program passed another major milestone with over 1,000 exams passed (not just taken).

People (myself included) have debated the pros and cons of software certifications for years, so I want to give an update on our certification program and some of the lessons learned.

Acquia's certification program has been a big success. A lot of Drupal users, from the Australian government to Johnson & Johnson, require Acquia Certification. We also see many of our agency partners use the program as a tool in the hiring process. While a certification exam cannot guarantee someone will be great at their job (e.g. we only test for technical expertise, not for attitude), it does give a frame of reference to work from. The feedback we have heard time and again is that the Acquia Certification Program is tough, but fair; it validates skills and knowledge that are important to both customers and partners.

We were also listed in the Certification Magazine Salary Survey as having one of the most desired credentials to obtain. For a first-year program to be identified among certification leaders like Cisco and Red Hat speaks volumes about the respect our program has established.

Creating a global certification program is resource intensive. We've learned that it requires the commitment of a team of Drupal experts to work on each and every exam. We now have four different exams: developer, front-end specialist, back-end specialist and site builder. It takes roughly 40 work days for the initial development of one exam, and about 12 to 18 work days for each exam update. We update all four of our exams several times per year. In addition to creating and maintaining the certification programs, there is also the day-to-day operation of running the program, which includes providing support to participants and ensuring the exams are in place for testing around the globe, both online and at test centers. However, we believe that effort is worth it, given the overall positive effect on our community.

We also learned that benefits are important to participants and that we need to raise the profile of someone who achieves these credentials, especially those with the new Acquia Certified Grand Master credential (those who passed all three developer exams). We have a special Grand Master Registry and look to create a platform for these Grand Masters to help share their expertise and thoughts. We do believe that if you have a Grand Master working on a project, you have a tremendous asset working in your favor.

At DrupalCon LA, the Acquia Certification Program offered a test center at the event, and we ended up having 12 new Grand Masters by the end of the conference. We saw several companies step up to challenge their best people to achieve Grand Master status. We plan to offer testing at DrupalCon Barcelona, so take advantage of the convenience of the on-site test center and the opportunity to meet and talk with Peter Manijak (who developed and leads our certification efforts), myself, and an Acquia Certified Grand Master or two about Acquia Certification and how it can help you in your career!

Categories: Drupal

Jim Birch: Essential Drupal: Stage File Proxy

1 July 2015 - 2:20am

We can easily check out code from our git repositories for our local, development, and staging servers. We can get a database from the live site through Backup and Migrate, drush, or a number of other ways. But getting the files of the site (the images, PDFs, and everything else in /sites/default/files) is not at the top of most developers' lists. In recent versions of Backup and Migrate you can export the files, but oftentimes this can be a huge archive file. There is an easier way.

The Stage File Proxy module saves the day by sending requests for files to the live server if they do not yet exist in your local environment. This saves you space on your non-production environment since it only grabs files from the pages you visit. Great for those of us who have dozens of sites running locally.

As simple as can be, it gets the files you need on your local server, as you need them. No more navigating broken-looking dev sites. This will get your environment looking as it should so you can concentrate on the task at hand.
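
Setup in Drupal 7 amounts to pointing a variable in settings.php at the live site (a sketch; the origin URL is illustrative, and the hotlink option is an assumption based on the module's documentation):

// Fetch missing files from the production site on demand.
$conf['stage_file_proxy_origin'] = 'http://www.example.com';
// Optionally redirect to the origin instead of copying the file locally.
$conf['stage_file_proxy_hotlink'] = TRUE;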


Categories: Drupal

Sooper Drupal Themes: YouTube Video of Drupal Theme integrated Drag and Drop builder

1 July 2015 - 1:22am
After I posted a case study last week I had a number of readers ask me if they could try a demo and see how it works. There is no try-out demo yet, but in the meantime I produced a video that demonstrates the basic controls. If you have any questions about integration and the open source library that powers it, feel free to contact or comment!

Tags: planet, drag and drop, glazed, Drupal 7.x
Categories: Drupal

KatteKrab: Comparing D7 and D8 outta the box

30 June 2015 - 6:53pm

I did another video the other day. This time I've got a D7 and a D8 install open side by side, and I compare the process of adding an article.

Categories: Drupal

Pixelite: Debugging Drupal performance with Cache Debug module

30 June 2015 - 5:00pm
Preface

This blog post is for developers, not site builders, as the analysis for cache debugging requires knowledge about the runtime stack of Drupal.

The Problem with Caching in Drupal 7

To achieve performance, Drupal 7 relies heavily on caching: doing a piece of work once and caching the end result so that the same work doesn’t need to be repeated. Conditions also have to be defined for when that cache expires or is invalidated. Drupal has a caching layer to help with this. When you want to store something in the cache, you use cache_set; to retrieve it, you use cache_get; and to wipe a cache bin clean, you can use cache_clear_all.
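
As a refresher, here is a minimal sketch of that pattern in Drupal 7 (the cache ID and the expensive helper function are hypothetical):

<?php
$cache = cache_get('mymodule_expensive_data');
if ($cache === FALSE) {
  // Nothing cached yet (or it was cleared): do the expensive work once.
  $data = mymodule_compute_expensive_data();
  // Store the result in the default 'cache' bin until the next wipe.
  cache_set('mymodule_expensive_data', $data, 'cache', CACHE_TEMPORARY);
}
else {
  $data = $cache->data;
}
// Wiping a whole bin with a wildcard; this is exactly the kind of call
// Cache Debug helps trace back to its caller.
cache_clear_all('*', 'cache', TRUE);
?>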

Modules often set or clear caches implicitly and unintentionally. This can lead to more caching overhead than you need: for example, theme registry clearing, use of the variable_set function, or calls to other modules that call cache_clear_all. The problem is: how do you track down the culprits to fix the issue?

Enter Cache Debug

Cache Debug is a module that wraps around the caching layer and adds logging, including stacktrace information. When cache_set or cache_clear_all is called, you can trace back to what called it, understand the problem, and fix it very quickly.

It comes with three logging options:

  • watchdog - good if you’re using syslog module but deadly if you’re using dblog.
  • error_log - logs to your php error log
  • arbitrary file - specify your own log file to log to
Configuring Cache Debug

Because the caching system is so highly utilized, cache logging can be incredibly verbose. Perhaps this is why there is no logging around this in Drupal core. Fortunately, Cache Debug is highly configurable to control what to log.

NOTE: Because the caching system is loaded and used before Drupal’s variable system which manages configuration, it is best to set configuration in settings.php rather than in the database. However, there is a web UI that does set configuration in the database for ease of use.

Basic configuration

If you’ve used the memcache module before, this should feel familiar. In order to use Cache Debug, you need to set it as the cache handler:

<?php
$conf['cache_backends'][] = 'sites/all/modules/cache_debug/cache_debug.inc';
$conf['cache_default_class'] = 'DrupalDebugCache';
?>

This tells Drupal that there is a cache backend located in the path provided (make sure it's correct for your Drupal site!) and that the default class for all cache bins is the DrupalDebugCache class. If you only want to monitor a single bin, you may want to omit the cache_default_class line.

Since Cache Debug is a logger and not an actual caching system, it needs to pass cache requests on to a real cache system. By default, Cache Debug will use Drupal core's database cache system for cache storage, but if you're using memcache, redis or similar, you may want to set that as the handler for Cache Debug:

<?php
$conf['cache_debug_class'] = 'MemCacheDrupal';
$conf['cache_cache_debug_form'] = 'DrupalDatabaseCache';
?>

You need to also configure those modules accordingly.

At this point, you’ll be logging all cache calls and stack traces to set and clear calls to the php error log.

Configure the logging location

You may want to choose your own logging location. For example, if you use dblog, then you won't want to log to watchdog because it will bloat your database. Likewise, if you don't want to bloat your php error log, then you may want to log to an arbitrary file. You can choose your logging location by setting cache_debug_logging_destination to error_log (the default), watchdog or file. For file you will also need to provide the location:

<?php
$conf['cache_debug_logging_destination'] = 'file';
$conf['cache_debug_log_filepath'] = '/tmp/cachedebug.log';
?>

Configuring logging options

You can choose to log calls to cache get, getMulti, set and clear. You can also choose to log a stacktrace of these calls to show the stack that triggered the call. This is most useful for calls to set and clear. For a minimal logging option with the most insight, you might want to try this:

<?php
$conf['cache_debug_log_get'] = FALSE;
$conf['cache_debug_log_getMulti'] = FALSE;
$conf['cache_debug_log_set'] = TRUE;
$conf['cache_debug_log_clear'] = TRUE;
$conf['cache_debug_stacktrace_set'] = TRUE;
$conf['cache_debug_stacktrace_clear'] = TRUE;
?>

Logging per cache bin

You don’t have to log the entire caching layer if you know which bin to look at for the caching issue you’re observing. For example, if you’re looking for misuse of variable_set, you only need to log the cache_bootstrap bin, in which case you could do this:

<?php
# Do not log all cache bins, so ensure this line is removed (from above):
# $conf['cache_default_class'] = 'DrupalDebugCache';
$conf['cache_bootstrap_class'] = 'DrupalDebugCache';
?>

Configure for common issues

Variable set calls and theme registry rebuilds are the two most common issues and so Cache Debug has use cases for these issues built in. So long as Cache Debug is the cache handler for the bin, you can turn off logging and turn on these features and Cache Debug will only log when these issues occur:

<?php
$conf['cache_default_class'] = 'DrupalDebugCache';
$conf['cache_debug_common_settings'] = array(
  'variables' => TRUE,
  'theme_registry' => TRUE,
);
// Turn off logging
$conf['cache_debug_log_get'] = FALSE;
$conf['cache_debug_log_getMulti'] = FALSE;
$conf['cache_debug_log_set'] = FALSE;
$conf['cache_debug_log_clear'] = FALSE;
?>

Analysing the logged data

Cache debug logs to a log file like the example below:

In this snapshot of log output you can see both how Cache Debug logs cache calls and the stacktracing in action.

Log format structure

A log line starts with a value that describes the cache bin, the cache command and the cache id. E.g. cache_bootstrap->set->variables would be a cache_set call to the cache_bootstrap cache bin to set the variables cache key. Some calls also log additional data; for example, cache clear also indicates if the call was a wildcard clear. Set calls also log how much data (length) was set.

Stack trace logs

When stack tracing is enabled for specific commands, a stack trace will be logged immediately after the log event that triggered it. The trace rolls back through each function that led to the current cache command being triggered. In the example above you can see that cache_clear_all was called by drupal_theme_rebuild, which was called by an include from phptemplate_init. If you look at the source code in phptemplate_init, you'll see that this means a cache rebuild was triggered from including template.php. In this case, the Zen base theme had theme registry rebuilding left on.

Categories: Drupal

Open Source Training: Using Administration Menu and Shortcuts in Drupal

30 June 2015 - 10:26am

An OSTraining member took our recommendation and installed the Administration Menu module, which makes it much easier to navigate through your Drupal site.

However, after enabling Administration Menu, they found that they lost their Shortcuts menu. This is normally the grey bar under Drupal's default admin toolbar.

Here's how to use Administration Menu and Shortcuts together.

First, make sure that "Administration menu" is enabled, but also make sure that the "Administration menu Toolbar style" module is enabled:

Now go to Configuration > Administration Menu and you'll be able to check the "Shortcuts" box, as in the image below. This will re-enable the Shortcuts menu.

If you don't see the Shortcuts menu immediately, try clicking the down arrow in the top-right corner:
Categories: Drupal

Pantheon Blog: An Example Repository to Build Drupal with Composer on Travis

30 June 2015 - 9:59am
A robust Continuous Integration system with good test coverage is the best way to ensure that your project remains maintainable; it is also a great opportunity to enhance your development workflow with Composer. Composer is a dependency management system that collects and organizes all of the software that your project needs in order to run. 
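
As a hedged illustration of the kind of manifest involved (the package names and versions here are placeholders, not the actual contents of the example repository):

{
  "name": "example/drupal-travis-demo",
  "require": {
    "behat/behat": "~3.0",
    "drush/drush": "~7.0"
  }
}

Running composer install then fetches everything into the vendor/ directory, which a Travis build can rely on.
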
Categories: Drupal

Acquia: Caching to Improve Drupal Performance: The Three Levels You Should Know

30 June 2015 - 9:07am

In our continuing mission (well, not a mission; it’s actually a blog series) to help you improve your Drupal website, let’s look at the power of caching.

In our previous post, we debunked some all-too-common Drupal performance advice. This time we're going positive, with a simple, rock-solid strategy to get you started: caching is the single best way to improve Drupal performance without having to fiddle with code.

At the basic level, it is easy enough for a non-technical user to implement. Advanced caching techniques might require some coding experience, but for most users, basic caching alone will bring about drastic performance improvements.

Caching in Drupal happens at three separate levels: application, component, and page. Let’s review each level in detail.

Application-level caching

This is the caching capability baked right into Drupal. You won't see it in action unless you dig deep into Drupal's internal code. It is enabled by default and won't ever show older, cached pages.

With application-level caching, Drupal essentially stores cached pages separately from the site content (which goes into the database). You can't really configure this, except for telling Drupal where to save cached pages explicitly. You might see improved performance if you use Memcached on cached pages, but the effect is not big enough to warrant the effort.

With application-level caching, Drupal stores much of its internal data and structures in efficient ways to speed up frequent access. This isn’t information that a site visitor will see per se, but it is critical for constructing any page. Basically, the only enhancement that can be made at this level is improving where this cached information is stored, like using Memcached instead of the database.

You just need to install Drupal and let the software take care of caching at the application-level.

Component-level caching

This works on user-facing components such as blocks, panels, and views. For example, you might have a website with constantly changing content but a single block remains the same. In fact, you may have the same block spread across dozens of pages. Caching it can result in big performance improvements.

Component-level caching is usually disabled by default, though you can turn it on with some simple configuration changes. For the best results, identify blocks, panels, and views that remain the same across your site, and then cache them aggressively. You will see strong speedups for authenticated users.

Page-level caching

This is exactly what it sounds like: The entire page is cached, stored and delivered to a user. This is the most efficient type of caching. Instead of generating pages dynamically with Drupal bootstrap, your server can show static HTML pages to users instead. Site performance will improve almost immeasurably.

Page-level caching gives you a lot of room to customize. You can use any number of caching servers, including Varnish, which we use at Acquia Cloud. You can also use CDNs like Akamai, Fastly, or CloudFlare to deliver cached pages from servers close to the user's location. With CDNs, you are literally bringing your site closer to your users.

Keep in mind that forced, page-level caching works only for anonymous users by default. Fortunately, this forms the bulk of traffic to any website.
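
In Drupal 7, the basic switches for the component and page levels can be flipped on the performance page or pinned in settings.php (a sketch; the lifetime shown is illustrative):

<?php
$conf['cache'] = 1;                    // Page caching for anonymous users.
$conf['block_cache'] = 1;              // Component level: cache block output.
$conf['page_cache_maximum_age'] = 300; // Max age honored by Varnish/CDNs, in seconds.
?>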

It bears repeating: Caching should be your top priority for boosting Drupal performance. By identifying and caching commonly repeated components and using a CDN at page-level, you’ll see site speed improvements that you can write home about.

Next time: How to Evaluate Drupal Modules for Performance Optimization.

Tags:  acquia drupal planet
Categories: Drupal

Acquia: Seamless Migration to Drupal 8: Make it Yours

30 June 2015 - 8:26am

Hi there. I’m Adam from Acquia. And I want YOU to adopt Drupal 8!

I’ve been working on this for months. Last year, as an Acquia intern, I wrote the Drupal Module Upgrader to help people upgrade their code from Drupal 7 (D7) to Drupal 8 (D8). And now, again as an Acquia intern, I’m working to provide Drupal core with a robust migration path for your content and configuration from D6 and D7 to Drupal 8. I’m a full-service intern!

The good news is that Drupal core already includes the migration path from D6 to D8. The bad news is that the (arguably more important) migration path from D7 to D8 is quite incomplete, and Drupal 8 inches closer with each passing day. That’s why I want -- nay, need -- your help.

We need to get this upgrade path done.

If you want core commits with your name on them (and why wouldn’t you?), this a great way to get some, regardless of your experience level. Noob, greybeard, or somewhere in between, there is a way for you to help. (Besides, the greybeards are busy fixing critical issues.)

What’s this about?

Have you ever tried to make major changes to a Drupal site using update.php and a few update_N hooks? If you haven’t, consider yourself lucky; it’s a rapid descent into hell. Update hooks are hard to test, and any number of things can go wrong while running them. They’re not adaptable or flexible. There’s no configurability -- you just run update.php and hope for the best. And if you’ve got an enormous site with hundreds of thousands of nodes or users, you’ll be staring anxiously at that progress bar all night. So if the idea of upgrading an entire Drupal site in a single function terrifies you, congratulations: you’re sane.

No, when it comes to upgrading a full Drupal site, hook_update_N() is the wrong tool for the job. It’s only meant for making relatively minor modifications to the database. Greater complexity demands something a lot more powerful.

The Migrate API is that something. This well-known contrib module has everything you need to perform complex migrations. It can migrate content from virtually anything (WordPress, XML, CSV, or even a Drupal site) into Drupal. It’s flexible. It’s extensible. And it’s in Drupal 8 core. Okay, not quite -- the API layer has been ported into core, but the UI and extras provided by the Drupal 7 Migrate module are in a (currently sandboxed) contrib module called Migrate Plus.

Also in core is a new module called Migrate Drupal, which uses the Migrate API to provide upgrade paths from Drupal 6 and 7. This is the module that new Drupal 8 users will use to move their old content and configuration into Drupal 8.

At the time of this writing, Migrate Drupal contains a migration path for Drupal 6 to Drupal 8, and it’s robust and solid thanks to the hard work of many contributors. It was built before the Drupal 7 migration path because Drupal 6 security support will be dropped not long after Drupal 8 is released. It covers just about all bases -- it migrates your content into Drupal 8, along with your CCK fields (and their values). It also migrates your site’s configuration into Drupal 8, right down to configuration variables, field widget and formatter settings, and many other useful tidbits that together comprise a complete Drupal 6 site.

Here’s a (rather old) demo video by @benjy, one of the main developers of the Drupal 6 migration path:

Awesome, yes? I think so. Which brings me to what Migrate Drupal doesn’t yet have -- a complete upgrade path from Drupal 7 to Drupal 8. We’re absolutely going to need one. It’s critical if we’re going to get people onto Drupal 8!

This is where you come in. The Drupal 7 migration path is one of the best places to contribute to Drupal core, even at this late stage of the game. The D7 upgrade path has been mapped out in a meta-issue on drupal.org, and a large chunk of it is appropriate for novice contributors!

Working on the Migrate API involves writing migrations, which are YAML files (if you’re not familiar with YAML, the smart money says that you will pick it up in, honestly, thirty seconds flat). You’ll also write automated tests, and maybe a plugin or two -- a crucial skill when it comes to programming Drupal 8! If you’re a developer, contributing migrations is a gentle, very useful way to prepare for D8.

A very, very quick overview of how this works

Migrations are a lot simpler than they look. A migration is a piece of configuration, like a View or a site slogan. It lives in a YAML file.

Migrations have three parts: the source plugin, the processing pipeline, and the destination plugin. The source plugin is responsible for reading rows from some source, like a Drupal 7 database or a CSV file. The processing pipeline defines how each field in each row will be massaged, tweaked, and transformed into a value that is appropriate for the destination. Then the destination plugin takes the processed row and saves it somewhere -- for example, as a node or a user.

There’s more to it, of course, but that’s the gist. All migrations follow this source-process-destination flow.

id: d6_url_alias
label: Drupal 6 URL aliases
migration_tags:
  - Drupal 6
# The source plugin is an object which will read the Drupal 6
# database directly and return an iterator over the rows of the
# {url_alias} table.
source:
  plugin: d6_url_alias
# Define how each field in the source row is mapped into the destination.
# Each field can go through a "pipeline", which is just a chain of plugins
# that transform the original value into the destination value, one step at
# a time. Source values can go through any number of transformations
# before being added to the destination row. In this case, there are no
# transformations -- it's just direct mapping.
process:
  source: src
  alias: dst
  langcode: language
# The destination row will be saved by the url_alias destination plugin, which
# knows how to create URL aliases. There are many other destination plugins,
# including ones to create content entities (nodes, users, terms, etc.) and
# configuration (fields, display settings, etc.)
destination:
  plugin: url_alias
# Migrations can depend on specific modules, configuration entities, or even
# other migrations.
dependencies:
  module:
    - migrate_drupal

I <3 this, how can I help?

The first thing to look at is the Drupal 7 meta-issue. It divvies up the Drupal 7 upgrade path by module, and divides them further by priority. The low-priority ones are reasonably easy, so if you’re new, you should grab one of those and start hacking on it. (Hint: migrating variables to configuration is the easiest kind of migration to write, and there are plenty of examples.) The core Migrate API is well-documented too.
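
To give a flavor of that "easiest kind", here is a hedged sketch of a variable-to-configuration migration (the variable names and the destination config object are hypothetical; the variable source and config destination plugin IDs follow the core examples):

id: d7_example_settings
label: Drupal 7 example settings
migration_tags:
  - Drupal 7
source:
  # The 'variable' source plugin reads rows from the old {variable} table.
  plugin: variable
  variables:
    - example_threshold
    - example_enabled
process:
  # Direct mapping from old variable names to keys in the new config object.
  threshold: example_threshold
  enabled: example_enabled
destination:
  plugin: config
  config_name: example.settings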

If you need help, we’ve got a dedicated IRC channel (#drupal-migrate). I’m phenaproxima, and I’m one of several nice people who will be happy to help you with any questions you’ve got.

If you’re not a developer, you can still contribute. Do you have a Drupal 6 site? Migrate it to Drupal 8, and see what happens! Then tell us how it went, and include any unexpected weirdness so we can bust bugs. As the Drupal 7 upgrade path shapes up, you can do the same thing on your Drupal 7 site.

If you want to learn about meatier, more complicated issues, the core Migrate team meets every week in a Google Hangout-on-air, to talk about larger problems and overarching goals. But if you’d rather focus on simpler things, don’t worry about it. :)

And with that, my fellow Drupalist(a)s, I invite you to step up to the plate. Drupal 8 is an amazing release, and everyone deserves it. Let’s make its adoption widespread. Upgrading has always been one of the major barriers to adopting a new version of Drupal, but the door is open for us to fix that for good. I know you can help.

Besides, core commits look really good with your name tattooed on ‘em. Join us!

Tags:  acquia drupal planet
Categories: Drupal

Drupal Watchdog: Testing 1.2.3...

30 June 2015 - 7:15am
Column

The introduction of Behat 3, and the subsequent release of the Behat Drupal Extension 3, opened up several new features with regard to testing Drupal sites. The concept of test suites, combined with the fact that all contexts are now treated equally, means that a site can have different suites of tests that focus on specific areas of need.

Background

Behat is a PHP framework for implementing Behavior Driven Development (BDD). The aim is to use ubiquitous language to describe value for everybody involved, from the stake-holders to the developers. A quick example:

In order to encourage visitors to become more engaged in the forums
Visitors who choose to post a topic or comment
Will earn a 'Communicator' badge

This is a Behat feature; there need be no magic or structure to it. The goal is to simply and concisely describe a feature of the site that provides true value. In Behat, features are backed up with scenarios. Scenarios are written in Gherkin and are mapped directly to step definitions, which execute against a site and determine whether, indeed, a given scenario is working.

Continuing with the above example:

Scenario: A user posts a comment to an existing topic and earns the communicator badge
  Given a user is viewing a forum topic "Getting started with Behat"
  When they post a comment
  Then they should immediately see the "Communicator" badge

Each of the Given, When, and Then steps are mapped to code using either regex, or newly in Behat 3, Turnip syntax:
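
A step definition for the final step might look roughly like the following sketch (the context class is hypothetical, and the body is left pending rather than implemented):

<?php

use Behat\Behat\Context\Context;
use Behat\Behat\Tester\Exception\PendingException;

class BadgeContext implements Context {

  /**
   * Turnip syntax: ":badge" is a placeholder that captures the quoted
   * badge name from the scenario step.
   *
   * @Then they should immediately see the :badge badge
   */
  public function assertBadgeVisible($badge) {
    // A real implementation would inspect the page (e.g. via Mink)
    // for the badge markup and throw if it is absent.
    throw new PendingException();
  }
}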

Categories: Drupal

Amazee Labs: Debug Solr queries

30 June 2015 - 4:00am
By Vasi Chindris, Tue, 06/30/2015 - 13:00

Solr is great! When you have a site, even one without much content, and you want full-text search, using Solr as a search engine will greatly improve both the speed of the search itself and the accuracy of the results. But, as is often the case, the good things come with a drawback too. In this case, we're talking about a new system that our web application has to communicate with. This means that, even if the system is pretty good by default, you have to be able in some cases to understand more deeply how it works: besides being able to configure the system, you have to know how to debug it. In the following we'll see how to debug the Solr queries our applications use for searching, but first let's consider a concrete example of when we'd need to debug a query.

An example use case

Let’s suppose we have 2 items which both contain a specific word (let’s say ‘building’) in the title. And we have a list where we show search results ordered first by their score, and, when the scores are equal, by creation date, descending. At first sight you would say that, because both of them have the word in the title, they have the same score, so you should see the newest item first. Well, it could be that this is not true: even though both have the word in the title, the scores are not the same.

Preliminaries

Let’s suppose we have a system which uses Solr as a search server. In order to debug a query, we first have to be able to run it directly against Solr. The easiest case is when Solr is accessible via HTTP from your browser. If not, Solr must be reached from the same server where your application sits, so you call it from there. I won't dwell on this; if you managed to get Solr running for your application, you should be able to call it.

Getting your results

The next thing to do is to make a query with the exact same parameters as your application. To have a concrete example, we will consider here that we have a Drupal site which uses the Search API module with Apache Solr as the search server. One possibility for getting the exact query which is made is to check the SearchApiSolrConnection::makeHttpRequest() method, which makes a call to drupal_http_request() using a URL. You could also use the Solr logs to check the query if that is easier. Let's say we search for the word “building”. An example query should look like this:

http://localhost:8983/solr/select?fl=item_id%2Cscore&qf=tm_body%24value%5E5.0&qf=tm_title%5E13.0&fq=index_id%3A%22articles%22&fq=hash%3Ao47rod&start=0&rows=10&sort=score%20desc%2C%20ds_created%20desc&wt=json&json.nl=map&q=%22building%22

If you take that one and run it in the browser, you should see a JSON output with the results, something like:

To make it look nicer, you can just remove the “wt=json” (and optionally “json.nl=map”) from your URL, so it becomes something like:

http://localhost:8983/solr/select?fl=item_id%2Cscore&qf=tm_body%24value^5.0&qf=tm_title^13.0&fq=index_id%3A"articles"&fq=hash%3Ao47rod&start=0&rows=10&sort=score desc%2C ds_created desc&q="building"

which should result in much nicer XML output:

List some additional fields

So now we have the results from Solr, but all they contain are the internal item id and the score. Let's add some fields which will help us see exactly what text the items contain. The fields you are probably most interested in are the ones in the “qf” variable in your URL. In this case we have:

qf=tm_body%24value^5.0&qf=tm_title^13.0

which means we are probably interested in the “tm_body%24value” and “tm_title” fields. To make them appear in the results, we add them to the “fl” variable, so the URL becomes something like:

http://localhost:8983/solr/select?fl=item_id%2Cscore%2Ctm_body%24value%2Ctm_title&qf=tm_body%24value^5.0&qf=tm_title^13.0&fq=index_id%3A%22articles%22&fq=hash%3Ao47rod&start=0&rows=10&sort=score%20desc%2C%20ds_created%20desc&q=%22building%22

And the result should look something like:

Debug the query

Now everything is ready for the final step in getting the debug information: adding the debug flag. It is very easy to do: all you have to do is add “debugQuery=true” to your URL, which means it will look like this:

http://localhost:8983/solr/select?fl=item_id%2Cscore%2Ctm_body%24value%2Ctm_title&qf=tm_body%24value^5.0&qf=tm_title^13.0&fq=index_id%3A%22articles%22&fq=hash%3Ao47rod&start=0&rows=10&sort=score%20desc%2C%20ds_created%20desc&q=%22building%22&debugQuery=true

You should now see more debug information, like how the query is parsed, how much time it takes to run, and probably the most important part: how the score of each result is computed. If your browser does not display the formula in an easily readable way, you can copy and paste it into a text editor; it should look something like:

As you can see, computing the score of an item is done using a pretty complex formula with many variables as inputs. You can find a few more details about these variables here: Solr Search Relevancy.

Categories: Drupal

ERPAL: How we’re building our SaaS business with Drupal

30 June 2015 - 2:00am

Have you ever thought about building your own Software-as-a-Service (SaaS) business based on Drupal? I don't mean selling Drupal as a service, but selling your Drupal-based software under a subscription model and using Drupal as the basis for your accounting, administration and deployment, and as the tool that serves and controls all the business processes of your SaaS business. Yes, you have? That's great! We’ve done the same thing over the last 12 months, and in this blog post I want to share my experiences with you (and we’d be delighted if you shared your experiences in the comments). I’ll show you the components we used to build Drop Guard – a Drupal-auto-updater-as-a-service (DAUaaS ;-)) that includes content delivery and administration, subscription handling, CRM and accounting, all based on ERPAL Platform.

I’m not talking about a full-featured, mature SaaS business yet, but about a start-up in which expense control matters a lot and where agility is one of the most important parameters for driving growth. Of course, there are many services out there for CRM, payment, content, mailings, accounting, etc. But have you added up all the expenses for those individual services, as well as the time and money you need to integrate them properly? And are you sure you’ve made a solid choice for the future? I want to show you how Drupal, as a highly flexible open source application framework, brings (almost) all those features, saves you money in the early stages of your SaaS business and keeps you flexible and agile in the future. Below you’ll find a list of the tools we used to build the components of the Drop Guard service.

Components of a SaaS business application

Content: This is the page where you present all the benefits of your service to potential clients. This page is mostly content-driven and provides a list of plans your customers can subscribe to. There's nothing special about this, as Drupal provides all these features right out of the box. The strength of Drupal is that it integrates with all the other features listed below, in one system. With the flexible entity structure of Drupal and the Rules module, you can automate your content and mailings to keep users on board during the trial period and convince them to purchase a full subscription.

Trial registration: Once your user has signed up using just her email address, she'll want to start testing your service for free during the trial period. Drupal provides this registration feature right out of the box. To deploy your application (if you run single instances for every user), you could trigger the deployment with Rules. With the commerce_license module you can create an x-day trial license entity and replace it with the commercial entity once the user has bought and paid for a license.

Checkout: After the trial period is over, your user needs to either buy the service or quit using it. The process can be just like the checkout process in an online store. This step includes a subscription to a recurring payment provider and the completion of a contact form (to create a complete CRM entry for this subscriber). We used Drupal Commerce to build a custom checkout process and Commerce products to model the subscription plans. To notify the user about the expiration of her trial period, you can send her one or more emails and encourage her to get in touch. Again, Rules and the flexible entity structure of Drupal work perfectly for this purpose.

Accounting: Your customer data needs to be managed in a CRM, as it's some of the most valuable information in your SaaS business. If you’ve just started your SaaS business, you don't need a full-featured and expensive CRM system, but one that scales with your business as it grows and can be extended later with additional features, if needed. The first and only required feature is a list of customers (your subscribers) and a list of their orders and related invoices (paid or unpaid). As we use CRM Core to build the CRM, we can extend the contact entities with fields, build filterable lists with Views, reference subscriptions (Commerce orders) to contacts and create invoices (a bundle of the Commerce order entity pre-configured as the ERPAL invoice module).

Recurring payment: If you run your SaaS business on a subscription-based model where your clients pay for the service periodically, you have two options to process recurring payments. Handling payments by yourself is not worth trying as it’s too risky, insecure and expensive. So, either you use Stripe to handle recurring payments for you or you can use any payment provider to process one time payments and implement the recurring feature in Drupal. There are some other SaaS payment services worth looking at. We've chosen the second option using Paymill to process payments in combination with commerce_license and commerce_license_billing to implement the recurring feature. For every client with an active subscription, an invoice is created every month and the amount is charged via the payment provider. Then the invoice is set to "paid" and the service continues. The invoice can be downloaded in the portal and is accessible for both the SaaS operator and the client as a dataset and/or a PDF file.

Deployment: Without going into deep detail on application deployment, Docker is a powerful tool for deploying single-instance apps for your clients. You may also want to have a look at API-based Drupal hosting platforms, such as Platform.sh, Pantheon or Acquia Cloud, if you want to sell Drupal-based applications via a SaaS model. They will make your deployment very comfortable and easy to integrate. You can use Drupal multi-site instances or the Drupal access system to separate user-related content (the latter can be very tricky and can hurt performance with large data sets!). If your app produces a huge amount of data (entities or nodes) I recommend single instances with Docker or a Drupal hosting platform. As Drop Guard automates deployment and therefore doesn’t produce that much data, we manage all our subscribers in one Drupal instance but keep the decoupled update server horizontally scalable.

Start building your own SaaS business

If you’re considering building your own SaaS business, there’s no need to start from scratch. ERPAL Platform is freely available, easy-to-customize and uses Drupal contrib modules such as Commerce, CRM Core and Rules to connect all the components necessary to operate a SaaS business process. With ERPAL Platform you have a tool for developing your SaaS business in an agile way, and you can adapt it to whatever comes in the near future. ERPAL Platform includes all the components for CRM and accounting and integrates nicely with Stripe (and many others, thanks to Drupal Commerce) as well as your (recurring) payment provider. We can modify the default behavior with entities, fields, rules and views to extend the SaaS business platform. We used several contrib modules to extend ERPAL Platform to manage licensed products (commerce license and commerce license billing). If you want more information about the core concepts of ERPAL Platform, there’s a previous blog post about how to build flexible business applications with ERPAL Platform.

This is how we built Drop Guard, a service for automating Drupal updates with integration into development and deployment workflows. As we’ve just started our SaaS business, we’ll keep you posted with updates along our way to becoming a full-fledged, Drupal-based SaaS business. For instance, we plan to add metrics and marketing automation features to drive traffic. We’ll share our experiences with you here and we’d be happy if you’d share yours in the comments!

Categories: Drupal

Cocomore: MySQL - Query optimization

30 June 2015 - 1:02am

Queries are the centerpiece of MySQL and they have high optimization potential (in conjunction with indexes). This is especially true for big databases (whatever "big" means). Modern PHP frameworks tend to execute dozens of queries per page. Thus, as a first step, you need to know which queries are slow. A built-in solution for that is the MySQL slow query log, which can be activated either in my.cnf or dynamically with the --slow_query_log option. In both cases, long_query_time should be reduced to an appropriate value.
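
A minimal sketch of the my.cnf side (the threshold and log path are illustrative):

[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1

The same can be toggled at runtime with SET GLOBAL slow_query_log = 'ON'; and SET GLOBAL long_query_time = 1;.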


Categories: Drupal

