

Community Media Agenda

New Drupal Modules - 3 December 2014 - 8:25am

Community Media Agenda can be used to create and present an agenda (incl. speakers, information etc.) related to a video. There are currently two sub-modules: cm_agenda_maker to create an agenda and all the information related to it, and cm_agenda_presenter to present the video with a clickable agenda.

Categories: Drupal

Smart Chart

New Drupal Modules - 3 December 2014 - 7:58am

A content type that implements a tree-view chart. It can use different content types for its items, which can be selected through the admin interface.


The Chart is viewable through "node/%nid/chart" as a tab or stand alone on "chart/%nid".

The first thing you have to do is go to "admin/config/content/smart-suite/chart" and add content types that you want as items.

It is 100% Ajax-based, unless you add or remove the first item; I am working on that.

Other modules based on this one will follow.

Categories: Drupal

Dcycle: What is content? What is configuration?

Planet Drupal - 3 December 2014 - 7:18am

What is content? What is configuration? At first glance, the question seems simple, almost quaint, the kind one finds oneself patiently answering for the benefit of Drupal novices: content is usually information like nodes and taxonomy terms, while content types, views and taxonomy vocabularies are usually configuration.

Content lives in the database of each environment, we say, while configuration is exportable via Features or other mechanisms and should live in the Git repo (this has been called code-driven development).

Still, a definition of content and configuration is naggingly elusive: why "usually"? Why are there so many edge cases? We're engineers, we need precision! I often feel like I'm trying to define what a bird is: every child knows what a bird is, but it's hard to define it. Ostriches can't fly; platypuses lay eggs but aren't birds.

Why the distinction?

I recently saw an interesting comment titled "A heretic speaks" on a blog post about code-driven development. It sums up some of the uneasiness about the place of configuration in Drupal: "Drupal was built primarily with site builders in mind, and this is one reason [configuration] is in the database".

In effect, the primary distinction in Drupal is between code (Drupal core and config), and the database, which contains content types, nodes, and everything else.

As more complex sites were being built, a new distinction had to be made between two types of information in the database: configuration and content. This was required to allow development in a dev-stage-production workflow where features being developed outside of a production site could be deployed to production without squashing the database (and existing comments, nodes, and the like). We needed to move those features into code and we called them "configuration".

Thus the features module was born, allowing views, content types, and vocabularies (but not nodes and taxonomy terms) to be developed outside of the database, and then deployed into production.

Drupal 8's config management system takes that one step further by providing a mature, central API to deal with this.

The devil is in the details

This is all fine and good, but edge cases soon begin to arise:

  • What about an "About us" page? It's a menu item (deployable) linking to a node (content). Is it config? Is it content?
  • What about a "Social media" menu and its menu items? We want a Facebook link to be deployable, but we don't want to hard-code the actual link to our client's Facebook page (which feels like content) -- we probably don't even know what that link is during development.
  • What about a block whose placement is known, but whose content is not? Is this content? Is it configuration?
  • What about a view which references a taxonomy term ID in a hard-coded filter? We can export the view, but the taxonomy term has an incremental ID and is not guaranteed to work on all environments.

The wrong answer to any of these questions can lead to a misguided development approach which will come back to haunt you afterward. You might wind up using incremental IDs in your code or deploying something as configuration which is, in fact, content.

Defining our terms

At the risk of irking you, dear reader, I will suggest doing away with the terms "content" and "configuration" for our purposes: they are just too vague. Because we want a formal definition with no edge cases, I propose that we use these terms instead (we'll look at each in detail a bit further on):

  • Code: this is what our deliverable is for a given project. It should be testable, versioned, and deployable to any number of environments.
  • Data: this is whatever is potentially different on each environment to which our code is deployed. One example is comments: On a dev environment, we might generate thousands of dummy comments for theming purposes, but on prod there might be a few dozen only.
  • Placeholder content: this is any data which should be created as part of the installation process, meant to be changed later on.

Code

This is what our deliverable is for a given project. This is important. There is no single answer. Let's take the following examples:

  • If I am a contributor to the Views contrib project, my deliverable is a system which allows users to create views in the database. In this case I will not export many particular views.

  • For another project, my deliverable may be a website which contains a set number of lists (views). In this case I may use features (D7) or config management (D8) to export all the views my client asked for. Furthermore, I may enable views_ui (the Views User interface) only on my development box, and disable it on production.

  • For a third project, my deliverable may be a website with a number of set views, plus the ability for the client to add new ones. In this case only certain views will be in code, and I will enable the Views UI as a dependency of my site deployment module. The views my client creates on production will be data.


Data

A few years ago, I took a step back from my day-to-day Drupal work and thought about what my main pain points were and how to do away with them. After consulting with colleagues, looking at the bugs which took longest to fix, and looking at major sources of regressions, I realized that the one thing all major pain points had in common was our deployment techniques.

It struck me that cloning the database from production to development was wrong. Relying on production data to do development is sloppy and will cause problems. It is better to invest in realistic dummy content and a good site deployment module, allowing the standardized deployment of an environment in a few minutes from any commit.

Once we remove data from the development equation in this way, it is easier to define what data is: anything which can differ from one environment to the next without overriding a feature.

Furthermore, I like to think of production as just another environment, there is nothing special about it.

A new view or content type created on production outside of our development cycle resides in the database, is never used during the course of development, and is therefore data.

Nodes and taxonomy terms are data.

What about a view which is deployed through Features and later changed on another environment? That's a tough one; I'll get to it (see Overridden features, below).

Placeholder content

Let's get back to our "About us" page. Three components are involved here:

  • The menu which contains the "About us" menu item. These types of menus are generally deployable, so let's call them code.
  • The "About us" node itself which has an incremental nid which can be different on each environment. On some environments it might not even exist.
  • The "About us" menu item, which should link to the node.

Remember: we are not cloning the production database, so the "About us" node does not exist anywhere yet. For situations such as this, I suggest the use of placeholder content.

For sake of argument, let's define our deliverable for this sample project as follows:

"Define an _About us_ page which is modifiable".

We might be tempted to figure out a way to assign a unique ID to our "About us" node to make it deployable, and devise all kinds of techniques to make sure it cannot be deleted or overridden.

I have an approach which I consider more logical for these situations:

First, in my site deployment module's hook_update_N(), create the node and the menu item, bypassing features entirely. Something like:

function mysite_deploy_update_7023() {
  $node = new stdClass();
  $node->title = 'About us';
  $node->body[LANGUAGE_NONE][0]['format'] = 'filtered_html';
  $node->body[LANGUAGE_NONE][0]['value'] = 'Lorem ipsum...';
  $node->type = 'page';
  node_object_prepare($node);
  $node->uid = 1;
  $node->status = 1;
  $node->promote = 0;
  node_save($node);
  $menu_item = array(
    'link_path' => 'node/' . $node->nid,
    'link_title' => 'About us',
    'menu_name' => 'my-existing-menu-exported-via-features',
  );
  menu_link_save($menu_item);
}

If you wish, you can also implement hook_requirements() in your custom module, to check that the About us page has not been accidentally deleted, that the menu item exists and points to a valid path.
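Such a hook_requirements() check might look like this. This is a minimal sketch: it assumes the node can be identified by its title and type, which is fragile if editors rename the page, and the requirement key and messages are illustrative.

```php
/**
 * Implements hook_requirements().
 *
 * Warns on the status report page if the "About us" placeholder node
 * created in mysite_deploy_update_7023() has been deleted.
 */
function mysite_deploy_requirements($phase) {
  $requirements = array();
  if ($phase == 'runtime') {
    // Look the node up by title and type; an assumption for this sketch.
    $nid = db_query("SELECT nid FROM {node} WHERE title = :title AND type = :type", array(
      ':title' => 'About us',
      ':type' => 'page',
    ))->fetchField();
    if (!$nid) {
      $requirements['mysite_about_us'] = array(
        'title' => t('About us page'),
        'value' => t('The "About us" node is missing.'),
        'severity' => REQUIREMENT_ERROR,
      );
    }
  }
  return $requirements;
}
```

A similar query against the menu_links table can confirm the menu item still points to a valid path.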

What are the advantages of placeholder content?

  • It is deployable in a standard manner: any environment can simply run drush updb -y and the placeholder content will be deployed.
  • It can be changed without rendering your features (D7) or configuration (D8) overridden. This is a good thing: if our incremental deployment script calls features_revert() or drush fra -y (D7) or drush cim -y (D8), all changes to features are deleted. We do not want changes made to our placeholder content to be deleted.
  • It can be easily tested. All we need to do is make sure our site deployment module's hook_install() calls all hook_update_N()s; then we can enable our site deployment module within our simpletest, and run any tests we want against a known good starting point.
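The hook_install() trick mentioned in the last bullet can be sketched like this, assuming the update numbering convention used elsewhere in this post:

```php
/**
 * Implements hook_install().
 *
 * Runs every mysite_deploy_update_N() on install, so a freshly
 * installed environment ends up identical to one that was updated
 * incrementally. The 7001-7999 range is an assumption for D7 modules.
 */
function mysite_deploy_install() {
  for ($i = 7001; $i < 8000; $i++) {
    $function = 'mysite_deploy_update_' . $i;
    if (function_exists($function)) {
      $function();
    }
  }
}
```

With this in place, enabling the site deployment module inside a SimpleTest gives every test the same known-good starting point.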
Overridden features

Although it is easy to override features on production, I would not recommend it. It is important to define with your client and your team what is code and what is data. Again, this depends on the project.

When a feature gets overridden, it is a symptom that someone does not understand the process. Here are a few ways to mitigate this:

  • Make sure your features are reverted (D7) or your configuration is imported (D8) as part of your deployment process, and automate that process with a continuous integration server. That way, if anyone overrides a feature on a production, it won't stay overridden long.
  • Limit administrator permissions so that only user 1 can override features (this can be more trouble than it's worth though).
  • Implement hook_requirements() to check for overridden features, warning you on the environment's dashboard if a feature has been overridden.
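A sketch of that third suggestion, using the Features module's features_get_features() and features_get_storage() helpers; the module name and requirement key are hypothetical:

```php
/**
 * Implements hook_requirements().
 *
 * Flags any overridden feature on the environment's status report.
 */
function mysite_checks_requirements($phase) {
  $requirements = array();
  if ($phase == 'runtime' && module_exists('features')) {
    module_load_include('inc', 'features', 'features.export');
    $overridden = array();
    foreach (features_get_features(NULL, TRUE) as $feature) {
      if (features_get_storage($feature->name) == FEATURES_OVERRIDDEN) {
        $overridden[] = $feature->name;
      }
    }
    if ($overridden) {
      $requirements['mysite_features'] = array(
        'title' => t('Features status'),
        'value' => t('Overridden: @list', array('@list' => implode(', ', $overridden))),
        'severity' => REQUIREMENT_WARNING,
      );
    }
  }
  return $requirements;
}
```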
Some edge cases

Now, with our more rigorous approach, how do our edge cases fare?

Social media menu and items: Our deliverable here is the existence of a social media menu with two items (twitter and facebook), but whose links can be changed at any time on production without triggering an overridden feature. For this I would use placeholder content. Still, we need to theme each button separately, and our css does not know the incremental IDs of the menu items we are creating. I have successfully used the menu attributes module to associate classes to menu items, allowing easy theming. Here is an example, assuming menu_attributes exists and menu-social has been exported as a feature.

/**
 * Add facebook and twitter menu items
 */
function mysite_deploy_update_7117() {
  $item = array(
    'link_path' => '',
    'link_title' => 'Twitter',
    'menu_name' => 'menu-social',
    'options' => array(
      'attributes' => array(
        'class' => 'twitter',
      ),
    ),
  );
  menu_link_save($item);
  $item = array(
    'link_path' => '',
    'link_title' => 'Facebook',
    'menu_name' => 'menu-social',
    'options' => array(
      'attributes' => array(
        'class' => 'facebook',
      ),
    ),
  );
  menu_link_save($item);
}

The above code creates the menu items linking to Facebook and Twitter home pages, so that content editors can put in the correct links directly on production when they have them.

Placeholder content is just like regular data but it's created as part of the deployment process, as a service to the webmaster.

A block whose placement is known, but whose content is not: It may be tempting to use the box module, which makes blocks exportable with Features. But in this case the block is more like placeholder content, so it should be deployed outside of Features. And if you create your block programmatically, its ID is incremental, so it cannot be deployed with Context; it should instead be placed in a region directly, again programmatically, in a hook_update_N().
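A rough D7 sketch of that approach; the block body, region, and update number are placeholders. The block row is created directly and assigned to a region, since its bid is only known at runtime:

```php
/**
 * Create a placeholder block and place it in a region directly,
 * bypassing Features and Context.
 */
function mysite_deploy_update_7030() {
  // Create the custom block; bid is incremental, so we capture it here.
  $bid = db_insert('block_custom')
    ->fields(array(
      'body' => 'Placeholder text, to be replaced by the webmaster.',
      'info' => 'Homepage promo block',
      'format' => 'filtered_html',
    ))
    ->execute();
  // Assign the new block to a region of the default theme.
  db_insert('block')
    ->fields(array(
      'module' => 'block',
      'delta' => $bid,
      'theme' => variable_get('theme_default', 'bartik'),
      'status' => 1,
      'weight' => 0,
      'region' => 'sidebar_first',
      'visibility' => 0,
      'pages' => '',
      'custom' => 0,
      'title' => '',
      'cache' => -1,
    ))
    ->execute();
}
```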

Another approach here is to create a content type and a view with a block display, fetching the last published node of that content type and displaying it at the right place. If you go that route (which seems a bit overengineered to me), you can then place your block with the context module and export it via features.

A view which references a taxonomy term ID in its filter: If a view requires access to a taxonomy term ID, then perhaps taxonomy is the wrong tool here. Taxonomy terms are data: they can be deleted, and their names can be changed. It is not a good idea for a view to reference a specific taxonomy term. (Your view can use taxonomy terms for contextual filters without a problem, but we don't want to hard-code a specific term in a non-contextual filter -- see this issue for an example of how I learned this the hard way; I'll get around to fixing that soon...).

For this problem I would suggest rethinking our use of a taxonomy term. Rather, I would define a select field with a set number of options (with defined keys and values). These are deployable and guaranteed not to change without triggering a features override. Thus, our views can safely use them. If you are implementing this change on an existing site, you will need to update all nodes from the old to the new technique in a hook_update_N() -- and probably add an automated test to make sure you're updating the data correctly. This is one more reason to think things through properly at the outset of your project, not midway through.
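As a sketch of such a select field in D7 (the field name, allowed values, and update number are made up for illustration; on a real project the field would typically be exported with Features, with an update hook migrating existing node data):

```php
/**
 * Create a list_text field with fixed keys and values, which views
 * can safely filter on across environments.
 */
function mysite_deploy_update_7040() {
  field_create_field(array(
    'field_name' => 'field_article_topic',
    'type' => 'list_text',
    'settings' => array(
      'allowed_values' => array(
        'news' => 'News',
        'tutorial' => 'Tutorial',
        'opinion' => 'Opinion',
      ),
    ),
  ));
  field_create_instance(array(
    'field_name' => 'field_article_topic',
    'entity_type' => 'node',
    'bundle' => 'article',
    'label' => 'Topic',
    'widget' => array('type' => 'options_select'),
  ));
}
```

Unlike a term ID, the keys ('news', 'tutorial', 'opinion') are part of the exportable field definition, so a view filtering on them behaves identically on every environment.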

In conclusion

Content and configuration are hard to define, I prefer the following definitions:

  • Code: deployable, deliverable, versioned, tested piece of software.
  • Data: anything which can differ from one environment to the next.
  • Placeholder content: any data which should be created as part of the deployment process.

In my experience, what fits in each category depends on the project. Defining these with your team as part of your sprint planning will allow you to create a system with fewer edge cases.

Tags: blogplanet
Categories: Drupal

Zengenuity: Decoupling Your Backend Code from Drupal (and Improving Your Life) with Wrappers Delight

Planet Drupal - 3 December 2014 - 6:48am

If you've ever written a lot of custom code for a Drupal site, then you know it can be a tedious and error-prone experience. Your IDE doesn't know how Drupal's data structures are organized, and it doesn't have a way to extract information about configured fields to do any autocomplete or check data types. This leads to some frustrations:

  • You spend a lot of time typing out by hand all the keys in every array of doom you come across. It's tedious, verbose, and tiring.
  • Your code can contain errors your IDE won't alert you to. Simple typos can go unnoticed, since the IDE has no idea how the objects and arrays are structured.
  • Your code is tightly coupled to specific field names, configured in the database. You must remember these, because your IDE can't autocomplete them.
  • Your code is tightly coupled to specific field types. (If you start off with a text field and then decide to switch to an email field, for example, you will find the value is now stored in a different key of the data array. You need to update all your custom code related to that field.)
  • It can be easy to create cross-site-scripting vulnerabilities in your code. You need to keep in mind all the field data that needs to be sanitized for output. It only takes one forgotten spot to open your site to attacks.

Wrappers Delight is a development tool I've created to help address these issues and make my life easier. Here's what it does:

  • Provides wrapper classes for common entity types, with getters and setters for the entities' base properties. (These classes are wrappers/decorators around EntityMetadataWrapper.)
  • Adds a Drush command that generates wrapper classes for the specific entity bundles on your site, taking care of the boilerplate getter and setter code for all the fields you have configured on the bundles.
  • Returns sanitized values by default for the generated getters for text fields. (Raw values can be returned with an optional parameter.)
  • Allows the wrapper classes to be customized, so that you can decouple your custom code from specific Drupal field implementations.

With Wrappers Delight, your custom code can be written to interface with wrapper classes you control instead of with Drupal objects directly. So, in the example of changing a text type field to an email type field, only the corresponding wrapper class needs to be updated. All your other code could work as it was written.

But wait, there's more!

Wrappers Delight also provides bundle-specific wrapper classes for EntityFieldQuery, which allow you to build queries (with field-level autocomplete) in your IDE, again decoupled from specific internal Drupal field names and formats. Whatever your decoupled CRUD needs may be, Wrappers Delight has you covered!

Getting Started with Wrappers Delight

To generate wrapper classes for all the content types on your site:

  1. Install and enable the Wrappers Delight module.
  2. Install Drush, if you don't already have it.
  3. At the command line, in your Drupal directory, run drush wrap node.
  4. This will generate a new module called "wrappers_custom" that contains wrapper classes for all your content types.
  5. Enable the wrappers_custom module, and you can start writing code with these wrapper classes.
  6. This process works for other entity types, as well: Users, Commerce Products, OG Memberships, Messages, etc. Just follow the Drush command pattern: drush wrap ENTITY_TYPE. For contributed entity types, you may need to enable a submodule like Wrappers Delight: Commerce to get all the base entity properties.
Using the Wrapper Classes

The wrapper classes generated by Wrappers Delight have getters and setters for the fields you define on each bundle, and they inherit getters and setters for the entity's base properties. The class names follow the pattern BundlenameEntitytypeWrapper. So, to use the wrapper class for the standard article node type, you would do something like this:

$article = new ArticleNodeWrapper($node);
$body_value = $article->getBody();
$image = $article->getImage();

Wrapper classes also support passing an ID to the constructor instead of an entity object:

$article = new ArticleNodeWrapper($nid);

In addition to getters that return standard data arrays, Wrappers Delight creates custom utility getters for certain field types. For example, for image fields, these will all work out of the box:

$article = new ArticleNodeWrapper($node);
$image_array = $article->getImage();
$image_url = $article->getImageUrl();
$image_style_url = $article->getImageUrl('medium');
$absolute_url = $article->getImageUrl('medium', TRUE);

// Get a full <img> tag (it's calling theme_image_style
// under the hood)
$image_html = $article->getImageHtml('medium');

Creating New Entities and Using the Setter Methods

If you want to create a new entity, wrapper classes include a static create() method, which can be used like this:

$values = array(
  'title' => 'My Article',
  'status' => 1,
  'promote' => 1,
);
$article = ArticleNodeWrapper::create($values);

You can also chain the setters together like this:

$article = ArticleNodeWrapper::create();
$article->setTitle('My Article')
  ->save();

Customizing Wrapper Classes

Once you generate a wrapper class for an entity bundle, you are encouraged to customize it to your specific needs. Add your own methods, edit the getters and setters to have more parameters or different return types. The Drush command can be run multiple times as new fields are added to your bundles, and your customizations to the existing methods will not be overwritten. Take note that Wrappers Delight never deletes any methods, so if you delete a field, you should clean up the corresponding methods (or rewrite them to get the data from other fields) manually.
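As a sketch of such a customization (the base class name, method, and field usage here are illustrative, not generated code), you might add a helper that composes the existing getters so calling code never touches field internals:

```php
class ArticleNodeWrapper extends WdNodeWrapper {

  // ... generated getters and setters kept as-is ...

  /**
   * Returns a plain-text teaser for listings, built from the body field.
   */
  public function getTeaserText($length = 200) {
    $body = $this->getBody();
    return truncate_utf8(strip_tags($body), $length, TRUE, TRUE);
  }

}
```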

Drush Command Options

The Drush command supports the following options:

  • --bundles: specify the bundles to export (defaults to all bundles for a given entity type)
  • --module: specify the module name to create (defaults to wrappers_custom)
  • --destination: specify the destination directory of the module (defaults to sites/all/modules/contrib or sites/all/modules)
Packaging Wrapper Classes with Feature Modules or Other Bundle-Supplying Modules

With the options listed above, you can export individual wrapper classes to existing modules by running a command like the following:

drush wrap node --bundles=blog --module=blog_feature

That will put the single wrapper class for the blog bundle in the blog_feature module. Wrappers Delight is smart enough to find this class automatically on subsequent runs if you have enabled the blog_feature module. This means that once you do some individual exports, you could later run something like this:

drush wrap node

and existing classes will be updated in place and any new classes would end up in the wrappers_custom module.

Did You Say Something About Queries?

Yes! Wrappers Delight includes a submodule called Wrappers Delight Query that provides bundle-specific wrapper classes around EntityFieldQuery. Once you generate the query wrapper classes (by running drush wrap ENTITY_TYPE), you can use the find() method of the new classes to execute queries:

$results = ArticleNodeWrapperQuery::find()
->range(0, 10)

The results array will contain objects of the corresponding wrapper type, which in this example is ArticleNodeWrapper. That means you can immediately access all the field methods, with autocomplete, in your IDE:

foreach ($results as $article) {
  $output .= $article->getTitle();
  $output .= $article->getImageHtml('medium');
}

You can also run queries across all bundles of a given entity type by using the base wrapper query class:

$results = WdNodeWrapperQuery::find()
->byTitle('%Awesome%', 'LIKE')

Note that results from a query like this will be of type WdNodeWrapper, so you'll need to check the actual bundle type and re-wrap the object with the corresponding bundle wrapper in order to use the bundle-level field getters and setters.
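The re-wrapping step might look like the following; note that execute(), value(), and the bundle check are my assumptions for this sketch, not confirmed parts of the Wrappers Delight API:

```php
// Query across all node bundles; results are WdNodeWrapper objects.
$results = WdNodeWrapperQuery::find()
  ->byTitle('%Awesome%', 'LIKE')
  ->execute();

$output = '';
foreach ($results as $wrapper) {
  // Hypothetical: fetch the underlying node, check its bundle,
  // and re-wrap with the bundle-specific class.
  $node = $wrapper->value();
  if ($node->type == 'article') {
    $article = new ArticleNodeWrapper($node);
    $output .= $article->getImageHtml('medium');
  }
}
```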

Wrapping Up

So, that's Wrappers Delight. I hope you'll give it a try and see if it makes your Drupal coding experience more pleasant. Personally, I've used it on four new projects since creating it this summer, and it's been amazing. I'm kicking myself for not doing this earlier. My code is easier to read, WAY easier to type, and more adaptable to changes in the underlying architecture of the project.

If you want to help me expand the project, here are some things I could use help with:

  • Additional base entity classes for common core and contrib entities like comments, taxonomy terms, and Workflow states.
  • Additional custom getter/setter templates for certain field types where utility functions would be useful, such as Date fields.
  • Feedback from different use cases. Try it out and let me know what could make it work better for your projects.

Post in the issue queue if you have questions or want to lend a hand.

Categories: Drupal

ERPAL: How we automate Drupal security updates

Planet Drupal - 3 December 2014 - 5:05am

During the past few weeks, automated security updates have been one of the hotly debated topics in the Drupal community. Ever since Drupalgeddon, security automation has been one of the issues we should really try to solve in order to ensure Drupal's continued growth, especially in the enterprise world. Whereas content management systems such as WordPress already run automated updates in a background process, Drupal does not yet have such a feature. There are these and other ongoing discussions that point out potential pros and cons of such a feature. Personally, from the perspective of a Drupal professional, I think running Drupal module updates in the background could lead to several problems. There are a few reasons for this:

  • We somehow need to handle patched modules and cannot just override the complete module with an update
  • Letting Drupal rewrite its own codebase will open other security issues
  • Professionally developed Drupal projects use GIT (or another code versioning system) to maintain their codebase and handle the deployment process. Every update needs to be committed to the repository so that it’s not removed in the next deployment cycle
  • After updating a module, we should run our automated test scripts (for example, behat or selenium) to ensure the site didn't break with the update
  • To ensure quality we shouldn’t just run a complete update containing bug fixes and new features but only apply the patch relevant to security

The issue of applying security updates has become more and more time-sensitive because hackers start to attack vulnerable sites within hours of a security update release. Especially with enterprise web applications and large content sites with lots of users and traffic, this update process is really business critical. Even before the pressure of something like Drupalgeddon, these last two years we had already been thinking about update automation. In this blog post I want to describe the technology and workflows we use to automate security updates in our Drupal projects while ensuring quality with automated and manual tests and the correct handling of patches.

Every site that we support for our clients sends hourly update reports to our internal ERPAL (you can replace "ERPAL" here with any other ticketing system) over an HTTPS connection and with additional encryption. For every security update available, we create a new branch in the project's Git repository, along with an automatically related task.

Once the task has been created, we get all the security-relevant patches and code changes from the active modules' repositories and merge them into the modules of the project. These code changes are committed to the new so-called "feature branch". Using Jenkins and a system that builds feature-branch instances with a live database, the changes are now ready to test. The status of the ERPAL task is automatically set to "ready to test". Now all automated tests will run, if any are available for the project. The result is documented with a comment on the initially created task.

Depending on the test mode of the project and the priority of the security update (e.g. "critical" or "highly critical"), the security patches are either deployed directly to live once all tests have passed, or the task is assigned to the project manager with the status "ready to test". He can then test the complete patched Drupal installation on a separate feature-branch test instance under live conditions. If all tests pass, the task is set to "test passed" and the customer receives a notification that the security of his site is up to date. The update branch is merged as a hotfix into the master branch and the site is deployed to the live server. After this process, the update branch is deleted and the test instance destroyed to clean up the system. The following graphic describes the behavior.

This system has several benefits, both for us and for our clients:

  • Security-relevant updates are applied within one hour
  • Quality is ensured by automated tests and, if needed, by a notification system indicating manual test steps
  • No need to involve developers to patch and deploy code
  • No website downtime
  • All steps are documented in ERPAL to make the process transparent: customers see it in their ERPAL account
  • No panic on critical updates; all workflows run as in a normal project and are delivered with compliance to our task workflow
  • Instant customer notification once updates are applied gives customers a good feeling ;-)

This system has been working well for 2.5 years. Working in cooperation with other Drupal shops to test the system, we want to make this security update automation system available for others to use as well. Therefore we will soon publish the whole solution as a service. If you want to become one of the few beta testers, or if you want to become a reseller to deliver the same security automation to your clients, you can sign up at our Drop Guard - The Drupal security update automation service page.

Categories: Drupal

Symphony Blog: Continue shopping button on Drupal Commerce cart

Planet Drupal - 3 December 2014 - 12:34am

We had a Drupal project implementing a commerce site for a local store. We use Drupal Commerce, as always, for this type of website. You may see that we have a lot of Drupal Commerce themes in our portfolio.

During the project, there was a minor request from our customer: add a Continue Shopping button to the cart. This feature is available in Ubercart, especially for Drupal 6 Ubercart users. Most ecommerce sites have this feature as well. But it is not built into Drupal Commerce.

As I searched the issues, I found a very helpful thread: Continue shopping in cart. Zorroposada presented custom code to achieve it.


Categories: Drupal

Pronovix: Hosting and playing videos in Drupal: Part 1

Planet Drupal - 3 December 2014 - 12:13am

When you are first faced with the task of hosting and playing videos in Drupal, the number of different approaches and solutions might seem overwhelming. Where to store the videos? What is the difference between CDNs, cloud storage services and hosted video solutions? Which Drupal modules to use with which service? This blog post walks you through the basics of hosting and playing videos in Drupal:

Categories: Drupal

Open Source Training: How to Rewrite the Output of Views with PHP

Planet Drupal - 2 December 2014 - 6:30pm

Views is a very powerful tool that allows you to pull information from your database in many flexible ways.

However, there will be situations where the default options in Views aren't enough. The Views PHP module allows you even more flexibility.

In these two videos, Robert Ring, one of our Drupal teachers, shows you how to re-write a View using PHP.

Categories: Drupal

David Norman: Node access rebuilds have a hard time limit

Planet Drupal - 2 December 2014 - 4:04pm

I thought I'd call attention to a bit of little-known Drupal trivia. The node_access_rebuild() function has a hard-coded 240-second time limit. That means that attempting to rebuild the node_access table with a tool like drush probably won't work on a large site. Having a time limit helps keep the site from hitting memory limits and crashing.

In case you think you can out-smart the problem with time limit removal trickery, I think you'll find that the limit is deeply embedded in Drupal's core code. These drush attempts will fail:

drush php-eval 'set_time_limit(0); node_access_rebuild();'
drush php-eval 'ini_set("max_execution_time", 0); node_access_rebuild();'

The node_access_rebuild() function also allows an argument to toggle batch mode, the intended design to bypass the 240 second limit. Trying that with drush probably won't get the results you're hoping for, either.

drush php-eval 'node_access_rebuild(TRUE);'

Don't expect htop to show a bunch of busy threads after running that. A web browser trumps drush when it comes to performing successive HTTP requests for batch mode here.
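To illustrate why the browser wins here, below is a hedged sketch, not code from the post: a hypothetical page callback (the function name mymodule_rebuild_node_access_page and its use in a custom module are my own invention). In Drupal 7, node_access_rebuild(TRUE) only queues the work via batch_set(); batch_process() then drives it through the successive HTTP requests that batch mode relies on.

```php
<?php
// Hypothetical sketch (Drupal 7): a custom menu page callback that kicks
// off the batch-mode rebuild. node_access_rebuild(TRUE) queues the batch
// but does not run it; batch_process() hands control to the batch engine,
// which progresses via repeated HTTP requests -- the reason a browser
// succeeds where a single drush process does not.
function mymodule_rebuild_node_access_page() {
  node_access_rebuild(TRUE);
  return batch_process('admin/reports/status');
}
```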

Instead try telling Drupal that node access controls need a rebuild.

drush php-eval 'node_access_needs_rebuild(TRUE);'

or:

drush vset node_access_needs_rebuild 1

After the rebuild status is set, any of the administration pages will show a link with an error message to prompt administrators to rebuild node permissions.

The link in the prompt goes to admin/reports/status/rebuild, which will prompt with a confirmation form before actually rebuilding permissions.

Your next option is a bit fancier, using a PHP script by shrop.

Run it like this:

time drush php-script rebuild-perms.php

If none of that satisfies your needs (it should), then hack core.

Hacking core is usually the wrong answer to any question. Nonetheless, here's a patch against Drupal 7.34.

Don't let the irony escape you - the comment before the drupal_set_time_limit() actually says the limit attempts to "allocate enough time."

Post categories Drupal
Categories: Drupal

Behat vs. Casper (In Drupal Context)

Planet Drupal - 2 December 2014 - 2:00pm

In my previous blog post, Behat - The Right Way, I argued that Behat is the better choice for writing tests, even for the frontend. Some good arguments were raised in favor of CasperJS.

@amitaibu @juampy72 it boils down to this: I'm a frontend dev. Writing PHP is something I avoid whenever possible.

— Chris Ruppel (@rupl) November 19, 2014

I believe my comparison was wrong in the sense that it was missing the key point of Behat's strength for us. It's not really about "Behat vs. Casper". The proper comparison should have been "Behat vs. Casper - With a Drupal backend".

And here's the key difference: With Behat you can interact with Drupal's API even when testing using PhantomJS. That is a lot of testing power!

Continue reading…

Categories: Drupal

Mediacurrent: Mediacurrent and Drupal Featured on NBC’s Atlanta Tech Edge

Planet Drupal - 2 December 2014 - 12:33pm

As a leading Atlanta-based Drupal and Digital web agency, Mediacurrent was recently profiled on 11Alive’s (an NBC affiliate) Atlanta Tech Edge. Atlanta Tech Edge was created to highlight selected companies that are leading the charge in Atlanta’s booming tech sector.

Categories: Drupal

Tag1 Consulting: BDD: It's about value

Planet Drupal - 2 December 2014 - 12:11pm

I was drawn to Behavior Driven Development the moment I was pointed toward Behat not just for the automation but because it systematized and gave me a vocabulary for some things I already did pretty well. It let me teach some of those skills instead of just using them. At DrupalCon Amsterdam, Behat and Mink architect Konstantin Kudryashov gave a whole new dimension to that.

read more

Categories: Drupal

Open Source Training: How to Sort Drupal Views Alphabetically

Planet Drupal - 2 December 2014 - 10:43am

Alphabetical sorting is one of the most common ways people want to sort content in Views.

You may want to sort all sorts of things from A to Z, from staff members to business listings.

Here's how to add alphabetical sorting to your Drupal views.

Categories: Drupal

Drupal Watchdog: Baby Steps

Planet Drupal - 2 December 2014 - 9:59am



RONNIE RAY (post-modern hippie, disheveled) is slouched at his desk, finishing the NY Times crossword puzzle and knocking back his second powerful coffee of the day (“I buy the gourmet, expensive stuff, ‘cause when I drink it, I wanna taste it.”)

He suddenly straightens up and tosses the Times aside.

RONNIE: (muttering morosely) I need to get serious.


Outside a Flatiron District jazz-club, Ronnie chats with JEAN-CLAUDE (French, cheerful), the venue’s sound engineer. Ronnie smokes a cigarette.

RONNIE: I need a website.

JEAN-CLAUDE: But of course, my friend. I can build you a website – not a problem.

RONNIE: How much will it cost?

JEAN-CLAUDE: The website is for free – I only ask that you commit to my server for a year or two. One hundred dollars a year.

RONNIE: Perfect! Do you use Drupal?

JEAN-CLAUDE: Wordpress.

RONNIE: (sagging in disappointment) No! It has to be Drupal. I’m the copy-editor for Drupal Watchdog. It would be weird – disloyal – to use some other software.

JEAN-CLAUDE: (with a shrug) Sorry.

Ronnie lurches off into the night, engulfed in disappointment.

Suddenly he stops and stands stock still – Eureka!


Ronnie paces excitedly while talking on the phone to his PUBLISHER.

RONNIE: Okay, so I’ve been copy-editing Drupal Watchdog for, what, two years?

PUBLISHER: Five issues, yes.

RONNIE: And I still don’t know a module from a cache, MySQL from Behat from –

PUBLISHER: – And that’s fine. All I ever expected was for you to put it into good English, which is what you do.

Categories: Drupal

Chocolate Lily: Open Outreach welcomes new partner Praxis Labs

Planet Drupal - 2 December 2014 - 9:24am

A managed hosting service will be the first fruit of a new partnership between Chocolate Lily Web Projects and the Montreal-based cooperative Praxis Labs aimed at strengthening and expanding the nonprofit-focused Open Outreach Drupal distribution.

Collaboration between Chocolate Lily and Praxis comes out of a community engagement process that began last fall.

Categories: Drupal

Linnovate: Drupal And the Disappearing Images Mystery

Planet Drupal - 2 December 2014 - 8:03am

After working for many years with a specific framework, you sometimes face difficulties that would not even challenge you in other situations, such as while learning a new language or framework.
One example of such a case is one I encountered this past week, and to tackle it, all I needed to do was actually read the Drupal docs rather than just flip through them.
One of my clients came to me and told me that all of the images he uploads to his site are deleted from the files directory of his Drupal project after several hours.
After checking that the images were created successfully in Drupal's temp directory and then moved to the files directory as they should be, I began checking for any file/image-related modules and any Drupal configuration that could hint at a relation to the problem.
Having checked those off, I started to look at custom code developed by our programmers. As this is a more time-consuming task I didn't start with it, but I knew from the beginning that this is probably where the culprit would be found.
While carefully combing the code I landed upon a Form API snippet related to an image field, similar to this:

<?php
// Use the #managed_file FAPI element to upload an image file.
$form['image_example_image_fid'] = array(
  '#title' => t('Image'),
  '#type' => 'managed_file',
  '#description' => t('The uploaded image will be displayed on this page using the image style chosen below.'),
  '#default_value' => variable_get('image_example_image_fid', ''),
  '#upload_location' => 'public://image_example_images/',
);
?>

This piece of code will add a nice file/image field to the page and will allow you to attach an image to the current entity.
After finding the "managed_file" type documentation, the problem and its solution were clear.

Note: New files are uploaded with a status of 0 and are treated as temporary files which are removed after 6 hours via cron. Your module is responsible for changing the $file objects status to FILE_STATUS_PERMANENT and saving the new status to the database. Something like the following within your submit handler should do the trick.

<?php
// Load the file via file.fid.
$file = file_load($form_state['values']['my_file_field']);
// Change status to permanent.
$file->status = FILE_STATUS_PERMANENT;
// Save.
file_save($file);
// Record that the module (in this example, user module) is using the file.
file_usage_add($file, 'user', 'user', $account->uid);
?>

So in order to prevent the (weird, in my opinion) automatic 6-hour cron deletion of the uploaded images, you have to add a submit handler and put that piece of code inside it.
To clarify and help those in need, here is an expanded example of a form and its submit functions.

$form = drupal_get_form('my_module_example_form');
...

function my_module_example_form($form, &$form_state) {
  $form['image_example_image_fid'] = array(
    '#title' => t('Image'),
    '#type' => 'managed_file',
    '#description' => t('The uploaded image will be displayed on this page using the image style chosen below.'),
    '#default_value' => variable_get('image_example_image_fid', ''),
    '#upload_location' => 'public://image_example_images/',
  );
  $form['submit'] = array(
    '#type' => 'submit',
    '#value' => t('Submit'),
  );
  return $form;
}

function my_module_example_form_validate($form, &$form_state) {
  // Validation logic.
}

function my_module_example_form_submit($form, &$form_state) {
  // Submission logic.
  // Load the file via file.fid.
  $file = file_load($form_state['values']['image_example_image_fid']);
  // Change status to permanent.
  $file->status = FILE_STATUS_PERMANENT;
  // Save.
  file_save($file);
  // Record that the module (in this example, user module) is using the file.
  file_usage_add($file, 'user', 'user', $account->uid);
  // A more generic example of file_usage_add():
  // file_usage_add($file, 'my_module_name', 'user or node or any entity', 'that entity id');
  // You don't need to call file_usage_add() if you're not attaching the image to an entity.
}

Originally posted on my personal blog.

Categories: Drupal

Amazee Labs: DrupalCamp Moscow 2014

Planet Drupal - 2 December 2014 - 8:00am
DrupalCamp Moscow 2014

It may seem odd, but despite living in Russia, I had never attended a Russian Drupal event. I was at DrupalCon Prague 2013 and have attended Ukrainian DrupalCamps several times before. Ukraine is located much closer to the town where I live than the Russian capital, so for me it's faster to get to the neighbouring country than to visit Moscow.

Amazee Labs travels back to the USSR

This time I decided to visit a Russian Drupal event, DrupalCamp Moscow 2014. I didn't know what to expect: another country means different people. But with the Drupal community, that rule never holds. Drupal folks are pretty much the same all around the world: sociable, nice, friendly, and always ready to help! Whether you are a professional Drupalist, a newbie, or someone who knows nothing about Drupal at all, you are always welcome!

And this is exactly what Boris and Corina were talking about in their blog posts “Be a part of the community” and “Being part of the community - a non-techie perspective”.

@duozersk talks about #angularJS and #drupal #dcmsk

— Nikolay Shapovalov (@RuZniki) 29. November 2014


The sessions I attended were good. I learned how Russian Drupalists work with Solr, which we use a lot at Amazee Labs, picked up some new techniques for high-performance sites, went to a Drupal 8 theming session, and even learned about some use cases for AngularJS with Drupal.

Speaking at Moscow State University, feeling like a professor

For us at Amazee Labs it is an essential part of our company culture to contribute back to the community. We are a proud sponsor of the event, and I shared our knowledge and know-how by giving a presentation about Drupal 8 configuration management. My next post will be about it. Subscribe to our RSS feed, Twitter, or Facebook page so as not to miss it ;)

Categories: Drupal

Drupalize.Me: Drupal 8 Core, Now with More Fields

Planet Drupal - 2 December 2014 - 7:00am

One of the things I like most about Drupal 8 as a site builder is how quickly you can get up and running on creating a new site. Although the installer takes a tad (insert jokes here) longer than Drupal 7's, you get so much more out of the box. There's no need to install Drupal and then head to Drush to download and enable a handful of modules just to get your site ready. For example, in Drupal 7 just getting something like an email field meant yet another download. Of course, there is Views in core, but another great thing is a much larger plethora of field types. Now in Drupal 8 there are a handful of useful fields in core:

Categories: Drupal

Another Drop in the Drupal Sea: It's Giving Tuesday, Drupal community!

Planet Drupal - 2 December 2014 - 6:57am

Today is Giving Tuesday, "a global day dedicated to giving back."

Drupal's tagline is "Come for the software, stay for the community." I'd like to ask you to consider supporting the Teach Yourself Drupal Kickstarter campaign to create a free totally open source online Drupal training product. By backing this project, you will be giving a gift that keeps on giving back to the Drupal community.

Categories: Drupal

3C Web Services: Creating custom Contextual links in Drupal 7

Planet Drupal - 2 December 2014 - 6:46am
How to add your own, custom context links to Drupal Views and Nodes using the Custom Context Links module.
Categories: Drupal
Syndicate content

about seo