New Pre-orders Available For Bolt Action

Tabletop Gaming News - 17 June 2016 - 6:00am
The chaps over at Warlord Games have been rather busy lately. But then again, they’ve been working on some new tanks, and anyone who’s been in an auto shop knows what it takes to put a car together. So just think what it takes to put a tank together. Anyway, there are two […]
Categories: Game Theory & Design

Context Panels condition

New Drupal Modules - 17 June 2016 - 4:04am

Provides a condition for the Context module if a Panel variant is rendered.

This module acts as a bridge between the Panels module and the Context module.
The Context-condition will be fired if a specific Panel-variant is rendered.

Categories: Drupal

Drop Guard: The risk of ignoring Drupal updates

Planet Drupal - 17 June 2016 - 3:30am

There is no question about the importance of regularly updating your Drupal installation, including core, contribs, and libraries.

No matter how you manage the workflow - with dedicated tools, custom scripts, or just updating the codebase via FTP - keeping the application's 3rd-party code up to date is a must for every open source project.

Without getting into the details of why this is important (in fact we believe our readers don't need to be convinced at all), we decided to imagine the consequences of intentionally ignoring all updates in your project or updating the codebase selectively, when some modules get their new versions regularly and the rest remains outdated.

Tags: workflow, Drupal Planet, Security
Categories: Drupal

Permission To Fail

Gnome Stew - 17 June 2016 - 1:05am

I know a local artist who says “I thrive on failure.” He’s accepted that not every drawing and painting will work out as expected, and that failed attempts are just part of the process. Similarly, to grow as gamemasters (GMs), we need to give ourselves permission to fail from time to time. Trying new games, new genres, or even creating our own rule systems all provide the opportunity to fail. However, if we don’t try those kinds of things, we risk stagnation. In this article, we’ll look at three situations where we might fail, but where we might also find surprising insights. (And as an aside, if I sound preachy, I’m preaching to myself as much as or more than to anyone else. I get stagnant too. Eww.)


Running a new game or even a new edition is just asking for trouble. Even if you’ve done your best to memorize the rulebook, you may still stumble once you’re at the table with real players. Bad calls, fumbling with the book, and perhaps even some do-overs are to be expected. The good news is that these are temporary problems. When you make a bad call, you’ll remember the accepted ruling that much better. Your future sessions will run much more smoothly because of your early mistakes.

Occasionally, we may find that a particular ruleset isn’t going to work for us in the long run. That’s not failure. You’ve eliminated a game that doesn’t suit you or your group, and probably learned some rule or technique that you can apply to your old standby.


You can set yourself up for failure even if you don’t plan to change systems anytime soon. You can step outside your comfort zone by incorporating different genres into your ongoing campaign. For example, perhaps a session of a fantasy campaign could be run as a police procedural. Think “Gotham” rather than “Lord of the Rings.” Provide leads to track down, some alleys to skulk through, and lowlifes to rough up or threaten. You may not want to run that sort of session every time, but you’ll certainly learn a great deal about running NPCs and providing social challenges. Plus it may help you get that itch out of your system. The author tried a “Fantasy in Space” twist for a few sessions, and, well, it didn’t work out. But at least he knows better now (maybe).

If you’re going to push the boundaries of your ongoing game, you may want to limit that to a session or two. That way if it doesn’t quite work out, you can move on to more familiar ground quickly. Artists often train themselves by doing timed drawings, some as short as 1 or 2 minutes. They can experiment with their methods without getting bogged down or discouraged.


Failure is an essential tool when designing your own system or house-ruling an existing one. No new ruleset will survive first contact with the players, nor should it. For example, I recently pruned Dungeon World way down and applied it to Star Wars. I wrote the new rules out clearly, and even ran a short dry run without players. During the first session, however, I still had to adjudicate on the fly, and I took notes on things to fix later. One suggestion I’d give to fellow GMs who want to try their own system is to wait to make any major changes. Try to end the session gracefully and give the players a good time. Odds are the warts in the system stand out mostly to you anyway. Even the old masters made many sketches and had failed attempts. They just quietly fixed them when no one was looking.


Obviously I am not advocating that we actively seek out failure or unsatisfying sessions. When failure does happen, however, we can learn and grow from the experience. Otherwise we end up like an artist who draws the same thing in the same way their whole life. Make some wild sketches, loosen up, and slop some color on your campaign. See how it goes and laugh at your mistakes. You’ll grow in the process.

How about you? What have you learned by giving yourself permission to fail? Have you ever been pleasantly surprised by something that you tried that worked out better than expected? Let us know below.

Categories: Game Theory & Design

Feminists/social progressives: stop making excuses for violence glorification - by Keith Burgun

Blogs - 17 June 2016 - 12:19am
For too long, we social progressives have turned a blind eye to violence glorification in media, despite increased awareness on other kinds of bad messages.
Categories: Game Theory & Design

Address Algolia

New Drupal Modules - 16 June 2016 - 10:32pm

This module provides integration with the Algolia Places library to improve usability of your Address field types.

Currently this module only provides autocompletion of some address form components. Patches to implement new features are always welcome.


This module requires the Address module and its dependencies.

Categories: Drupal

PreviousNext: Native PHPStorm Drupal Test Runner

Planet Drupal - 16 June 2016 - 8:08pm

PHPStorm has a lot of built-in test runners, but it doesn't support Drupal's Simpletest test runner. In this blog post we'll see how we can execute Drupal tests inside PHPStorm using a Drupal test runner.

Categories: Drupal

Chromatic: Backup Your Drupal 8 Database to S3 with Drush & Jenkins

Planet Drupal - 16 June 2016 - 4:06pm

There are many different ways to handle offsite database backups for your Drupal sites, from host-provider automations to contrib modules like Backup and Migrate and everything in between. This week, I was looking to automate this process on a Drupal 8 site. Since Backup and Migrate is being rewritten from the ground up for Drupal 8, I decided to whip up a custom shell script using Drush.

I knew I wanted my backups to not only be automated, but to be uploaded somewhere offsite. Since we already had access to an S3 account, I decided to use that as my offsite location. After doing a bit of Googling, I discovered s3cmd, a rather nifty command-line tool for interacting with Amazon S3. From their documentation:

S3cmd (s3cmd) is a free command line tool and client for uploading, retrieving and managing data in Amazon S3 and other cloud storage service providers that use the S3 protocol, such as Google Cloud Storage or DreamHost DreamObjects. It is best suited for power users who are familiar with command line programs. It is also ideal for batch scripts and automated backup to S3, triggered from cron, etc.

It works like a charm and basically does all of the heavy lifting needed to interact with S3 files. After installing and setting it up on my Drupal 8 project's server, I was able to easily upload a file like so: s3cmd put someDatabase.sql.gz s3://myBucket/someDatabase.sql.gz.

With that bit sorted, it was really just a matter of tying it together with Drush's sql-dump command. Here's the script I ended up with:

# Switch to the docroot.
cd /var/www/yourProject/docroot/

# Backup the database.
drush sql-dump --gzip --result-file=/home/yourJenkinsUser/db-backups/yourProject-`date +%F-%T`.sql.gz

# Switch to the backups directory.
cd /home/yourJenkinsUser/db-backups/

# Store the recently created db's filename as a variable.
database=$(ls -t | head -n1)

# Upload to Amazon S3, using s3cmd.
s3cmd put $database s3://yourBucketName/$database

# Delete databases older than 10 days.
find /home/yourJenkinsUser/db-backups/ -mtime +10 -type f -delete
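A quick aside on the `database=$(ls -t | head -n1)` line: `ls -t` sorts by modification time, newest first, so piping it through `head -n1` grabs the most recently created dump. A minimal demonstration of that pattern, run in a throwaway directory so it works anywhere:

```shell
# Demonstrate the "grab the newest file" pattern from the script above,
# using a temporary directory and dummy files.
tmp=$(mktemp -d)
cd "$tmp"
touch older.sql.gz
sleep 1
touch newer.sql.gz

# ls -t lists newest first; head -n1 keeps only the first entry.
database=$(ls -t | head -n1)
echo "$database"   # prints newer.sql.gz
```

Note this relies on file modification times, so if backups could be created within the same second, a timestamped filename (as the script already uses) is the safer sort key.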

With the script working, I created a simple Jenkins job to run it nightly (with Slack notifications, of course), and voilà: automated offsite database backups with Jenkins and Drush!
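If Jenkins isn't part of your stack, the s3cmd documentation's suggestion of triggering from cron works just as well. A sketch of a crontab entry, assuming the script above is saved at the hypothetical path /home/yourJenkinsUser/backup-db.sh:

```shell
# Hypothetical crontab entry: run the backup script nightly at 2:00am
# and append its output to a log. Paths are assumptions; adjust them
# to match your own server.
0 2 * * * /bin/bash /home/yourJenkinsUser/backup-db.sh >> /home/yourJenkinsUser/db-backups/backup.log 2>&1
```

You lose the Slack reporting Jenkins provides, but for a single server this is the simplest possible scheduler.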

Categories: Drupal

Chromatic: Drupal 8 Deployments with Jenkins, GitHub & Slack

Planet Drupal - 16 June 2016 - 4:06pm

We recently launched our first Drupal 8 site - actually, it’s this very site that you’re reading! While this wasn’t our first time using or developing for Drupal 8, it was our first full site build and launch on the new platform. As such, it was the first time we needed to handle Drupal 8 code deployments. While I’ve previously covered the benefits of using Jenkins, this post will take you through the steps to create a proper Drupal 8 deployment and how to integrate GitHub and Slack along the way. In other words, you’ll see our current recipe for deploying code automatically, consistently, and transparently.

First Things First: Some Assumptions

This post assumes you already have a Jenkins server up and running with the following plugins installed:

  • GitHub Plugin
  • Slack Notification Plugin

If you don’t yet have these things ready to go, getting a Jenkins server set up is well documented here, as is how to install Jenkins plugins. For our Jenkins servers, we typically use a Linode instance running Ubuntu LTS. This post also assumes that the external environment you’re deploying to already has Drush for Drupal 8 installed.

Example Deployment Script with Drush

Before we dive into setting up the Jenkins job that facilitates code deployments, let’s look at exactly what we’re trying to automate or delegate to our good friend Jenkins. At the heart of virtually all of our Drupal deployments (be they Drupal 7, 8, or otherwise) is a simple bash script that executes Drush commands in succession. At a macro level, this typically means doing the following, regardless of version:

  1. SSH to the server
  2. Change directory to repository docroot
  3. Pull down latest code on the master branch
  4. Clear Drush cache
  5. Run database updates
  6. Update production configuration
  7. Clear Drupal caches

In Drupal 7, where we relied heavily on Features to deploy configuration, we would typically do something like this:

echo ""
echo "Switching to project docroot."
cd /var/www/drupal-7-project/docroot
echo ""
echo "Pulling down latest code."
git pull origin master
echo ""
echo "Clearing drush cache."
drush cc drush
echo ""
echo "Running database updates."
drush updb -y
echo ""
echo "Reverting features modules."
drush fra -y
echo ""
echo "Clearing caches."
drush cc all
echo ""
echo "Deployment complete."

In Drupal 8, we have the magical unicorn that is the Configuration Management System, so our deployments scripts now look something like this:

If you’re familiar with creating Jenkins jobs already and are just looking for a Drupal 8 deploy script, these next lines are for you.

echo ""
echo "Switching to project docroot."
cd /var/www/
echo ""
echo "Pulling down the latest code."
git pull origin master
echo ""
echo "Clearing drush caches."
drush cache-clear drush
echo ""
echo "Running database updates."
drush updb -y
echo ""
echo "Importing configuration."
drush config-import -y
echo ""
echo "Clearing caches."
drush cr
echo ""
echo "Deployment complete."

Seriously, configuration management in Drupal 8 is amazing. Hat tip to all of those who worked on it. Bravo.

Another notable difference is that with Drupal 8, clearing caches uses the cache-rebuild Drush command or drush cr for short. drush cc all has been deprecated. R.I.P. little buddy. ⚰

If you have a site that needs to be put into "Maintenance mode" during deployments, you can handle that in Drupal 8 with drush sset system.maintenance_mode 1 to enable and drush sset system.maintenance_mode 0 to disable.
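Putting those pieces together, a deployment that wraps the risky steps in maintenance mode might look like the following sketch. The drush commands are the ones described above; the ordering is one reasonable choice, not the only one:

```shell
# Put the site into maintenance mode and rebuild caches so the
# maintenance page takes effect for visitors.
drush sset system.maintenance_mode 1
drush cr

# Run the update steps from the deployment script above.
drush updb -y
drush config-import -y

# Take the site back out of maintenance mode and rebuild caches again.
drush sset system.maintenance_mode 0
drush cr
```

For fast deployments this is often unnecessary, but it is cheap insurance when a database update or config import changes something visitors would notice mid-flight.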

Creating our Jenkins Slave & Job

Now that we’ve covered what it is we want Jenkins to handle automatically for us, let’s quickly run down the punch list of things we want to accomplish with our deployment before we dive into the actual how-to:

  1. Automatically kick off our deployment script when merges to the master branch occur in GitHub
  2. Run our deployment script from above (deploys latest code, imports config, clears caches, etc.)
  3. Report deployment results back to Slack (success, failure, etc.)
Create Your Jenkins Slave

For Jenkins to orchestrate anything on a remote box, it first needs to know about said box. In Jenkins parlance, this is known as a "node". In our case, since we’re connecting to a remote machine, we’ll use a “Dumb Slave”. Navigate to Manage Jenkins > Manage Nodes > New Node

At Chromatic, our naming convention matches whatever we’ve named the machine in Ansible. For the purposes of this article, you can just name the node something that makes sense to you. Example: Drupal-8-Prod-01

As part of the creation of this node, you’ll need to specify the Host and the Credentials Jenkins should use to access the box remotely. If you don’t yet have credentials added to Jenkins, you can do so at Jenkins > Credentials > Global credentials (unrestricted). From there things are pretty self-explanatory.

Setup the Basics for our Jenkins Job

Now that we have a way for Jenkins to target a specific server (our slave node) we can start building our deployment job from scratch. Start by navigating to: Jenkins > New Item > Freestyle Project.

From there, press "OK" and move on to setting up some basic information about your job, including the Project Name, Description, and the URL to your GitHub repository. Pro tip: take the time to add as much detail as you can here, especially in the Description field. You’ll thank yourself later when you have loads of jobs.

Configure Slack Notification Settings (optional)

Assuming you’re interested in tying your deployment status messages to Slack and you’ve installed the Slack Notification Plugin, the next step is to tell Jenkins how/where to report to Slack. You do this under the Slack Notifications options area. As far as notifications go, we prefer to use only the "Notify Failure", “Notify Success” and “Notify Back To Normal” options. This is the right mix of useful information without becoming noisy. To allow Jenkins to connect to your Slack channels, you’ll need to follow these steps for adding a Jenkins integration. Then just fill in your Slack domain, the integration token from Slack and the channel you’d like to post to. These settings are hidden under “Advanced…”.

Configure Where this Job Can Run

This is where we instruct Jenkins on which nodes the job is allowed to run. In this case, we’ll limit our job to the slave we created in step one: Drupal-8-Prod-01. This ensures that the job can’t run, even accidentally, on any other nodes that Jenkins knows about. Jenkins allows this to be one node, multiple nodes, or a group.

Configure GitHub Repository Integration

Under "Source Code Management" we’ll specify our version control system, where our repository lives, the credentials used to access the repo, and the branches to “listen” to. In our example, the settings look like this:

Here we’re using a jenkins system user on our servers that has read access on our GitHub repositories. You’ll want to configure credentials that make sense for your architecture. Our "Branch Specifier" (*/master) tells Jenkins to look for changes on the master branch from any remote; the “*” wildcard matches any remote name.

Configure Your Build Triggers

This is where the rubber meets the road in terms of automation. At Chromatic, we typically opt for smaller, more frequent deployments instead of larger releases where there is a higher probability of regressions. Since we rely heavily on the GitHub pull request model, we often have many merges to master on any given day of development for an active project. So we configure our deployments to coincide with these merges. The following setup (provided via the GitHub Jenkins Plugin) allows us to automate this by selecting "Build when a change is pushed to GitHub".

Setup Your Deployment Script

Here’s where we’ll implement the example deployment script I wrote about earlier in the post. This is the meat and potatoes of our job, or simply put, this is what Jenkins is going to do now that it finally knows how/when to do it.

Under "Build" choose “Add build step” and select “Execute shell”. Depending on your installed plugins, your list of options might vary.

Then add your deployment script to the textarea that Jenkins exposed to you. If you want somewhere to start, here is a gist of my job from above. When you’ve added your script it should look something like this:

Last Step! Enable Slack Notifications

Although earlier in the job we configured our Slack integration, we still need to tell Jenkins to send any/all notifications back to Slack when a build is complete. You do this sort of thing under the "Add post-build action" menu. Select “Slack Notifications” and you’re good to go.

Our deployment job for Drupal 8 is now complete! Click "Save" and you should be able to start testing your deployments. To test the job itself, you can simply press “Build Now” on the following screen OR you can test your GitHub integration by making any change on the master branch (or whichever branch you configured). With the setup I’ve covered here, Jenkins will respond automatically to merges and hot-fix style commits to master. That is to say, when a PR is merged or when someone commits directly to master. Of course no one on your team would ever commit directly to master, would they?!

Wrapping Up

Assuming everything is setup properly, you should now have a robust automatic deployment system for your Drupal 8 project! Having your deployments automated in this way keeps them consistent and adds transparency to your entire team.

Categories: Drupal

Chromatic: The Anatomy of a Good Ticket

Planet Drupal - 16 June 2016 - 4:06pm

We previously wrote about how to write a great commit message, but before that commit message is ever written, there was (hopefully) a great ticket that was the impetus for the change. Whether or not a ticket is great is subjective, and it is often how well thought out the details are that makes the difference. The ticket might have everything a developer needs to complete the task, but not provide any insight into how to test the final product. Conversely, it might provide all the details the QA team needs to verify, but not provide any insight into the technical requirements for implementation.

Before we examine the specific things that make up a great ticket let’s first examine some of the best practices that further enhance communication and efficiency:

  • Having all stakeholders use the ticket management system, and not allowing any communication of requirements via email or other tools that silo information.

  • Ensuring that requirements which are discussed one-on-one or during a meeting are added back into the ticket so nothing is lost or forgotten.

  • Having messaging conversations in open channels, giving others working on related issues visibility into the decisions made.

  • Continuously keeping tickets updated with the current status so the whole team is aware of where everyone else is with their tasks.

  • Ensuring everyone is familiar with the internal jargon and acronyms used, or providing them with the tools to decipher the terms.

With some ground rules established, let’s investigate the factors that make for a great ticket.

A User Story

A user story provides a high-level description of functionality from a user’s perspective, such as, "when a user logs in they should be able to see a list of past purchases." Great tickets often start with a user story that answers what is needed and why at a high level.

Clearly Defined Goals & Scope

Clearly stated goals from the business allow the ticket to be resourced correctly. Also, a full understanding of the requested change’s scope will inform what brands, sites, regions, pages, etc. a new feature or change will affect, and is crucial to planning a proper implementation.

Accurate Estimates

A well-thought-through estimate of the level of effort from developers and other stakeholders will make sprint planning/resourcing much more accurate and sprint reviews/evaluations more insightful.

Understanding of Priority

A clear understanding of the business priorities will ensure timely completion and allow the team to plan ahead, avoiding late nights and weekends.

Knowledge of Blockers

Exposing any potential barriers or blockers during the estimating of the ticket will allow them to be accounted for and even potentially solved before development starts.


Annotated Screenshots

Providing screenshots annotated with arrows and text makes it immediately apparent what “this” and “that” refer to, avoiding pronoun trouble.

Documented Designs

Providing detailed requirements with exact values, sizes, states, etc. with considerations for edge cases will make any developer forever grateful. Style guides with documented font names, sizes, weights, etc. are another tool that will improve design workflow efficiency. Additionally, designs can provide:

  • Where every piece of information comes from.

  • How items should appear if a given field or other information source is empty.

  • How long or short variants of a given data point are handled and what logic controls programmatic truncation if applicable.

Contact Information

Providing the names and contact information for other team members who may hold key pieces of information that didn’t make it onto the ticket will prevent blockers and help new developers learn which team members have expertise in other areas. Additionally, providing contact information for external parties when working on third-party integrations will prevent communication gaps and middlemen. Sending a quick introduction when rolling a new person into the mix will get you bonus points.

Code Context

Taking the first step is the hardest part, but often a lead developer will know right where and how to proceed. Providing other developers with the names of functions or files to look for from someone with deeper knowledge of the codebase can save an immense amount of time up front and avoids potential refactoring down the road. It also reduces guesswork and more importantly, might reinforce a best practice when there are multiple approaches that would technically work. Examples of previous implementations or before-and-after code samples are also great things to consider providing.

Reliable Information

Acceptance criteria that are kept up to date, verified as accurate, and revised whenever requirements change give developers 100% confidence in the ticket and make everyone’s life better.

Complete Logical Requirements

Thinking through default values or fallback values/logic if a given piece of data is empty by default, instead of treating it as an edge case, allows for cleaner code and reduces emergencies and "bugs" down the road.

Component Driven Separation

Giving everyone on the team discrete chunks of work that can be documented, developed, tested, and completed will allow everyone to feel much better about the progress of the project. Providing clearly defined subtasks that allow chunks of the work to be checked off and delineated clearly when working on larger tickets will help with this. Another key to success is having properly managed dependencies for blocking work so effort is not wasted until all blockers are resolved.

This can be accomplished by separating tickets by components, not pages. For example, create "social share bar" and "comment widget" tickets for components that appear on every content type, rather than an "article page" ticket that requires building and styling all of those components before it can be considered complete.

Exact URLs

When possible, always provide the exact URLs of pages where the offending bug can be found or the new functionality should be placed. This keeps anyone from making assumptions and when properly paired with a nice screenshot, it really takes the guesswork out of recreating bugs or finding where new features should live.

Reproducible Steps

In addition to exact URLs, a thorough list of steps to reproduce the bug, the expected result, and the actual result will all help a developer quickly pinpoint and understand a problem. It is just as important for developers to provide QA and business stakeholders with steps to test functionality that is not intuitive or simple to test.

Assumptions & Changes

Finally, not all developers are the same. If you ask two developers to solve the same problem, you might get very different solutions, but the more details you provide them, the better the chance of a successful outcome. Additionally, a developer with a wealth of institutional knowledge might need significantly less information to have a clear picture of what needs to be done, while a new hire, internal transfer or external contractor will likely need more information.

However, I would argue that regardless of the expected assignee’s knowledge, the extra time spent to write a good ticket is rarely wasted. Tickets often get passed around as people take vacations, flat tires happen, and children get sick. When these things inevitably occur, we don’t want to rely upon assumptions, which, no matter how good, only need to be wrong once to cause potentially large amounts of wasted time, embarrassment, liability, etc.

Ways to Work This Into Your Project
  • Create a blocked status for your Agile or Kanban boards and utilize it liberally to ensure high visibility into the unknowns and show others how much time is lost due to blockers.

  • Loop project managers into conversations with the person who filed the initial ticket so they see first hand the level of effort required to track down answers.

  • Talk through the issues during the sprint review and go over what did and did not go well.

  • Allow developers to review and estimate issues before they are included in a sprint and require clear requirements before accepting a ticket.

  • Don’t just complain; be a part of the solution. Educate stakeholders on the structure you want to see for requirements and provide them with tools to help communicate more clearly such as logic maps, data flow diagramming tools, etc.

What Now?

I encourage you to think of a good ticket not just as a means to an end, be it a fixed bug, a new feature, or a new design. Instead, treat it as an opportunity to show respect to each of the many professionals who will touch the ticket through its lifecycle by providing them with the information and tools they need to perform their job to the best of their abilities. This respect goes in all directions: ensuring good requirements for developers, writing good testing instructions for QA, writing quality code for a great product, and providing stable hosting infrastructure. This all culminates in a well-built project that shows respect to what this is ultimately all about: the user.

Categories: Drupal

Cheeky Monkey Media: D8 You’re Great! My first experience with Drupal 8

Planet Drupal - 16 June 2016 - 3:35pm
By shabana - Thu, 06/16/2016 - 22:35

This is one of the blogs I was really thrilled to write, because I got to document my first-ever installation of the production version of Drupal 8. And boy, was I excited! As you know, Drupal has made some significant changes under the hood in this version. I am not going to go over those changes in this feature, but I thought I would give an overview of some of the superficial changes it has undergone. More than anything, as a developer, I was really interested to see how much it has changed in terms of UI and operability.

Setting Up

I set up Drupal 8 on my Windows 10 machine running Acquia Dev Desktop (yes, you read that right!). The installation was pretty straightforward and appeared to be somewhat quicker than any of the versions I have tried before. Once I got it set up, the front page looked extremely familiar, running Bartik as the default theme. But right off the bat, I noticed the nice admin toolbar across the top of the page with pretty much the same menu links we're accustomed to. One new change: 'Modules' is now called 'Extend'.

The Modules Page

Going to the modules page, we see that we have a module filter by default, making module searches a lot easier as our list builds up. As we scroll down the list, we notice a couple of inclusions that are finally getting their due: as you might already know, Views and Views UI are in Core!  The Views UI looks much the same, but the one thing I really liked is the ability to clone a View display as any other display type. To elaborate, in D7, if we wanted to clone a Page display, we were only allowed to clone it as the same display type, ‘Page’. In D8, however, Views allows us to clone a display type like Page as any other display type, such as ‘Block’, ‘Entity Reference’, ‘Feed’, or ‘Attachment’. This is one change I’m really happy about, because there have been one too many instances where I’ve had to fiddle with code after cloning a display to transform it into another display type.

More welcomed goodies in Core

The other noticeable changes: we've got CKEditor, Contextual Links, Quick Edit, and Configuration Manager all enabled and running as part of Core. They've also added multilingual capabilities and web services as part of the release. These are definitely welcome additions that make Drupal much more relevant and powerful from the get-go. One additional change I noticed is the removal of the PHP Filter module; it has been removed from core and now lives on as a contributed module.

Display Modes with More Juice

Moving along, Drupal 8 now incorporates some of the functionality of Display Suite and offers more control over the way nodes, user profiles, comments, taxonomies, and blocks are displayed. There is a new admin section under 'Structure' called 'Display Modes', from which we can add display modes to control the look and feel of content as well as forms. So not only can we easily manage the display the end-user sees, but we can also modify the look of forms as data is entered.

In-Place Content Editing

One of the other features now in Core is the ability to edit content in place. In Drupal 7, this task was accomplished by contributed modules. Its inclusion in Core makes life a lot easier for us, especially when we’ve got tons of stuff to edit and test. There’s an ‘Edit’ button at the top of the toolbar that reveals all the places on the page we can edit in place. Editing and displaying updated content is now a cinch.

Configurations made easier

As I began to delve further into the configuration section, the one thing that caught my eye was the new 'Configuration synchronization' section under Development. I briefly played around with the feature, and my initial reaction is that it's a mild dose of the Features module. Basically, this new feature allows us to export all the configuration changes made on a site into a nice little tar.gz file. In the scenario above, I just created a new user role called 'Super User' and immediately exported my configurations. Once you extract the file, you can see that there are quite a few .yml text files, and among them is a file called 'user.role.super_user.yml', which is the config for my new role. I can take this and import it into my Staging or Production site, provided they're the same site and share the same UUID. Whereas the Features module allows us to bundle related configuration and re-use it on ANY site, Drupal's core configuration feature lets us keep configuration consistent between different environments of the SAME site. As you can see, it won't replace the Features module just yet, but it does offer a great first step towards a cleaner and more reliable way to transfer configuration among different environments.
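For reference, the same export/import round trip can also be driven from the command line with Drush rather than the admin UI; a sketch, assuming a Drupal 8-capable Drush is installed on both environments:

```shell
# On the source environment: export the active configuration to the
# sync directory (this is where files like user.role.super_user.yml land).
drush config-export -y

# Copy the sync directory (or the tar.gz from the UI) to the other
# environment of the SAME site, then import it there.
drush config-import -y
```

The same-site/same-UUID caveat from above still applies; config import will refuse to run against a site with a different UUID.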

In conclusion

In a nutshell, my first encounter with Drupal 8 has been extremely positive. The folks behind Drupal have taken stock of what users really need and have tried to incorporate those changes into this new version.  I've got lots of roads to traverse with Drupal 8, and I hope to share some of those experiences with you along the way.

Visit our Why Drupal 8? site!

  Categories Drupal Planet
Categories: Drupal

Cheeky Monkey Media: Working with Bootstrap’s New Responsive Utility Classes

Planet Drupal - 16 June 2016 - 3:34pm
Working with Bootstrap’s New Responsive Utility Classes denis Thu, 06/16/2016 - 22:34

As an html/css purist at heart, my school of thought has always been to keep your presentational layer separated from your markup. Sites like CSS Zen Garden taught us that you should never have to mix your design styles into your markup, the idea being that well-structured html will never have to change, even on a complete re-design.


Two years ago, I was forced to use a css framework for one of my projects. Being the control freak that I am, I was reluctant to try new things and bloat my beautiful handmade custom css.

It took me a little over a year to embrace using a css framework and relying on existing css to style my markup, but I've learned so much by doing so from people a lot smarter than I am. And I've cut the need to write custom css by more than half, as well as lowered production time.

Disclaimer: Some of these new classes will only work on the current alpha-2 release of bootstrap 4 and might change in the future since it’s still in heavy development. Use at your own risk.

Responsive Floats

Responsive floats are great for header elements, among other things. I often come across designs that have a search box right-aligned on desktop but left-aligned on tablet, or a main navigation that floats to the left but moves to the right and collapses on mobile.

Responsive floats work by using the pull-<breakpoint>-<direction> pattern.


  <form class="pull-xs-left pull-md-right">

     <input type="text" placeholder="Search">

  </form>



The above code will float the search bar left until the screen width reaches the "md" breakpoint, at which point it floats to the right.

In conjunction with other components from the framework, like the navbar, spacing utilities, and setting your own variables in sass, we can come very close to the original design mockups without writing a single line of custom css. Like I said earlier, it took me a while to start developing with these concepts, but it opened my mind to writing my code in a more modular fashion and reusing these components on every site I work on.

Responsive Text Alignments

This one I use often for content. On desktop, the design's text is center-aligned inside articles, but on mobile this makes for a weird-looking effect and we'd much rather have it left-aligned.

Bootstrap v4 introduces new responsive text alignment classes like this:

<article class="text-md-center">

 <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Ut elit ex, semper quis eros sed,</p>

</article>


This will leave the text at its default left alignment on smaller breakpoints, but will align the text center on the medium breakpoint and up, removing the need to write a media query in your css.

If you prefer to have your text centered on all breakpoints, you simply need to use text-xs-center.
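For example, a minimal sketch:

```html
<article class="text-xs-center">
  <p>This text is centered at every breakpoint, from "xs" on up.</p>
</article>
```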


This is just the tip of the iceberg for the new features that ship with Bootstrap 4. Some other things I'm really loving are the new card component and the contextual colors and backgrounds; creating custom ones is as easy as using the "bg-variant($parent, $color)" mixin.
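As a sketch of that last point (the .bg-brand class name and the color are made up for illustration; this assumes you're compiling your own Sass build of Bootstrap 4):

```scss
// Generate a custom contextual background class with Bootstrap 4's mixin.
@include bg-variant('.bg-brand', #5e35b1);
```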

If you haven’t already, I urge you to look at the documentation, but whatever css framework you choose to use, make sure you really dig into the documentation and use all the tools that are available to you. The investment will save you time in the long run.

  Categories Drupal Planet
Categories: Drupal

Cheeky Monkey Media: IntentionJS and RequireJS - How monkeys do it (the sfw edit)

Planet Drupal - 16 June 2016 - 3:32pm
IntentionJS and RequireJS - How monkeys do it (the sfw edit) micah Thu, 06/16/2016 - 22:32


  • RequireJS (and some knowledge of how to use it)
  • IntentionJS
  • A brain
  • Bananas

There are plenty of reasons you shouldn’t HAVE to use IntentionJS, but it’s just so good when you NEED your fix of DOM manipulation.

Normally a responsive website should be designed so that when you expand or collapse the viewport (effective screen), the DOM elements flow naturally from left to right, and top to bottom. The order is preserved, and it was designed so that those elements follow that flow in terms of importance and usability.

Admittedly, this does limit us at times, and sometimes we need elements to be in completely different placements in the DOM depending on the device used. Sure, we can duplicate content and hide or show the right elements based on the screen size [please don’t, it’s bad], or do some fancy schmancy css floating and absolute positioning, but then you start to get other fun issues that go beyond the scope of this banana log.

So, we’re left with manually moving the elements around. You could start using jQuery’s append() and after() functions, say, but that also gets complicated: checking screen widths, listening for window resize, matching media queries, etc. All messy in some form or another.
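That hand-rolled approach tends to look something like this sketch (the breakpoint values are the ones used later in this post; the selectors and the jQuery wiring are illustrative, not from a real site):

```javascript
// Map a viewport width to a named breakpoint (values match the
// IntentContext breakpoints set up later in this post).
function breakpointFor(width) {
  if (width >= 1025) { return 'desktop'; }
  if (width >= 769)  { return 'tabletlandscape'; }
  if (width >= 641)  { return 'tablet'; }
  if (width >= 321)  { return 'mobilelandscape'; }
  return 'mobile';
}

// In the browser you would then re-check on every resize and move
// elements by hand (assumes jQuery is loaded; selectors made up).
if (typeof $ !== 'undefined') {
  $(window).on('resize', function () {
    if (breakpointFor($(window).width()) === 'desktop') {
      $('.l-footer').insertAfter('.l-header'); // desktop layout
    } else {
      $('.l-footer').insertAfter('.l-main');   // everything else
    }
  });
}
```

It works, but every new element you want to move means more of this boilerplate, which is exactly the mess intention.js tidies up.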

We like clean. Even though we like poo, we still like clean.

Our Hero, IntentionJS

From the mouths of the intentionJS magicians:

“Intention.js offers a light-weight and clear way to dynamically restructure HTML in a responsive manner.”

Good right? riiight.

Okay, so we all have our own methods for including JS, and writing libraries and code. Here’s what we monkeys do.

Starting with RequireJS

We like using requireJS. A lot. In fact I get a banana every time I do. So in the main.js file loaded by require we have:

requirejs.config({
  paths: {
    // vendor
    'intention': 'vendor/intention',
    'underscore': 'vendor/underscore-min',
    'viewportsize': 'vendor/viewportSize-min',
    // custom
    'jquery': 'modules/jquery-global',
    'intentcontext': 'modules/intentcontext',
    'homepage': 'modules/homepage',
    'initintent': 'modules/initintent'
  },
});

Here we are just telling require where to find all our ‘required’ files when we ‘require’ them. Yeah.. there is probably a better way of saying that. *puts on deal with it glasses*

Intention requires underscore, so we’re including that. We’re also using a little library called ‘viewportsize’. Why? Well, because different browsers report different viewport sizes based on whether or not the scroll bar is showing. That’s a problem; this fixes that problem.

Then we include jQuery, cause we need it. Then comes some magical code with unicorns.. and monkeys.

require([
  'jquery',
  'underscore',
  'intentcontext',
], function ($, _, IntentContext) {
  'use strict';

  // DOM ready
  $(function() {
    // js loaded only on homepage
    if ($('body').hasClass('front')) {
      require(['homepage']);
    }
    if ($('html').length > 0) {
      require(['initintent']);
    }
  }); // DOM ready
});

So here, we’re just including the needed libraries for the site in general, then checking if we’re on the homepage and, if so, including the homepage module. The very last thing we include is the initialization of intent. Think of it as intention’s big red “go” button. We’ll get to these a bit later. For now, just know that we make sure the initintent file is included last, since we’re doing all the ‘intention’ setup first. Since require loads these in the ORDER they appear in the code, we’re able to do all the setup first and then initialize it last.


This is where we set up our ‘contexts’. A context is basically a ‘switch point’: a point at which stuff is supposed to happen. Each ‘context’ is associated with a screen size (these values should match your CSS media queries for major layout changes).

IntentContext.bp_desktop = 1025;
IntentContext.bp_tabletlandscape = 769;
IntentContext.bp_tablet = 641;
IntentContext.bp_mobilelandscape = 321;
IntentContext.bp_mobile = 0;

These are the breakpoints where major layout changes happen (for the purposes of this blog). Yours would match your CSS breakpoint values.

Next up, making our contexts. As you will see, each context has a name, and I’m setting the “min” value to the breakpoint value set in the code above. So the ‘desktop’ context will get triggered every time the browser hits the 1025-pixel viewport width or above. (It won’t keep re-triggering events as you increase the viewport width above that, which is nice.) All the other ‘contexts’ get triggered at their respective screen-width values.

IntentContext.horizontal_axis = IntentContext.intent.responsive({
  ID: 'width',
  contexts: [
    { name: 'desktop', min: IntentContext.bp_desktop },
    { name: 'tabletlandscape', min: IntentContext.bp_tabletlandscape },
    { name: 'tablet', min: IntentContext.bp_tablet },
    { name: 'mobilelandscape', min: IntentContext.bp_mobilelandscape },
    { name: 'mobile', min: IntentContext.bp_mobile }
  ],
  matcher: function (measure, context) {
    return measure >= context.min;
  },
  measure: function () {
    IntentContext.v_width = viewportSize.getWidth();
    return IntentContext.v_width;
  }
});

So, there is a thing. It’s a thing you may need. Normally intention won’t activate the context on first page load, which you may need it to do. We will get to that. (This is what that initintent.js file is for.)


Now we need to tell intention where each element should be placed in the DOM, according to whichever ‘context’ is triggered. You can either go directly into the HTML and add all the special intention attributes to the elements, or do it via JS. I like doing it in the JS; I find it cleaner.

So in our Homepage.prototype.intent function:

var footer = $('.l-footer');

footer.attr('intent', '');
footer.attr('in-desktop-after', '.l-header');
footer.attr('in-tabletlandscape-after', '.l-main');
footer.attr('in-tablet-after', '.l-main');
footer.attr('in-mobilelandscape-after', '.l-header');
footer.attr('in-mobile-after', '.l-header');

IntentContext.intent.on('desktop', function() {
  footer.attr('style', 'border: 4px solid red;');
});
IntentContext.intent.on('tabletlandscape', function() {
});
IntentContext.intent.on('tablet', function() {
});
IntentContext.intent.on('mobilelandscape', function() {
  footer.attr('style', 'border: 4px solid white;');
});
IntentContext.intent.on('mobile', function() {
  footer.attr('style', 'border: 4px solid blue;');
});

The first line just gets the element we want to target. The next lines are key.

I’m now manually adding all the required contexts to that element, so for each ‘breakpoint’ context we know where to place the footer. The syntax is as follows:

footer.attr('in-[your-breakpoint-name]-[move-function]', '[dom-element]')

‘your-breakpoint-name’ is just the name you associated with the breakpoint, up in the IntentContext.js file.

‘move-function’ is the method by which you want to place that element. These work just like jQuery’s manipulation functions [append(), before(), after(), prepend()].

‘dom-element’ is just the element you are specifying to be “moved to”.

So in this case, when the browser hits the ‘desktop’ layout screen width, we are putting the ‘.l-footer’ element just after the ‘.l-header’ element in the DOM. The next lines all work the same and specify where the element needs to go, for whichever context (screen size).

Then, we have some more magical code.

IntentContext.intent.on('desktop', function() {
  footer.attr('style', 'border: 4px solid red;');
});
IntentContext.intent.on('tabletlandscape', function() {
  footer.attr('style', '');
});

So, for each context, we can run custom code of any kind. In this case, every time we hit the ‘desktop’ context, we add a border to the footer element. Every time we hit the ‘tabletlandscape’ context, we make sure to remove any lingering styles. Etc.

I normally like to use these methods to ‘reset’ certain things that may have been triggered on an alternate layout.

Lastly, the initialization of intent. This will allow us to use those .on() functions [in the above code] on page load as well.

Note that all this will only happen on the homepage, though. If you need it to happen on all pages, you can create a separate module to handle site-wide context changes and just include it in the main.js require section.

InitIntent.js:

IntentContext.horizontal_axis.respond();
IntentContext.intent.elements(document);
$(window).on('resize', IntentContext.horizontal_axis.respond);

So these 3 lines just get everything going. Check out the intention.js website for further detail, but suffice it to say, they get intention up and running.

That should be it to have it all working nicely, letting you do some sexy DOM manipulation without too much pain.

Categories Drupal Planet
Categories: Drupal

Acquia Developer Center Blog: 5 Mistakes to Avoid on your Drupal Website - Number 2: Security

Planet Drupal - 16 June 2016 - 2:49pm

Good security practices protect your site from hacker attacks. In this article we'll look at some methods for reducing security risks on your site. 

Drupal Security Best Practices

Drupal has good security built in if used correctly. However, once you begin to configure your site you might introduce new security issues. Plan configuration so that only trusted users have permissions that involve security risks.

Tags: acquia drupal planet
Categories: Drupal

DrupalCon News: Sharing the secrets of your success!

Planet Drupal - 16 June 2016 - 2:07pm

Welcome to Dublin, stranger. Why don't you come and warm yourself round our campfire? There. That's better.

Help yourself to stew, it's all we have, but you're welcome to share it.

It's good stew, warms all the right parts in all the right ways. The only thing we ask in return is that you share with us your secrets. You know, the secrets of your success.

Don't be shy now, I can see from the way you walk that you're a superstar project manager. Seeing that sort of thing is just a gift of mine, I guess.

Categories: Drupal

Planetarium Board Game Up On Kickstarter

Tabletop Gaming News - 16 June 2016 - 2:00pm
Now here’s a board game that Dr. Neal Degrasse-Tyson could get behind. Planetarium is a board game that’s got a bit different a premise than most. You’re not fantasy world adventurers. You’re not sci-fi mercenaries. Nope, you’re clumps of dust and some gas. And that’s not some statement about the hygiene status of gamers. I […]
Categories: Game Theory & Design

ImageX Media: Higher Education Notes and Trends

Planet Drupal - 16 June 2016 - 1:47pm

In this week’s higher education notes and trends, predictive behavior technology comes to the education sector, for-profit schools see sharp declines and a closer look at how the University of Southern California is differentiating itself from other prestigious private schools by becoming a leader in recruiting minorities. 

Categories: Drupal

Taxonomy Bootstrap Accordion

New Drupal Modules - 16 June 2016 - 1:17pm

Provides a Bootstrap accordion for taxonomy vocabularies. This module is compatible only with Bootstrap 3 since significant changes were made between version 2 and 3.


* Please see Bootstrap documentation for which version of jQuery is required.

Categories: Drupal

OrganATTACK Card Game Up On Kickstarter

Tabletop Gaming News - 16 June 2016 - 1:00pm
You know those cute cartoons of an anthropomorphic heart and brain? Like, the brain is at work, but the heart is sooooooo boooooooored (even though it’s only been like 10min). So they compromise and surf the internet for a while? Yeah, I’m telling it terribly, but I love those comics. They’re done by The Awkward […]
Categories: Game Theory & Design

Lullabot: Lullabot Project Manager Roundtable

Planet Drupal - 16 June 2016 - 1:00pm
Matt & Mike sit around with several Lullabot project managers and talk about the ins, outs, and hows of PMing.
Categories: Drupal


Subscribe to As If Productions aggregator