Planet Drupal

Subscribe to Planet Drupal feed
Drupal.org - aggregated feeds in category Planet Drupal
Updated: 3 hours 4 min ago

Jacob Rockowitz: It is okay for you to plan an exit strategy, we should be okay with off-boarding contributors, and everyone says goodbye

7 May 2018 - 4:32pm

People come and go from open source projects and communities

Most people would agree that everyone should contribute something back to Open Source at some point in their careers. We have to realize that an ongoing Open Source contribution to a project can't be sustained forever. We might graduate from college, get a new job, need a break to travel, have kids, help raise grandkids, retire and even get bored with a project. While we need to improve open source sustainability, we also need to accept the reality that people continually come and go from open source projects.

Developer burnout should not be part of open source

Until recently, I felt that 'burnout' was the only story people told when they left the Drupal community. Until DrupalCon, I thought that once I committed to supporting something, like a module, I was obligated indefinitely, or at least until I burned out from supporting it.

In my new year blog post titled Webform, Drupal, and Open Source...Where are we going?, I stated…

...and I don't think I am the only one.

Are we expecting too much?

Jeff Eaton's presentation titled "You Matter More Than The Cause" addresses burnout and how it impacts the sustainability of teams. He says…

I think we need to stomp out the concept of developer burnout in Open Source and equate developer burnout to poorly-managed companies and organizations.

Planning an exit strategy can prevent burnout

One of many valuable lessons I learned at Adam Goodman's Teamwork and Leadership Workshop at DrupalCon Nashville was that it’s okay to plan an exit strategy, it’s even something that can ultimately help the community and potentially...Read More

Categories: Drupal

Palantir: Supporting Innovation Through Contribution

7 May 2018 - 12:18pm
Supporting Innovation Through Contribution brandt Mon, 05/07/2018 - 14:18 George DeMet May 8, 2018

Companies, agencies, and organizations that contribute to the Drupal project and community play a key role in supporting and sustaining a culture of innovation.

Drupal has a long and rich history of supporting and sparking innovation. Drupal 8 in particular represents a fundamental shift in thinking about how websites and other digital experiences are built. With its modular architecture, improved APIs, configuration management, and native web services support, Drupal 8 is well-positioned to help connect people, technology, and information in ways that have never before been possible.

Companies, agencies, and organizations that contribute to the Drupal project and community play a key role in supporting and sustaining a culture of innovation. This contribution can take on many forms, including setting aside time for employees to contribute to the Drupal project and community, sponsoring people to work exclusively on Drupal, and donating money to sponsor Drupal initiatives and events.

Impact of Contribution on Innovation

An ever-growing body of research into open source ecosystems is shedding light on the effects that different forms of contribution have on innovation, both for the firms that contribute and for the projects that benefit from those contributions. Firms that contribute to Drupal are generally driven by extrinsic motivators, such as the belief that working with the community will help them develop better products, or provide them with increased visibility and status within the community, which in turn helps drive sales and/or recruit talent.

Jonathan Sims, a professor of strategy at Babson College, has spent years studying how firms in the Drupal ecosystem engage with each other and the project to promote open innovation. In a 2016 paper published in the Oxford Journal of Industrial and Corporate Change, he found that while the impacts of contribution on a firm’s productivity are usually marginal, contribution does help expand social ties and can shift strategic posture and promote innovation.

While contributing code is associated with stronger social ties and more incremental innovations, providing help or support to others in the community is associated with a more conservative strategic posture, but more radical innovations. Firms that primarily contribute code to projects like Drupal are more likely to be building on top of someone else’s work and/or collaborating with someone else to solve a shared problem. Providing help on the other hand, is much more context-dependent and is more likely to lead to new questions and possible new insights, thus providing more opportunities for radical innovation within a given domain.

The Virtuous Cycle

Regardless of what form contribution takes, participating in an open source ecosystem like Drupal requires that firms be open and willing to share their knowledge and intellectual property with others. Drupal project lead Dries Buytaert has discussed how companies and organizations like Pfizer and Hubert Burda Media are not only sharing Drupal contributions with their competitors, but also challenging those competitors to contribute back as well. He argues that by working together, these organizations not only gain a competitive edge, but also reap the benefits of accelerated innovation:

“Those that contribute to open source are engaging in a virtuous cycle that benefits their own projects. It is a tide that raises all boats; a model that allows progress to accelerate due to wider exposure and public input.”

We’ve seen this virtuous cycle play out countless times at Palantir. One example is from several years ago, when we found that on many of the projects we worked on, clients often had a specific set of expectations around content workflow and editorial access based on their experience with other platforms, and that all too often, Drupal didn’t meet those expectations out of the box. In response to this business need, we created and released a suite of modules called Workbench that provided a unified interface and tools to enable authors and editors to focus on managing their content.

While Palantir team members did the initial heavy lifting on the code development for Workbench, over time, other firms (including some of our competitors) started using and extending the system, building on top of what we had released. Thanks to the efforts of those involved in the Drupal Workflow Initiative, the moderation functionality of Workbench was added to Drupal core as the Content Moderation module, making the software better for everyone. This in turn makes Drupal a more attractive choice than competing platforms and expands the market for the firms that work with it.

Extrinsic and Intrinsic Motivation

In contrast to the external incentives that drive most firms to contribute to open source projects like Drupal, individuals are more likely to be driven by intrinsic motivators to contribute. Not only do they get to feel like they’re part of something bigger than themselves, but participating in the Drupal community is also a good way to form social ties with other like-minded people who want to see their contributions make a difference in the world.

Despite the large number of individual contributors to the Drupal project, a very small number do the majority of the work. Contribution data on Drupal.org reveals that nearly half of the people who contributed code to the project got just one credit, while the top .4% of all contributors (30 people) accounted for over 17% of the total credits.

One likely reason for this imbalance is Drupal’s reputation for having a steep learning curve. User research conducted by Whitney Hess and the Drupal Association in 2014 found that while the project is good at onboarding people at the entry level of engagement, the transition to higher levels is much more challenging and is where many people end up dropping out of the project.

Providing resources and support to help more people move up the contribution ladder helps spread the burden across more shoulders, introducing new perspectives and reducing burnout, particularly within the core developer community. Having more engaged community members also helps mitigate one of the historical hurdles to Drupal adoption, which is the shortage of skilled developer talent.

Firms that work in the Drupal ecosystem can both address the talent shortage problem and support innovation within their own organizations by supporting professional development opportunities that help their employees “level up” existing skills and pass on knowledge to less experienced team members. For many organizations, this is also a much more economical and sustainable way to build and grow a Drupal team than relying exclusively on hiring from a limited and increasingly in-demand pool of existing “rockstar” talent.

Removing Barriers to Contribution

It is vitally important for any open source project to remove barriers to contribution, whether real or perceived, because they undermine both the intrinsic motivations of individual contributors and the extrinsic motivations of companies, agencies, and other organizations. Likewise, it’s important for projects not to place too much emphasis on extrinsic motivators, as that can also undermine intrinsic motivation. In this way, recognizing different kinds of contribution can be a delicate balancing act.

Over the last few years, the Drupal Association and others have worked to help track and acknowledge more forms of contribution on Drupal.org by improving user and organizational profile pages, adding the ability for organizations to receive credit for work on projects and issues, and tying case studies directly to organizations as well as individual contributors. Along with paid sponsorships, these improvements enable companies and organizations who contribute to the project and community to receive greater visibility on Drupal.org, which benefits both sales and recruiting efforts.

Other forms of contribution, such as local event and user group sponsorship and organization, writing documentation, and providing mentorship are less easy to measure, but also critically important to the health of the project. In a paper presented at DrupalCon Barcelona in 2015, David Rozas, a sociologist and computer scientist who studies the technical and social aspects of technology, argued that these kinds of “community-oriented” contributions are actually more important to a project’s long-term sustainability than code contributions because they are emotional experiences that serve to strengthen the project’s sense of community.

Firms that are not in a position to contribute code to Drupal can contribute time and/or money toward efforts that help promote the project and community, such as local and regional events or Drupal Association partnership programs and special initiatives. These kinds of contributions can often have a greater impact on innovation than code alone.

Thank You for Your Support!

Drupal boasts one of the largest and most diverse communities of any open source project, which along with a culture that supports and values contribution, has enabled it to become a leading platform for digital innovation. With the support of the companies, organizations, and individuals that use and contribute back to it every day, Drupal is poised to inspire innovation for many years to come.

Community Drupal Open Source People Workbench
Categories: Drupal

Palantir: Conscious Decoupling: The Case of Palantir.net

7 May 2018 - 10:31am
Conscious Decoupling: The Case of Palantir.net brandt Mon, 05/07/2018 - 12:31 Ken Rickard May 9, 2018

Our new site uses VueJS to produce single page applications. By tying these applications to content creation, our content editors can create dynamic new pages without needing an engineer.

Alex Brandt recently wrote about the redesign of the new Palantir.net site: what the goals were, what we wanted to improve, and the process by which we approached the project. I want to speak more from a development viewpoint about how we decoupled the new www.palantir.net site in order to create better relationships between content.

A major goal of our 2018 redesign was to feature more content and make it easier for people to surface topics that interest them. The strategic plan and design called for a system that allows people to filter content from the home page and other landing pages (such as our Work page).


In the modern web, people expect this filtering to take place without a page refresh. Simply select the filter and the content should update immediately.

This presents an opportunity to explore methods for achieving this effect. In addition, the following features were desired:

  • The ability to feature specific content at the top of the page
  • A process to insert content other than Drupal pages into the list display
  • A way to select what types of content appear on the page
  • A method to restrict the total count of items displayed on the page
  • The ability to add one or two filters to the page; or none at all

From the developer’s point-of-view, we also added:

  • The ability to allow editors to create and configure these dynamic pages without additional programming

The design and development of the new site followed our understanding of what “content” means to different teams. We know that understanding how to implement the design requirements isn’t enough. We had to think through how editors would interact with (and control) the content.

There is a lot of talk around “decoupled” Drupal these days — the practice of using Drupal as an editing environment and then feeding data to a front-end JavaScript application for rendering. Certainly we could have chosen to decouple the entire site. That process, however, brings extra development time and overhead. And in our case, the site isn’t large enough to gain any advantage from rapidly changing the front-end.

So instead we looked at ways to produce a dynamic application within Drupal’s template system. Our technical requirements were pretty standard:

  • A template-driven JavaScript content engine
  • Rendering logic (if/else and simple math)
  • Twig compatibility
  • A single source file that can be served from CDN or an application library

This last requirement is more a personal preference, as I don’t like long, fixed dependency chains during development. I specifically wanted a file I could drop in and use as part of the Drupal front-end.

Based on these requirements and a few basic functionality tests, we settled on the VueJS library. Vue is a well-documented, robust framework that can be run server-side or client-side. It provides DOM-manipulation, templated iteration, and an event-driven interaction API. In short, it was perfect for our needs.

Even better, you can start using it immediately:
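The include itself is a single script tag; the URL below is the jsdelivr path that Vue's own guide suggests (check the guide for the current recommended build):

  <script src="https://cdn.jsdelivr.net/npm/vue"></script>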

 

At the core of Vue.js is a system that enables us to declaratively render data to the DOM using straightforward template syntax. Vue uses the handlebars syntax -- familiar to Twig users -- to define and print variables set by the application:


  <div id="app">
    {{ message }}
  </div>

  var app = new Vue({
    el: '#app',
    data: {
      message: 'Hello Vue!'
    }
  })

When working with Twig, which also uses handlebars to wrap variables, we must wrap VueJS variables in the {% verbatim %} directive like so:


  {% verbatim %}{{ item.slug }}{% endverbatim %}

Unless you hardcode variables, Vue pulls all of its data via JSON, which can be provided out-of-the-box by Drupal’s Views module.
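As a rough sketch of that wiring (the endpoint path and field names here are hypothetical, not our actual feed), a Vue instance can fetch a Views JSON display when it is created and hand the results to the template:

  var app = new Vue({
    el: '#app',
    data: {
      items: []
    },
    created: function () {
      var self = this;
      // '/api/articles' stands in for a Views display that outputs JSON
      // (e.g. a REST export display).
      fetch('/api/articles')
        .then(function (res) { return res.json(); })
        .then(function (json) { self.items = json; });
    }
  });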

To make the application work, we needed to provide the following elements:

  • A JSON feed for content to display, with featured items at the top
  • A JSON feed for each filter to use
  • A JSON feed for insert content

The first element -- content -- consists of two JSON feeds controlled by editors. First, there is a Featured Content field that can be used to select content for the top of the page:

Editors may choose to only populate this content, and they may choose as many items as they wish. Below this content, we optionally add additional content based on type. Editors may select what types of content to add to the page. For the homepage, we include four content types:

Editors may then select which filters to use, if any are desired. Two options are available, but only one may be selected. The filters may be sorted as desired.

Further, if the editor selects the Filter by Type option, the first filter will be replaced by a Content Type list taken from the selections for the current page. This technique is made easier by the dynamic nature of the VueJS application, which is expecting dynamic content. It has the added bonus of making the work future proof, as editors can add and adjust pages without additional coding.

Lastly, editors can add “insert” content to the page. These inserts are Drupal Paragraphs -- custom fielded micro-content -- that optionally appear sprinkled through the content.

These inserts leverage Vue’s logic handling and template system. The full markup is more involved; a simplified sketch of the insert template (tag names, classes, type values, and the item.link property are illustrative) looks like this:

  <template v-if="item.parent_id">
    <div v-if="item.type === 'link'" class="insert insert--link">
      <a v-bind:href="item.link">{% verbatim %}{{ item.link_text | decode }}{% endverbatim %}</a>
      <p>{% verbatim %}{{ item.slug | decode }}{% endverbatim %}</p>
    </div>
    <div v-else-if="item.type === 'quote'" class="insert insert--quote">
      <blockquote>{% verbatim %}{{ item.slug | decode }}{% endverbatim %}</blockquote>
      <cite>{% verbatim %}{{ item.author | decode }}{% endverbatim %}</cite>
      <a v-bind:href="item.link" class="is-desktop">{% verbatim %}{{ item.link_text | decode }}{% endverbatim %}</a>
      <a v-bind:href="item.link" class="is-mobile">{% verbatim %}{{ item.link_text | decode }}{% endverbatim %}</a>
    </div>
    <div v-else class="insert insert--default">
      <p>{% verbatim %}{{ item.slug | decode }}{% endverbatim %}</p>
      <a v-bind:href="item.link">{% verbatim %}{{ item.link_text | decode }}{% endverbatim %}</a>
    </div>
  </template>

The v-if directive at the start tells the application to only render the entire template if the parent_id property is present. Since that property is unique to Paragraphs, this template is skipped when rendering a blog post or case study.

In the case of content types, we have a standard output from our JSON feed, so one template covers all use-cases. Simplified (again, tags, classes, and the item.link and item.image property names are illustrative), it looks like this:

  <div class="grid-item" v-bind:class="getClass(index, item.type, item.image)">
    <span class="grid-item__type">{% verbatim %}{{ item.type }}{% endverbatim %}</span>
    <h2><a v-bind:href="item.link">{% verbatim %}{{ item.title | decode }}{% endverbatim %}</a></h2>
    <div v-if="item.type === 'Event'">
      <span>{% verbatim %}{{ item.dates | decode }}{% endverbatim %}</span>
      <span>{% verbatim %}{{ item.location | decode }}{% endverbatim %}</span>
    </div>
    <p v-if="item.author_display">{% verbatim %}By {{ item.author_display }}{% endverbatim %}</p>
    <a v-bind:href="item.link">{% verbatim %}{{ getLinkText(item.type) }}{% endverbatim %}</a>
  </div>

Note the v-bind directive here. Vue cannot parse variables directly in HTML tag properties, so it uses this syntax to interact with the DOM and rewrite the element.

Other nice features include adding method calls like {{ getLinkText(item.type) }} that let Vue perform complex calculations on data elements. We also use Vue’s extensible filter system {{ item.summary | decode }} to perform actions like HTML escaping.
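For illustration, a global decode filter could be registered like this (a minimal sketch; the site's actual filter may differ). It converts HTML entities in the JSON strings back into plain text:

  // Registers a 'decode' filter usable as {{ value | decode }}.
  Vue.filter('decode', function (value) {
    if (!value) {
      return '';
    }
    // Let the browser decode entities by round-tripping through a textarea.
    var el = document.createElement('textarea');
    el.innerHTML = value;
    return el.value;
  });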

For instance, we pass the index position, content type, and background image (if present) to the getClass() method of our application:

// Get proper class for each cell.
getClass: function(index, type, image) {
  var $class = '';
  if (index == 2 || index == 5 || (index > 12 && index % 5 == 1)) {
    $class = $class + ' grid-item--lg';
  }
  if (type == 'Case Study') {
    $class = $class + ' grid-item--cs grid-item--dark';
  }
  else if (type == 'Collection') {
    $class = $class + ' grid-item--collection';
    if (index % 2 == 1) {
      $class = $class + ' grid-item--dark';
    }
  }
  else {
   $class = $class + ' grid-item--default';
  }
  if (image === undefined || image.length === 0) {
    if (index % 2 == 1) {
      return $class + ' grid-item--dark';
    }
    return $class;
  }
  return $class + ' grid-item--dark';
},

This technique lets us provide the light and dark backgrounds that make the design pop.

The end result is exactly what we need to deliver the experience the audience expects. And we get all that within a sustainable, extensible platform.

Is a decoupled Drupal site right for you? For us, a dynamic (yet not fully decoupled) instance made the most sense to improve the experience for our site users. We’d love to discuss whether or not the same flexibility would be beneficial for your site. Drop us a line via our contact form, or reach out via Twitter (@palantir).

Design Development Drupal Site Building
Categories: Drupal

Lucius Digital: Lucius launches Drupal platform for SOS Children's Villages

7 May 2018 - 8:03am
The new platform for SOS Children's Villages was recently launched. After 3 sprints, a lead time of 3 months, and a team of 6 people on average, we are proud to announce this new platform.
Categories: Drupal

Oliver Davies: Creating a Custom PHPUnit Command for Docksal

5 May 2018 - 5:00pm

This week I’ve started writing some custom commands for my Drupal projects that use Docksal, including one to easily run PHPUnit tests in Drupal 8. This is the process of how I created this command.

What is Docksal?

Docksal is a local Docker-based development environment for Drupal projects and other frameworks and CMSes. It is our standard tool for local environments for projects at Microserve.

There was a great talk recently at Drupaldelphia about Docksal.

Why write a custom command?

One of the things that Docksal offers (and is covered in the talk) is the ability to add custom commands to Docksal’s fin CLI, either globally or as part of your project.

As an advocate of automated testing and a TDD practitioner, I write a lot of tests and run PHPUnit numerous times a day. I’ve also given talks and written other posts on this site relating to testing in Drupal.

There are a couple of ways to run PHPUnit with Docksal. The first is to use fin bash to open a shell into the container, move into the docroot directory if needed, and run the phpunit command.

fin bash
cd /var/www/docroot
../vendor/bin/phpunit -c core modules/custom

Alternatively, it can be run from the host machine using fin exec.

cd docroot
fin exec '../vendor/bin/phpunit -c core modules/custom'

Both of these options require multiple steps as we need to be in the docroot directory where the Drupal code is located before the command can be run, and both have quite long commands to run PHPUnit itself - some of which is repeated every time.

By adding a custom command, I intend to:

  1. Make it easier to get set up to run PHPUnit tests - i.e. setting up a phpunit.xml file.
  2. Make it easier to run the tests that we’d written by shortening the command and making it so it can be run anywhere within our project.

I also hoped to make it project agnostic so that I could add it onto any project and immediately run it.

Creating the command

Each command is a file located within the .docksal/commands directory. The filename is the name of the command (e.g. phpunit) with no file extension.

To create the file, run this from the same directory where your .docksal directory is:

mkdir -p .docksal/commands
touch .docksal/commands/phpunit

This will create a new, empty .docksal/commands/phpunit file, and the phpunit command is now listed under "Custom commands" when we run fin.

You can write commands with any interpreter. I’m going to use bash, so I’ll add the shebang to the top of the file.

#!/usr/bin/env bash

With this in place, I can now run fin phpunit, though there is no output displayed or actions performed as the rest of the file is empty.

Adding a description and help text

Currently the description for our command when we run fin is the default "No description" text. I’d like to add something more relevant, so I’ll start by adding a new description.

fin interprets lines starting with ## as documentation - the first of which it uses as the description.

#!/usr/bin/env bash

## Run automated PHPUnit tests.

Now when I run it, I see the new description.

Any additional lines are used as help text with running fin help phpunit. Here I’ll add an example command to demonstrate how to run it as well as some more in-depth text about what the command will do.

#!/usr/bin/env bash

## Run automated PHPUnit tests.
##
## Usage: fin phpunit <args>
##
## If a core/phpunit.xml file does not exist, copy one from elsewhere.
## Then run the tests.

Now when I run fin help phpunit, I see the new help text.

Adding some content

Setting the target

As I want the commands to be run within Docksal’s "cli" container, I can specify that with exec_target. If one isn’t specified, the commands are run locally on the host machine.

#: exec_target = cli

Available variables

These variables are provided by fin and are available to use within any custom commands:

  • PROJECT_ROOT - The absolute path to the nearest .docksal directory.
  • DOCROOT - name of the docroot folder.
  • VIRTUAL_HOST - the virtual host name for the project. Such as myproject.docksal.
  • DOCKER_RUNNING - (string) "true" or "false".

Note: If the DOCROOT variable is not defined within the cli container, ensure that it’s added to the environment variables in .docksal/docksal.yml. For example:

version: "2.1"

services:
  cli:
    environment:
      - DOCROOT

Running phpunit

When you run the phpunit command, there are a number of options you can pass to it, such as --filter, --testsuite and --group, as well as the path to the tests to execute, such as modules/custom.

I wanted to still be able to do this by running fin phpunit <args> so the commands can be customised when executed. However, as the first half of the command (../vendor/bin/phpunit -c core) is consistent, I can wrap that within my custom command and not need to type it every time.

By using "$@" I can capture any additional arguments, such as the test directory path, and append them to the command to execute.

I’m using $PROJECT_ROOT to prefix the command with the absolute path to phpunit so that I don’t need to be in that directory when I run the custom command, and $DOCROOT to always enter the sub-directory where Drupal is located. In this case, it’s "docroot" though I also use "web" and I’ve seen various others used.

DOCROOT_PATH="${PROJECT_ROOT}/${DOCROOT}"
DRUPAL_CORE_PATH="${DOCROOT_PATH}/core"

# If there is no phpunit.xml file, copy one from elsewhere.
# Otherwise run the tests.
${PROJECT_ROOT}/vendor/bin/phpunit -c ${DRUPAL_CORE_PATH} "$@"

For example, fin phpunit modules/custom would execute /var/www/vendor/bin/phpunit -c /var/www/docroot/core modules/custom within the container.

I can then wrap this within a condition so that the tests are only run when a phpunit.xml file exists, as it is required for them to run successfully.

if [ ! -e ${DRUPAL_CORE_PATH}/phpunit.xml ]; then
  # If there is no phpunit.xml file, copy one from elsewhere.
  : # no-op placeholder until the copy logic is added
else
  ${PROJECT_ROOT}/vendor/bin/phpunit -c ${DRUPAL_CORE_PATH} "$@"
fi

Creating phpunit.xml - step 1

My first thought was that, if a phpunit.xml file didn’t exist, I could duplicate core’s phpunit.xml.dist file. However this isn’t enough to run the tests, as values such as SIMPLETEST_BASE_URL, SIMPLETEST_DB and BROWSERTEST_OUTPUT_DIRECTORY need to be populated.

As the tests wouldn't run at this point, I’ve exited early and displayed a message to the user to edit the new phpunit.xml file and run fin phpunit again.

if [ ! -e ${DRUPAL_CORE_PATH}/phpunit.xml ]; then
  echo "Copying ${DRUPAL_CORE_PATH}/phpunit.xml.dist to ${DRUPAL_CORE_PATH}/phpunit.xml."
  echo "Please edit its values as needed and re-run 'fin phpunit'."
  cp ${DRUPAL_CORE_PATH}/phpunit.xml.dist ${DRUPAL_CORE_PATH}/phpunit.xml
  exit 1;
else
  ${PROJECT_ROOT}/vendor/bin/phpunit -c ${DRUPAL_CORE_PATH} "$@"
fi

However this isn’t as streamlined as I originally wanted as it still requires the user to perform an additional step before the tests can run.

Creating phpunit.xml - step 2

My second idea was to keep a pre-configured file within the project repository, and to copy that into the expected location. That approach means that the project-specific values are already populated, as well as any customisations made to the default settings. I decided on .docksal/drupal/core/phpunit.xml as the location.

Also, if this file is copied then we can go ahead and run the tests straight away rather than needing to exit early.

If a pre-configured file doesn’t exist, then we can default back to copying phpunit.xml.dist.

To avoid duplication, I created a reusable run_tests() function so it could be executed in either scenario.

run_tests() {
  ${PROJECT_ROOT}/vendor/bin/phpunit -c ${DRUPAL_CORE_PATH} "$@"
}

if [ ! -e ${DRUPAL_CORE_PATH}/phpunit.xml ]; then
  if [ -e "${PROJECT_ROOT}/.docksal/drupal/core/phpunit.xml" ]; then
    echo "Copying ${PROJECT_ROOT}/.docksal/drupal/core/phpunit.xml to ${DRUPAL_CORE_PATH}/phpunit.xml"
    cp "${PROJECT_ROOT}/.docksal/drupal/core/phpunit.xml" ${DRUPAL_CORE_PATH}/phpunit.xml
    run_tests "$@"
  else
    echo "Copying ${DRUPAL_CORE_PATH}/phpunit.xml.dist to ${DRUPAL_CORE_PATH}/phpunit.xml."
    echo "Please edit its values as needed and re-run 'fin phpunit'."
    cp ${DRUPAL_CORE_PATH}/phpunit.xml.dist ${DRUPAL_CORE_PATH}/phpunit.xml
    exit 1;
  fi
else
  run_tests "$@"
fi

This means that I can execute fewer steps and run a much shorter command compared to the original, and even someone who didn’t have a phpunit.xml file created could copy one into place and have tests running with only one command.

The finished file

#!/usr/bin/env bash

#: exec_target = cli

## Run automated PHPUnit tests.
##
## Usage: fin phpunit <args>
##
## If a core/phpunit.xml file does not exist, one is copied from
## .docksal/drupal/core/phpunit.xml if that file exists, or copied from the
## default core/phpunit.xml.dist file.

DOCROOT_PATH="${PROJECT_ROOT}/${DOCROOT}"
DRUPAL_CORE_PATH="${DOCROOT_PATH}/core"

run_tests() {
  ${PROJECT_ROOT}/vendor/bin/phpunit -c ${DRUPAL_CORE_PATH} "$@"
}

if [ ! -e ${DRUPAL_CORE_PATH}/phpunit.xml ]; then
  if [ -e "${PROJECT_ROOT}/.docksal/drupal/core/phpunit.xml" ]; then
    echo "Copying ${PROJECT_ROOT}/.docksal/drupal/core/phpunit.xml to ${DRUPAL_CORE_PATH}/phpunit.xml"
    cp "${PROJECT_ROOT}/.docksal/drupal/core/phpunit.xml" ${DRUPAL_CORE_PATH}/phpunit.xml
    run_tests "$@"
  else
    echo "Copying phpunit.xml.dist to phpunit.xml"
    echo "Please edit its values as needed and re-run 'fin phpunit'."
    cp ${DRUPAL_CORE_PATH}/phpunit.xml.dist ${DRUPAL_CORE_PATH}/phpunit.xml
    exit 0;
  fi
else
  run_tests "$@"
fi

It’s currently available as a GitHub Gist, though I’m planning on moving it into a public GitHub repository either on my personal account or the Microserve organisation, for people to either use as examples or to download and use directly.

I’ve also started to add other commands to projects such as config-export to standardise the way to export configuration from Drupal 8, run Drupal 7 tests with SimpleTest, and compile front-end assets like CSS within custom themes.

I think it’s a great way to shorten existing commands, or to group multiple commands into one like in this case, and I can see a lot of other potential uses for it during local development and continuous integration. Also being able to run one command like fin init and have it set up everything for your project is very convenient and a big time saver!

Resources
Categories: Drupal

Drupal Association blog: Investing In the Promote Drupal Fund

4 May 2018 - 9:13am

Donate today

Drupal has so much to be proud of:

Together, let's show the world just how amazing Drupal - and your business - is for organizations.

Invest today in the Promote Drupal Initiative.

The Promote Drupal Initiative

The Promote Drupal Initiative is your opportunity to make Drupal - and your business - known and loved by new decision makers. Led by the Drupal Association, we will work with the Drupal business community to hone Drupal’s messaging and create the promotional materials we can all use to amplify the power of Drupal in the marketplace.

Step one is lining up the resources to make this initiative impactful and long lasting. 

Donate to the Promote Drupal Fund today. Help us help you grow your business.

$100,000 - the Promote Drupal Fund

We need your support now to get started.

To launch the Promote Drupal Initiative, the right resources need to be in place. $100,000 will support:

  • Staff to coordinate work

  • Marketing sprints

  • Resource support

If we all give a little, we can make a big impact promoting Drupal, together.

Donate today

Categories: Drupal

Lullabot: Eat This, It’s Safe: How to Manage Side Effects with Redux-Saga

4 May 2018 - 8:01am

Functional programming is all the rage, and for good reason. By introducing type systems and immutable values, and by enforcing purity in our functions, to name just a few techniques, we can reduce the complexity of our code while bolstering our confidence that it will run with minimal errors. It was only a matter of time before these concepts crept their way into the increasingly sophisticated front-end technologies that power the web.

Projects like ClojureScript, Reason, and Elm seek to fulfill the promise of a more-functional web by allowing us to write our applications with functional programming constraints that compile down to regular ol’ JavaScript for use in the browser. Learning a new syntax and having to rely on a less-mature package ecosystem, however, are a couple of roadblocks for many who might be interested in using compile-to-JS languages. Fortunately, great strides have been made in creating libraries that introduce powerful functional programming tenets directly into JavaScript codebases with a gentler learning curve.

One such library is Redux, which is a state-management tool heavily inspired by the aforementioned Elm programming language. Redux allows you to create a single store that holds the state of your entire app, rather than managing that state at the component level. This store is globally-available, allowing you to access the pieces of it that you need in whichever components need them without worrying about the shape of your component tree. The process of updating the store involves passing the store object and a descriptive string, called an action, into a special function called a reducer. This function then creates and returns a new store object with the changes described by the action.

This process is very reliable. We can be sure that the store will be updated in exactly the same way every single time so long as we pass the same action to the reducer. This predictable nature is critical in functional programming. But there’s a problem: what if we want our action to fire-off an API call? We can’t be sure what that call will return or that it’ll even succeed. This is known as a side effect and it’s a big no-no in the FP world. Thankfully, there’s a nice solution for managing these side effects in a predictable way: Redux-Saga. In this article, we’ll take a deeper look at the various problems one might run into while building their Redux-powered app and how Redux-Saga can help mitigate them.

Prerequisites

In this article, we’ll be building an application to store a list of monthly bills. We’ll focus specifically on the part that handles fetching the bills from a remote server. The pattern we’ll look at works just the same with POST requests. We’ll bootstrap this app with create-react-app, which will cover most of the code I don’t explicitly walkthrough.

What is Redux-Saga?

Redux-Saga is a Redux middleware, which means it has access to your app’s store and can dispatch its own actions. Similar to regular reducers, sagas are functions that listen for dispatched actions. Additionally, they perform side effects and return their own actions back to a normal reducer.


By intercepting actions that cause side effects and handling them in their own way, we maintain the purity of Redux reducers. This implementation uses JS generators, which allows us to write asynchronous code that reads like synchronous code. We don’t need to worry about callbacks or race conditions since the generator function will automatically pause on each yield statement until complete before continuing. This improves the overall readability of our code.
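As a quick standalone illustration of that pause-and-resume behavior (a generic example, not code from the app):

  function* demo() {
    const a = yield 'first'; // execution pauses here
    yield a + 1;             // resumes with the value passed to next()
  }

  const it = demo();
  console.log(it.next().value);   // 'first'
  console.log(it.next(41).value); // 42

With that behavior in mind, let’s take a look at what a saga for loading bills from an API would look like.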

 1 import { put, call, takeLatest } from 'redux-saga/effects';
 2
 3 export function callAPI(method = 'GET', body) {
 4   const options = {
 5     headers,
 6     method
 7   }
 8
 9   if (body !== undefined) {
10     options.body = body;
11   }
12
13   return fetch(apiEndpoint, options)
14     .then(res => res.json())
15     .catch(err => { throw new Error(err.statusText) });
16 }
17
18 export function* loadBills() {
19   try {
20     const bills = yield call(callAPI);
21     yield put({ type: 'LOAD_BILLS_SUCCESS', payload: bills });
22   } catch (error) {
23     yield put({ type: 'LOAD_BILLS_FAILURE', payload: error });
24   }
25 }
26
27 export function* loadBillsSaga() {
28   yield takeLatest('LOAD_BILLS', loadBills);
29 }

Let’s tackle it line-by-line:

  • Line 1: We import several methods from redux-saga/effects. We’ll use takeLatest to listen for the action that kicks-off our fetch operation, call to perform said fetch operation, and put to fire the action back to our reducer upon either success or failure.
  • Line 3-16: We’ve got a helper function that handles the calls to the server using the fetch API.
  • Line 18: Here, we’re using a generator function, as denoted by the asterisk next to the function keyword.
  • Line 19: Inside, we’re using a try/catch to first try the API call and catch if there’s an error. This generator function will run until it encounters the first yield statement, then it will pause execution and yield out a value.
  • Line 20: Our first yield is our API call, which, appropriately, uses the call method. Though this is an asynchronous operation, since we’re using the yield keyword, we effectively wait until it’s complete before moving on.
  • Line 21: Once it’s done, we move on to the next yield, which makes use of the put method to send a new action to our reducer. Its type describes it as a successful fetch and contains a payload of the data fetched.
  • Line 23: If there’s an error with our API call, we’ll hit the catch block and instead fire a failure action. Whatever happens, we’ve ended up kicking the ball back to our reducer with plain JS objects. This is what allows us to maintain purity in our Redux reducer. Our reducer doesn't get involved with side effects. It continues to care only about simple JS objects describing state changes.
  • Line 27: Another generator function, which includes the takeLatest method. This method will listen for our LOAD_BILLS action and call our loadBills() function. If the LOAD_BILLS action fires again before the first operation completes, the first one will be canceled and replaced with the new one. If you don’t require this canceling behavior, redux-saga/effects offers the takeEvery method, shown below.
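For reference, the non-canceling variant just swaps the effect (takeEvery is imported from 'redux-saga/effects' exactly like takeLatest):

  export function* loadBillsSaga() {
    yield takeEvery('LOAD_BILLS', loadBills);
  }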

One way to look at this is that saga functions are a sort-of intercepting reducer for certain actions. We fire-off the LOAD_BILLS action, Redux-Saga intercepts that action (which would normally go straight to our reducer), our API call is made and either succeeds or fails, and finally, we dispatch an action to our reducer that handles the app’s state update. Oh, but how is Redux-Saga able to intercept Redux action calls? Let’s take a look at index.js to find out.

 1 import React from 'react';
 2 import ReactDOM from 'react-dom';
 3 import App from './App';
 4 import registerServiceWorker from './registerServiceWorker';
 5 import { Provider } from 'react-redux';
 6 import { createStore, applyMiddleware } from 'redux';
 7 import billsReducer from './reducers';
 8
 9 import createSagaMiddleware from 'redux-saga';
10 import { loadBillsSaga } from './loadBillsSaga';
11
12 const sagaMiddleware = createSagaMiddleware();
13 const store = createStore(
14   billsReducer,
15   applyMiddleware(sagaMiddleware)
16 );
17
18 sagaMiddleware.run(loadBillsSaga);
19
20 ReactDOM.render(
21   <Provider store={store}>
22     <App />
23   </Provider>,
24   document.getElementById('root')
25 );
26 registerServiceWorker();

The majority of this code is standard React/Redux stuff. Let’s go over what’s unique to Redux-Saga.

  • Line 6: Import applyMiddleware from redux. This will allow us to declare that actions should be intercepted by our sagas before being sent to our reducers.
  • Line 9: createSagaMiddleware from Redux-Saga will allow us to run our sagas.
  • Line 12: Create the middleware.
  • Line 15: Make use of Redux’s applyMiddleware to hook our saga middleware into the Redux store.
  • Line 18: Initialize the saga we imported. Remember that sagas are generator functions, which need to be called once before values can be yielded from them.

At this point, our sagas are running, meaning they’re waiting to respond to dispatched actions just like our reducers are. Which brings us to the last piece of the puzzle: we have to actually fire off the LOAD_BILLS action! Here’s the BillsList component:

import React, { Component } from 'react';
import Bill from './Bill';
import { connect } from 'react-redux';

class BillsList extends Component {
  componentDidMount() {
    this.props.dispatch({ type: 'LOAD_BILLS' });
  }

  render() {
    return (
      <div className="BillsList">
        {this.props.bills.length && this.props.bills.map((bill, i) =>
          <Bill key={`bill-${i}`} bill={bill} />
        )}
      </div>
    );
  }
}

const mapStateToProps = state => ({
  bills: state.bills,
  error: state.error
});

export default connect(mapStateToProps)(BillsList);

I want to attempt to load the bills from the server once the BillsList component has mounted. Inside componentDidMount we fire off LOAD_BILLS using the dispatch method from Redux. We don’t need to import that method since it’s automatically available on all connected components. And this completes our example! Let’s break down the steps:

  1. BillsList component mounts, dispatching the LOAD_BILLS action
  2. loadBillsSaga responds to this action, calls loadBills
  3. loadBills calls the API to fetch the bills
  4. If successful, loadBills dispatches the LOAD_BILLS_SUCCESS action
  5. billsReducer responds to this action, updates the store
  6. Once the store is updated, BillsList re-renders with the list of bills
Testing

A nice benefit of using Redux-Saga and generator functions is that our async code becomes less-complicated to test. We don’t need to worry about mocking API services since all we care about are the action objects that our sagas output. Let’s take a look at some tests for our loadBills saga:

 1 import { put, call } from 'redux-saga/effects';
 2 import { callAPI, loadBills } from './loadBillsSaga';
 3
 4 describe('loadBills saga tests', () => {
 5   const gen = loadBills();
 6
 7   it('should call the API', () => {
 8     expect(gen.next().value).toEqual(call(callAPI));
 9   });
10
11   it('should dispatch a LOAD_BILLS_SUCCESS action if successful', () => {
12     const bills = [
13       {
14         id: 0,
15         amountDue: 1000,
16         autoPay: false,
17         dateDue: 1,
18         description: "Bill 0",
19         payee: "Payee 0",
20         paid: true
21       },
22       {
23         id: 1,
24         amountDue: 1001,
25         autoPay: true,
26         dateDue: 2,
27         description: "Bill 1",
28         payee: "Payee 1",
29         paid: false
30       },
31       {
32         id: 2,
33         amountDue: 1002,
34         autoPay: false,
35         dateDue: 3,
36         description: "Bill 2",
37         payee: "Payee 2",
38         paid: true
39       }
40     ];
41     expect(gen.next(bills).value).toEqual(put({ type: 'LOAD_BILLS_SUCCESS', payload: bills }));
42   });
43
44   it('should dispatch a LOAD_BILLS_FAILURE action if unsuccessful', () => {
45     expect(gen.throw({ error: 'Something went wrong!' }).value).toEqual(put({ type: 'LOAD_BILLS_FAILURE', payload: { error: 'Something went wrong!' } }));
46   });
47
48   it('should be done', () => {
49     expect(gen.next().done).toEqual(true);
50   });
51 });

Here we’re making use of Jest, which create-react-app provides and configures for us. This makes things like describe, it, and expect available without any importing required. Taking a look at what this saga is doing, I’ve identified 4 things I’d like to test:

  • The saga fires off the request to the server
  • If the request succeeds, a success action with a payload of an array of bills is returned
  • If the request fails, a failure action with a payload of an error is returned
  • The saga returns a done status when complete

By leveraging the put and call methods from Redux-Saga, I don’t need to worry about mocking the API. The call method does not actually execute the function, rather it describes what we want to happen. This should seem familiar since it’s exactly what Redux does. Redux actions don’t actually do anything themselves. They’re just JavaScript objects describing the change. Redux-Saga operates on this same idea, which makes testing more straightforward. We just want to assert that the API was called and that we got the appropriate Redux action back, along with any expected payload.

  • Line 5: first we need to initialize the saga (aka run the generator function). Once it’s running we can start to yield values out of it. The first test, then, is simple.
  • Line 8: call the next method of the generator and access its value. Since we used the call method from Redux-Saga instead of calling the API directly, this will look something like this:
{
  '@@redux-saga/IO': true,
  CALL: {
    context: null,
    fn: [Function: callAPI],
    args: []
  }
}

This is telling us that we’re planning to fire-off the callAPI function as we described in our saga. We then compare this to passing callAPI directly into the call method and we should get the same descriptor object each time.

  • Line 11: Next we want to test that, given a successful response from the API, we return a new action with a payload of the bills we retrieved. Remember that this action will then be sent to our Redux reducer to handle updating the app state.
  • Line 12-40: Start by creating some dummy bills we can pass into our generator.
  • Line 41: Perform the assertion. Again we call the next method of our generator, but this time we pass-in the bills array we created. This means that when our generator reaches the next yield keyword, this argument will be available to it. We then compare the value after calling next to a call using the put method from Redux-Saga with the action.
  • Line 44-46: When testing the failure case, instead of plainly calling the next method on our generator, we instead use the throw method, passing in an error message. This will cause the saga to enter its catch block, where we expect to find an action with the error message as its payload. Thus, we make that assertion.
  • Line 48-50: Finally, we want to test that we’ve covered all the yield statements by asserting that the generator has no values left to return. When a generator has done its job, it will return an object with a done property set to true. If that’s the case, our tests for this saga are complete!
Conclusion

We’ve achieved several objectively useful things by incorporating Redux-Saga into our project:

  • Our async code has a more synchronous look to it thanks to the use of generators
  • Our Redux reducers remain pure (no side effects)
  • Our async code is simpler to test

I hope this article has given you enough information to understand how Redux-Saga works and what problems it solves, and made a case for why you should consider using it.

Further Reading

Header photo by Becky Matsubara

Categories: Drupal

Tim Millwood: Drupal core Workspace module

4 May 2018 - 7:46am
Drupal core Workspace module

The Workspace entity was first seen in the contrib module Multiversion on 1st June 2014. Back then the entity type was called "Content repository", it was renamed to "Workspace" in September 2014.

On 22nd February 2016 the Workspace module was created, which built upon the Multiversion module.

The Workflow Initiative was announced in Dries' keynote at DrupalCon New Orleans.

Today the Workspace module landed in Drupal core as a new experimental module. This module is very different from the contrib Workspace module. It has no dependencies and now actually has a lot in common with the Drupal 7 module CPS.

Please give the module a try, join us in the issue queue, and help us get the Workspace module beta ready for 8.6.0-alpha1 in just over 2 months' time.

timmillwood Fri, 04/05/2018 - 15:46 Tags drupal planet drupal-planet drupal drupal8 drupal 8 drupal core
Categories: Drupal

Agaric Collective: Creating a New Social Simple Button

4 May 2018 - 6:11am

Sharing an article via a social network is a super common task requested on a project.

Fortunately for Drupal 8 there is a module for that called Social Simple. This module allows you to display the most popular networks in a node so the user can just click any of the buttons and share the article.

By default this module provides the following buttons:

  • Twitter
  • Facebook
  • Linkedin
  • Google plus

This will cover 90% of use cases, but what if we need to add a button for a new network?

Creating a Custom Social Simple Button

The Social Simple module already supports custom buttons; we just need to let the module know that we want to add one.

Basically what we need to do is:

  • Create a class that implements SocialNetworkInterface.
  • Register this class in our services file.
  • Add the tag social_simple_network to our service.

For our example we are going to create a basic Mail button. We start by creating a custom module. Inside our module, let's create a Mail.php file inside the src/SocialNetwork folder:

mkdir -p src/SocialNetwork
cd src/SocialNetwork
touch Mail.php

The next step is to create a class that implements SocialNetworkInterface, which has the following methods:

  • getShareLink: This is the most important method. It must return a render array which Drupal will later use to create the button.
  • getLabel: Here we will need to provide the name of our button. In our case Mail.
  • getId: The ID of the button. We can choose any ID here, we just need to make sure that it is unique. Let's use mail for our example.
  • getLinkAttributes: These attributes are going to be passed to the link. We can add custom parameters to the link in this part.

Our class looks like this:

namespace Drupal\custom_module\SocialNetwork;

use Drupal\Core\Entity\EntityInterface;
use Drupal\Core\StringTranslation\StringTranslationTrait;
use Drupal\Core\Url;
use Drupal\social_simple\SocialNetwork\SocialNetworkInterface;

/**
 * The Mail button.
 */
class Mail implements SocialNetworkInterface {

  use StringTranslationTrait;

  /**
   * The social network base share link.
   */
  const MAIL = 'mailto:';

  /**
   * {@inheritdoc}
   */
  public function getId() {
    return 'mail';
  }

  /**
   * {@inheritdoc}
   */
  public function getLabel() {
    return $this->t('Mail');
  }

  /**
   * {@inheritdoc}
   */
  public function getShareLink($share_url, $title = '', EntityInterface $entity = NULL, array $additional_options = []) {
    $options = [
      'query' => [
        'body' => $share_url,
        'subject' => $title,
      ],
      'absolute' => TRUE,
      'external' => TRUE,
    ];

    if ($additional_options) {
      foreach ($additional_options as $id => $value) {
        $options['query'][$id] = $value;
      }
    }

    $url = Url::fromUri(self::MAIL, $options);

    $link = [
      'url' => $url,
      'title' => ['#markup' => $this->getLabel()],
      'attributes' => $this->getLinkAttributes($this->getLabel()),
    ];

    return $link;
  }

  /**
   * {@inheritdoc}
   */
  public function getLinkAttributes($network_name) {
    $attributes = [
      'title' => $network_name,
    ];
    return $attributes;
  }

}

The next step is to let the Social Simple module know about our new button, and we do this by registering the class as a service in our module.services.yml. If you are not familiar with this file, you can read the documentation on the structure of a service file.

Basically we need to add something like this:

services:
  social_simple.mail:
    class: Drupal\custom_module\SocialNetwork\Mail
    tags:
      - { name: social_simple_network, priority: 0 }

Next, we just need to rebuild the cache; then, when we visit the Social Simple configuration, we will see our new button there, ready to be used.
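With Drush available, that rebuild is a single command:

drush cache-rebuild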

The only thing we need to pay extra attention to is that the Social Simple module only searches for services tagged with social_simple_network; otherwise our class will not be found.

If you want to see how the whole thing is working, you can check this patch that I made as a part of a project: https://www.drupal.org/project/social_simple/issues/2899517. As a bonus, I made an initial integration with the Forward module.

Categories: Drupal

Amazee Labs: Slack integration as a debugging tool

4 May 2018 - 5:19am
Slack integration as a debugging tool

We use Slack to communicate internally as well as with our clients, but we also make use of it in different ways that help us deliver a better service to both groups of stakeholders.

Fran Garcia-Linares Fri, 05/04/2018 - 14:19

Slack vs Email

Whilst we still use email when the situation requires, we always try to move communication related to our projects to Slack. Most of our clients are in Slack in their own dedicated channel, which happens to be the same one that all designers, developers, project managers, etc. use for communications related to the project.

This way, everybody involved in the project is aware of what’s going on. Information gets passed easily across the team and we avoid multiple “broken telephone” situations. Also, if the person who is usually responsible for something happens to be sick or on holiday, the rest of the team can assist instead of getting an unhelpful “Out of Office” reply.

Different Integrations

Slack and its bots are part of our Global Maintenance team too (I guess they can be considered remote workers). They help us with our day to day tasks in a myriad of ways. 

  • Activity Channel: we use Slack integrations that pull in any message or activity related to the tickets the team is taking care of in the current sprint. This can get a bit noisy sometimes, but it’s a wonderful way to stay informed of what’s going on on our team board. No more email and Jira issue watching.


     
  • Think about Blaize: it’s not just a reminder to think about Blaize (which we also do), who lives in New Zealand; it’s also a reminder for the UTC timezone team to start wrapping up the day, commit everything not yet committed, update tickets not yet updated, and leave things ready for Blaize, who will tell us “Good morning” at our 8~9pm (his 7~8am).


     
  • Information about important events: we have multiple integrations for regular but important events on certain projects, for peace of mind, and to make people aware that something has happened. The following examples are to inform everyone that an automatic Mailchimp list was created (this is crucial for the client) and to inform developers that a deployment has happened (and whether it went well or not - red vs green).




     
  • Instant bug reporting: this is probably the most important one, and the one making the biggest difference to our Global Maintenance team. We use it whenever there are bug reports that we can’t replicate because the data changed, or when we don’t have enough information to take action. If we can’t fully resolve a ticket request, we’re very likely to create an integration that will “spot” the problem and give us useful realtime information, so that we can debug with proper context about the issue. Over the past few months we’ve done this on multiple projects, and it not only gives us instant feedback, it also informs everyone on the channel that something is happening, so we can be alert and take action if needed. Below are two examples of those situations, again on critical parts of our clients’ systems, that allowed us to take quick action.



These are just a few samples of the multiple integrations we have. If you want to know a bit more about the technical part, just keep reading.

How to do the integration?

  • Create the slack webhook: here.

  • Use the Drupal slack module (recommended) or code your own function, which could be as simple as:
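A minimal Drupal 7-style sketch of such a function (the helper name is hypothetical; drupal_http_request() does the POST to the webhook URL created above):

/**
 * Posts a plain-text message to a Slack incoming webhook.
 */
function mymodule_slack_notify($webhook_url, $text) {
  $payload = array('text' => $text);
  $options = array(
    'method' => 'POST',
    'data' => 'payload=' . urlencode(json_encode($payload)),
    'headers' => array('Content-Type' => 'application/x-www-form-urlencoded'),
  );
  // drupal_http_request() returns an object with code and data properties.
  return drupal_http_request($webhook_url, $options);
}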

  • Call the desired function:

    • Using slack module:


      Using custom module:
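With the hypothetical helper above, the call is a one-liner wherever the interesting event happens (the slack module exposes an equivalent send function if you use it instead; check its API documentation for the exact signature):

// E.g. right after a deployment or another important event:
$webhook_url = variable_get('mymodule_slack_webhook_url', '');
mymodule_slack_notify($webhook_url, 'Deployment finished: all green.');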

That’s it really; as you can see, it’s not too complex, but it adds huge value to our day-to-day work.

Categories: Drupal

ComputerMinds.co.uk: Creating multilingual variables

4 May 2018 - 12:54am

A super quick blast from the past today; a Drupal 7 based article!

I had some work recently to create a new "setting" variable for one of our Drupal 7 multilingual sites, which meant creating multilingual versions of those variables. I soon found out that there is very much a correct way - or order - to achieve this, as I got it very wrong (I had to reinstate my DB!). So here I am, writing a very quick guide to help others avoid my wrongdoings.

(This guide assumes you have a multilingual site set up with i18n's Variable translation module.)

Four simple steps to achieve a multilingual variable:

  1. Declare your new variables via hook_variable_info
function your_module_name_variable_info($options = array()) {
  $variables['your_variable_name'] = array(
    'title' => t('Foo'),
    'description' => t('A multi-lingual variable'),
    'type' => 'string',
    'default' => t('Bar'),
    'localize' => TRUE,
  );
  // The hook must return the declared variables.
  return $variables;
}

The options you can set in this hook are well documented - start reading from the Variable module's project page.

  2. Flush the variable cache and get your new variables registered using an update hook. The meat of the update hook is below -- note that this assumes you want all of the possibly-localizable variables to be made translatable:
variable_cache_clear();
/** @var VariableRealmControllerInterface $controller */
if ($controller = variable_realm_controller('language')) {
  $variables = $controller->getAvailableVariables();
  $controller->setRealmVariable('list', $variables);
}
else {
  throw new DrupalUpdateException('Could not set up translatable variables. Try manually setting them.');
}
  3. Create or alter your settings form (I'm assuming it uses system_settings_form() or is already recognised by the i18n/variable systems as a form containing translatable variables) and add your new form elements. Make sure the element(s) use the same key(s) as your newly created variable(s) - I use a $key variable to avoid any mistakes there!
$key = 'your_variable_name';
$form[$key] = array(
  '#type' => 'textfield',
  '#title' => t('Foo'),
  '#default_value' => variable_get($key, 'Bar'),
);

  4. Head over to /admin/config/regional/i18n/variable or your settings form to see your new multilingual variable in all its glory!
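
As a quick usage note: once the realm is registered and a translation exists, an ordinary variable_get() call should return the value for the current language - no special API needed (the variable name matches the one declared above):

// Returns the translated value for the current interface language,
// falling back to the declared default ('Bar') when none is set.
$foo = variable_get('your_variable_name', 'Bar');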

Categories: Drupal

Valuebound: How to integrate Google Assistant with Drupal 8

3 May 2018 - 11:01pm

The demand for voice technology is rising, and it is likely to revolutionize the way publishing websites engage with their audience. Internet-connected virtual assistants are seeing significant growth, but the question is: how can publishers use this tech to grow their audience base and ultimately increase revenue? Here, we will explore how to use Actions on Google for a new project and an existing one, followed by an integration with a Drupal 8 website.

Let’s have a look.

Integrating Actions on Google with a device

Integrating Actions on Google with an electronic gadget or smart speaker allows us to trigger voice commands that control various Drupal operations, such as the ones below (a sketch of a possible webhook endpoint follows the list):

  • Clearing the cache
  • Counting the number of nodes
  • Sending…
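
As a taster, here is a minimal, hypothetical sketch of the Drupal 8 side of such an integration: a controller that receives a Dialogflow (Actions on Google) fulfillment webhook and runs the matching Drupal operation. The module, class and intent names are illustrative assumptions, not from the original article, and the routing.yml entry wiring up the route is not shown.

namespace Drupal\assistant_demo\Controller;

use Drupal\Core\Controller\ControllerBase;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\Request;

/**
 * Hypothetical webhook endpoint for a Dialogflow (Actions on Google) agent.
 */
class AssistantWebhookController extends ControllerBase {

  public function handle(Request $request) {
    $payload = json_decode($request->getContent(), TRUE);
    // Dialogflow v2 puts the matched intent name in queryResult.
    $intent = isset($payload['queryResult']['intent']['displayName']) ? $payload['queryResult']['intent']['displayName'] : '';

    if ($intent === 'clear_cache') {
      drupal_flush_all_caches();
      $reply = 'All Drupal caches have been cleared.';
    }
    elseif ($intent === 'count_nodes') {
      $count = \Drupal::entityQuery('node')->count()->execute();
      $reply = 'There are ' . $count . ' nodes on the site.';
    }
    else {
      $reply = 'Sorry, I do not know that command yet.';
    }

    // Dialogflow v2 fulfillment response format.
    return new JsonResponse(['fulfillmentText' => $reply]);
  }

}
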
Categories: Drupal

Agiledrop.com Blog: AGILEDROP: How to Integrate Google Analytics with Drupal 8

3 May 2018 - 6:17pm
Are you a Drupal website owner? Are you a content marketer? Are you a digital marketer or a Drupal developer? If your answer to any of these questions is yes, then you know how important it is to keep track of your websites' statistics. One tool that stands out and probably beats all others in popularity when it comes to website analytics is Google Analytics. The case for Google Analytics' popularity holds even when you look at its usage amongst Drupal sites alone. As with most of Drupal's extendable functionality, there's a module for the integration… READ MORE
Categories: Drupal

Acro Media: Drupal Commerce 2: How to Manage Customer Accounts

3 May 2018 - 7:45am

Most customers will manage their own accounts on an ecommerce website just fine. However, sometimes you need to create new accounts for your customers or edit their existing ones. For example, if you have both an online store and a brick-and-mortar store running on the same platform (which Drupal Commerce can do), your in-person cashiers may have reasons to create or update customer accounts. Likewise, if you offer support by phone, a customer service rep may also need to create or update accounts.

In this Acro Media Tech Talk video, we use our Urban Hipster Commerce 2 demo site to show you how you can manage your customers' online accounts: finding specific users, adding new users, blocking users, modifying a user's payment methods, viewing their previous orders, etc. It's super simple.

It's important to note that this video was recorded before the official 2.0 release of Drupal Commerce, so you may see some differences between it and the current releases. The documentation is also evolving over time.

Urban Hipster Commerce 2 Demo site

This video was created using the Urban Hipster Commerce 2 demo site. We've built this site to show the adaptability of the Drupal 8, Commerce 2 platform. Most of what you see is out-of-the-box functionality combined with expert configuration and theming.

More from Acro Media

Drupal modules in this demo

Categories: Drupal

Hook 42: DrupalCon Nashville - More than just hot chicken!

3 May 2018 - 7:16am

Every year at DrupalCon our team has an amazing time. This year was no different. We rounded up everyone who attended and asked them about their favorites this year.

Technical information was being readily exchanged and processed, while the humans who make up the community shared many stories as well. On top of all of the good information there was good food, so much food!

Categories: Drupal

Drop Guard: How you can handle your patches easily - Drop Guard recipe

3 May 2018 - 7:15am

Update automation sounds nice as long as you don’t think about your (heavily) patched Drupal project, right?

In this “recipe” I will explain how Drop Guard handles custom patches within a fully or partly automated update process.

1. Update release

An update gets released on Drupal.org. Only a few minutes later, Drop Guard detects the release information, such as the update type and version.

Categories: Drupal

Jeff Geerling's Blog: Converting a non-Composer Drupal codebase to use Composer

3 May 2018 - 7:05am

A question which I see quite often in response to posts like A modern way to build and develop Drupal 8 sites, using Composer is: "I want to start using Composer... but my current Drupal 8 site wasn't built with Composer. Is there an easy way to convert my codebase to use Composer?"

Unfortunately, the answer to that is a little complicated. The problem is that the switch to managing your codebase with Composer is an all-or-nothing affair... there's no middle ground where you can manage a couple of modules with Composer, core with Drush, and something else with manual downloads. (Well, technically this is possible, but it would be immensely painful and error-prone, so don't try it!)

Categories: Drupal

ComputerMinds.co.uk: Rebranding ComputerMinds - Part 4: Pattern Lab

3 May 2018 - 5:47am

We didn’t see this project solely as a chance to rebrand and rebuild for ourselves; it was also an opportunity to try something new and expand our collective knowledge, with the potential for using it with clients in the future. We had been discussing using Pattern Lab for front-end development for some time and this was the perfect opportunity to try it out.

Pattern Lab allows the creation of component-driven user interfaces using atomic design principles. This means we can create modular patterns, all packaged up nicely, that can be assembled together to build a site. Plus, we can use dynamic data to display the patterns in a live style guide, which makes viewing each component quick and easy. And nobody is viewing out-of-date designs or code - an issue we had on a recent project. With Pattern Lab, we would have a central place to view all of the actual components that will be used on the live site.

The guys over at Four Kitchens made implementing Pattern Lab with Drupal super easy by creating Emulsify. Serving as a Drupal 8 starter-kit theme, Emulsify allows you to build and manage components without using Drupal's template names, using custom template names that make sense to everyone instead. When you're ready, you can easily connect them up to Drupal.

Building the front end in this way allows it to be developed separately from the back end. It's possible to create the whole front end before even touching Drupal. If need be, it also allows developers to work on the front end and back end at the same time without stepping on each other's toes.

Because we were to be using Emulsify, we quickly set up a Drupal codebase (using Composer), which allowed us to jump in, clone the Emulsify theme and begin working on the front end. Once we had the theme, it was really easy to get the Sass and JavaScript compiling, set up a local server (to view the style guide) and watch for changes, all with one command:

npm start

As well as compiling, this command provides a URL for the local server. Open it up in a browser and you can see your live style guide!

Now for the actual components. These are filed in the theme, inside:

components/_patterns/

As we're working with Atomic Principles, the smallest components are filed first, building up to the biggest. The beauty of Pattern Lab is nesting: it's possible to include patterns inside each other to make larger components. Although it's not necessary to use Atomic Design naming conventions when organising patterns, it does make sense. These levels are:

  1. Atoms
  2. Molecules
  3. Organisms
  4. Templates
  5. Pages

Atoms are the basic elements of the site, like HTML tags and buttons. Molecules combine these Atoms to create larger components like a card, and then Organisms combine Molecules to create more complex page components, and so on...

Numeric prefixes are added to ensure the folders are listed in the correct order, and a Base folder is added to house variables, breakpoints, layouts and global Sass mixins. So, this is how our file structure looks inside that _patterns folder:

- _patterns/
  - 00-base
  - 01-atoms
  - 02-molecules
  - 03-organisms
  - 04-templates
  - 05-pages

Within each of the Atomic Design folders is a set of components. Each component comprises a Twig file, a Sass file and, in some cases, a JavaScript file. These files contain code specific to that component. Having all the code for each component organised this way makes it really easy and fast to edit components, and also to add new ones.

Having just the code that makes the component isn't enough for us to view it in our style guide. In addition to the files that make up the component, we can also include files to give the component context. A Markdown file allows us to give the component a title and description, which are used in the navigation and style guide. To each component folder we can also add a YML file which holds filler content solely for use for the style guide. We basically just take each variable name from the twig file and provide some content for each.

So, a typical component file structure might look like this:

- card
  - card.twig
  - _card.scss
  - card.md
  - card.yml
  - card.js

Once we had a full understanding of the structure, had added our colour palette, and had set up our grid and breakpoints, it was a case of working through the designs to determine what the components were and what size each should be. Then, starting with Atoms and working up, we could build each component. We'll look at the actual development in the next article in the series.

Categories: Drupal
